

They can’t, unless the parties agree that they can. The sneaky part is the standard “by continuing to use our services, you agree to the new terms” clause. To actually opt out, you’d have to terminate your account before the new terms come into effect, and then take them to court to make sure they didn’t keep your data around and use it to train their AI anyway because they “didn’t notice” that that particular content belonged to someone who hadn’t accepted the new terms.
Censorship is bad, but Facebook’s and X’s entire business models revolve around spreading content that is at once false and inflammatory, whether simply to drive engagement or for more malicious purposes, and they reach a huge portion of the population directly, including children, teenagers, the mentally ill, and other vulnerable groups. That calls for a new understanding of accountability for spreading information.
I wouldn’t agree that it makes sense to hold a Mastodon instance responsible for what its users post: it has neither the financial incentive nor the ability to promote misinformation at massive scale. Twitter does. As Aristotle said, we must treat equals equally, and treat the unequal unequally according to the form and extent of their inequality.