I really don’t like cases like this, nor do I like how much the legal system seems to be pushing “guilty by proxy” rulings for a lot of school shooting cases.
It just feels very very very dangerous and ’going to be bad’ to set this precedent where when someone commits an atrocity, essentially every person and thing they interacted with can be held accountable with nearly the same weight as if they had committed the crime themselves.
Obviously some basic civil responsibility is needed. If someone says “I am going to blow up XYZ school, here is how”, and you hear that, yeah, that’s on you to report it. But it feels like we’re quickly slipping toward a point where you have to start reporting a vast number of people to the police en masse if they say anything even vaguely questionable, simply to avoid the potential fallout of being associated with someone committing a crime.
It makes me really worried. I really think the internet has made it easy to be able to ‘justifiably’ accuse almost anyone or any business of a crime if a person with enough power / the state needs them put away for a time.
This appears to be more the angle of the person being fed an endless stream of hate on social media and thus becoming radicalised.
What causes them to be fed an endless stream of hate? Algorithms. Who provides those algorithms? Social media companies. Why do they do this? To maintain engagement with their sites so they can make money via advertising.
And so here we are, with sites that see you viewed 65 percent of a stream showing an angry mob, therefore you would like to see more angry mobs in your feed. Is it any wonder that shit like this happens?
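To make that mechanic concrete, here’s a toy sketch of the watch-fraction logic; the field names, weights and data are made up for illustration and real recommenders are vastly more complex, but the incentive structure is the same:

```python
# Toy illustration only: rank candidate videos by how much of similar
# content the user has watched before. Nothing here is any real platform's
# algorithm; it just shows how "you watched 65% of an angry mob video"
# turns into "here are more angry mob videos".

def score(candidate, history):
    similar = [v for v in history if v["topic"] == candidate["topic"]]
    if not similar:
        return 0.0
    # Average completion rate on this topic becomes the ranking signal.
    return sum(v["watched_fraction"] for v in similar) / len(similar)

def build_feed(candidates, history, n=10):
    return sorted(candidates, key=lambda c: score(c, history), reverse=True)[:n]

history = [{"topic": "angry mob", "watched_fraction": 0.65},
           {"topic": "gardening", "watched_fraction": 0.10}]
candidates = [{"id": 1, "topic": "gardening"},
              {"id": 2, "topic": "angry mob"}]

print([c["id"] for c in build_feed(candidates, history)])  # -> [2, 1]
```

Optimize that loop for ad revenue and nothing else, and the outrage feedback loop is the default outcome, not a bug.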
It’s also known to intentionally show you content that’s likely to provoke you into fights online
Which just makes all the sanctimonious screeds about avoiding echo chambers a bunch of horse shit, because that’s not how social behavior works outside the digital world. Off the net, if you go out of your way to keep arguing with people who wildly disagree with you, you’re not avoiding echo chambers, you’re building a class-action restraining-order case against yourself.
I’ve long held this hunch that when people’s beliefs are challenged, they tend to ‘dig in’ and wind up more resolute. (I think it’s actual science and I learned that in a sociology class many years ago but it’s been so long I can’t say with confidence if that’s the case.)
Assuming my hunch is right (or at least right enough), I think that side of social media - driving up engagement by increasing discord - also winds up radicalizing people as a side effect of chasing profits.
It’s one of the things I appreciate about Lemmy. Not everyone here seems to just be looking for a fight all the time.
I think the design of media products around maximally addictive individually targeted algorithms in combination with content the platform does not control and isn’t responsible for is dangerous. Such an algorithm will find the people most susceptible to everything from racist conspiracy theories to eating disorder content and show them more of that. Attempts to moderate away the worst examples of it just result in people making variations that don’t technically violate the rules.
With that said, laws made and legal precedents set in response to tragedies are often ill-considered, and I don’t like this case. I especially don’t like that it includes Reddit, which was not using that type of individualized algorithm to my knowledge.
Attempts to moderate away the worst examples of it just result in people making variations that don’t technically violate the rules.
The problem then becomes that if the clearly defined rules aren’t enough, the people who run these sites need to start making individual judgment calls based on… well, their gut, really. And that creates a lot of issues if the site in question could be held accountable for making a poor call or overlooking something.
The threat of legal repercussions hanging over them is going to make them default to the most strict actions, and that’s kind of a problem if there isn’t a clear definition of what things need to be actioned against.
It’s the chilling effect they use in China: don’t make it clear what will get you in trouble, and people are too scared to say anything.
Just another group looking to control expression by the back door
This is the real shit right here. The problem is that social media companies’ data show that negativity and hate keep people on their website for longer, which means that they view more advertisement compared to positivity.
It is human nature to engage with disagreeable topics moreso than agreeable topics, and social media companies are exploiting that for profit.
We need to regulate algorithms and force them to be open source, so that anybody can audit them. They will try to hide behind “AI” and “trade secret” excuses, but lawmakers have to see above that bullshit.
Unfortunately, US lawmakers are both stupid and corrupt, so it’s unlikely that we’ll see proper change, and more likely that we’ll see shit like “banning all social media from foreign adversaries” when the US-based social media companies are largely the cause of all these problems. I’m sure the US intelligence agencies don’t want them to change either, since those companies provide large swaths of personal data to them.
Nah. This isn’t guilt by association
In her decision, the judge said that the plaintiffs may proceed with their lawsuit, which claims social media companies — like Meta, Alphabet, Reddit and 4chan — ”profit from the racist, antisemitic, and violent material displayed on their platforms to maximize user engagement,”
Which, despite their denials, they actually know: https://www.nbcnews.com/tech/tech-news/facebook-knew-radicalized-users-rcna3581
Yeah, but algorithmic delivery of radicalizing content seems kinda evil though.
I think the distinction here is between people and businesses. Is it the fault of people on social media for the acts of others? No. Is it the fault of social media for cultivating an environment that radicalizes people into committing mass shootings? Yes. The blame here is on the social media companies for not doing more to stop the spread of this kind of content. Because yes, even though that won’t stop this kind of content from existing, making it harder to access and find will at least reduce the number of people who go down this path.
Systemic problems require systemic solutions.
Sure, and I get that for like, healthcare. But ‘systemic solutions’ as they pertain to “what constitutes a crime” lead to police states really quickly imo
The article is about lawsuits. Where are you getting this idea that anyone suggested criminalizing people? Stop putting words in other people’s mouths. The most that’s been suggested in this thread is regulating social media algorithms, not locking people up.
Drop the melodrama and paranoia. It’s getting difficult to take you seriously when you keep making shit up about other people’s positions.
I don’t believe you’ve had a lot of experience with the US legal system
Do you not think if someone encouraged a murderer they should be held accountable? It’s not everyone they interacted with, there has to be reasonable suspicion they contributed.
Also I’m pretty sure this is nothing new
Depends on what you mean by “encouraged”. That is going to need a very precise definition in these cases.
And the point isn’t that people shouldn’t be held accountable, it’s that there are a lot of gray areas here, we need to be careful how we navigate them. Irresponsible rulings or poorly implemented laws can destabilize everything that makes the internet worthwhile.
Everyone on lemmy who makes guillotine jokes will enjoy their life sentence I’m sure
Is there currently a national crisis of Jacobins kidnapping oligarchs and beheading them in public I am unaware of?
No
Unfortunately
I didn’t say that at all, and I think you know I didn’t unless you really didn’t actually read my comment.
I am not talking about encouraging someone to murder. I specifically said that in overt cases there is some common sense civil responsibility. I am talking about the potential for the police to break down your door because you Facebook messaged a guy you’re friends with what your favorite local gun store was, and that guy also happens to listen to death metal and take antidepressants and the state has deemed him a risk factor level 3.
I must have misunderstood you then, but this still seems like a pretty clear case where the platforms, not even individual people, did encourage him. I don’t think there’s any new precedent being set here.
Rulings often start at the corporation / large major entity level and work their way down to the individual. Think piracy laws. At first, only giant, clear bootlegging operations were really prosecuted for that, and then people torrenting content for profit, and then people torrenting large amounts of content for free - and now we currently exist in an environment where you can torrent a movie or whatever and probably be fine, but also if the criminal justice system wants to they can (and have) easily hit anyone who does with a charge for tens of thousands of dollars or years of jail time.
Will it happen to the vast majority of people who torrent media casually? No. But we currently exist in an environment where if you get unlucky enough or someone wants to punish you for it enough, you can essentially have this massive sentence handed down to you almost “at random”.
Also worth remembering, this opens up avenues for lawsuits on other types of “harm”.
We have states that have outlawed abortion. What do those sites do when those states argue social media should be “held accountable” for all the women who are provided information on abortion access through YouTube, Facebook, reddit, etc?
I dunno about social media companies but I quite agree that the party who got the gunman the gun should share the punishment for the crime.
Firearms should be titled and insured, and the owner should have an imposed duty to secure them. The owner ought to face criminal penalty if the firearm titled to them was used by someone else to commit a crime: either they handed a killer a loaded gun, or they inadequately secured a firearm which was then stolen and used to commit a crime. Either way, they failed their responsibility to society as a firearm owner and must face consequences for it.
This guy seems to have bought the gun legally at a gun store, after filling out the forms and passing the background check. You may be thinking of the guy in Maine whose parents bought him a gun when he was obviously dangerous. They were just convicted of involuntary manslaughter for that, iirc.
Yup, I was just addressing the point of tangential arrest, sometimes it is well justified.
Well you were talking about charging the gun owner if someone else commits a crime with their gun. That’s unrelated to this case where the shooter was the gun owner.
The lawsuit here is about radicalization but if we’re pursuing companies who do that, I’d start with Fox News.
If you lend your brother, who you know is on antidepressants, a long extension cord he tells you is for his back patio - and he hangs himself with it - are you ready to be accused of being culpable for your brother’s death?
Did he also use it as improvised ammunition to shoot up the local elementary school, to warrant the cord being considered a firearm?
I’m more confused where I got such a lengthy extension cord from! Am I an event manager? Do I have generators I’m running cable from? Do I get to meet famous people on the job? Do I specialize in fairground festivals?
…. Aside from everything else, are you under the impression that a 10-15 ft extension cord is an odd thing to own…?
Oh, it turns out an extension cord has a side use that isn’t related to its primary purpose. What’s the analogous innocuous use of a semiautomatic handgun?
Self defense? You don’t have to be a 2A diehard to understand that it’s still a legal object. What’s the “innocuous use” of a VPN? Or a torrenting client? Should we imprison everyone who ever sends a link about one of these to someone who seems interested in their use?
You’re deliberately ignoring the point that the primary use of a semiautomatic pistol is killing people, whether self-defense or mass murder.
Should you be culpable for giving your brother an extension cord if he lies that it is for the porch? Not really.
Should you be culpable for giving your brother a gun if he lies that he needs it for self defense? IDK the answer, but it’s absolutely not equivalent.
It is a higher level of responsibility, you know lives are in danger if you give them a tool for killing. I don’t think it’s unreasonable if there is a higher standard for loaning it out or leaving it unsecured.
“Sorry bro. I’d love to go target shooting with you, but you started taking Vyvanse 6 months ago and I’m worried that if you blow your brains out, the state will throw me in prison for 15 years.”
Besides, you’re ignoring the point. This article isn’t about a gun, it’s about basically “this person saw content we didn’t make on our website”. You think that won’t be extended to general content sent from one person to another? That if you send some pro-Palestine articles to your buddy, and then a year or two later your buddy gets busted at an anti-Zionist rally, now you’re a felon because you enabled that? Boy, that would be an easy way for some hypothetical future administrations to control speech!!
You might live in a very nice bubble, but not everyone will.
So you need a strawman argument, shifting from loaning a weapon unsupervised to someone we know is depressed, to now just target shooting with them - distancing the loan aspect and adding a presumption of using the item together.
This is a side discussion. You are the one who decided to write strawman arguments relating guns to extension cords, so I thought it was reasonable to respond to that. It seems like you’re upset that your argument doesn’t make sense under closer inspection and you want to pull the ejection lever to escape. Okay, it’s done.
The article is about a civil lawsuit, nobody is going to jail. Nobody is going to be able to take a precedent and sue me, an individual, over sharing articles to friends and family, because the algorithm is a key part of the argument.
I don’t think you understand the issue. I’m very disappointed to see that this is the top comment. This wasn’t an accident. These social media companies deliberately feed people the most upsetting and extreme material they can. They’re intentionally radicalizing people to make money from engagement.
They’re absolutely responsible for what they’ve done, and it isn’t “by proxy”, it’s extremely direct and deliberate. It’s long past time that courts held them liable. What they’re doing is criminal.
I do. I just very much understand the extent that the justice system will take decisions like this and utilize them to accuse any person or business (including you!) of a crime that they can then “prove” they were at fault for.
Nice, now do all religions and churches next
Excuse me what in the Kentucky fried fuck?
As much as everyone says fuck these big guys all day this hurts everyone.
I agree with you, but … I was on reddit since the Digg exodus. It always had its bad side (violentacrez, jailbait, etc), but it got so much worse after GamerGate/Ellen Pao - the misogyny became weaponized. And then the alt-right moved in, deliberately trying to radicalize people, and we worked so. fucking. hard to keep their voices out of our subreddits. And we kept reporting users and other subreddits that were breaking rules, promoting violence and hatred, and all fucking spez would do is shrug and say, “hey it’s a free speech issue”, which was somewhere between “hey, I agree with those guys” and “nah, I can’t be bothered”.
So it’s not like this was something reddit wasn’t aware of (I’m not on Facebook or YouTube). They were warned, repeatedly, vehemently, starting all the way back in 2014, that something was going wrong with their platform and they needed to do something. And they deliberately and repeatedly chose to ignore it, all the way up to the summer of 2021. Seven fucking years of warnings they ignored, from a massive range of users and moderators, including some of the top moderators on the site. And all reddit would do is shrug its shoulders and say, “hey, free speech!” like it was a magic wand, and very occasionally try to defend itself by quoting its ‘hate speech policy’, which they invoke with the same regular repetitiveness and ‘thoughts and prayers’ inaction as a school shooting brings. In fact, they did it in this very article:
In a statement to CNN, Reddit said, “Hate and violence have no place on Reddit. Our sitewide policies explicitly prohibit content that promotes hate based on identity or vulnerability, as well as content that encourages, glorifies, incites, or calls for violence or physical harm against an individual or group of people. We are constantly evaluating ways to improve our detection and removal of this content, including through enhanced image-hashing systems, and we will continue to review the communities on our platform to ensure they are upholding our rules.”
As someone who modded for a number of years, that’s just bullshit.
Edit: fuck spez.
hate and violence are bad unless we make money, then it’s ok.
The first thing that came to mind when I saw Reddit was The_Donald.
So, I can see a lot of problems with this. Specifically the same problems that the public and regulating bodies face when deciding to keep or overturn section 230. Free speech isn’t necessarily what I’m worried about here. Mostly because it is already agreed that free speech is a construct that only the government is actually beholden to. Message boards have and will continue to censor content as they see fit.
Section 230 basically stipulates that companies that provide online forums (Meta, Alphabet, 4Chan etc) are not liable for the content that their users post. And part of the reason it works is because these companies adhere to strict guidelines in regards to content and most importantly moderation.
Section 230(c)(2) further provides “Good Samaritan” protection from civil liability for operators of interactive computer services in the good faith removal or moderation of third-party material they deem “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
Reddit, Facebook, 4Chan et al. do have rules and regulations they require their users to follow in order to post. And for the most part the communities on these platforms are self-policing. There just aren’t enough paid moderators to make it work otherwise.
That being said, the real problem is that this really kind of indirectly challenges section 230. Mostly because it very barely skirts around whether the relevant platforms can themselves be considered publishers, or at all responsible for the content the users post and very much attacks how users are presented with content to keep them engaged via algorithms (which is directly how they make their money).
Even if the lawsuits fail, this will still be problematic. It could lead to draconian moderation of what can be posted and by whom. So now all race related topics regardless of whether they include hate speech could be censored for example. Politics? Censored. The discussion of potential new laws? Censored.
But I think it will be worse than that. The algorithm is what makes the ad space these companies sell so valuable. And this is a direct attack on that. We lack the consumer privacy protections to protect the public from this eventuality. If the ad space isn’t valuable the data will be. And there’s nothing stopping these companies from selling user data. Some of them already do. What these apps do in the background is already pretty invasive. This could lead to a furthering of that invasive scraping of data. I don’t like that.
That being said there is a point I agree with. These companies literally do make their algorithm addictive and it absolutely will push content at users. If that content is of an objectionable nature, so long as it isn’t outright illegal, these companies do not care. Because they do gain from it monetarily.
What we actually need is data privacy protections. Holding these companies accountable for their algorithms is a good idea. But I don’t agree that this is the way to do that constructively. It would be better to flesh out 230 as a living document that can change with the times. Because when it was written the Internet landscape was just different.
What I would like to see is for platforms to moderate content posted and representing itself as fact. We don’t see that nearly enough on places like reddit. Users can post anything as fact and the echo chambers will rally around it if they believe it. It’s not really incredibly difficult to radicalise a person. But the platforms aren’t doing that on purpose. The other users are, and the algorithms are helping them.
Moderation is already draconian; interact with any gen Z kid and you’re gonna know what “goon”, “corn”, “unalive”, and “(crime) in Minecraft” actually mean.
This isn’t slang, it’s like a second language developed to evade censorship on those platforms. Things will only get worse.
Sweet, I’m sure this won’t be used by AIPAC to sue all the tech companies for somehow causing October 7th, like UNRWA, and force them to shut down or suppress all talk on Palestine. People hearing about a genocide happening might radicalize them; maybe we could get away with allowing discussion, but better safe than sorry, to the banned words list it goes.
This isn’t going to end in the tech companies hiring a team of skilled moderators who understand the nuance between passion and radical intention trying to preserve a safe space for political discussion, that costs money. This is going to end up with a dictionary of banned and suppressed words.
This is going to end up with a dictionary of banned and suppressed words
Do you have some examples?
It’s already out there. For example you can’t use the words “Suicide” or “rape” or “murder” in YouTube, TikTok etc. even when the discussion is clearly about trying to educate people. Heck, you can’t even mention Onlyfans on Twitch…
Heck, you can’t even mention Onlyfans on Twitch…
They don’t like users mentioning their direct competition
YouTube feeds me so much right wing bullshit I’m constantly marking it as not interested. It’s a definite problem.
Add Fox news and Trump rallies to the list.
Don’t forget Marilyn Manson and videogames.
/s
Idk why you’re getting downvoted for an obvious joke lol
Because it’s not funny or relevant and is an attempt to join two things - satanic panic with legal culpability in social media platforms.
Not relevant?
Metal music and videos games have been blamed for mass shootings before.
And this is neither of those things. This is something much more tangible, with actual science behind it.
Yes, that exactly is the point.
How people who supposedly care for children’s safety are willing to ignore science and instead choose to raise a hue and cry about bullshit stuff they perceive as evil (or are told is evil by their favourite TV personality).
Have you got it now? Or should I explain it further?
Didn’t expect Lemmy to have people who lack reading comprehension.
This is the best summary I could come up with:
A New York state judge on Monday denied a motion to dismiss a lawsuit against several social media companies alleging the platforms contributed to the radicalization of a gunman who killed 10 people at a grocery store in Buffalo, New York in 2022, court documents show.
In her decision, the judge said that the plaintiffs may proceed with their lawsuit, which claims social media companies — like Meta, Alphabet, Reddit and 4chan — ”profit from the racist, antisemitic, and violent material displayed on their platforms to maximize user engagement,” including the time then 18-year-old Payton Gendron spent on their platforms viewing that material.
“They allege they are sophisticated products designed to be addictive to young users and they specifically directed Gendron to further platforms or postings that indoctrinated him with ‘white replacement theory’,” the decision read.
“It is far too early to rule as a matter of law that the actions, or inaction, of the social media/internet defendants through their platforms require dismissal,” said the judge.
“While we disagree with today’s decision and will be appealing, we will continue to work with law enforcement, other platforms, and civil society to share intelligence and best practices,” the statement said.
We are constantly evaluating ways to improve our detection and removal of this content, including through enhanced image-hashing systems, and we will continue to review the communities on our platform to ensure they are upholding our rules.”
The original article contains 407 words, the summary contains 229 words. Saved 44%. I’m a bot and I’m open source!
I just would like to show something about Reddit. Below is a post I made about how Reddit was literally harassing and specifically targeting me, after I let slip in a comment one day that I was sober - I had previously never made such a comment because my sobriety journey was personal, and I never wanted to define myself or pigeonhole myself as a “recovering person”.
I reported the recommended subs and ads to Reddit Admins multiple times and was told there was nothing they could do about it.
I posted a screenshot to DangerousDesign and it flew up to like 5K+ votes in like 30 minutes before admins removed it. I later reposted it to AssholeDesign where it nestled into 2K+ votes before shadow-vanishing.
Yes, Reddit and similar are definitely responsible for a lot of suffering and pain at the expense of humans in the pursuit of profit. After it blew up and front-paged, “magically” my home page didn’t have booze related ads/subs/recs any more! What a total mystery how that happened /s
The post in question, and a perfect “outing” of how Reddit continually tracks and tailors the User Experience specifically to exploit human frailty for their own gains.

Edit: Oh and the hilarious part that many people won’t let go (when shown this) is that it says it’s based on my activity in the Drunk reddit which I had never once been to, commented in, posted in, or was even aware of. So that just makes it worse.
It’s not Reddit if posts don’t get nuked or shadowbanned by literal sitewide admins.
Yes I was advised in the removal notice that it had been removed by the Reddit Administrators so that they could keep Reddit “safe”.
I guess their idea of “safe” isn’t 4+ million users going into their privacy panel and turning off exploitative sub recommendations.
Idk though I’m just a humble bird lawyer.
Yeah this happens a lot more than people think. I used to work at a hotel, and when the large sobriety group got together yearly, they changed bar hours from the normal hours, to as close to 24/7 as they could legally get. They also raised the prices on alcohol.
Back when I was on reddit, I subscribed to about 120 subreddits. Starting a couple years ago though, I noticed that my front page really only showed content for 15-20 subreddits at a time and it was heavily weighted towards recent visits and interactions.
For example, if I hadn’t visited r/3DPrinting in a couple weeks, it slowly faded from my front page until it disappeared all together. It was so bad that I ended up writing a browser automation script to visit all 120 of my subreddits at night and click the top link. This ended up giving me a more balanced front page that mixed in all of my subreddits and interests.
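For anyone curious, the script was nothing fancy; here’s a rough sketch of the idea, assuming Selenium and old.reddit’s markup (the subreddit list and the CSS selector are placeholders rather than my exact code, so verify them before relying on this):

```python
# Rough sketch of a nightly "touch every subreddit" script, assuming
# Selenium with Firefox. The selector and subreddit names are placeholders;
# old.reddit's markup can change, so treat this as illustrative only.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

SUBREDDITS = ["3Dprinting", "woodworking", "askscience"]  # ...all ~120 would go here

driver = webdriver.Firefox()  # point this at a profile that's already logged in
try:
    for sub in SUBREDDITS:
        driver.get(f"https://old.reddit.com/r/{sub}/")
        time.sleep(2)                       # don't hammer the site
        posts = driver.find_elements(By.CSS_SELECTOR, "a.title")
        if posts:
            posts[0].click()                # "interact" with the top post
            time.sleep(2)
finally:
    driver.quit()
```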
My point is these algorithms are fucking toxic. They’re focused 100% on increasing time on page and interaction with zero consideration for side effects. I would love to see social media algorithms required by law to be open source. We have a public interest in knowing how we’re being manipulated.
I used the Google News phone widget years ago and clicked on a giant asteroid article, and for whatever reason my entire feed became asteroid/meteor articles. It’s also just such a dumb way to populate feeds.
YouTube does the exact same thing.
That’s why I always use YouTube by subscriptions first, then only delve into the regular front page if there’s nothing interesting in my subscriptions.
What an excellent precedent to set, can’t possibly see how this is going to become authoritarian. Ohh, you didn’t report someone? You’re also guilty. Can’t see any problems with this.
Ohh, you didn’t report someone? You’re also guilty. Can’t see any problems with this.
That’s… not what this is about, though?
“However, plaintiffs contend the defendants’ platforms are more than just message boards,” the court document says. “They allege they are sophisticated products designed to be addictive to young users and they specifically directed Gendron to further platforms or postings that indoctrinated him with ‘white replacement theory’,” the decision read.
This isn’t about mandated reporting, it’s about funneling impressionable people towards extremist content.
And they profit from it. That’s mentioned there too, and it makes it that much more infuriating. They know exactly what they’re doing, and they do it on purpose, for money.
And at the end of the day, they’ll settle (who are the plaintiffs? Article doesn’t say) or pay some relatively inconsequential amount, and they’ll still have gained a net benefit from it. Another case of cost-of-doing-business.
Would’ve been free without the lawsuit even. Lives lost certainly aren’t factored in otherwise.
Youtube Shorts is the absolute worst for this. Just recently it’s massively trying to push transphobic BS at me, and I cannot figure out why. I dislike, report and “do not recommend this channel” every time, and it just keeps shoving more at me. I got a fucking racist church sermon this morning. it’s broken!
Don’t dislike it, just hit “do not recommend”; also don’t open comments. Honestly the best way is just to skip past as fast as you can when you see one - the less time it’s on your screen, the less the algo thinks you want it.
I never really see that on YouTube unless I’ve been on related topics recently, and it goes away pretty quickly when you don’t interact. Yes it’s shifty, but they’re working on a much better system using natural language with an LLM; it’s a complex problem though.
I am not discounting anyone’s experience. I am not saying this isn’t happening. But I don’t see it.
LiberalGunNut™ here! You would think watching gun related videos would lead me down a far-right rabbit hole. Here’s my feed ATM.
Meh. History, gun comparisons, chemistry, movies, whatever. Nothing crazy. (Don’t watch Brandon any longer, got leaning too right, too political. Video’s about his bid for a Congressional seat in Texas. Not an election conspiracy thing. Don’t care.)
If anyone can help me understand, I’m listening. Maybe I shy away from the nutcase shit so hard that YouTube “gets” me? Honestly don’t get it.
So that looks like main long form content. I’m specifically talking about youtube shorts which is Google’s version of TikTok
Imagine watching, let alone even having the option for, shorts. Get NewPipe; there is a SponsorBlock version on F-Droid. No shorts, no Google tracking, no nonsense. You don’t get comments though, but whatever. It also supports PeerTube, which is nice.
Report for what? Sure, disagree with them about their bullshit, but I don’t see why you need to report someone just because you disagree with their opinions.
Imagine watching, let alone even having the option for, shorts.
I like shorts for the most part
Report for what?
Misinformation and hate speech, mostly. They have some crazy, false pseudoscience to back their “opinions”, and they express them violently. Like it or not, these videos “promote hatred against a protected group” and are expressly against YouTube’s TOS. Reporting them is 100% appropriate.
You can make any common practice and pillar of capitalism sound bad by using the words “impressionable” and “extremist”.
If we remove those, it becomes: funnelling a market towards the further consumption of your product, i.e. marketing.
And yes, of course the platforms are designed to be addictive and are effective at indoctrination, but why is that only a problem for certain ideologies? Shouldn’t we be stopping all ideologies from practicing indoctrination of impressionable people? Shouldn’t we be guiding people to as many viewpoints as possible, to teach them to think, not to swallow someone else’s ideas and spew them back out?
I blame Henry Ford for this whole clusterfuck: he lobbied the education system to manufacture an obedient consumer market and working class that doesn’t think for itself but simply swallows what it’s told. The education system is the problem; anything else is treating the symptoms, not the disease.
If we remove those, it becomes: funnelling a market towards the further consumption of your product, i.e. marketing.
And if a company’s marketing campaign is found to be indirectly responsible for a kid shooting up a grocery store, I’m sure we’ll be seeing a repeat of this with that company being the one with a court case being brought against them, what even is this argument?
That means that the government is injecting itself on deciding what “extremist” is. I do not trust them to do that wisely. And even if I did trust them, it is immoral for the state to start categorizing and policing ideologies.
Do you understand you’re arguing for violent groups instigating a race war?
Like, even if you’re ok with white people doing it, you’re also saying ISIS, MS13, any fucking group can’t be labeled violent extremists…
Some “ideologies” need to be fucking policed
anarchists have had to deal with this for over a century. the state can go fuck itself.
Some “ideologies” need to be fucking policed
Someone wants to start with yours, and they have more support than you know. Be careful what you wish for.
Guess we shouldn’t ever do anything about anything, ever.
Big difference between policing actions and policing thoughts. Declaring some thoughts as verboten and subject to punishment or liability is bad.
It’s insane you’re being downvoted by people who would be the first ones silenced.
You really think they’re going to use this for homophobes and racists instead of anyone calling for positive social change?
Did you not see any of history?
You’re missing the point: violence should absolutely be policed. Words, ideas, ideology? Hell no. Let ISIS, MS13, the communists, the nazis, the vegans, etc etc etc say what they want. They are all extremists by some definition; let them discuss, let them argue, and the second someone does something violent, lock ’em up for the rest of their lives. Simple.
What you are suggesting is the policing of ideology to prevent future crime. There is an entire book about where that leads; said book simply calls this concept thoughtcrime.
That is generally what governments do. They write laws that say you can do this but not that; if you do that, it’s illegal and you will be convicted. Otherwise you wouldn’t be able to police things like the Mafia and drug cartels. Even in the US, the freedom of speech to conspire to commit crimes is criminalised. There is no difference between that and politically motivated ‘extremists’ who conspire to commit crimes. The ideology is not criminalised; the acts that groups plan or conduct are. You are totally fine saying “I don’t like X group.”
What it’s not ok to say is “Let’s go out and kill people from X group.”
The problem is that social media sites use automated processes to decide which messages to put in front of users in fundamentally the same way that a newspaper publisher decides which letters to the editor to put in their newspaper.
Somehow, though, tech companies have argued that because there is no limit on how many posts they can communicate, and hence theoretically they aren’t deciding what they put in and what they leave out, their act of putting some posts at the top of people’s lists so they are seen is somehow different to the act of the newspaper publisher including a particular letter or not… but the outcome is the same: the letter or post is seen by people, or not.
Tech companies argue they are just a communication network, but I never saw a telephone, postal or other network that decided which order you got your phone calls, letters or SMS messages. They just deliver what is sent, in the order it was sent.
commercial social media networks are publishers with editorial control - editorial control is not only inclusion/exclusion but also prominence
There is a fundamental difference with Lemmy or Mastodon, in that those platforms (aside from any moderation by individual server admins) don’t promote or demote any post, and therefore don’t have any role in whether a user sees a post or not.
The government is already the one who makes that decision. The only thing new here is a line being drawn with regards to social media’s push towards addiction and echo-chamberism.
Umm… isn’t the government, or rather the judiciary, already deciding what extremist is?
How specifically would this be different? I can understand the problems this causes for the platforms, but the government injecting its decisions is what you focus on?
Not to forget the many other places they inject themselves… one could say your daily life, because… careful now… you live in a country with a government, whaaat?
deleted by creator
You could make a good point with better spelling, grammar, and word choice.
Yes I could. I could spend the extra 30 seconds fixing it, or I could not bother and still have my point comprehensible.
unpossible
Good.
There should be no quarter for fascists, violent racists, or their enablers.
Conspiracy for cash isn’t a free speech issue.
for fascists, violent racists, or their enablers.
Take a good long look in the mirror (and a dictionary printed before 2005) before you say things like this.
Fuck off, symp.
Glow harder.
Are the platforms guilty or are the users that supplied the radicalized content guilty? Last I checked, most of the content on YouTube, Facebook and Reddit is not generated by the companies themselves.
most of the content on YouTube, Facebook and Reddit is not generated by the companies themselves
It’s their job to block that content before it reaches an audience, but since that’s how they make their money, they don’t or won’t do that. The monetization of evil is the problem, and those platforms are the biggest perpetrators.
It’s their job to block that content before it reaches an audience
The problem is (or isn’t, depending on your perspective) that it is NOT their job. Facebook, YouTube, and Reddit are private companies that have the right to develop and enforce their own community guidelines or terms of service, which dictate what type of content can be posted on their platforms. This includes blocking or removing content they deem harmful, objectionable, or radicalizing. While these platforms are protected under Section 230 of the Communications Decency Act (CDA), which provides immunity from liability for user-generated content, this protection does not extend to knowingly facilitating or encouraging illegal activities.
There isn’t specific U.S. legislation requiring social media platforms like Facebook, YouTube, and Reddit to block radicalizing content. However, many countries, including the United Kingdom and Australia, have enacted laws that hold platforms accountable if they fail to remove extremist content. In the United States, there have been proposals to amend or repeal Section 230 of CDA to make tech companies more responsible for moderating the content on their sites.
Repealing Section 230 would actually have the opposite effect, and lead to less moderation as it would incentivize not knowing about the content in the first place.
I can’t see that. Not knowing about it would be an impossible position to maintain, since you would be getting reports. Now you might say they will disable reports, which they might try, but they have to do business with other companies who will require that they do. Apple isn’t going to let your social media app on if people are yelling at Apple about the child porn and bomb threats on it, AWS will kick you off as well, and even Cloudflare might consider you not worth the legal risk. This has already happened multiple times, even with Section 230 providing a lot of immunity to these companies. Without that immunity they would be even more likely to block.
The argument could be made (and probably will be) that they promote those activities by allowing their algorithms to promote that content. It’s a dangerous precedent to set, but not unlikely given the recent rulings.
Yeah, I have made that argument before. By pushing content via recommended lists and autoplay, YouTube becomes a publisher and needs to be held accountable.
Not how it works. Also your use of “becomes a publisher” suggests to me that you are misinformed - as so many people are - that there is some sort of a publisher vs platform distinction in Section 230. There is not.
Oh no i am aware of that distinction. I just think it needs to go away and be replaced.
Currently Section 230 treats websites as not responsible for user-generated content. For example, if I make a video defaming someone, I get sued, but YouTube is in the clear. But if The New York Times publishes an article defaming someone, they get sued, not just the writer.
Why? Because the NYT published that article, but YouTube just hosts it. This publisher/platform distinction is not stated in Section 230, but it is part of US law.
This is frankly bizarre. I don’t understand how you can even write that and reasonably think that the platform hosting the hypothetical defamation should have any liability there. Like this is actually a braindead take.
I gave up reporting on major sites where I saw abuse. Stuff that, if you said it in public, witnessed by others, you’d be investigated for. Twitter was also bad for responding to reports with “this doesn’t break our rules” when a) it clearly did and b) it probably broke a few laws too.
I gave up after I was told that people DMing me photographs of people committing suicide was not harassment but me referencing Yo La Tengo’s album “I Am Not Afraid Of You And I Will Beat Your Ass” was worthy of a 30 day ban
I remember one time somebody tweeted asking what the third track off Whole Lotta Red was, and I watched at least 50 people get perma’d before my eyes.
The third track is named Stop Breathing.
I TAKE MY SHIRT OFF AND ALL THE HOES STOP
BREATHIN’ (accessing their Twitter accounts) WHEH? 🧛♂️🦇🩸 sLatt! +*
It’s ALWAYS someone else’s fault.