The Supreme Court Battle for Section 230 Has Begun
The future of recommendation algorithms could be at stake:
The first shots have been fired in a Supreme Court showdown over web platforms, terrorism, and Section 230 of the Communications Decency Act. Today, the Supreme Court will hear oral arguments in Gonzalez v. Google — one of two lawsuits that are likely to shape the future of the internet.
Gonzalez v. Google and Twitter v. Taamneh are a pair of lawsuits blaming platforms for facilitating Islamic State attacks. The court's final ruling on these cases will determine web services' liability for hosting illegal activity, particularly if they promote it with algorithmic recommendations.
The Supreme Court took up both cases in October: one at the request of a family that's suing Google and the other as a preemptive defense filed by Twitter. They're two of the latest in a long string of suits alleging that websites are legally responsible for failing to remove terrorist propaganda. The vast majority of these suits have failed, often thanks to Section 230, which shields companies from liability for hosting illegal content. But the two petitions respond to a more mixed 2021 opinion from the Ninth Circuit Court of Appeals, which threw out two terrorism-related suits but allowed a third to proceed.
Gonzalez v. Google claims Google knowingly hosted Islamic State propaganda that allegedly led to a 2015 attack in Paris, thus providing material support to an illegal terrorist group. But while the case is nominally about terrorist content, its core question is whether amplifying an illegal post makes companies responsible for it. In addition to simply not banning Islamic State videos, the plaintiffs — the estate of a woman who died in the attack — say that YouTube recommended these videos automatically to others, spreading them across the platform.
Google has asserted that it's protected by Section 230, but the plaintiffs argue that the law's boundaries are undecided. "[Section 230] does not contain specific language regarding recommendations, and does not provide a distinct legal standard governing recommendations," they said in yesterday's legal filing. They're asking the Supreme Court to find that some recommendation systems are a kind of direct publication — as well as some pieces of metadata, including hyperlinks generated for an uploaded video and notifications alerting people to that video. By extension, they hope that could make services liable for promoting it.
I Changed My Mind About Section 230:
The man who wrote the book on the '26 words that created the internet' walks us through what we need to know about the online debate to end all online debates.
As part of my job, I cover what goes on in online communities across the internet, which involves some pretty horrible content. You have high-profile people spouting misinformation about antidepressants, covid-19, and "herbal abortion teas" that in some cases are literal poisons. There's also a lot of hate—hate towards the Jewish community, hate towards experts who attempt to correct misinformation, and hate towards someone who literally broke their back in a horrible accident. And that's only the tip of the iceberg.
It seemed crazy to me that platforms could get away with hosting content so vile, and in many cases dangerous. It's not like they can't legally do something about it. Under Section 230, a provision in the Communications Decency Act of 1996, online platforms are allowed to moderate objectionable content. Most importantly, though, Section 230 gives platforms a shield that frees them from legal liability for a lot of content that users post.
[...] Despite my strong feelings about how Section 230 has contributed to the internet's toxic landscape, today I'm here to tell you that I don't think Section 230 should be repealed. I came to this conclusion after speaking with Jeff Kosseff, a cybersecurity professor at the U.S. Naval Academy and author of "The Twenty-Six Words That Created the Internet," which analyzes Section 230 in-depth and presents the costs and benefits of protecting online platforms.
Kosseff is widely considered one of the preeminent Section 230 experts out there. When I shared my concerns about Section 230 and the state of the internet, he told me he agreed that "there are substantial harms out there" that need to be addressed. However, he doesn't think Section 230 is responsible for most of our complaints.
Overall, speaking with Kosseff helped me separate Section 230 from the angry public discourse on both sides of the debate.
That doesn't mean I think Section 230 is perfect. Even Kosseff is in favor of modest amendments. I've come to think of the internet like a house, with Section 230 as its foundation. It's a good base, but the house also needs things like a frame and a roof. It needs to be cared for and maintained, repaired, and even modified over time—or else it all comes crashing down.
Read the linked article for Kosseff's views.
(Score: -1, Troll) by Anonymous Coward on Wednesday February 22, @08:30AM (3 children)
Bla bla bla wake me up when they come to take away my rights.
(Score: 0) by Anonymous Coward on Wednesday February 22, @08:39AM
Go back to sleep. The money is on SCOTUS leaving Section 230 alone for now.
(Score: 2) by DeathMonkey on Wednesday February 22, @03:57PM (1 child)
It's just your right to freedom of speech on the internet.
But we all know you're just pretending to care about that one!
(Score: 2) by mcgrew on Thursday February 23, @09:03PM
You both misunderstand. Section 230 lets a provider host the speech of others without moderating it. However, Farsebook, YouToo and the other giants' algorithms are steering users to terrorist sites. THEY'RE STEERING THEM. If I linked to one of ISIS's sites, well, I'd lose my hosting at the very least, and could even be incarcerated.
But I'm not a billion-dollar corporation with world-class lawyers who can let me get away with shooting someone in the head on forty-second street (did I get that meme right?) or selling billions of dollars in deadly drugs to junkies like the Sacklers did.
Nobody will go to prison for this. They have too much money. I wonder why the law doesn't hold the railroad's CEO and board criminally responsible for the crimes that caused the Ohio catastrophe?
Because we are no longer a democracy. The Supreme Court changed America into a plutocracy with the Citizens United ruling. The rich are no longer accountable for anything except pissing off a richer person.
Carbon, The only element in the known universe to ever gain sentience
(Score: 3, Interesting) by driverless on Wednesday February 22, @10:01AM (6 children)
While I welcomed Section 230 when it came into being, it has served its purpose, which was to protect a nascent Internet from being crushed by anyone who felt like it, which at the time meant the government and the big corporations that felt threatened by it. That was over 25 years ago. The Internet is no longer a fledgling thing; most of it is now controlled by big corporations, yet it is still run by the original wild-west rules, or lack thereof, that it had a quarter of a century ago. It served its purpose; now the platforms need to play by the same rules everyone else has to follow.
(Score: 3, Insightful) by rigrig on Wednesday February 22, @10:33AM (5 children)
The thing is that BigCorps could actually hire moderation teams to check everything. Sure, it would cost them quite a bit, but on the "upside" it would mean all their small competitors go away, as there is no way e.g. SoylentNews or some private blog could moderate every post 24/7.
I feel the Gonzalez family has a point though: recommending something is not the same as merely hosting it.
No one remembers the singer.
(Score: 5, Insightful) by Ox0000 on Wednesday February 22, @10:53AM (4 children)
There's a difference between being responsible for what others post (the content) and curating what is served to you (the algorithm).
The latter is not discussed enough and, I think, at the core of the matter. It's ok to provide some level of protection to platforms regarding what people post, because People (yours truly right here included) are dumb. After all, you cannot block people from saying things in public that you do not wish to hear. And that's the equivalent here: what people post is "free speech".
It becomes a different matter when you start curating content and pushing that onto the eyeballs you live off; that's when you become responsible for the content you push. After all, it's a clear indication that you endorse the content enough to actively go out of your way - via an algorithm, but actively nonetheless, since that algorithm is your user agent - and push it onto people.
At that point, you become responsible, accountable, and liable for the effects that content has on those you pushed it onto.
(Score: 5, Insightful) by PiMuNu on Wednesday February 22, @01:15PM (3 children)
Does that include content moderation here on SN? If many folks mod up a nasty comment, should SN be shut down?
(Score: 4, Offtopic) by Ox0000 on Wednesday February 22, @02:12PM (1 child)
Content moderation is different; that is not "serving you content specifically curated for you individually". So I don't think that's the same.
If the moderation is truly what drives why something is presented to you, and not the profiling that leads to "how we understand what will keep you engaged", then it is different, because then it is not an algorithm by SN that pushed content to me; it is "The Collective" that does. The content moderation affects the posted content in largely the same way for every individual. (I am aware of some toggles that accounts have to make even posts rated "+5 Informative" completely disappear and make "-5 Troll" posts appear, and that's also part of the mental model I'm working with here.)
The liability, responsibility, and accountability come in when a system does work to tailor content to you, to specifically pick which content you will see and which you won't. It's less about what's inside the house (the comments) and more about what lures me into the house (the article's title+summary).
So I guess in a way, the Reddit algorithm that bases itself on up-/down-votes would equally be exempted from this, unless there are aspects of that algorithm that ignore the up/down votes and bypass those inputs to serve you specific content specifically because they know it will "engage" you.
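To make that distinction concrete, here's a minimal Python sketch of the two approaches; the field names, the engagement profile, and the weights are all made up for illustration, not taken from any real platform:

from dataclasses import dataclass, field

@dataclass
class Post:
    title: str
    votes: int                        # net up/down votes from the community
    topics: set = field(default_factory=set)

def rank_by_votes(posts):
    """'The Collective' decides: everyone is served the same ordering."""
    return sorted(posts, key=lambda p: p.votes, reverse=True)

def recommend_for_user(posts, engagement_profile):
    """Per-user curation: the ordering depends on what is predicted to keep
    *this* user engaged, regardless of what the community voted up."""
    def predicted_engagement(post):
        return sum(engagement_profile.get(topic, 0.0) for topic in post.topics)
    return sorted(posts, key=predicted_engagement, reverse=True)

posts = [
    Post("Thoughtful analysis", votes=120, topics={"law", "policy"}),
    Post("Outrage bait", votes=-3, topics={"outrage"}),
]

# The community's ranking, identical for every reader:
print([p.title for p in rank_by_votes(posts)])

# A reader whose history says outrage keeps them scrolling sees the opposite:
print([p.title for p in recommend_for_user(posts, {"outrage": 0.9, "law": 0.1})])

Same posts, very different orderings, and only the second ordering depends on who is looking.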
I appreciate the question, because it made me think a bit deeper about it! Thank you!
(Score: 2) by DeathMonkey on Wednesday February 22, @04:00PM
CDA 230 doesn't say anything about algorithms one way or the other so it's offtopic in this discussion.
(Score: 2) by rigrig on Thursday February 23, @12:38PM
I think the difference lies in "These posts received the most votes" vs "We recommend/suggest you read these posts", regardless of how you decided to recommend those posts.
No one remembers the singer.
(Score: 3, Insightful) by Ox0000 on Wednesday February 22, @10:47AM (5 children)
Algorithms are created by humans. If I create an algorithm that does you harm, I am accountable for that, regardless of the complexity of the algorithm. Whether it has the complexity of "Hello world!" or of the most complex LLM currently around (they're all the rage these days) doesn't matter: you are accountable for your creation.
Throwing your hands up in the air and saying "yeah, we created this algorithm/model, but _we_ don't know why it does what it does, so we're not accountable" screams of incompetence, deflection, disingenuousness, and lack of understanding. It is the defense of ignorance and the equivalent of "no one could have predicted", except of course that plenty of folks did predict it and were gaslit by the creators, who insisted those things would never happen.
SCOTUS should not allow the ducking of responsibility just because "it's the algorithm, sir, not me". Allowing this deflection would set us on a path where some entities are accountable (people) and others can operate with impunity (algorithms), even though the latter operate on the orders of a human and are created by that human with all their desires baked in, a human who, by the power of that transitive relationship, has now received immunity from their actions under the law.
It all boils down to one question: who is responsible for the algorithm? Declaring no-one is, or worse, victim-blaming by ruling "well, they went down that rabbit hole themselves and have no-one else to blame for this" is antithetical to being a nation of laws.
In other words: it would be un-American to leave this immunity in place. If you curate something, you are responsible for the curated content, regardless of whether that curation happened by your hand, or by your user agent (the algorithm).
Maybe if 'platforms' were fully accountable for their actions, and the effects of their algorithms, the internet would be a bit nicer. We would probably also move (slightly?) back from centralization to decentralization, due to the need to distribute the risk associated with that accountability (or, more accurately, due to corporations' aversion to risk). All of which would be nice.
(Score: 2) by Ox0000 on Wednesday February 22, @10:56AM (3 children)
Dangit... This part
obviously should have been
When do I get my edit button?
(Score: 2) by jelizondo on Wednesday February 22, @04:04PM (2 children)
No edit button for you!
Learn to think before speaking...
(Score: 3, Funny) by jelizondo on Wednesday February 22, @04:05PM (1 child)
I mean writing...
(Score: 3, Funny) by jelizondo on Wednesday February 22, @04:05PM
I mean hitting the Submit button!
There, see how easy it is?
(Score: -1, Flamebait) by Anonymous Coward on Wednesday February 22, @02:15PM
If the content itself is not illegal where it's hosted, I do not care. These platforms comply with existing laws, DMCA requests, etc. ISIS glorification video doesn't contain anything illegal but brainwashes your son or daughter? Too bad, so sad. If people don't like how the platform is run, they can be noisy about it or leave.
(Score: 2) by NotSanguine on Wednesday February 22, @01:04PM (1 child)
This article is quite timely.
Oral arguments before the Supreme Court in Twitter v. Taamneh [c-span.org] will begin in about two hours (22 February 2023 1500 GMT).
I'll be interested to see how this plays out.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 2) by janrinok on Wednesday February 22, @01:12PM
Well, we do try to keep up with events :-)
(Score: 5, Informative) by Nofsck Ingcloo on Wednesday February 22, @01:48PM (1 child)
From the summary: "...often thanks to Section 230, which shields companies from liability for hosting illegal content." (Emphasis mine)
I think the emphasized bit is incorrect.
47 U.S. Code § 230 (e) Effect on other laws, (1) No effect on criminal law: "Nothing in this section shall be construed to impair the enforcement of section 223 or 231 of this title, chapter 71 (relating to obscenity) or 110 (relating to sexual exploitation of children) of title 18, or any other Federal criminal statute."
(again, emphasis mine)
So if it is illegal to assist or promote terrorist activity, then the platform is breaking the law irrespective of whether the decision to do so was made by an employee or by some (human or non-human) agent of the platform.
1984 was not written as an instruction manual.
(Score: 0) by Anonymous Coward on Wednesday February 22, @02:20PM
Terrorists are tolerated on these platforms so that governments can keep a closer eye on them.
It's not clear that even a beheading video should be illegal.