posted by janrinok on Wednesday September 29 2021, @02:06PM

From: Techdirt

Content moderation is a can of worms. For Internet infrastructure intermediaries, it’s a can of worms that they are particularly poorly positioned to tackle. And yet Internet infrastructure elements are increasingly being called on to moderate content—content they may have very little insight into as it passes through their systems.

The vast majority of all content moderation happens on the “top” layer of the Internet—social media platforms and websites, the places online that are most visible to an average user. If a post violates a platform’s terms of service, the post is usually blocked or taken down. If a user continues to post content that violates a platform’s terms, then the user’s account is often suspended. These types of content moderation practices are increasingly understood by average Internet users.

Less often discussed or understood are the services provided by actors in the Internet ecosystem that support, and sit beneath, those upper content layers.

Many of these companies host content, supply cloud services, register domain names, provide web security, and offer many more of what could be described as the plumbing services of the Internet. But instead of water and sewage, the Internet deals in digital information. In theory, these “infrastructure intermediaries” could moderate content, but for reasons of convention, legitimacy, and practicality they don’t usually do it on purpose.

However, some notable recent exceptions may be setting a precedent.

Amazon Web Services removed Wikileaks from their system in 2010. Cloudflare kicked off the Daily Stormer. An Italian court ordered Cloudflare to remove a copyright infringing site. Amazon suspended hosting for Parler.

What does all this mean? Infrastructure may have the means to perform “content moderation,” but it is critical to consider the effects of this trend to prevent harming the Internet’s underlying architecture. In principle, Internet service providers, registries, cloud providers and other infrastructure intermediaries should be agnostic to the content which passes over their systems.

[...] Policymakers must consider the unintended impacts of content moderation proposals on infrastructure intermediaries. Legislating without the due diligence to understand the impact on the unique role of these intermediaries could be detrimental to the success of the Internet, and to the growing portion of the global economy that relies on Internet infrastructure for daily life and work.

[...] Conducting impact assessments prior to regulation is one way to mitigate the risks. The Internet Society created the Internet Impact Assessment Toolkit to help policymakers and communities assess the implications of change—whether those are policy interventions or new technologies.

Policy changes that impact the different layers of the Internet are inevitable. But we must all ensure that these policies are well crafted and properly scoped to keep the Internet working and successful for everyone.

Austin Ruckstuhl is a Project & Policy Advisor at the Internet Society where he works on Internet impact assessments, defending encryption and supporting Community Networks as access solutions.

Should online content be controlled? If yes, is there a better way to censor online content, and who should have the authority to do so?


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by Thexalon on Thursday September 30 2021, @02:24AM (3 children)

    by Thexalon (636) on Thursday September 30 2021, @02:24AM (#1183011)

    I'm going to present a somewhat more concrete example of this: Alice, who is a minor, goes onto social media platform X and says "Tomorrow at school, everybody throw rocks at Bob." A bunch of kids all decide to actually throw rocks at Bob the next day, and Bob receives significant injuries because some of those rocks were heavy.

    The two legal questions are:
    1. Who should be charged with assault (possibly a felony, due to the use of a weapon)?
    - I think we can all easily agree that everybody who threw rocks at Bob assaulted him.
    - Alice is legally in an interesting spot as far as her criminal intent goes: A prosecutor could argue that she solicited the assault or incited the violence, but on the other hand she might successfully argue that she was using a bit of hyperbole and didn't expect anybody to actually throw rocks.
    - As for X, Inc's criminal liability: criminal liability of this sort requires a showing of intent rather than negligence, so you'd have to prove that somebody working for X actually knew about the contents of the post and made sure the rock-throwers saw it. They're probably off the hook for the assault charge.

    2. Who should be civilly liable for any ensuing medical bills and emotional distress?
    - Once again, the people who threw the rocks should be in trouble, no question, although they're probably kids too, so they don't have much by way of assets.
    - It's easier to go after Alice civilly than criminally: someone suing her is in a position to argue that she should have known her post could lead to people actually throwing rocks, and thus that she was negligent.
    - The way the law currently works, X, Inc is off the hook for this. The reason probably has to do with the fact that holding it liable would demand either a human review of every post, or an algorithm sophisticated enough to know and appreciate the difference between "Throw rocks at Bob" (as a threat to a real person named Bob), "Bob, throw me the rock" (as part of a story about basketball), "Throw rocks at Bob" (as part of a video game walkthrough), and this post, which uses the threat to a fictional Bob as a hypothetical example to explain a point. And to make things more complicated, note that none of the words I used, on their own, would really trigger an algorithm's attention: lots of people are named Bob, there are lots of perfectly legitimate non-criminal reasons to throw things, and there are lots of non-criminal things you can say about rocks (the toy filter sketched just below illustrates the problem). But on the other hand, X, Inc probably isn't as judgment-proof as the rest of them, Bob should be able to collect from somebody, and it's true that their software, ideally, shouldn't have shown that post to the rock-throwers.
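
    To make that concrete, here is a minimal sketch of the kind of naive keyword filter described above. The term list, the threshold, and the example posts are hypothetical and chosen only to illustrate the ambiguity; no real platform's rules are being described:

        # Hypothetical, deliberately naive keyword filter (illustration only).
        SUSPECT_TERMS = {"throw", "rock", "rocks", "bob"}

        def naive_flag(post: str, min_hits: int = 3) -> bool:
            """Flag a post if it contains enough 'suspect' words, ignoring all context."""
            words = {w.strip(".,!?\"'").lower() for w in post.split()}
            return len(words & SUSPECT_TERMS) >= min_hits

        examples = [
            "Tomorrow at school, everybody throw rocks at Bob.",        # real threat
            "Bob, throw me the rock!",                                  # basketball story
            "Throw rocks at Bob to distract him, then grab the key.",   # game walkthrough
        ]

        for post in examples:
            print(naive_flag(post), "-", post)
        # Prints True for all three: the words alone say nothing about intent,
        # so separating these cases needs context a filter like this never sees.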

    So I'm going with: it's necessarily complicated, there's no easy blanket answer, and reasonable people can disagree on what the right answer is, because it's hard for humans, much less for an algorithm designed in advance, to get things right in even this example.

    And I'm also not going to forget that the push to make the platform liable for Alice's post came right about the time that some prominent politicians noticed that Alice could post on social media that said politicians sucked, and those politicians were frustrated that they couldn't sue either Alice or the social media platform for that comment with a snowball's chance of winning. I hope we can agree that criticizing political officeholders, especially for their official or public acts, without fear of criminal or civil liability is a fundamental right in any country that could reasonably consider itself "free".

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
  • (Score: 2) by slinches on Thursday September 30 2021, @08:59PM (2 children)

    by slinches (5049) on Thursday September 30 2021, @08:59PM (#1183194)


    I think it's pretty simple. If the site has an algorithm sophisticated enough (or performs human review) to distinguish between different messages and make editorial decisions about which content to promote or suppress, then it should also be responsible for detecting and removing content that is harmful. However, if it either isn't capable of that or chooses to provide an open space for public discourse (i.e. doesn't exercise editorial control over posted content), it should be protected from liability.

    • (Score: 2) by Thexalon on Thursday September 30 2021, @09:51PM (1 child)

      by Thexalon (636) on Thursday September 30 2021, @09:51PM (#1183201)

      That's not simple at all: You now have to evaluate, in a court of law, exactly how sophisticated the algorithm is, and in order to do that you have to force the disclosure of trade secrets. And then there's going to be arguing over whether the company was negligent because it could have made the algorithm smart enough to figure this out but didn't, because it didn't care about Bob or other victims.

      And of course no site is going to perform that human review unless legally compelled to, since it's expensive and time-consuming.

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 2) by slinches on Friday October 01 2021, @12:40AM

        by slinches (5049) on Friday October 01 2021, @12:40AM (#1183247)

        All that needs to be evaluated is whether there's editorial control; the method is not important. If they pick and choose content, then they should own the liability for all of it.