posted by janrinok on Wednesday September 29 2021, @02:06PM

From: Techdirt

Content moderation is a can of worms. For Internet infrastructure intermediaries, it’s a can of worms that they are particularly poorly positioned to tackle. And yet Internet infrastructure elements are increasingly being called on to moderate content—content they may have very little insight into as it passes through their systems.

The vast majority of all content moderation happens on the “top” layer of the internet—such as social media and websites, places online that are the most visible to an average user. If a post violates a platform’s terms of service, the post is usually blocked or taken down. If a user continues to post content that violates a platform’s terms, then the user’s account is often suspended. These types of content moderation practices are increasingly understood by average Internet users.

Less often discussed or understood are the services provided by actors in the Internet ecosystem that both support, and sit beneath, the upper content layers of the Internet.

Many of these companies host content, supply cloud services, register domain names, provide web security, and offer many other features of what could be described as the plumbing of the Internet. But instead of water and sewage, the Internet deals in digital information. In theory, these “infrastructure intermediaries” could moderate content, but for reasons of convention, legitimacy, and practicality they rarely do so deliberately.

However, some notable recent exceptions may be setting a precedent.

Amazon Web Services removed WikiLeaks from its systems in 2010. Cloudflare kicked off the Daily Stormer. An Italian court ordered Cloudflare to remove a copyright-infringing site. Amazon suspended hosting for Parler.

What does all this mean? Infrastructure providers may have the means to perform “content moderation,” but it is critical to consider the effects of this trend so that it does not harm the Internet’s underlying architecture. In principle, Internet service providers, registries, cloud providers and other infrastructure intermediaries should be agnostic to the content that passes over their systems.

[...] Policymakers must consider the unintended impacts of content moderation proposals on infrastructure intermediaries. Legislating without due diligence to understand the impact on the unique role of these intermediaries could be detrimental to the success of the Internet and to the growing portion of the global economy that relies on Internet infrastructure for daily life and work.

[...] Conducting impact assessments prior to regulation is one way to mitigate the risks. The Internet Society created the Internet Impact Assessment Toolkit to help policymakers and communities assess the implications of change—whether those are policy interventions or new technologies.

Policy changes that impact the different layers of the Internet are inevitable. But we must all ensure that these policies are well crafted and properly scoped to keep the Internet working and successful for everyone.

Austin Ruckstuhl is a Project & Policy Advisor at the Internet Society where he works on Internet impact assessments, defending encryption and supporting Community Networks as access solutions.

Should online content be controlled? If so, is there a better way to censor online content, and who should have the authority to do so?


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Insightful) by DeathMonkey on Wednesday September 29 2021, @03:15PM (29 children)

    by DeathMonkey (1380) on Wednesday September 29 2021, @03:15PM (#1182765) Journal

    Us on the intellectually superior left know the difference between common carriers and content providers.

  • (Score: 5, Insightful) by Tork on Wednesday September 29 2021, @03:31PM (13 children)

    by Tork (3914) Subscriber Badge on Wednesday September 29 2021, @03:31PM (#1182778)
    It's also worth mentioning that whenever they talk about being 'de-platformed' they ALWAYS omit the reason* why this has suddenly popped up even though all the players have been around for over a decade. That reason is ALSO why they'll never get what they claim they want, both because it won't solve the actual problems and because they won't like the taste of their own medicine.
    * Violence/Death.
    --
    🏳️‍🌈 Proud Ally 🏳️‍🌈
    • (Score: 0) by Anonymous Coward on Wednesday September 29 2021, @04:50PM (7 children)

      by Anonymous Coward on Wednesday September 29 2021, @04:50PM (#1182812)
      Are violence and death alone good reasons to kick someone off? We make R-rated movies all the time.
      • (Score: 2) by Tork on Wednesday September 29 2021, @04:56PM (5 children)

        by Tork (3914) Subscriber Badge on Wednesday September 29 2021, @04:56PM (#1182815)
        Umm... R-rated movies aren't made with real violence... "Liam Neeson passed away today during the filming of A-Team 2, we shall see how much we'll mourn his passing after the box-office figures come in."
        --
        🏳️‍🌈 Proud Ally 🏳️‍🌈
        • (Score: 2, Touché) by khallow on Wednesday September 29 2021, @05:21PM (4 children)

          by khallow (3766) Subscriber Badge on Wednesday September 29 2021, @05:21PM (#1182831) Journal

          R-rated movies aren't made with real violence...

          Neither are internets.

          • (Score: 2) by Tork on Wednesday September 29 2021, @05:31PM

            by Tork (3914) Subscriber Badge on Wednesday September 29 2021, @05:31PM (#1182837)
            Mmm hmm. Welp if it's all virtual then there's nothing to bitch about.
            --
            🏳️‍🌈 Proud Ally 🏳️‍🌈
          • (Score: 0) by Anonymous Coward on Wednesday September 29 2021, @09:56PM (2 children)

            by Anonymous Coward on Wednesday September 29 2021, @09:56PM (#1182941)

            r/watchpeopledie among others would like to have a word with you.

            • (Score: 0) by Anonymous Coward on Wednesday September 29 2021, @10:05PM (1 child)

              by Anonymous Coward on Wednesday September 29 2021, @10:05PM (#1182943)

              I'd love to, but they got banned by Karens.

              • (Score: -1, Flamebait) by Mockingbird on Thursday September 30 2021, @08:06AM

                by Mockingbird (15239) on Thursday September 30 2021, @08:06AM (#1183063) Journal

                Karens are Republicans, concerned about People of Color showing up in "their" neighborhood. We would like to talk to your Karen manager.

      • (Score: 2) by shortscreen on Wednesday September 29 2021, @06:56PM

        by shortscreen (2252) on Wednesday September 29 2021, @06:56PM (#1182874) Journal

        If it was a good reason then the neocon warmonger crowd would also be kicked off.

    • (Score: 0, Insightful) by Anonymous Coward on Wednesday September 29 2021, @07:25PM (4 children)

      by Anonymous Coward on Wednesday September 29 2021, @07:25PM (#1182886)

      Nope. Daily Stormer was banished from everything the progs could lay hands on for making fun of Heather Heyer dying from being too fat for the paramedics to resuscitate.

      Gab and Parler were deplatformed for refusing to give the ADL the banhammer like everyone else had.

      YouTube, today, banned all further debate about the safety and effectiveness of all vaccines for all time. It will of course get extended to all discussion of all "approved" drugs. So regardless of what you think about any particular one, there WILL be another bad one, but next time nobody will be allowed to raise the alarm.

      You on the left are the censors. You ban people for disagreeing with you. You are the bad guys. Own it.

      • (Score: 3, Touché) by DeathMonkey on Wednesday September 29 2021, @08:03PM (2 children)

        by DeathMonkey (1380) on Wednesday September 29 2021, @08:03PM (#1182902) Journal

        Daily Stormer, Gab and Parler are all online and active right now.

        How is that possible if the evil left banished them from the internet?

        • (Score: 1, Insightful) by Anonymous Coward on Wednesday September 29 2021, @08:32PM

          by Anonymous Coward on Wednesday September 29 2021, @08:32PM (#1182907)

Daily Stormer never got their domain name back; they rotate through domains on a regular basis as they continue to be deplatformed.
Gab built their own infrastructure from the ground up: fiber, an ASN, bare metal in racks.
          Parler bent the knee to the ADL and was allowed back online.

        • (Score: 0) by Anonymous Coward on Wednesday September 29 2021, @08:50PM

          by Anonymous Coward on Wednesday September 29 2021, @08:50PM (#1182913)

          "Daily Stormer was banished from everything the progs could lay hands on..." != "...the evil left banished them from the internet?"

          Reed mor bettuh

      • (Score: 2) by Tork on Wednesday September 29 2021, @09:18PM

        by Tork (3914) Subscriber Badge on Wednesday September 29 2021, @09:18PM (#1182926)

        It's also worth mentioning that whenever they talk about being 'de-platformed' they ALWAYS omit the reason...

        Gab and Parler were deplatformed for refusing to give the ADL the banhammer like everyone else had.

        Daily Stormer was banished from everything the progs could lay hands on for making fun of Heather Heyer...

        YouTube, today, banned all further debate about the safety and effectiveness of all vaccines for all time.

        0 in 4. Thanks for illustrating my point.

        --
        🏳️‍🌈 Proud Ally 🏳️‍🌈
  • (Score: 2, Troll) by NPC-131072 on Wednesday September 29 2021, @03:59PM (8 children)

    by NPC-131072 (7144) on Wednesday September 29 2021, @03:59PM (#1182796) Journal

    And section 230. [reclaimthenet.org]

    • (Score: 2) by DeathMonkey on Wednesday September 29 2021, @04:17PM (7 children)

      by DeathMonkey (1380) on Wednesday September 29 2021, @04:17PM (#1182802) Journal

      Yep, we understand that section 230 gives websites the EXPLICIT right to remove whatever they find "otherwise objectionable" from their services.

      The title of the damn thing is "Protection for private blocking and screening of offensive material."

      47 U.S. Code § 230 - Protection for private blocking and screening of offensive material [cornell.edu]

Basic reading skills are all that are necessary to achieve this intellectual superiority.

      • (Score: 1, Troll) by NPC-131072 on Wednesday September 29 2021, @04:38PM (2 children)

        by NPC-131072 (7144) on Wednesday September 29 2021, @04:38PM (#1182808) Journal

        whatever they find "otherwise objectionable"

        And by "they" we mean "we". [jonathanturley.org]

        • (Score: -1, Troll) by Anonymous Coward on Wednesday September 29 2021, @04:52PM

          by Anonymous Coward on Wednesday September 29 2021, @04:52PM (#1182813)

          Poor rightwing troll :^(

        • (Score: 2) by Runaway1956 on Thursday September 30 2021, @04:55AM

          by Runaway1956 (2926) Subscriber Badge on Thursday September 30 2021, @04:55AM (#1183028) Journal

          But, censorship is only a bad thing when "we" can't control it. I think that's year two in Marxism classes.

      • (Score: 0) by Anonymous Coward on Wednesday September 29 2021, @09:04PM (3 children)

        by Anonymous Coward on Wednesday September 29 2021, @09:04PM (#1182919)

        Not so fast. Details matter. The whole point was to give hosts editorial access without making them specifically responsible for the content created by others - a plausible necessity for a participatory web.

        The problem is that this has a lot to do with communicators who might actually have no idea whatsoever what encrypted content is flowing through their pipe, and not just whether or not Facebook can keep nudies off their site.

        • (Score: 0) by Anonymous Coward on Wednesday September 29 2021, @09:38PM (2 children)

          by Anonymous Coward on Wednesday September 29 2021, @09:38PM (#1182930)

          Does the post office need to know what you wrote in that letter you sent to your friend?

          • (Score: 0) by Anonymous Coward on Thursday September 30 2021, @03:19PM

            by Anonymous Coward on Thursday September 30 2021, @03:19PM (#1183112)

            Not relevant. The post office isn't acting as a publisher. Neither is a dumb pipes provider with a million miles of dark fibre.

            Next idea, then?

          • (Score: 2) by Tork on Friday October 01 2021, @02:50AM

            by Tork (3914) Subscriber Badge on Friday October 01 2021, @02:50AM (#1183262)
            You should look up what you can't send through the post office some time.
            --
            🏳️‍🌈 Proud Ally 🏳️‍🌈
  • (Score: 2) by slinches on Wednesday September 29 2021, @07:29PM (5 children)

    by slinches (5049) on Wednesday September 29 2021, @07:29PM (#1182890)

And which of those two things is Facebook? Most of their content comes from end users, who are clearly content providers for their own posts. Should curation and editorialization of that user content not make them providers as well?

To clarify my point, who in your mind is responsible for the content of the messages if a publisher passes on carefully selected passages from a publicly available random text generator? Is that similar to what Facebook is doing? If not, why not?

    • (Score: 0) by Anonymous Coward on Thursday September 30 2021, @12:10AM

      by Anonymous Coward on Thursday September 30 2021, @12:10AM (#1182983)

      It should place them in the same category as the letters to the editor page on the local newspaper. They decide what is and isn't allowed on their servers, but that should come with the risk of being held liable if they are allowing illegal content on or doing so in such a way as to discriminate. Removing questionable content about vaccines is one thing, but if they aren't removing similarly questionable material from the left, they should be held accountable for it. And really, they shouldn't be removing either set of things until there are viable alternatives for content distribution on a significant scale.

      They should be limiting it to things that are illegal under criminal or civil rules and not much else.

    • (Score: 3, Insightful) by Thexalon on Thursday September 30 2021, @02:24AM (3 children)

      by Thexalon (636) on Thursday September 30 2021, @02:24AM (#1183011)

      I'm going to present a somewhat more concrete example of this: Alice, who is a minor, goes onto social media platform X and says "Tomorrow at school, everybody throw rocks at Bob." A bunch of kids all decide to actually throw rocks at Bob the next day, and Bob receives significant injuries because some of those rocks were heavy.

      The two legal questions are:
      1. Who should be charged with (possibly felony due to the use of a weapon) assault?
      - I think we can all easily agree that everybody who threw rocks at Bob assaulted him.
      - Alice is legally in an interesting spot as far as her criminal intent: A prosecutor could argue Alice solicited the assault or incited the violence, but on the other hand she could successfully argue that she was using a bit of hyperbole and didn't expect anybody to actually throw rocks.
      - As for X, Inc's criminal liability, you'd have to prove that somebody working for X even knew about the contents of the post and made sure the rock-throwers saw it, because criminal liability of this sort requires a showing of intent rather than negligence, so they're probably off the hook for the assault charge.

      2. Who should be civilly liable for any ensuing medical bills and emotional distress?
      - Once again, the people who threw the rocks should be in trouble, no question, although they're probably kids too, so they don't have much by way of assets.
      - It's easier to go after Alice than with the criminal case: Someone suing her is now in a position to argue that she should have known that her post could potentially lead to people actually throwing rocks, and thus she was negligent in her duty to be responsible.
      - The way the law currently works, X, Inc is off the hook for this. The reason probably has to do with the fact that you'd be demanding a human review every post, or demanding that X has to have an algorithm sophisticated enough to know and appreciate the difference between "Throw rocks at Bob" (as a threat to a real person Bob) and "Bob, throw me the rock" (as part of a story about basketball) and "Throw rocks at Bob" (as part of a video game walkthrough), and this post using the threat to a fictional Bob as a hypothetical example to explain a point. And to make things more complicated, note that none of the words I used, on their own, are really triggering of attention by an algorithm: Lots of people are named Bob, there are lots of perfectly legitimate non-criminal reasons to throw things, and lots of non-criminal things you can say about rocks. But on the other hand, X Inc probably isn't as judgment-proof as the rest of them, Bob should be able to collect from somebody, and it's true that their software ideally didn't show that post to the rock-throwers.

      So I'm going with: It's necessarily complicated, there's no easy blanket answer, and reasonable people can disagree on what the right answer is because it's hard for humans, much less designing an algorithm in advance, to get things right in even this example.
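
      As a toy illustration of that last point, here is a minimal Python sketch of a naive, context-blind keyword filter; it is not from the original comment, and the keyword list, the naive_flag helper, and the sample posts are all hypothetical. It flags all three "rock" sentences identically, which is exactly the failure mode described above:

          # Hypothetical, minimal keyword filter: a sketch of why context-blind
          # moderation can't tell a threat from innocuous uses of the same words.
          THREAT_KEYWORDS = {"throw", "rock", "rocks"}  # assumed keyword list

          def naive_flag(post: str) -> bool:
              """Flag a post if any word matches the keyword list (no context)."""
              words = {w.strip(".,!?").lower() for w in post.split()}
              return bool(words & THREAT_KEYWORDS)

          posts = [
              "Tomorrow at school, everybody throw rocks at Bob.",  # real threat
              "Bob, throw me the rock!",                            # basketball story
              "Throw rocks at Bob to stun him in level 3.",         # game walkthrough
          ]

          for p in posts:
              print(naive_flag(p), "-", p)
          # Prints True for all three: the filter cannot separate the one genuine
          # threat from the harmless posts without actually understanding context.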

      And I'm also not going to forget that the push to make the platform liable for Alice's post came right about the time that some prominent politicians noticed that Alice could post on social media that said politician sucked, and the politicians were frustrated they couldn't sue either Alice or the social media platform for that comment and have a snowball's chance of winning. I hope we can agree that criticizing political officeholders, especially for their official or public acts, without fear of criminal or civil liability is a fundamental right in any country that could reasonably consider itself "free".

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 2) by slinches on Thursday September 30 2021, @08:59PM (2 children)

        by slinches (5049) on Thursday September 30 2021, @08:59PM (#1183194)

        - The way the law currently works, X, Inc is off the hook for this. The reason probably has to do with the fact that you'd be demanding a human review every post, or demanding that X has to have an algorithm sophisticated enough to know and appreciate the difference between "Throw rocks at Bob" (as a threat to a real person Bob) and "Bob, throw me the rock" (as part of a story about basketball) and "Throw rocks at Bob" (as part of a video game walkthrough), and this post using the threat to a fictional Bob as a hypothetical example to explain a point. And to make things more complicated, note that none of the words I used, on their own, are really triggering of attention by an algorithm: Lots of people are named Bob, there are lots of perfectly legitimate non-criminal reasons to throw things, and lots of non-criminal things you can say about rocks. But on the other hand, X Inc probably isn't as judgment-proof as the rest of them, Bob should be able to collect from somebody, and it's true that their software ideally didn't show that post to the rock-throwers.

        So I'm going with: It's necessarily complicated, there's no easy blanket answer, and reasonable people can disagree on what the right answer is because it's hard for humans, much less designing an algorithm in advance, to get things right in even this example.

I think it's pretty simple. If the site has an algorithm sophisticated enough (or performs human review) to distinguish between different messages and make editorial decisions about which content to promote or suppress, then they should also be responsible for detecting and removing harmful content. However, if they either aren't capable of that or choose to provide an open space for public discourse (i.e. don't exercise editorial control over posted content), they should be protected from liability.

        • (Score: 2) by Thexalon on Thursday September 30 2021, @09:51PM (1 child)

          by Thexalon (636) on Thursday September 30 2021, @09:51PM (#1183201)

          That's not simple at all: You now have to evaluate, in a court of law, exactly how sophisticated the algorithm is, and in order to do that you have to force the disclosure of trade secrets. And then there's going to be the arguing over whether the company was negligent because they could have made the algorithm smart enough to figure this out but didn't because they didn't care about Bob or other victims.

          And of course no site is going to perform the human review without legal cause for doing so, since it's expensive and time-consuming.

          --
          The only thing that stops a bad guy with a compiler is a good guy with a compiler.
          • (Score: 2) by slinches on Friday October 01 2021, @12:40AM

            by slinches (5049) on Friday October 01 2021, @12:40AM (#1183247)

            All that needs to be evaluated is whether there's editorial control, the method is not important. If they pick and choose content then they should own the liability for all of it.