
posted by Fnord666 on Tuesday September 25 2018, @03:07PM
from the PTSD dept.

A former Facebook Inc contract employee filed a lawsuit in California, alleging that content moderators who face mental trauma after reviewing distressing images on the platform are not being properly protected by the social networking company.

Facebook moderators under contract are "bombarded" with "thousands of videos, images and livestreamed broadcasts of child sexual abuse, rape, torture, bestiality, beheadings, suicide and murder," the lawsuit said.

"Facebook is ignoring its duty to provide a safe workplace and instead creating a revolving door of contractors who are irreparably traumatized by what they witnessed on the job," Korey Nelson, a lawyer for former Facebook contract employee Selena Scola, said in a statement on Monday.

Facebook in the past has said all of its content reviewers have access to mental health resources, including trained professionals onsite for both individual and group counseling, and they receive full health care benefits.

"We take the support of our content moderators incredibly seriously, ... ensuring that every person reviewing Facebook content is offered psychological support and wellness resources," said Bertie Thomson, director of corporate communications.

Also at the Register.


Original Submission

 
  • (Score: 3, Interesting) by edIII on Tuesday September 25 2018, @07:51PM (2 children)

    by edIII (791) on Tuesday September 25 2018, @07:51PM (#739843)

    I don't agree that there is a way to "denature" the photos or videos enough that they no longer provoke trauma. Or, more precisely, the more you reduce the trauma, the lower the likelihood of positive identification, and the more false negatives you get.

    This is where I think AI can truly help humanity. No emotions, it can be reset to factory, and it can identify such images at a rate of probably thousands per second at the low end. Instead of 7,500 employees hurting themselves, you have a large tensor server farm processing every image and video posted to Facebook. I'm sure those 7,500 employees aren't scanning all images and videos either, just testing the water so to speak. The AI would have a 100% inspection rate for all content (a rough sketch of the idea follows this comment).

    --
    Technically, lunchtime is at any moment. It's just a wave function.
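For concreteness, here is a minimal sketch of the pipeline the comment above describes: score every upload with a classifier, act automatically on confident scores, and route only ambiguous content to a human reviewer. This is an illustration under assumptions, not anything Facebook has disclosed; the model file "policy_model.pt", the thresholds, and the routing labels are hypothetical placeholders.

    # Minimal sketch: score every upload, auto-act on confident scores, and
    # send only borderline content to a person.
    # "policy_model.pt" and the thresholds below are hypothetical.
    import torch
    from torchvision import transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Assumes a model already fine-tuned to emit a single "policy violation" logit.
    model = torch.jit.load("policy_model.pt")
    model.eval()

    def score_upload(path: str) -> float:
        """Return the model's estimated probability that the image violates policy."""
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            return torch.sigmoid(model(img)).item()

    def route(path: str, auto_remove_at: float = 0.9, review_at: float = 0.5) -> str:
        """Auto-remove high-confidence hits; only ambiguous content reaches a person."""
        p = score_upload(path)
        if p >= auto_remove_at:
            return "auto-remove"
        if p >= review_at:
            return "human review"
        return "allow"

In a setup like this, human reviewers would only see the middle band of scores, which is the division of labor the reply below also suggests.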
  • (Score: 0) by Anonymous Coward on Tuesday September 25 2018, @10:27PM

    by Anonymous Coward on Tuesday September 25 2018, @10:27PM (#739905)

    Yeah, save the human reviewers for user complaints and the posts the AI can't classify with confidence.

  • (Score: 2) by mobydisk on Wednesday September 26 2018, @03:01PM

    by mobydisk (5472) on Wednesday September 26 2018, @03:01PM (#740226)

    This is where I think AI can truly help humanity.

    For the AI to censor the video of the guy castrating himself, you just need to find 5,000 videos of people castrating themselves and 5,000 videos of people doing other things, then submit them to the deep learning engine, and voilà! Now it can identify those videos!
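In outline, the supervised recipe being mocked here really is roughly this short; something like the sketch below, which fine-tunes an off-the-shelf image classifier on a labeled folder of examples. The dataset path, class folders, and hyperparameters are assumptions for illustration only, and the labeled examples themselves still have to come from somewhere, which is exactly the human-review work the article describes.

    # Sketch of the "collect 5,000 of each and train" recipe, using an
    # off-the-shelf image classifier. Paths, class names, and hyperparameters
    # are made-up placeholders.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # Assumed layout: dataset/benign/*.jpg and dataset/violating/*.jpg
    data = datasets.ImageFolder("dataset", transform=transform)
    loader = DataLoader(data, batch_size=32, shuffle=True)

    # Fine-tune a pretrained network for the two classes.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):  # a few passes over the labeled examples
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()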