
posted by mrpg on Tuesday November 20 2018, @06:25AM
from the is-this-good-or-bad? dept.

Submitted via IRC for takyon

Facebook Increasingly Reliant on A.I. To Predict Suicide Risk

A year ago, Facebook started using artificial intelligence to scan people's accounts for danger signs of imminent self-harm.

[...] "To just give you a sense of how well the technology is working and rapidly improving ... in the last year we've had 3,500 reports," she says. That means AI monitoring is causing Facebook to contact emergency responders an average of about 10 times a day to check on someone — and that doesn't include Europe, where the system hasn't been deployed. (That number also doesn't include wellness checks that originate from people who report suspected suicidal behavior online.)

Davis says the AI works by monitoring not just what a person writes online, but also how his or her friends respond. For instance, if someone starts streaming a live video, the AI might pick up on the tone of people's replies.
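Facebook has not published how its classifier actually works, but the idea described here (combining a signal from the post itself with a signal from how friends respond) can be illustrated with a deliberately simple toy sketch. Everything below — the keyword lists, the scoring, and the threshold — is invented for illustration and bears no relation to Facebook's real system:

```python
# Toy illustration only — NOT Facebook's actual system. Keyword lists,
# scoring, and threshold are all made up for this example.

POST_RISK_TERMS = {"goodbye", "can't go on", "end it"}
REPLY_CONCERN_TERMS = {"are you ok", "please call", "worried about you"}

def risk_score(post: str, replies: list[str]) -> float:
    """Return a score in [0, 1] combining two signals:
    the fraction of risk terms present in the post, and the
    fraction of replies that contain concerned language."""
    post_lower = post.lower()
    post_signal = sum(t in post_lower for t in POST_RISK_TERMS) / len(POST_RISK_TERMS)
    if replies:
        reply_signal = sum(
            any(t in r.lower() for t in REPLY_CONCERN_TERMS) for r in replies
        ) / len(replies)
    else:
        reply_signal = 0.0
    # Average the two signals, so worried replies can raise a post's score
    # even when the post text alone looks ambiguous.
    return (post_signal + reply_signal) / 2

def should_flag(post: str, replies: list[str], threshold: float = 0.3) -> bool:
    """Flag for review when the combined score crosses the threshold."""
    return risk_score(post, replies) >= threshold
```

The point of the two-signal design is exactly what the article describes: an ambiguous post plus alarmed replies can score higher than either signal alone, which is why the system reportedly watches reactions to live video rather than just the broadcaster's own words.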

[...] "Ever since they've introduced livestreaming on their platform, they've had a real problem with people livestreaming suicides," Marks says. "Facebook has a real interest in stopping that."

He isn't sure this AI system is the right solution, in part because Facebook has refused to share key data, such as the AI's accuracy rate. How many of those 3,500 "wellness checks" turned out to be actual emergencies? The company isn't saying.


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by tangomargarine on Tuesday November 20 2018, @04:36PM

    by tangomargarine (667) on Tuesday November 20 2018, @04:36PM (#764277)

This whole "how should they go about trying to prevent suicides" thing presupposes that they should be trying to prevent suicides in the first place. I don't accept that this is Facebook's job.

As a general rule, I imagine that if you asked the majority here on SN "should Facebook do X with their data," the answer would be "fuck no."

Humorously, there's a periodic occurrence on the Magic: The Gathering subreddits where a Reddit bot sees somebody mention a Magic card with "suicide" in the name and posts some concerned drivel about "help is out there; click this link."

    --
    "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"