Submitted via IRC for takyon
Facebook Increasingly Reliant on A.I. To Predict Suicide Risk
A year ago, Facebook started using artificial intelligence to scan people's accounts for danger signs of imminent self-harm.
[...] "To just give you a sense of how well the technology is working and rapidly improving ... in the last year we've had 3,500 reports," she says. That means AI monitoring is causing Facebook to contact emergency responders an average of about 10 times a day to check on someone — and that doesn't include Europe, where the system hasn't been deployed. (That number also doesn't include wellness checks that originate from people who report suspected suicidal behavior online.)
Davis says the AI works by monitoring not just what a person writes online, but also how his or her friends respond. For instance, if someone starts streaming a live video, the AI might pick up on the tone of people's replies.
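Facebook hasn't published details of its model, but the two-signal idea Davis describes (the wording of the post itself, plus how friends react to it) can be illustrated with a toy keyword scorer. Everything here — the phrase lists, the weights, and the function name — is invented for illustration and bears no relation to Facebook's actual system:

```python
# Toy sketch: combine two signals into one risk score —
# (1) phrases in the user's own post, (2) alarmed phrases in friends' replies.
# All phrase lists and weights are hypothetical placeholders.

POST_FLAGS = {"goodbye forever", "can't go on", "no way out"}
REPLY_FLAGS = {"are you ok", "please don't", "call someone", "thinking of you"}

def risk_score(post: str, replies: list[str]) -> float:
    """Return a crude risk score; higher means more flagged phrases matched."""
    post_l = post.lower()
    post_hits = sum(1 for phrase in POST_FLAGS if phrase in post_l)
    reply_hits = sum(
        1
        for reply in replies
        for phrase in REPLY_FLAGS
        if phrase in reply.lower()
    )
    # Weight the person's own words more heavily than bystander reactions,
    # but let alarmed replies alone push an otherwise-neutral post over a threshold.
    return post_hits * 1.0 + reply_hits * 0.5
```

The point of the second signal is visible even in this toy version: a post with no flagged phrases of its own can still score above zero if several friends reply with alarm, which matches the livestream scenario Davis mentions.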
[...] "Ever since they've introduced livestreaming on their platform, they've had a real problem with people livestreaming suicides," Marks says. "Facebook has a real interest in stopping that."
He isn't sure this AI system is the right solution, in part because Facebook has refused to share key data, such as the AI's accuracy rate. How many of those 3,500 "wellness checks" turned out to be actual emergencies? The company isn't saying.
(Score: 0) by Anonymous Coward on Tuesday November 20 2018, @11:05PM
Wouldn't it be great to be paid to troll Facebook users all day? Trawl, on the other hand...