
posted by mrpg on Tuesday November 20 2018, @06:25AM
from the is-this-good-or-bad? dept.

Submitted via IRC for takyon

Facebook Increasingly Reliant on A.I. To Predict Suicide Risk

A year ago, Facebook started using artificial intelligence to scan people's accounts for danger signs of imminent self-harm.

[...] "To just give you a sense of how well the technology is working and rapidly improving ... in the last year we've had 3,500 reports," she says. That means AI monitoring is causing Facebook to contact emergency responders an average of about 10 times a day to check on someone — and that doesn't include Europe, where the system hasn't been deployed. (That number also doesn't include wellness checks that originate from people who report suspected suicidal behavior online.)

Davis says the AI works by monitoring not just what a person writes online, but also how his or her friends respond. For instance, if someone starts streaming a live video, the AI might pick up on the tone of people's replies.
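
As a rough illustration of how such a two-signal approach might work, here is a minimal sketch; the phrase lists, weights, and threshold are invented for illustration and are not Facebook's actual model:

    # Illustrative two-signal risk scorer: combines phrases in the post
    # itself with the tone of friends' replies, per the article's
    # description. All phrase lists, weights, and cutoffs are invented.

    POST_FLAGS = {"want to die": 3.0, "goodbye everyone": 2.0, "can't go on": 2.0}
    REPLY_FLAGS = {"are you ok": 1.0, "please call someone": 2.0, "don't do it": 2.5}

    def risk_score(post: str, replies: list[str]) -> float:
        """Sum naive keyword weights from the post and from friends' replies."""
        score = sum(w for p, w in POST_FLAGS.items() if p in post.lower())
        for reply in replies:
            score += sum(w for p, w in REPLY_FLAGS.items() if p in reply.lower())
        return score

    ESCALATE_THRESHOLD = 4.0  # invented cutoff for routing to human review

    post = "Goodbye everyone, I can't go on."
    replies = ["Are you OK?", "Please call someone."]
    s = risk_score(post, replies)
    print(s, "-> escalate" if s >= ESCALATE_THRESHOLD else "-> no action")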

[...] "Ever since they've introduced livestreaming on their platform, they've had a real problem with people livestreaming suicides," Marks says. "Facebook has a real interest in stopping that."

He isn't sure this AI system is the right solution, in part because Facebook has refused to share key data, such as the AI's accuracy rate. How many of those 3,500 "wellness checks" turned out to be actual emergencies? The company isn't saying.


Original Submission

 
  • (Score: 3, Interesting) by Anonymous Coward on Tuesday November 20 2018, @03:11PM (#764251) (1 child)

    Often very large employers build their own life insurance actuarial tables from their own employee data. Then they buy life insurance on the employees they think are most likely to die. Generally their data is more accurate than the insurers', and they make money on it. The insurers' loss is externalized to the schlubs who buy policies on themselves.
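
    The arithmetic behind that arbitrage is simple: if the employer's estimate of an employee's one-year death probability exceeds the probability the premium was priced on, the policy has positive expected value for the employer. A toy sketch with invented numbers:

        # Toy expected-value calculation for employer-owned life insurance.
        # Every number here is invented for illustration.
        face_value = 500_000       # payout on death
        annual_premium = 1_200     # priced off the insurer's population-wide table
        insurer_p_death = 0.002    # insurer's assumed 1-year death probability
        employer_p_death = 0.004   # employer's estimate from its own employee data

        insurer_ev = insurer_p_death * face_value - annual_premium    # -200: a fair-ish bet as priced
        employer_ev = employer_p_death * face_value - annual_premium  # +800: free money with better data
        print(insurer_ev, employer_ev)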

    Think about what FB is doing, and then ask yourself: "Is what they are doing philanthropic, or is it about gaming the life insurance industry?"

    The idea that this is being used for philanthropy is naive. If they can do it for philanthropic purposes (inflicting welfare), then they can do it to inflict loss, can they not? And from which is profit more easily realized?

    What the article is describing is the use of an algo to do psychological analysis and apply a therapy without consent, without a license to practice, and without doctor-patient privilege. Certainly it is unlawful. Certainly it is only possible because of an invasion of people's privacy in violation of their indivisible human rights.

    The scenario I find most likely is that they have done the math, and that they possess, internally, statistical evidence that their business model causes people psychological harm. Then, in a vain attempt to offset the problem, they built this to try to keep people just on the edge of blowing their brains out instead of actually pulling the trigger. IOW, the project may be about loss prevention against future litigation. Again, the engineers would be kept in the dark.

    IMHO, what they've just created is an API that somebody is eventually going to use to commit premeditated homicide via psychological battery. The purpose, therefore, is to game the life insurance industry. The idea that this is lawful is quite insane.

  • (Score: 3, Interesting) by Knowledge Troll (5948) on Tuesday November 20 2018, @04:17PM (#764268)

    Did you notice they define success as the number of outputs the system has generated, and refuse to provide data on any kind of accuracy rate? This thing would need a false-positive rate near zero to be tolerated at all, and that just isn't realistic.
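
    The base-rate arithmetic makes the point concrete: at Facebook's scale, even a small false-positive rate swamps the true positives. A rough sketch with invented numbers, not Facebook's actual figures:

        # Why a near-zero false-positive rate matters at scale.
        # All figures here are invented for illustration.
        population = 2_000_000_000  # monitored accounts, roughly Facebook scale
        prevalence = 1e-5           # assumed fraction at imminent risk per scan
        sensitivity = 0.80          # assumed true-positive rate of the model
        fpr = 0.001                 # an optimistic 0.1% false-positive rate

        at_risk = population * prevalence
        true_pos = at_risk * sensitivity
        false_pos = (population - at_risk) * fpr
        print(true_pos / (true_pos + false_pos))  # ~0.008: under 1% of alerts are real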

    Maybe they are playing a bigger game, I dunno, and while I found your post interesting to chew on, it looks to me like they are just trying to say they are doing something. Lots of something. Never mind that the something can cause great harm in the event of false positives.