
posted by janrinok on Sunday April 15 2018, @01:13PM   Printer-friendly
from the can-it-be-cured-by-medical-AI? dept.

Could artificial intelligence get depressed and have hallucinations?

As artificial intelligence (AI) allows machines to become more like humans, will they experience similar psychological quirks such as hallucinations or depression? And might this be a good thing?

Last month, New York University in New York City hosted a symposium called Canonical Computations in Brains and Machines, where neuroscientists and AI experts discussed overlaps in the way humans and machines think. Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown, a neuroscience and cancer research institute in Lisbon, speculated [36-minute video] that we might expect an intelligent machine to suffer some of the same mental problems people do.

[...] Q: Why do you think AIs might get depressed and hallucinate?

A: I'm drawing on the field of computational psychiatry, which assumes we can learn about a patient who's depressed or hallucinating from studying AI algorithms like reinforcement learning. If you reverse the arrow, why wouldn't an AI be subject to the sort of things that go wrong with patients?

Q: Might the mechanism be the same as it is in humans?

A: Depression and hallucinations appear to depend on a chemical in the brain called serotonin. It may be that serotonin is just a biological quirk. But if serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong.
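The serotonin analogy above can be made concrete with a toy reinforcement-learning sketch. This is purely illustrative and not Mainen's model: it simply assumes, for the sake of argument, that a serotonin-like signal acts as a reward-sensitivity gain in a two-armed bandit agent. Lowering that gain blunts the agent's learned values, loosely resembling the reduced reward responsiveness seen in depression. All names and parameters here are invented for the example.

```python
import random

def run_agent(reward_sensitivity, episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Epsilon-greedy agent on a two-armed bandit.

    `reward_sensitivity` scales the reward the agent perceives -- a
    hypothetical stand-in for a serotonin-like gain parameter.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]            # estimated value of each arm
    true_reward = [1.0, 0.5]  # arm 0 is objectively better
    for _ in range(episodes):
        # Explore with probability epsilon, otherwise pick the best-known arm.
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = max(range(2), key=lambda i: q[i])
        # Perceived reward is the true reward scaled by the sensitivity gain.
        r = reward_sensitivity * (true_reward[a] + rng.gauss(0, 0.1))
        # Standard incremental value update.
        q[a] += alpha * (r - q[a])
    return q

healthy = run_agent(1.0)   # normal reward sensitivity
blunted = run_agent(0.2)   # reduced sensitivity: learned values stay low
```

With the gain at 1.0 the agent's value estimates converge near the true rewards; at 0.2 every option looks barely worth taking, even though the environment is unchanged. The point of the sketch is only that a single scalar parameter "going wrong" can shift the whole behavioral profile of a learning system, which is the shape of argument computational psychiatry makes.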

Related: Do Androids Dream of Electric Sheep?


Original Submission

 
  • (Score: 3, Interesting) by HiThere (866) Subscriber Badge on Sunday April 15 2018, @06:10PM (#667340) Journal (2 children)

    Maybe. There are multiple theories about causation, some of which are mechanical (chemical) and others of which are algorithmic. They're probably both right to a varying extent in different cases, and don't forget feedback loops.

    See R. D. Laing's book Knots for examples of algorithmic problems that are accessible. Also look up rational cognitive therapy.

    It seems clear that the algorithmic problems could be reproduced in an AI. It's less clear that the chemical problems would (or would not) have a close analog.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 1) by Ethanol-fueled (2792) on Monday April 16 2018, @12:15AM (#667420) Homepage

    There is another possibility.

    Synchronicity.

  • (Score: 2) by qzm (3260) on Monday April 16 2018, @12:49AM (#667434)

    Bullshit.

    You could apply EXACTLY the same theory to suggest that an Apple is depressed, or the MPU in a Honda, or my oven.
    Machine learning is in NO way intelligence, is in NO way self-aware, and in NO way develops.

    So stop playing semantic games with big words. This is pure BS: a bunch of people who failed miserably at producing any real predictive, robust theories of the human brain, now trying to extend the same failure to a technology area they understand even less.