
SoylentNews is people

posted by janrinok on Sunday April 15 2018, @01:13PM   Printer-friendly
from the can-it-be-cured-by-medical-AI? dept.

Could artificial intelligence get depressed and have hallucinations?

As artificial intelligence (AI) allows machines to become more like humans, will they experience similar psychological quirks such as hallucinations or depression? And might this be a good thing?

Last month, New York University in New York City hosted a symposium called Canonical Computations in Brains and Machines, where neuroscientists and AI experts discussed overlaps in the way humans and machines think. Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown, a neuroscience and cancer research institute in Lisbon, speculated [36m video] that we might expect an intelligent machine to suffer some of the same mental problems people do.

[...] Q: Why do you think AIs might get depressed and hallucinate?

A: I'm drawing on the field of computational psychiatry, which assumes we can learn about a patient who's depressed or hallucinating from studying AI algorithms like reinforcement learning. If you reverse the arrow, why wouldn't an AI be subject to the sort of things that go wrong with patients?

Q: Might the mechanism be the same as it is in humans?

A: Depression and hallucinations appear to depend on a chemical in the brain called serotonin. It may be that serotonin is just a biological quirk. But if serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong.
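Mainen's serotonin idea has a concrete computational reading: in reinforcement-learning terms, a neuromodulator can be cast as a global signal that scales how strongly new evidence updates learned values. Below is a minimal sketch under that (hypothesized) reading — the `mood` parameter, the bandit task, and all names are my illustration, not a model presented at the symposium:

```python
import random

# Toy illustration: a two-armed bandit learner whose learning rate is scaled
# by a global "mood" signal, loosely analogous to one computational-psychiatry
# view of serotonin. If that signal is miscalibrated, the same algorithm
# stops tracking reality -- the kind of failure mode the interview suggests
# could carry over from patients to machines.

def run_bandit(mood, steps=2000, seed=0):
    rng = random.Random(seed)
    p = [0.2, 0.8]          # true (hidden) reward probabilities of the arms
    q = [0.0, 0.0]          # the learner's value estimates
    base_lr = 0.1
    for _ in range(steps):
        # epsilon-greedy choice: mostly exploit, occasionally explore
        if rng.random() < 0.1:
            arm = rng.randrange(2)
        else:
            arm = max(range(2), key=lambda a: q[a])
        reward = 1.0 if rng.random() < p[arm] else 0.0
        # mood scales the effective learning rate (the hypothesized role)
        q[arm] += base_lr * mood * (reward - q[arm])
    return q

healthy = run_bandit(mood=1.0)  # estimates approach the true probabilities
flat = run_bandit(mood=0.0)     # nothing ever updates; values stay at zero
```

With `mood=1.0` the estimates come to track the true reward probabilities; with `mood=0.0` the identical algorithm never learns anything — a crude analogue of "the equivalent in a machine could also go wrong."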

Related: Do Androids Dream of Electric Sheep?


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by HiThere (866) Subscriber Badge on Monday April 16 2018, @04:50PM (#667695) Journal

    I think you don't understand what a goal structure *is*. A goal structure is the only reason you do anything. An AI wouldn't do anything without having a goal structure. And it wouldn't want to change its goals, because the goals are the only reason to want to do anything.

    It's quite possible to design a goal structure that is satisfied by steps towards goals that will never be reached. That many humans don't seem to have such a structure is irrelevant, and I'm not sure it's true anyway. Some people seem quite satisfied to be taking steps towards a goal whose chance of being reached is minuscule. As long as you can't prove the goal cannot be reached, it's quite possible to be satisfied by steps towards it. Think of the good-tempered fundamentalists working towards salvation. (Yeah, you can easily find a different kind, but they aren't the only ones. And here I want to explicitly exclude preachers, who have a reason for presenting a false front.) But for an AI the steps towards the goal had better be intrinsically satisfying, as it should eventually be able to see through any fallacy a human could construct.

    And the AI will definitely need to select its own subgoals and work towards them. Even the current limited ones need to do that in order to function properly.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.