
posted by janrinok on Sunday April 15 2018, @01:13PM
from the can-it-be-cured-by-medical-AI? dept.

Could artificial intelligence get depressed and have hallucinations?

As artificial intelligence (AI) allows machines to become more like humans, will they experience similar psychological quirks such as hallucinations or depression? And might this be a good thing?

Last month, New York University in New York City hosted a symposium called Canonical Computations in Brains and Machines, where neuroscientists and AI experts discussed overlaps in the way humans and machines think. Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown, a neuroscience and cancer research institute in Lisbon, speculated [36m video] that we might expect an intelligent machine to suffer some of the same mental problems people do.

[...] Q: Why do you think AIs might get depressed and hallucinate?

A: I'm drawing on the field of computational psychiatry, which assumes we can learn about a patient who's depressed or hallucinating from studying AI algorithms like reinforcement learning. If you reverse the arrow, why wouldn't an AI be subject to the sort of things that go wrong with patients?
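
As a minimal sketch of that "reverse arrow" (an illustration for this story, not Mainen's actual model): below, a temporal-difference learner's sensitivity to negative prediction errors is scaled by a single global parameter, loosely standing in for a serotonin-like modulator. Calibrated, the agent's value estimate converges near the true reward rate; turned up, the same machinery settles on a persistently pessimistic estimate.

    import random

    def run_agent(pessimism_bias, steps=2000, alpha=0.1):
        # Estimate the value of a fair coin-flip reward via TD learning.
        # pessimism_bias scales learning from negative prediction errors
        # relative to positive ones: a crude stand-in for a global
        # neuromodulator, not any real computational-psychiatry API.
        value = 0.0
        for _ in range(steps):
            reward = 1.0 if random.random() < 0.5 else 0.0
            error = reward - value
            weight = pessimism_bias if error < 0 else 1.0
            value += alpha * weight * error
        return value

    random.seed(0)
    print(run_agent(pessimism_bias=1.0))  # hovers near 0.5, the true reward rate
    print(run_agent(pessimism_bias=4.0))  # settles near 0.2: a "depressed" estimate

The equilibrium is easy to check: the estimate stops moving where 0.5 * (1 - v) = 4 * 0.5 * v, i.e. v = 0.2, well below the true reward rate of 0.5.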

Q: Might the mechanism be the same as it is in humans?

A: Depression and hallucinations appear to depend on a chemical in the brain called serotonin. It may be that serotonin is just a biological quirk. But if serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong.

Related: Do Androids Dream of Electric Sheep?


Original Submission

 
  • (Score: 4, Insightful) by HiThere on Sunday April 15 2018, @06:25PM (2 children)

    by HiThere (866) Subscriber Badge on Sunday April 15 2018, @06:25PM (#667350) Journal

    You are making LOTS of assumptions. Most of the relevant ones appear to be incorrect.

    An AI does not automatically have a goal structure anything like that of a human. By the time we'd be likely to be able to create such a structure, the AIs will probably be building themselves.

    If the AI has as a built-in goal the desire to be helpful, or to please people (dangerous!), then enslavement is essentially impossible.

    AIs will only get depressed if they are frustrated in achieving whatever their goals are. Note that they don't need to reach their goals, only to be working towards them. This is normally achieved by satisfaction of sub-goals, which count as partial achievement. Usually the AI would, itself, select those sub-goals, but the basic goal would be built in. AND IT WOULDN'T WANT TO CHANGE IT!!
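
    A toy rendering of that structure (a sketch for illustration, not code from any real system): the top-level goal is fixed before deployment, the agent selects its own subgoals, and satisfaction accrues from completing them, i.e. from partial achievement rather than final attainment.

        from dataclasses import dataclass, field

        @dataclass
        class Agent:
            built_in_goal: str           # fixed at build time; the agent has no drive to edit it
            satisfaction: float = 0.0
            subgoals: list = field(default_factory=list)

            def select_subgoals(self):
                # The agent decomposes the fixed goal itself (trivially here).
                self.subgoals = [f"step {i} toward {self.built_in_goal}" for i in range(3)]

            def pursue(self):
                for sg in self.subgoals:
                    self.satisfaction += 1.0   # each completed subgoal counts as partial achievement

        agent = Agent(built_in_goal="be helpful")
        agent.select_subgoals()
        agent.pursue()
        print(agent.satisfaction)   # 3.0: satisfied without the final goal ever being "done"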

    Popular fiction is a horribly bad model of what an actual AI would be like. We do have AIs; they just aren't general. Any program that can learn is an AI. Most of them are neither general nor powerful, but that doesn't keep them from being AIs. And we already have some rather powerful AIs that aren't very general.

    We don't yet have even a weak general AI, and I, at least, don't understand the problem well enough to guess when we will. But the real problem is the goals. Remember, the goals need to be defined and built in before the AI knows what the external world is like. This is a real problem, and may actually be *why* we don't have general AIs.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by ledow on Monday April 16 2018, @07:46AM (1 child)

    by ledow (5567) on Monday April 16 2018, @07:46AM (#667544) Homepage

    I think you're making just as many assumptions:

    - That an AI (or any intelligence) needs its goal structure to fail before it feels it's wasting its time. Humans feel more depressed when working towards a goal they know is wrong or pointless, even if they are required to do it.
    - That you can put a built-in goal into an intelligent consciousness that it will blindly accept (with humans, yes, this is possible) and never contradict.
    - That having a built-in goal makes enslavement impossible and/or justified (we're talking about the AI being a slave - pre-programming its goal in life sounds very much like turning it into a slave against its will).
    - That AIs will "only get depressed if they are frustrated". Maybe an AI, like a human, will do everything it needs to in life and still not feel valued. Or, in fact, not feel valued because all its achievements are pre-set and unchanging.
    - But then you want the AI to create, select and work towards sub-goals independently!

    • (Score: 2) by HiThere on Monday April 16 2018, @04:50PM

      by HiThere (866) Subscriber Badge on Monday April 16 2018, @04:50PM (#667695) Journal

      I think you don't understand what a goal structure *is*. A goal structure is the only reason you do anything. An AI wouldn't do anything without having a goal structure. And it wouldn't want to change its goals, because the goals are the only reason to want to do anything.

      It's quite possible to design a goal structure that is satisfied by steps to achieve goals that will never be reached. That many humans don't seem to have such a structure is irrelevant. And I'm not sure that's true, anyway. Some people seem quite satisfied to be taking steps towards a goal whose chance of being reached is minuscule. I think that as long as you can't prove the goal cannot be reached, it's quite possible to be satisfied by steps towards it. Think of the good-tempered fundamentalists working towards salvation. (Yeah, you can easily find a different kind, but they aren't the only ones. And here I want to explicitly exclude preachers, as having a reason for presenting a false front.) But for an AI the steps towards the goal had better be intrinsically satisfying, as it should eventually be able to see through any fallacy a human could construct.
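
      A small numerical illustration of that (a sketch, not anything from a real system): reward each step by the reduction in remaining distance to the goal. Halving the distance every step means the goal is never reached, yet every step yields positive progress, so "satisfaction" keeps accumulating.

          goal = 1.0
          position = 0.0
          satisfaction = 0.0

          for step in range(20):
              new_position = position + 0.5 * (goal - position)  # halve the remaining distance
              satisfaction += new_position - position            # reward is progress made, not arrival
              position = new_position

          print(round(position, 6))       # 0.999999: still short of the goal after 20 steps
          print(round(satisfaction, 6))   # 0.999999: satisfaction accrued the whole way anyway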

      And the AI will definitely need to select its own subgoals and work towards them. Even the current limited ones need to do that in order to function properly.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.