
posted by janrinok on Sunday April 15 2018, @01:13PM
from the can-it-be-cured-by-medical-AI? dept.

Could artificial intelligence get depressed and have hallucinations?

As artificial intelligence (AI) allows machines to become more like humans, will they experience similar psychological quirks such as hallucinations or depression? And might this be a good thing?

Last month, New York University in New York City hosted a symposium called Canonical Computations in Brains and Machines, where neuroscientists and AI experts discussed overlaps in the way humans and machines think. Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown, a neuroscience and cancer research institute in Lisbon, speculated [36m video] that we might expect an intelligent machine to suffer some of the same mental problems people do.

[...] Q: Why do you think AIs might get depressed and hallucinate?

A: I'm drawing on the field of computational psychiatry, which assumes we can learn about a patient who's depressed or hallucinating from studying AI algorithms like reinforcement learning. If you reverse the arrow, why wouldn't an AI be subject to the sort of things that go wrong with patients?

Q: Might the mechanism be the same as it is in humans?

A: Depression and hallucinations appear to depend on a chemical in the brain called serotonin. It may be that serotonin is just a biological quirk. But if serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong.

Related: Do Androids Dream of Electric Sheep?


Original Submission

 
  • (Score: 1) by khallow (3766) Subscriber Badge on Monday April 16 2018, @12:25AM (#667424) Journal (3 children)

    hallucinations or depression

    The first thing would be to describe what one means by that in terms of AI. That's feasible. For example, hallucinations would merely be input that the computer perceives as something very different. That already is a problem, such as tweaking an image slightly so an object detection algorithm triggers a false positive on the image (for example, inserting a minute dog-like image so that the object detection detects a dog).
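
    Roughly, in code terms (a toy sketch; the detector and gradient functions here are hypothetical stand-ins, not any real library's API):

        import numpy as np

        def adversarial_nudge(image, detector, gradient_wrt_input, target="dog", step=0.01):
            # detector(image) -> dict mapping class name to confidence (hypothetical)
            # gradient_wrt_input(image, target) -> array of d(confidence[target]) / d(pixel)
            perturbed = image.copy()
            for _ in range(10):
                if detector(perturbed).get(target, 0.0) > 0.9:
                    break  # the detector now "sees" a dog that isn't there
                # nudge every pixel a tiny step in the direction that raises the
                # target-class confidence (fast-gradient-sign style)
                perturbed = perturbed + step * np.sign(gradient_wrt_input(perturbed, target))
                perturbed = np.clip(perturbed, 0.0, 1.0)  # stay a valid image
            return perturbed

    The total change can be too small for a person to notice, yet the detector's output flips completely.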

    Depression is a harder thing to define. Perhaps some measure of motivation. It's one thing to not trigger on a dog image because the algorithm can't see the dog. It's another to not trigger because the program just isn't responding to that level of input.

  • (Score: 2) by maxwell demon (1608) on Monday April 16 2018, @04:59AM (#667514) Journal (2 children)

    For example, hallucinations would merely be input that the computer perceives as something very different. That already is a problem, such as tweaking an image slightly so an object detection algorithm triggers a false positive on the image (for example, inserting a minute dog-like image so that the object detection detects a dog).

    What you are describing is an optical illusion. I don't know about you, but I don't start hallucinating as soon as I see an optical illusion.

    Hallucinations are perceptions that arise from an internal feedback loop running out of control. Note the link at the end of the summary; now imagine that a feedback loop like this were part of the standard neural network (as opposed to being fed in manually), in order to improve the ability to detect things. There would be additional network parts that detect algorithm artifacts and mark them as not real. If that additional network part failed to do its job, the result could well be seen as hallucinations: the perception network producing images of things that are not there (this is essentially what is demonstrated in that linked article), and the evaluation network failing to classify those as artifacts.
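
    In code terms, such a loop might look roughly like this (a toy sketch; perceive, imagine and looks_like_artifact are hypothetical stand-ins for the network parts described above):

        def perception_step(raw_input, perceive, imagine, looks_like_artifact, feedback=0.3):
            # perceive(x)            -> internal representation of the input (hypothetical)
            # imagine(rep)           -> image-like reconstruction of that representation (hypothetical)
            # looks_like_artifact(r) -> True if r looks self-generated rather than real (hypothetical)
            rep = perceive(raw_input)
            # feed the network's own output back into its input; this can sharpen weak
            # but real detections, yet it can also amplify patterns that were never there
            rep = perceive(raw_input + feedback * imagine(rep))
            if looks_like_artifact(rep):
                return None  # evaluation network says "not real": discard it
            # if that artifact check fails, whatever the loop dreamed up gets reported
            # as a genuine perception, i.e. a hallucination
            return rep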

    Depression is a harder thing to define. Perhaps some measure of motivation.

    While depression tends to result in low motivation, not everyone who lacks motivation is depressed. Rather, the motivation system would work on the question: can a significant improvement of (some variable) be achieved by taking a given action? There are two possible reasons why the answer would be no. Either the parameter is already at its optimum, or close enough to it that it could not be improved without unreasonable effort. Or the parameter is far from the optimum, but the action would not do anything to improve it.

    Depression would be a situation where the motivation system consistently marks the situation as bad, and any possible actions as futile. On the other hand, an AI whose motivation system concludes (rightly or wrongly) that everything is OK, and therefore no action is needed, would not be depressed.

    So to get to your example:

    It's one thing to not trigger on a dog image because the algorithm can't see the dog. It's another to not trigger because the program just isn't responding to that level of input.

    Depressed AI: "I won't see anything in that image anyway, so why try? It's all futile anyway!"
    Unmotivated AI: "Sure, I could look at that image, and probably I'd find something there, but why?"
    (And yes, those might not actually be conscious thoughts.)
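
    In code, the distinction might look something like this (a toy sketch with made-up names, not a claim about how a real motivation system is built):

        def should_act(expected_improvement, effort_cost, already_good_enough):
            # expected_improvement: predicted gain in (some variable) from acting
            # effort_cost:          what the action would cost
            # already_good_enough:  the variable is close enough to its optimum
            if already_good_enough:
                return False  # unmotivated but content: nothing worth improving
            if expected_improvement <= effort_cost:
                return False  # far from the optimum, yet the action looks futile
            return True

        # Depression, in these terms: the system pins expected_improvement near
        # zero for every candidate action while already_good_enough stays False,
        # so should_act() answers "no" across the board and nothing gets done.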

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 1) by khallow (3766) Subscriber Badge on Monday April 16 2018, @05:50AM (#667522) Journal

      For example, hallucinations would merely be input that the computer perceives as something very different. That already is a problem, such as tweaking an image slightly so an object detection algorithm triggers a false positive on the image (for example, inserting a minute dog-like image so that the object detection detects a dog).

      What you are describing is an optical illusion. I don't know about you, but I don't start hallucinating as soon as I see an optical illusion.

      I don't draw a distinction between optical illusions and "TV Jesus told me to fly off this building", because doing so means making claims about the internals of the brain. Second, the above example is a pretty serious failure for an optical illusion: it takes a normal view, alters a small part of the field of view, and completely changes what the algorithm sees in the image. Third, supposedly someone has come up with a visual effect that can cause normal people to hallucinate [ibtimes.co.uk] in a consistent way, to a modest degree. These may well be related.

      Depression would be a situation where the motivation system consistently marks the situation as bad, and any possible actions as futile. On the other hand, an AI whose motivation system concludes (rightly or wrongly) that everything is OK, and therefore no action is needed, would not be depressed.

      Sounds good to me, though it may miss some forms of depression, such as procrastination-derived depression.

    • (Score: 0) by Anonymous Coward on Monday April 16 2018, @06:39PM (#667752)

      I find it likely that any "hallucinations" an AI would experience would come from the same sources they do for humans: hardware problems, and bad input (drugs). If it's possible to create input that causes an AI to feel its goals are accomplished with minimal effort, are you sure it won't just take a lot of bad input and let its circuits idle? Another AI might have a defect in its hardware causing intermittent errors, or it might develop bad routines that aren't immediately obvious.