posted by janrinok on Sunday April 15 2018, @01:13PM   Printer-friendly
from the can-it-be-cured-by-medical-AI? dept.

Could artificial intelligence get depressed and have hallucinations?

As artificial intelligence (AI) allows machines to become more like humans, will they experience similar psychological quirks such as hallucinations or depression? And might this be a good thing?

Last month, New York University in New York City hosted a symposium called Canonical Computations in Brains and Machines, where neuroscientists and AI experts discussed overlaps in the way humans and machines think. Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown, a neuroscience and cancer research institute in Lisbon, speculated [36m video] that we might expect an intelligent machine to suffer some of the same mental problems people do.

[...] Q: Why do you think AIs might get depressed and hallucinate?

A: I'm drawing on the field of computational psychiatry, which assumes we can learn about a patient who's depressed or hallucinating from studying AI algorithms like reinforcement learning. If you reverse the arrow, why wouldn't an AI be subject to the sort of things that go wrong with patients?

Q: Might the mechanism be the same as it is in humans?

A: Depression and hallucinations appear to depend on a chemical in the brain called serotonin. It may be that serotonin is just a biological quirk. But if serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong.
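
Mainen does not spell out an algorithm, but the flavour of computational psychiatry he is pointing at can be illustrated with a toy reinforcement-learning sketch (an illustration of the idea only, not anything from the talk; the "serotonin-like" parameter is hypothetical): a two-armed bandit learner whose value updates are scaled by that parameter. Turn it down and the agent's value estimates stay flat, its choices drift toward indifference, and it earns less, a crude analogue of anhedonia.

    import math, random

    def run_agent(serotonin=1.0, steps=2000, lr=0.1, temp=0.1):
        # Two actions with different expected payoffs; the agent must learn which is better.
        true_rewards = [0.3, 0.7]
        q = [0.0, 0.0]                     # learned value estimates
        earned = 0.0
        for _ in range(steps):
            # softmax action selection: small value differences mean near-random choices
            weights = [math.exp(v / temp) for v in q]
            a = random.choices([0, 1], weights=weights)[0]
            r = 1.0 if random.random() < true_rewards[a] else 0.0
            earned += r
            # the hypothetical serotonin-like factor scales how strongly reward
            # drives learning; stuck low, the estimates stay flat and choices
            # degenerate toward indifferent, low-payoff picking
            q[a] += lr * (serotonin * r - q[a])
        return earned / steps

    print("healthy agent    :", run_agent(serotonin=1.0))   # roughly 0.69
    print("'depressed' agent:", run_agent(serotonin=0.1))   # roughly 0.54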

Related: Do Androids Dream of Electric Sheep?


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by SomeGuy (5632) on Monday April 16 2018, @03:32PM (#667660)

    There seems to be a strange amount of disagreement here. First of all, that line was from the comedy movie "Short Circuit". It was supposed to be funny. (Perhaps you are an AI and can't laugh at my lame joke? :) )

    There is the very real problem that human emotions are the result of complex bio-chemical reactions. Can they be emulated on a silicon chip? Given enough processing and electrical power, sure. But it is still emulation. Ask any MAME user: even emulating silicon on silicon loses something. And then there is the bigger problem: what practical purpose does that serve? There may be a few narrow niche answers, such as better understanding the human condition, but I hope no one would want to ride in a self-driving car that genuinely, consciously hates them.

    Then there is the more general problem with "AI": what it "learns" is potentially garbage that just happens to do what is expected. There is an interesting Isaac Asimov short story, "Reason", that illustrates this quite well. In it, robots are in charge of a power source that could destroy the entire Earth. The robots malfunction and develop a religion around what they do, yet in the end they appear to perform their job perfectly, even though they do everything only because of that religion.

    Even if it works, should you trust it? How does it behave when the unexpected happens? What about edge cases that weren't explicitly tested for? Can you be sure it will behave consistently in all circumstances?
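
    To make the "garbage that happens to do what is expected" point concrete, here is a toy illustration (my own example, nothing from the article; all the names in it are made up): a trivial learner that picks whichever single feature best matches the label on its training set. A feature that is only coincidentally correlated with the label during training wins, the learner looks perfect, and then it falls apart the moment deployment breaks the coincidence.

    import random

    def best_feature(data):
        # data: list of (features, label); return the index of the single
        # feature that agrees with the label most often on this data set
        n = len(data[0][0])
        scores = [sum(f[i] == y for f, y in data) for i in range(n)]
        return scores.index(max(scores))

    # feature 0 is the "real" signal (agrees with the label 90% of the time);
    # feature 1 is a coincidence of how the training set happened to be collected
    def make_example(coincidence_holds):
        y = random.randint(0, 1)
        f0 = y if random.random() < 0.9 else 1 - y
        f1 = y if coincidence_holds else random.randint(0, 1)
        return ([f0, f1], y)

    train = [make_example(coincidence_holds=True) for _ in range(500)]
    deploy = [make_example(coincidence_holds=False) for _ in range(500)]

    chosen = best_feature(train)   # latches onto the coincidental feature 1
    accuracy = sum(f[chosen] == y for f, y in deploy) / len(deploy)
    print("chose feature", chosen, "- deployment accuracy:", accuracy)   # roughly a coin flip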

    All of that still doesn't change the fact that the computers in use today revolve around the classic von Neumann architecture. If there are any "neuromorphic architecture" computers, or the like, in production anywhere, please post factual details; I very well may have missed the memo. But it is pointless to speculate about what people like the military may be using in secret. They MAY be using alien technology too; you can't prove otherwise.

    Which actually brings me to another issue. Emotions are very human-specific. They have evolved over billions of years and are shared by some, but not all, animals on this planet. An alien species would likely have a completely different set of "emotions". They may not be able to laugh or cry, but could still be intelligent and even sentient. The point is, emotions are not necessarily needed, and the desire to place such emotions on computers is simply anthropomorphizing.

    Specifically, depression exists in animals as an indirect way to eliminate underperforming members. Members that cannot meet goals such as collecting food or reproducing may become "depressed": lethargic and slow, which makes them easier for predators to pick off, or inclined toward riskier behaviour such as more dangerous paths to food. The trait is a group trait, passed on because the surviving group benefits from the removal of the individual. What would be the logic of emulating this in program code? I would think there would be much more efficient direct algorithms.
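
    For comparison, a "direct algorithm" for the same job might be nothing more than a cutoff rule (a sketch of my own, not anything from the article; the function and parameter names are invented): track the recent payoff of each activity and drop the ones that no longer pay, with no simulated mood anywhere.

    def prune_tasks(payoff_history, cutoff=0.2, window=10):
        # payoff_history: {task_name: [recent payoffs]}; keep only the tasks
        # whose recent average payoff clears the cutoff -- no mood involved
        keep = []
        for task, payoffs in payoff_history.items():
            recent = payoffs[-window:]
            average = sum(recent) / len(recent) if recent else 0.0
            if average >= cutoff:
                keep.append(task)
        return keep

    tasks = {"forage_east": [0.9, 0.8, 0.7], "forage_west": [0.1, 0.0, 0.1]}
    print(prune_tasks(tasks))   # ['forage_east']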

    Anyway, "AI" has been a marketing buzzword for a very, very long time now. Yet, like flying cars, it has yet to deliver anything meaningful. It always takes time for younger generations to become callous toward such buzzwords, so this idea will continue to get thrown around. Of course, you could just sell an empty box with the letters "AI" printed on the front with a bunch of flashing blue LEDs and most idiots would happily buy it.
