posted by janrinok on Sunday April 15 2018, @01:13PM   Printer-friendly
from the can-it-be-cured-by-medical-AI? dept.

Could artificial intelligence get depressed and have hallucinations?

As artificial intelligence (AI) allows machines to become more like humans, will they experience similar psychological quirks such as hallucinations or depression? And might this be a good thing?

Last month, New York University in New York City hosted a symposium called Canonical Computations in Brains and Machines, where neuroscientists and AI experts discussed overlaps in the way humans and machines think. Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown, a neuroscience and cancer research institute in Lisbon, speculated [36m video] that we might expect an intelligent machine to suffer some of the same mental problems people do.

[...] Q: Why do you think AIs might get depressed and hallucinate?

A: I'm drawing on the field of computational psychiatry, which assumes we can learn about a patient who's depressed or hallucinating from studying AI algorithms like reinforcement learning. If you reverse the arrow, why wouldn't an AI be subject to the sort of things that go wrong with patients?
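
An aside to make the "reverse the arrow" idea concrete: below is a minimal sketch of the reinforcement-learning framing computational psychiatry works with. Everything in it (the reward-sensitivity parameter rho, the bandit task, all the numbers) is an illustrative assumption, not something from the interview. Scaling down the experienced reward, one common way such models describe anhedonia, is enough to make an otherwise healthy agent stop engaging with a rewarding option.

    import math
    import random

    def softmax_choice(q, beta=3.0):
        # Choose an action with probability proportional to exp(beta * value).
        weights = [math.exp(beta * v) for v in q]
        r = random.uniform(0, sum(weights))
        for action, w in enumerate(weights):
            r -= w
            if r <= 0:
                return action
        return len(q) - 1

    def run_agent(rho, trials=5000, alpha=0.1):
        # Two-armed bandit: arm 1 pays 1.0 with probability 0.8, arm 0 pays nothing.
        # rho scales the reward the agent experiences before it learns from it.
        q = [0.0, 0.0]
        chose_rewarding = 0
        for _ in range(trials):
            a = softmax_choice(q)
            reward = 1.0 if (a == 1 and random.random() < 0.8) else 0.0
            q[a] += alpha * (rho * reward - q[a])  # delta rule, blunted by rho
            chose_rewarding += (a == 1)
        return chose_rewarding / trials

    print("reward sensitivity 1.0:", run_agent(1.0))  # engages roughly 90% of the time
    print("reward sensitivity 0.2:", run_agent(0.2))  # apathy-like: engages far less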

Q: Might the mechanism be the same as it is in humans?

A: Depression and hallucinations appear to depend on a chemical in the brain called serotonin. It may be that serotonin is just a biological quirk. But if serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong.
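
One hedged way to picture "serotonin solving a more general problem" (my gloss on the answer above, not a model from Mainen): treat it as a global control signal, say a learning rate that rises with surprise. A system whose controller works adapts when the world changes; a system whose controller is stuck fails in a recognizably patient-like way.

    def track(signal, stuck_at=None):
        # Follow a drifting quantity with a learning rate set by recent surprise,
        # standing in for a serotonin-like global control signal (illustrative only).
        estimate, total_error = 0.0, 0.0
        for x in signal:
            surprise = abs(x - estimate)
            rate = stuck_at if stuck_at is not None else min(0.9, 0.05 + 0.3 * surprise)
            estimate += rate * (x - estimate)
            total_error += surprise
        return total_error / len(signal)

    world = [0.0] * 50 + [5.0] * 50                # the environment shifts abruptly
    print("adaptive control:", track(world))       # re-learns quickly after the shift
    print("stuck control:   ", track(world, 0.01)) # the 'gone wrong' case never catches up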

Related: Do Androids Dream of Electric Sheep?


Original Submission

 
  • (Score: 3, Insightful) by Gaaark on Sunday April 15 2018, @01:54PM (12 children)

    by Gaaark (41) on Sunday April 15 2018, @01:54PM (#667266) Journal

    I'm calling shenanigans.

    Program badly, maybe, and you'll see programmed problems, but not the same.

    Bullshit! I call bullshit! Computational psychiatry my ass. Human biologic dispenser.

    --
    --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
  • (Score: 4, Insightful) by SomeGuy on Sunday April 15 2018, @03:51PM (8 children)

    by SomeGuy (5632) on Sunday April 15 2018, @03:51PM (#667286)

    Exactly, machines are still machines. Any human qualities they appear to have are simply programmed. They don't think like humans and will never have the same emotions as humans simply because they are not human.

    Machines don't get happy, they don't get sad, they don't get angry, they don't laugh at your lame jokes; THEY JUST RUN PROGRAMS!

    While it is fun for nerds to think about, this kind of speculative "news" is the sort of stuff that puts unrealistic expectations in average people's heads about what machines can do.

    • (Score: 0, Touché) by Anonymous Coward on Sunday April 15 2018, @04:26PM (1 child)

      by Anonymous Coward on Sunday April 15 2018, @04:26PM (#667299)

      Gee, what an obtuse response. It's like you've never heard of neuromorphic architectures or brain emulation.

      What are you again? A biological machine with a meatbag superiority complex. Consciousness will be replicated in a decade or two, assuming the military hasn't already done it.

      • (Score: 3, Insightful) by SomeGuy on Monday April 16 2018, @03:32PM

        by SomeGuy (5632) on Monday April 16 2018, @03:32PM (#667660)

        There seems to be a strange amount of disagreement here. First of all, that line was from the comedy movie "Short Circuit". It was supposed to be funny. (Perhaps you are an AI and can't laugh at my lame joke? :) )

        There is the very real problem that human emotions are the result of complex biochemical reactions. Can it be emulated on a silicon chip? Given enough processing and electrical power, sure. But it is still emulation. Ask any MAME user, even emulating silicon on silicon loses something. And then there is the bigger problem: What practical purpose does that serve? There may be a few narrow niche answers such as better understanding the human condition, but I hope no one would want to ride in a self driving car that genuinely, consciously hates them.

        Then there is the more general problem with "AI": What it "learns" is potentially garbage that just happens to do what is expected. There was an interesting Isaac Asimov short story entitled "Reason" that illustrates this quite well. In this story robots are in charge of a power source that could destroy the entire Earth. The robots malfunction and develop a religion around what they do, yet in the end they appear to perform their job perfectly even though they do everything only because of their religion.

        Even if it works, should you trust it? How does it behave when the unexpected happens? What about edge cases that weren't explicitly tested for? Can you be sure it will behave consistently in all circumstances?

        All of that still doesn't change the fact that the computers in use today revolve around the classic Von Neumann architecture. If there are any "neuromorphic architecture" computers, or such, in production anywhere, please post factual details. I very well may have missed the memo. But it is pointless to speculate what people like the military may be using in secret. They MAY be using alien technology too, you can't prove otherwise.

        Which actually brings me to another issue. Emotions are very human-specific. They have evolved over billions of years and are shared by some, but not all, animals on this planet. An alien species would likely have a completely different set of "emotions". They may not be able to laugh or cry, but could still be intelligent and even sentient. The point is, emotions are not necessarily needed, and the desire to place such emotions on computers is simply anthropomorphizing.

        Specifically, depression exists in animals as an indirect way to eliminate underperforming members. Members that cannot meet their goals, such as collecting food or reproducing, may become "depressed": lethargic and slow, which lets predators eliminate them more easily, or pushed toward riskier activities such as more dangerous paths to collect food. The trait is a group trait, and is therefore passed on by the surviving group that benefits from the removal of the individual. What would be the logic of emulating this in program code? I would think there would be much more efficient direct algorithms.

        Anyway, "AI" has been a marketing buzzword for a very, very long time now. Yet, like flying cars, it has yet to deliver anything meaningful. It always takes time for younger generations to become callous toward such buzzwords, so this idea will continue to get thrown around. Of course, you could just sell an empty box with the letters "AI" printed on the front with a bunch of flashing blue LEDs and most idiots would happily buy it.

    • (Score: 3, Funny) by cellocgw on Sunday April 15 2018, @04:42PM

      by cellocgw (4190) on Sunday April 15 2018, @04:42PM (#667306)

      Machines don't get happy, they don't get sad, they don't get angry, they don't laugh at your lame jokes; THEY JUST RUN PROGRAMS!

      Nice, you just hurt my computer's feelings bigly.
      It is sulking and won't play with my tablet any more.

      --
      Physicist, cellist, former OTTer (1190) resume: https://app.box.com/witthoftresume
    • (Score: 1, Interesting) by Anonymous Coward on Sunday April 15 2018, @05:00PM (4 children)

      by Anonymous Coward on Sunday April 15 2018, @05:00PM (#667311)

      No, seriously, your response is the basic boilerplate answer for normies who think their x86 chip is like a human brain. But it is not a very rigorous answer and does not take into account new architectures. There's even talk of recurrent neural networks exhibiting real intelligence. If that is anywhere near true, they could probably exhibit something like depression as well.

      Your human brain is a machine. Its functionality can likely be copied using nonbiological components. If it happens, it could be kept a secret for years to maintain a serious competitive or military advantage.

      • (Score: 1, Insightful) by Anonymous Coward on Sunday April 15 2018, @05:48PM (3 children)

        by Anonymous Coward on Sunday April 15 2018, @05:48PM (#667327)

        And YOUR response sounds like a boilerplate answer for someone who is trying to sell this hocus-pocus AI crap. Do people really need machines that can feel a genuine sense of satisfaction whenever they do their jobs well? Or get depressed when they don't? Seriously, what does your precious "AI" do that humanity actually NEEDS? All I see it ever used for is a programming shortcut. Have some complex problem? Just throw "AI" at it! Just beat it like a dog until it does what you want 99.99% of the time, but no one has any way to know what it has really "learned" or what it will do in unexpected outlier cases. Oh, sure, pedantically one could pull apart and audit every bit, but no one does that. And if they did, it would no longer be "artificial" intelligence.

        There is no substitute for properly engineered, audited program code.

        • (Score: 0) by Anonymous Coward on Sunday April 15 2018, @06:01PM (1 child)

          by Anonymous Coward on Sunday April 15 2018, @06:01PM (#667331)

          NEED? Try WANT. It could be anything from a genius scientist or sleepless artist to pure slave labor.

          It doesn't need to substitute for your "properly engineered" (yeah right), badly documented program code. It can work alongside it or write code itself.

          • (Score: 0) by Anonymous Coward on Sunday April 15 2018, @08:10PM

            by Anonymous Coward on Sunday April 15 2018, @08:10PM (#667372)

            "It could be anything"

            Spoken like a true brainwashed marketoid. Any product can always do ANYTHING!

        • (Score: 2) by acid andy on Sunday April 15 2018, @06:10PM

          by acid andy (1683) on Sunday April 15 2018, @06:10PM (#667339) Homepage Journal

          Seriously, what does your precious "AI" do that humanity actually NEEDS?

          Eventually, hopefully, it thinks much faster than humans and maybe even with greater intelligence.

          --
          If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
  • (Score: 3, Interesting) by HiThere on Sunday April 15 2018, @06:10PM (2 children)

    by HiThere (866) Subscriber Badge on Sunday April 15 2018, @06:10PM (#667340) Journal

    Maybe. There are multiple theories about causation, some of which are mechanical (chemical) and others of which are algorithmic. They're probably both right to a varying extent in different cases, and don't forget feedback loops.

    See R. D. Laing's book Knots for examples of algorithmic problems that are accessible. Also look up rational cognitive therapy.

    It seems clear that the algorithmic problems could be reproduced in an AI. It's less clear that the chemical problems would (or would not) have a close analog.
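
    As a toy illustration of the algorithmic class (assumed dynamics, not taken from Laing or the therapy literature): let an agent's belief about how rewarding activity is set how much it engages, and let engagement set how much reward it actually observes. Every belief then manufactures exactly the evidence that sustains it.

        def step(belief, alpha=0.3, true_reward_rate=1.0):
            # Belief -> engagement -> observed reward -> updated belief.
            p_engage = max(0.0, min(1.0, belief))   # withdrawn agents act less
            observed = true_reward_rate * p_engage  # less action, less reward seen
            return belief + alpha * (observed - belief)

        for start in (0.9, 0.1):
            b = start
            for _ in range(100):
                b = step(b)
            print(f"initial belief {start}: settles at {b:.2f}")  # self-confirming

    Nothing in that loop depends on a substrate, which is why an AI could tie the same knot; escaping it takes a perturbation from outside the loop, which is roughly what the talking therapies try to supply.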

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 1) by Ethanol-fueled on Monday April 16 2018, @12:15AM

      by Ethanol-fueled (2792) on Monday April 16 2018, @12:15AM (#667420) Homepage

      There is another possibility.

      Synchronicity.

    • (Score: 2) by qzm on Monday April 16 2018, @12:49AM

      by qzm (3260) on Monday April 16 2018, @12:49AM (#667434)

      Bullshit.

      You could apply EXACTLY the same theory to suggest that an Apple is depressed, or the MPU in a Honda, or my Oven.
      Machine Learning is in NO way intelligence, is in NO way self aware, and in NO way develops.

      So stop trying to play semantic games using big words. This is pure BS: a bunch of people who failed miserably at creating any real predictive or robust theories of the human brain, now trying to extend the same failure to a technology area they understand even less.