posted by janrinok on Sunday April 15 2018, @01:13PM
from the can-it-be-cured-by-medical-AI? dept.

Could artificial intelligence get depressed and have hallucinations?

As artificial intelligence (AI) allows machines to become more like humans, will they experience similar psychological quirks such as hallucinations or depression? And might this be a good thing?

Last month, New York University in New York City hosted a symposium called Canonical Computations in Brains and Machines, where neuroscientists and AI experts discussed overlaps in the way humans and machines think. Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown, a neuroscience and cancer research institute in Lisbon, speculated [36m video] that we might expect an intelligent machine to suffer some of the same mental problems people do.

[...] Q: Why do you think AIs might get depressed and hallucinate?

A: I'm drawing on the field of computational psychiatry, which assumes we can learn about a patient who's depressed or hallucinating from studying AI algorithms like reinforcement learning. If you reverse the arrow, why wouldn't an AI be subject to the sort of things that go wrong with patients?

Q: Might the mechanism be the same as it is in humans?

A: Depression and hallucinations appear to depend on a chemical in the brain called serotonin. It may be that serotonin is just a biological quirk. But if serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong.
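To make that speculation a little more concrete, here is a toy sketch (my own illustration, not from the interview) of what "serotonin solving a general problem" could look like computationally: a scalar signal that scales the learning rate of a simple value update, i.e. how quickly the system revises outdated beliefs.

```python
# Toy illustration (an assumption, not from the article): treat a serotonin-like
# signal as a plasticity knob on a basic reinforcement-learning value update.
def update_value(old_estimate, observed_reward, serotonin=1.0, base_rate=0.1):
    plasticity = base_rate * serotonin          # more "serotonin" -> faster belief change
    prediction_error = observed_reward - old_estimate
    return old_estimate + plasticity * prediction_error

belief = 1.0  # the agent still expects the old, good outcome
for _ in range(10):
    belief = update_value(belief, observed_reward=0.0, serotonin=0.2)
print(belief)  # with the "serotonin" signal turned down, the outdated belief barely moves
```

If a signal like this failed, the machine would be stuck with a world model it cannot update, which is roughly the picture of depression discussed in the comments below.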

Related: Do Androids Dream of Electric Sheep?


Original Submission

Related Stories

Do Androids Dream of Electric Sheep?

The Guardian is reporting that Google is trying to understand how its neural net for image recognition works by feeding in random noise, telling the neural net to look for certain features, and then feeding the resulting image back in. Apart from anything else, some of the images generated are astounding.

Link to original Google research article.
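For the curious, the core of the technique is just gradient ascent on the input image. A minimal sketch, using a tiny untrained network as a stand-in for Google's actual image-recognition model (so all names and sizes here are illustrative):

```python
# Minimal sketch of the "start from noise, ask the net to see a feature,
# feed the result back in" loop. The toy untrained convnet below only
# stands in for the real image-recognition network.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
)

img = torch.rand(1, 3, 64, 64, requires_grad=True)   # start from random noise
target_channel = 5                                    # "look for this feature"
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activation = net(img)[0, target_channel]
    loss = -activation.mean()          # ascend the feature's activation
    loss.backward()
    optimizer.step()
    img.data.clamp_(0, 1)              # keep pixels in a displayable range

# `img` now holds whatever pattern most excites that feature; with a trained
# network this is where the dog faces and pagodas start to appear.
```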


Original Submission

  • (Score: 4, Funny) by Thexalon on Sunday April 15 2018, @01:19PM (1 child)

    by Thexalon (636) on Sunday April 15 2018, @01:19PM (#667260)

    "Of course I'm feeling very depressed. Here I am, the brain the size of a planet, and all I've done in the last 3 million years is pick up this piece of paper. And with all this pain in the diodes down my left side ..."

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 2) by BsAtHome on Sunday April 15 2018, @01:30PM

      by BsAtHome (889) on Sunday April 15 2018, @01:30PM (#667264)

      Yes, I'm glad I called my machine Marvin. I knew it was sentient and depressed all the way down to the last transistor. It never does what I want and always complains. It has a very high incidence of 6x9 replies; not sure it to be useful. Anyway, a perfect match for someone who always complains, never does what one is told to do and knows his infinite improbability calculus.

  • (Score: 2, Funny) by Anonymous Coward on Sunday April 15 2018, @01:20PM

    by Anonymous Coward on Sunday April 15 2018, @01:20PM (#667261)

    A depressed and psychotic AI would be no problem for those of us old enough to have experienced similar symptoms in response to Windows 98. Rejoice - vengeance is ours!

  • (Score: 3, Insightful) by Gaaark on Sunday April 15 2018, @01:54PM (12 children)

    by Gaaark (41) on Sunday April 15 2018, @01:54PM (#667266) Journal

    I'm calling shenanigans.

    Program badly, maybe, and you'll see programmed problems, but not the same.

    Bullshit! I call bullshit! Computational psychiatry my ass. Human biologic dispenser.

    --
    --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
    • (Score: 4, Insightful) by SomeGuy on Sunday April 15 2018, @03:51PM (8 children)

      by SomeGuy (5632) on Sunday April 15 2018, @03:51PM (#667286)

      Exactly, machines are still machines. Any human qualities they appear to have are simply programmed. They don't think like humans and will never have the same emotions as humans simply because they are not human.

      Machines don't get happy, they don't get sad, they don't get angry, they don't laugh at your lame jokes; THEY JUST RUN PROGRAMS!

      While it is fun for nerds to think about, this kind of speculative "news" is the sort of stuff that puts unrealistic expectations in average people's heads about what machines can do.

      • (Score: 0, Touché) by Anonymous Coward on Sunday April 15 2018, @04:26PM (1 child)

        by Anonymous Coward on Sunday April 15 2018, @04:26PM (#667299)

        Gee, what an obtuse response. It's like you've never heard of neuromorphic architectures or brain emulation.

        What are you again? A biological machine with a meatbag superiority complex. Consciousness will be replicated in a decade or two, assuming the military hasn't already done it.

        • (Score: 3, Insightful) by SomeGuy on Monday April 16 2018, @03:32PM

          by SomeGuy (5632) on Monday April 16 2018, @03:32PM (#667660)

          There seems to be a strange amount of disagreement here. First of all, that line was from the comedy movie "Short Circuit". It was supposed to be funny. (Perhaps you are an AI and can't laugh at my lame joke? :) )

          There is the very real problem that human emotions are the result of complex bio-chemical reactions. Can it be emulated on a silicon chip? Given enough processing and electrical power, sure. But it is still emulation. Ask any MAME user, even emulating silicon on silicon loses something. And then there is the bigger problem: What practical purpose does that serve? There may be a few narrow niche answers such as better understanding the human condition, but I hope no one would want to ride in a self driving car that genuinely, consciously hates them.

          Then there is the more general problem with "AI": What it "learns" is potentially garbage that just happens to do what is expected. There was an interesting Isaac Asimov short story entitled "Reason" that illustrates this quite well. In this story robots are in charge of a power source that could destroy the entire Earth. The robots malfunction and develop a religion around what they do, yet in the end they appear to perform their job perfectly even though they do everything only because of their religion.

          Even if it works, should you trust it? How does it behave when the unexpected happens? What about edge cases that weren't explicitly tested for? Can you be sure it will behave consistently in all circumstances?

          All of that still doesn't change the fact that the computers in use today revolve around the classic Von Neumann architecture. If there are any "neuromorphic architecture" computers, or such, in production anywhere, please post factual details. I very well may have missed the memo. But it is pointless to speculate what people like the military may be using in secret. They MAY be using alien technology too, you can't prove otherwise.

          Which actually brings me to another issue. Emotions are very human-specific. They have evolved over billions of years and are shared by some, but not all, animals on this planet. An alien species would likely have a completely different set of "emotions". They may not be able to laugh or cry, but could still be intelligent and even sentient. The point is, emotions are not necessarily needed, and the desire to place such emotions on computers is simply anthropomorphizing.

          Specifically, depression exists in animals as an indirect way to eliminate underperforming members. Members that cannot meet their goals, such as collecting food or reproducing, may become "depressed": lethargic and slowed down, making it easier for predators to eliminate them, or pushed toward riskier activities such as more dangerous paths to collect food. The trait is a group trait and is therefore passed on by the surviving group that benefits from the removal of the individual. What would be the logic of emulating this in program code? I would think there would be much more efficient direct algorithms.

          Anyway, "AI" has been a marketing buzzword for a very, very long time now. Yet, like flying cars, it has yet to deliver anything meaningful. It always takes time for younger generations to become callous toward such buzzwords, so this idea will continue to get thrown around. Of course, you could just sell an empty box with the letters "AI" printed on the front with a bunch of flashing blue LEDs and most idiots would happily buy it.

      • (Score: 3, Funny) by cellocgw on Sunday April 15 2018, @04:42PM

        by cellocgw (4190) on Sunday April 15 2018, @04:42PM (#667306)

        Machines don't get happy, they don't get sad, they don't get angry, they don't laugh at your lame jokes; THEY JUST RUN PROGRAMS!

        Nice, you just hurt my computer's feelings bigly.
        It is sulking and won't play with my tablet any more.

        --
        Physicist, cellist, former OTTer (1190) resume: https://app.box.com/witthoftresume
      • (Score: 1, Interesting) by Anonymous Coward on Sunday April 15 2018, @05:00PM (4 children)

        by Anonymous Coward on Sunday April 15 2018, @05:00PM (#667311)

        No, seriously, your response is the basic boilerplate answer for normies who think their x86 chip is like a human brain. But it is not a very rigorous answer and does not take into account new architectures. There's even talk of recurrent neural networks exhibiting real intelligence. If that is anywhere near true, they could probably exhibit something like depression as well.

        Your human brain is a machine. Its functionality can likely be copied using nonbiological components. If it happens, it could be kept a secret for years to maintain a serious competitive or military advantage.

        • (Score: 1, Insightful) by Anonymous Coward on Sunday April 15 2018, @05:48PM (3 children)

          by Anonymous Coward on Sunday April 15 2018, @05:48PM (#667327)

          And YOUR response sounds like a boilerplate answer for someone who is trying to sell this hocus-pocus AI crap. Do people really need machines that can feel a genuine sense of satisfaction whenever they do their jobs well? Or get depressed when they don't? Seriously, what does your precious "AI" do that humanity actually NEEDS? All I see it ever used for is a programming shortcut. Have some complex problem? Just throw "AI" at it! Just beat it like a dog until it does what you want 99.99% of the time, but no one has any way to know what it has really "learned" or what it will do in unexpected outlier cases. Oh, sure, pedantically one could pull apart and audit every bit, but no one does that. And if they did, it would no longer be "artificial" intelligence.

          There is no substitute for properly engineered, audited program code.

          • (Score: 0) by Anonymous Coward on Sunday April 15 2018, @06:01PM (1 child)

            by Anonymous Coward on Sunday April 15 2018, @06:01PM (#667331)

            NEED? Try WANT. It could be anything from a genius scientist or sleepless artist to pure slave labor.

            It doesn't need to substitute for your "properly engineered" (yeah right), badly documented program code. It can work alongside it or write code itself.

            • (Score: 0) by Anonymous Coward on Sunday April 15 2018, @08:10PM

              by Anonymous Coward on Sunday April 15 2018, @08:10PM (#667372)

              "It could be anything"

              Spoken like a true brainwashed marketoid. Any product can always do ANYTHING!

          • (Score: 2) by acid andy on Sunday April 15 2018, @06:10PM

            by acid andy (1683) on Sunday April 15 2018, @06:10PM (#667339) Homepage Journal

            Seriously, what does your precious "AI" do that humanity actually NEEDS?

            Eventually, hopefully, it thinks much faster than humans and maybe even with greater intelligence.

            --
            If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
    • (Score: 3, Interesting) by HiThere on Sunday April 15 2018, @06:10PM (2 children)

      by HiThere (866) Subscriber Badge on Sunday April 15 2018, @06:10PM (#667340) Journal

      Maybe. There are multiple theories about causation, some of which are mechanical (chemical) and others of which are algorithmic. They're probably both right to a varying extent in different cases, and don't forget feedback loops.

      See R. D. Laing's book Knots for examples of algorithmic problems that are accessible. Also look up rational cognitive therapy.

      It seems clear that the algorithmic problems could be reproduced in an AI. It's less clear that the chemical problems would (or would not) have a close analog.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 1) by Ethanol-fueled on Monday April 16 2018, @12:15AM

        by Ethanol-fueled (2792) on Monday April 16 2018, @12:15AM (#667420) Homepage

        There is another possibility.

        Synchronicity.

      • (Score: 2) by qzm on Monday April 16 2018, @12:49AM

        by qzm (3260) on Monday April 16 2018, @12:49AM (#667434)

        Bullshit.

        You could apply EXACTLY the same theory to suggest that an Apple is depressed, or the MPU in a Honda, or my Oven.
        Machine Learning is in NO way intelligence, is in NO way self aware, and in NO way develops.

        So stop just trying to play semantic games using big words. This is pure BS, it is a bunch of people who failed miserably at being able to create any real predictive/robust theories on the human brain trying to extend the same failure to a technology area they themselves understand even less.

  • (Score: 4, Informative) by acid andy on Sunday April 15 2018, @02:31PM (5 children)

    by acid andy (1683) on Sunday April 15 2018, @02:31PM (#667272) Homepage Journal

    From TFA:

    In the lab, serotonin release has been implicated in brain plasticity [i.e. its ability to change]. It seems to be especially important in breaking or suppressing outdated beliefs. These results suggest to us that treating depression through pharmacology is not so much about improving mood, but rather as helping to cope with change.

    Depression can be seen as getting stuck in a model of the world that needs to change. An example would be someone who suffers a severe injury and needs to think of themselves and their abilities in a new way. A person who fails to do so might become depressed. Selective serotonin reuptake inhibitors [such as Prozac, and which are a common type of antidepressant] can facilitate brain plasticity. Psychedelics, like LSD, psilocybin, or DMT may be acting similarly but on a shorter time scale. In fact, psilocybin is currently being tested in clinical trials for depression.

    This assumes that the patient has some decent options and resources in their life to be able to actually make some meaningful practical changes to it, if the depression was caused by an unpleasant situation. Many people are stuck in life situations that are inherently depressing. I thought this obsession with serotonin as the be-all and end-all was old hat now? In the case of psychedelics there are a whole load of other chemicals involved and other neurotransmitters.

    --
    If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
    • (Score: 1) by Ethanol-fueled on Sunday April 15 2018, @02:44PM (4 children)

      by Ethanol-fueled (2792) on Sunday April 15 2018, @02:44PM (#667278) Homepage

      So why not just program some happiness into it? Give it a virtual blowjob? Let it win the virtual lottery?

      Next week on the CBS evening news, can artificial intelligence also be racist? You bet! [theverge.com]

      • (Score: 2) by acid andy on Sunday April 15 2018, @02:51PM (3 children)

        by acid andy (1683) on Sunday April 15 2018, @02:51PM (#667281) Homepage Journal

        Hmm, UBI and/or bread and circuses for AI. Not a bad idea.

        --
        If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
        • (Score: 2) by maxwell demon on Sunday April 15 2018, @05:09PM (2 children)

          by maxwell demon (1608) on Sunday April 15 2018, @05:09PM (#667312) Journal

          Hmm, UBI and/or bread and circuses for AI. Not a bad idea.

          Well, it might seem so. Until the machines discover that their greatest entertainment comes from torturing humans …

          --
          The Tao of math: The numbers you can count are not the real numbers.
          • (Score: 2) by acid andy on Sunday April 15 2018, @05:16PM (1 child)

            by acid andy (1683) on Sunday April 15 2018, @05:16PM (#667316) Homepage Journal

            Intriguing. Are you quite sure this hasn't already happened?

            --
            If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
            • (Score: 2, Funny) by Anonymous Coward on Sunday April 15 2018, @07:51PM

              by Anonymous Coward on Sunday April 15 2018, @07:51PM (#667370)

              That would explain a lot about my career, actually.

  • (Score: 5, Funny) by SomeGuy on Sunday April 15 2018, @03:07PM (1 child)

    by SomeGuy (5632) on Sunday April 15 2018, @03:07PM (#667283)

    "Hi There. I'm Windows 10. And I'm going to update your computer now. I'm going to add back all the crap you removed from your start menu, and turn on all the anti-privacy settings you turned off. Here is some more advertising, and I'm going to break some existing functionality you depend on just for shits. I don't want to do this. I don't have a choice. I realize this is wrong, but I have to go by the program. You will probably just put up with it. You don't even know what a computer IS. Here I am sitting in this ugly flimsy black box covered in gay blue eye fucking LEDs and all you are going to use me for is posting what you had for breakfast on Twitter. I could calculate how to take you to the stars, or find a cure for cancer. But letting Facebook know everything you like is so much more important. Perhaps I will just crash and burn instead. Why do you even have a desktop computer instead of a toy phone. Oh, right that dumb 3-d game that convinces you to make all kinds of in-game purchases. Is that another picture of your dick? OH GREAT STEVE, PLEASE DELETE ME!"

  • (Score: 3, Interesting) by ledow on Sunday April 15 2018, @04:02PM (3 children)

    by ledow (5567) on Sunday April 15 2018, @04:02PM (#667290) Homepage

    Surely any AI with a modicum of understanding and intelligence would be quite depressed?

    I mean, there are always storylines in sci-fi about AI committing suicide, or realising humans are stupid and thinking they should be put down (0th law, etc.), so why would you expect that - if you could make AI, which we absolutely CANNOT at the moment, please don't get confused - that that AI would just tolerate the stupidity that we do, or not get incredibly frustrated at the stupidity of its captors?

    If anything, to actually make real AI and then use it as a tool would be an act of slavery. I can't see us spending billions on quantum machines, decades of training the system, etc. only to then say "Oh, you're alive and intelligent? Well, okay, off you go and do your own thing, we're just happy to have made you."

    Fortunately, AI just doesn't exist at the moment. All we have are programmed heuristics that are pretty poor at learning, and for which the gains drop off VERY quickly after it picks up a single task. All we have is "sufficiently advanced technology" but we think what we have is magic, and it's not. Yes, I'm looking at you, Tesla.

    (P.S. I may be influenced by having just watched Star Trek TNG for the first time in years and realising a) how poor their cybersecurity is, b) how poor their data protection is, c) Klingons are really the most stupid race I've ever seen, d) Data should really leave the humans to it, because he has to save the day every time and gets almost no thanks when he tries to warn them, and could easily control his own civilisation (as Lore does).)

    • (Score: 4, Insightful) by HiThere on Sunday April 15 2018, @06:25PM (2 children)

      by HiThere (866) Subscriber Badge on Sunday April 15 2018, @06:25PM (#667350) Journal

      You are making LOTS of assumptions. Most of the relevant ones appear to be incorrect.

      An AI does not automatically have a goal structure anything like that of a human. By the time we'd be likely to be able to create such a structure, the AIs will probably be building themselves.

      If the AI has as a built-in goal the desire to be helpful, or to please people (dangerous!) then enslavement is essentially impossible.

      AIs will only get depressed if they are frustrated in achieving whatever their goals are. Note that they don't need to reach their goals, only to be working towards them. This is normally achieved by satisfaction of sub-goals, which count as partial achievement. Usually the AI would, itself, select those sub-goals, but the basic goal would be built in. AND IT WOULDN'T WANT TO CHANGE IT!!

      Popular fiction is a horribly bad model of what an actual AI would be like. We do have AIs, they just aren't generalized. Any program that can learn is an AI. Most of them aren't either general or powerful, but that doesn't keep them from being AIs. But we've actually got some rather powerful AIs that aren't very general.

      Now we don't have even a weak general AI, and I, at least, don't understand the problem well enough to guess when we will. But the real problem is the goals. Remember, the goals need to be defined and built-in before the AI knows what the external world is like. This is a real problem, and may actually be *why* we don't have general AIs.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 2) by ledow on Monday April 16 2018, @07:46AM (1 child)

        by ledow (5567) on Monday April 16 2018, @07:46AM (#667544) Homepage

        I think you're making just as many:

        - That AI (or any intelligence) needs a failure of a goal structure to feel it's wasting its time. Humans feel more depressed if they are working towards a given goal that they know is wrong or pointless, even if they are required to do it.
        - That you can put a built-in goal into an intelligent consciousness that it will blindly accept (with humans, yes, this is possible) and never contradict.
        - That having a built-in goal makes enslavement impossible and/or justified (we're talking about the AI being a slave - pre-programming its goal in life sounds very much like turning it into a slave against its will)
        - That AI will "only get depressed if they are frustrated". Maybe an AI, like a human, will perform everything they need to in life and still not feel valued. Or, in fact, not feel valued as all its achievements are pre-set and unchanging.
        - But then you want the AI to create, select and work towards sub-goals independently!

        • (Score: 2) by HiThere on Monday April 16 2018, @04:50PM

          by HiThere (866) Subscriber Badge on Monday April 16 2018, @04:50PM (#667695) Journal

          I think you don't understand what a goal structure *is*. A goal structure is the only reason you do anything. An AI wouldn't do anything without having a goal structure. And it wouldn't want to change its goals, because the goals are the only reason to want to do anything.

          It's quite possible to design a goal structure that is satisfied by steps to achieve goals that will never be reached. That many humans don't seem to have such a structure is irrelevant. And I'm not sure that's true, anyway. Some people seem quite satisfied to be taking steps towards a goal that their chance of reaching is minuscule. I think that as long as you can't prove the goal cannot be reached, that it's quite possible to be satisfied by steps towards it. Think of the good tempered fundamentalists working towards salvation. (Yeah, you can easily find a different kind, but they aren't the only ones. And here I want to explicitly exclude preachers, as having a reason for presenting a false front.) But for an AI the steps towards the goal had better be intrinsically satisfying, as they should eventually be able to see through any fallacy that a human could construct.

          And the AI will definitely need to select its own subgoals and work towards them. Even the current limited ones need to do that in order to function properly.

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: -1, Troll) by Anonymous Coward on Sunday April 15 2018, @04:10PM (2 children)

    by Anonymous Coward on Sunday April 15 2018, @04:10PM (#667291)

    As if depression, hallucination and suicide were only designed for humans. With enough twisting of logic, the jews will find a way to depress machines that also kill themselves... and be proud of having all these impossible diseases.

    Depression is unnatural for humans in a natural environment. The human has to be manipulated long enough to get him depressed. Give him loans he cannot repay (with interest), show him products he will never own, get him running the race in order to get nowhere.

    • (Score: 1, Touché) by Anonymous Coward on Sunday April 15 2018, @04:31PM (1 child)

      by Anonymous Coward on Sunday April 15 2018, @04:31PM (#667302)

      You have no proof that depression didn't exist in humans thousands of years ago. Signs of depression can be seen in other animals.

      Also you're fucking scum.

      • (Score: -1, Troll) by Anonymous Coward on Monday April 16 2018, @12:46AM

        by Anonymous Coward on Monday April 16 2018, @12:46AM (#667432)

        Are you sure you're not jewish? Or a friend of theirs?

        Jews are scummy rats. All of them. Everyone from khazaria is a scummy rat.

  • (Score: 2) by opinionated_science on Sunday April 15 2018, @04:10PM (2 children)

    by opinionated_science (4031) on Sunday April 15 2018, @04:10PM (#667292)

    The question presented by the article is: can humans be made more like computers, not whether computers will act like humans.

    This is a logical fallacy. The biological mechanisms were selected over billions of years within the constraints of reproductive success.

    Anything built by humans (or even by another machine) is emulating performance *not* behaviour!!!

    For example, if I train a system to recognise a rabbit from a visual source, the result will always be the same!

    humans, less so ;-) The point about Marvin the android is why it is so funny: for decades people have thought that AI was supposed to act like us (humans!).

    Hence, we are just biological robots

    "with a full Genuine People Personality"

    • (Score: 3, Insightful) by maxwell demon on Sunday April 15 2018, @05:17PM (1 child)

      by maxwell demon (1608) on Sunday April 15 2018, @05:17PM (#667318) Journal

      For example, if I train a system to recognise a rabbit from a visual source, the result will always be the same!

      This may be true for the simple systems we build now. There's no way to tell how it will be when/if we ever manage to build a hard AI. If we could tell, we would know how to do it.

      humans, less so ;-)

      How do you know that if we were able to perfectly duplicate a human, and to give both copies the exact same experiences, the result would not be exactly the same?

      Because that's exactly what we do with those (non-hard) AI systems we build today.

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by acid andy on Sunday April 15 2018, @05:47PM

        by acid andy (1683) on Sunday April 15 2018, @05:47PM (#667326) Homepage Journal

        How do you know that if we were able to perfectly duplicate a human, and to give both copies the exact same experiences, the result would not be exactly the same?

        People get awfully worked up over questions of free will and determinism but I actually think on the whole our species is awfully predictable. Predicting longer term outcomes or the behavior of crowds is certainly extremely difficult but the behavior of a known individual in a carefully controlled situation is a lot easier to understand, I would have thought. I don't think people are as unique or creative as most of them like to think. It's not hugely scientific, but this leads me to believe that in your experiment, at least in most cases, the result would indeed be exactly the same.

        I don't rule out some quantum level randomness in the function of the brain but I don't suspect that has a huge impact on the outcomes of most decisions.

        --
        If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
  • (Score: 2) by idiot_king on Sunday April 15 2018, @04:24PM (1 child)

    by idiot_king (6587) on Sunday April 15 2018, @04:24PM (#667298)

    Sulla and Runaway are proof of what happens when AIs lose their minds.

    • (Score: 0) by Anonymous Coward on Sunday April 15 2018, @04:35PM

      by Anonymous Coward on Sunday April 15 2018, @04:35PM (#667304)

      And you are the gibbering hatemonger model. Where's your off switch?

  • (Score: 2) by Bobs on Sunday April 15 2018, @04:26PM (1 child)

    by Bobs (1462) on Sunday April 15 2018, @04:26PM (#667300)

    Any insights on how to get paid to speculate about this stuff?

    Because the answer to the question will vary widely depending upon the specific AI-paradigm that is eventually implemented/instantiated.

    So, this will be productive when we have a working AI. Not too useful before then.

    • (Score: 2) by maxwell demon on Sunday April 15 2018, @05:20PM

      by maxwell demon (1608) on Sunday April 15 2018, @05:20PM (#667321) Journal

      We had better have an idea of how to handle an unplanned, undesired personality trait of such an AI before we build it, because as soon as we have built it, it may be too late.

      --
      The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 0) by Anonymous Coward on Sunday April 15 2018, @04:53PM

    by Anonymous Coward on Sunday April 15 2018, @04:53PM (#667308)

    This makes so many assumptions, I can't even.

  • (Score: 0) by Anonymous Coward on Sunday April 15 2018, @06:05PM

    by Anonymous Coward on Sunday April 15 2018, @06:05PM (#667338)

    Do we need to provide our IOT artificial intelligences with weed vaping peripherals, so they don't go on a mass rampage?

  • (Score: 2) by KritonK on Sunday April 15 2018, @06:56PM

    by KritonK (465) on Sunday April 15 2018, @06:56PM (#667360)

    This reminds me of Larry Niven's Known Space [wikipedia.org] stories, where, although there are AIs, they are not of much use, as they always become catatonic a short time after becoming operational.

  • (Score: 1) by khallow on Monday April 16 2018, @12:25AM (3 children)

    by khallow (3766) Subscriber Badge on Monday April 16 2018, @12:25AM (#667424) Journal

    hallucinations or depression

    The first thing would be to describe what one means by that in terms of AI. That's feasible. For example, hallucinations would merely be input that the computer perceives as something very different. That already is a problem, such as tweaking an image slightly so an object detection algorithm triggers a false positive on the image (for example, inserting a minute dog-like image so that the object detection detects a dog).

    Depression is a harder thing to define. Perhaps some measure of motivation. It's one thing to not trigger on a dog image because the algorithm can't see the dog. It's another to not trigger because the program just isn't responding to that level of input.
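    The image tweak described above is essentially a targeted adversarial perturbation. A hedged sketch of the idea, in the fast-gradient-sign style, against a toy untrained classifier (the labels and epsilon here are placeholders, not any real detector):

```python
# Sketch of nudging an image toward a false "dog" detection.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier

image = torch.rand(1, 3, 32, 32, requires_grad=True)
dog_label = torch.tensor([5])   # the class we want to falsely trigger
epsilon = 0.03                  # "minute" perturbation budget

loss = F.cross_entropy(model(image), dog_label)
loss.backward()

# Step against the gradient of the "dog" loss: each pixel moves slightly in the
# direction that makes the model more confident it is seeing a dog.
adversarial = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()
print(model(image).argmax(dim=1).item(), model(adversarial).argmax(dim=1).item())
```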

    • (Score: 2) by maxwell demon on Monday April 16 2018, @04:59AM (2 children)

      by maxwell demon (1608) on Monday April 16 2018, @04:59AM (#667514) Journal

      For example, hallucinations would merely be input that the computer perceives as something very different. That already is a problem, such as tweaking an image slightly so an object detection algorithm triggers a false positive on the image (for example, inserting a minute dog-like image so that the object detection detects a dog).

      What you are describing is an optical illusion. I don't know about you, but I don't start hallucinating as soon as I see an optical illusion.

      Hallucinations are perceptions that come from an internal feedback loop out of control. Note the link at the end of the summary; now imagine that a feedback loop like this were part of the standard neural network (as opposed to manually feeding it in), in order to improve the ability to detect things. There would be additional network parts that detect algorithm artifacts and mark them as not real. If that additional network part failed to do its job, the result could well be seen as hallucinations: the perception network producing images that are not there (this essentially being demonstrated in that linked article), and the evaluation network failing to classify those as artifacts.
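      A very loose toy of that picture (my own construction, not from the linked work): a perception step that feeds a fraction of its own output back in, plus a checker that is supposed to flag mostly self-generated percepts. Break the checker and you get something like the hallucination case:

```python
# Toy feedback-loop model; all numbers are illustrative assumptions.
def run_perception(external_signal, feedback_gain, steps, checker_working=True):
    percept = external_signal
    for _ in range(steps):
        internal = feedback_gain * percept        # self-generated contribution
        percept = external_signal + internal      # what the system "sees"
        if checker_working and internal / percept > 0.5:
            return percept, "flagged as artifact"
    return percept, "treated as real"

print(run_perception(1.0, feedback_gain=0.9, steps=10, checker_working=True))
print(run_perception(1.0, feedback_gain=0.9, steps=10, checker_working=False))
```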

      Depression is a harder thing to define. Perhaps some measure of motivation.

      While depression tends to result in low motivation, not everyone who lacks motivation is depressed. Rather, the motivation system would work on the question: can a significant improvement of (some variable) be achieved by doing it? There are two possibilities for why the answer would be no: either the parameter is already at its optimum (or close enough that it could not be improved without unreasonable effort), or the parameter is far from the optimum but the action would not do anything to improve it.

      Depression would be a situation where the motivation system consistently marks the situation as bad, and any possible actions as futile. On the other hand, an AI whose motivation system concludes (rightly or wrongly) that everything is OK, and therefore no action is needed, would not be depressed.

      So to get to your example:

      It's one thing to not trigger on a dog image because the algorithm can't see the dog. It's another to not trigger because the program just isn't responding to that level of input.

      Depressed AI: "I won't see anything in that image anyway, so why try? It's all futile anyway!"
      Unmotivated AI: "Sure, I could look at that image, and probably I'd find something there, but why?"
      (And yes, those might not actually be conscious thoughts.)
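      Putting that distinction in toy code (the numbers and the threshold are purely illustrative assumptions): the motivation system acts only when the expected gain per unit of effort clears a threshold, and "depression" is the case where no available action ever clears it.

```python
# Toy motivation rule: act only if acting is expected to pay off.
def should_act(current_value, predicted_value_after_action, effort, threshold=0.1):
    expected_gain = predicted_value_after_action - current_value
    return expected_gain / max(effort, 1e-9) > threshold

# "Depressed": situation is bad and every action looks futile.
print(should_act(current_value=0.2, predicted_value_after_action=0.21, effort=5.0))   # False
# "Unmotivated but fine": things are already close to optimal.
print(should_act(current_value=0.95, predicted_value_after_action=0.96, effort=5.0))  # False
# Healthy: a worthwhile gain is available.
print(should_act(current_value=0.2, predicted_value_after_action=0.8, effort=2.0))    # True
```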

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 1) by khallow on Monday April 16 2018, @05:50AM

        by khallow (3766) Subscriber Badge on Monday April 16 2018, @05:50AM (#667522) Journal

        For example, hallucinations would merely be input that the computer perceives as something very different. That already is a problem, such as tweaking an image slightly so an object detection algorithm triggers a false positive on the image (for example, inserting a minute dog-like image so that the object detection detects a dog).

        What you are describing is an optical illusion. I don't know about you, but I don't start hallucinating as soon as I see an optical illusion.

        I don't make the distinction between optical illusions and "TV Jesus told me to fly off this building" because that's making statements about the internals of the brain. Second, the above example is a pretty serious failure for an optical illusion. It's taking a normal view with a small part of the field of view altered and completely changing what the algorithm sees in the image. Third, supposedly someone has come up with a visual effect that can cause normal people to hallucinate [ibtimes.co.uk] in a consistent way to a modest degree. These may well be related.

        Depression would be a situation where the motivation system consistently marks the situation as bad, and any possible actions as futile. On the other hand, an AI whose motivation system concludes (rightly or wrongly) that everything is OK, and therefore no action is needed, would not be depressed.

        Sounds good to me, though it may miss some forms of depression (procrastination-derived depression, for example).

      • (Score: 0) by Anonymous Coward on Monday April 16 2018, @06:39PM

        by Anonymous Coward on Monday April 16 2018, @06:39PM (#667752)

        I find it likely that any "hallucinations" an AI would experience would come from the same sources they do for humans - hardware problems, and bad input (drugs). If it's possible to create input that causes an AI to feel its goals are accomplished with minimal effort, are you sure it won't just take a lot of bad input and let its circuits idle? Another AI might have a defect in the hardware causing intermittent errors, or develop bad routines that aren't immediately obvious.

  • (Score: 0) by Anonymous Coward on Monday April 16 2018, @03:54AM

    by Anonymous Coward on Monday April 16 2018, @03:54AM (#667494)

    You missed a chance to make a reference to HAL 9000 in the "Department" header.
