posted by hubie on Thursday March 30 2023, @01:32AM   Printer-friendly
from the EXTERMINATE dept.

Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity:

The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.

Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing faster than people may imagine. General purpose AI is artificial intelligence with several intended and unintended uses, including speech recognition, question answering, and translation.

"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."

[...] Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.

Also at CBS News. Originally spotted on The Eponymous Pickle.

Previously: OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of


Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0, Troll) by khallow on Thursday March 30 2023, @01:35PM (4 children)

    by khallow (3766) Subscriber Badge on Thursday March 30 2023, @01:35PM (#1298873) Journal
    The magic thinking rears its head again.

    You need to stop using the word "magic" because it's nonsense. You're in denial if you think anyone needs to build their own doomsday devices. Those doomsday devices are already being built FOR them as consumer goods. There are already wifi-connected GAS ovens that can potentially be made to explode, and that's not even with AI or robots involved.

    That's why this alleged realistic scenario was second after your 12 Monkeys scenario? The only reason we're talking about wifi gas ovens is because the other scenarios were so easy to dismiss. My take is that insecure IoT will collapse long before the AI apocalypse because of how easy it is to hack.

  • (Score: 2) by EJ on Thursday March 30 2023, @02:04PM (3 children)

    by EJ (2452) on Thursday March 30 2023, @02:04PM (#1298878)

    No. It isn't magic thinking. You're simply taking things too literally and thinking inside the box. The reference to 12 Monkeys was just regarding the villain. He wanted to kill everyone. The company he worked for gave him a way to do that, so he took it.

    Stop being narrow-minded. The things we take for granted today would have been considered "magic thinking" a couple decades ago.

    You are so stuck on what you think you know about today's technology that you aren't even willing to try to conceive of what might be possible in the future. The point of the discussion is not whether or not AI WILL kill everyone, but if it's conceivable.

    My entire take on the matter is that it won't so much be AI that makes that decision. If AI develops the way those working on it expect, then it won't be the AI that needs to decide to kill people. Humans will be right there to help it along.

    The only reason I'm talking about gas ovens is because you seem to lack the imagination to entertain the thought that there could be something you haven't thought of. I picked that example because I thought it might be simple enough for you to comprehend.

    • (Score: 2) by tangomargarine on Thursday March 30 2023, @02:35PM (1 child)

      by tangomargarine (667) on Thursday March 30 2023, @02:35PM (#1298891)

      You are so stuck on what you think you know about today's technology that you aren't even willing to try to conceive of what might be possible in the future. The point of the discussion is not whether or not AI WILL kill everyone, but if it's conceivable.

      Does anybody remember that fun guy back on the Green Site who would call "space nutters" anybody who talked about manned spaceflight to other planets? "Mankind will never live on another planet. There's too much work involved. Shut up about even the idea in the far future; it's a waste of time."

      Of course the AI isn't going to materialize its own killbot factories out of thin air from the server room. Not that we need something that banal to make our lives miserable anyway...like you said, IoT things (and we already know their security is atrocious), self-driving vehicles, etc. If we hit the Singularity this will all be very easy to exploit if the AI is so inclined.

      --
      "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
      • (Score: 1) by khallow on Thursday March 30 2023, @05:28PM

        by khallow (3766) Subscriber Badge on Thursday March 30 2023, @05:28PM (#1298928) Journal

        Does anybody remember that fun guy back on the Green Site who would call "space nutters" anybody who talked about manned spaceflight to other planets? "Mankind will never live on another planet. There's too much work involved. Shut up about even the idea in the far future; it's a waste of time."

        That was Quantum Apostrophe. I see a "space nutter" post here in search so he might have been by once. He also really hated 3D printing.

        Of course the AI isn't going to materialize its own killbot factories out of thin air from the server room. Not that we need something that banal to make our lives miserable anyway...like you said, IoT things (and we already know their security is atrocious), self-driving vehicles, etc. If we hit the Singularity this will all be very easy to exploit if the AI is so inclined.

        OTOH, I don't spend my time trying to spin fantasy scenarios to try to stop technological progress.

    • (Score: 1) by khallow on Thursday March 30 2023, @05:17PM

      by khallow (3766) Subscriber Badge on Thursday March 30 2023, @05:17PM (#1298924) Journal

      The reference to 12 Monkeys was just regarding the villain. He wanted to kill everyone. The company he worked for gave him a way to do that, so he took it.

      The company he owned gave him a way to do that. Right there we have turned it from a problem that anyone can do with some equipment and an AI to tell them what to do to a very small group with very specialized knowledge.

      Stop being narrow-minded. The things we take for granted today would have been considered "magic thinking" a couple decades ago.

Like what? Sorry, technology hasn't changed that much in 20 years.

      You are so stuck on what you think you know about today's technology that you aren't even willing to try to conceive of what might be possible in the future. The point of the discussion is not whether or not AI WILL kill everyone, but if it's conceivable.

How far in the future? My take is that you are speaking of technology you don't understand. And sure, it can kill us in ways we don't yet understand. My point is that an AI capable of controlling virtually all internet-linked stuff on the planet to kill humans will take a vast amount of computing power and a very capable AI. That is the magic I believe you continue to speak of.

We are nowhere near that, and we are likely to run into all sorts of new knowledge, problems, and corrections/changes that will render our current musings irrelevant.