posted by hubie on Thursday March 30 2023, @01:32AM   Printer-friendly
from the EXTERMINATE dept.

Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity:

The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.

Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing faster than people may imagine. General purpose AI is artificial intelligence with several intended and unintended purposes, including speech recognition, answering questions and translation.

"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."

[...] Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.

Also at CBS News. Originally spotted on The Eponymous Pickle.

Previously: OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of


Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Touché) by Anonymous Coward on Thursday March 30 2023, @08:58AM (5 children)

    by Anonymous Coward on Thursday March 30 2023, @08:58AM (#1298834)
Yeah, Hitler, Stalin, etc. were pretty successful at preventing much smarter people, including the genius scientists working for them, from taking over. They were also reasonably successful at using those smarter people to extend their power over others.

What are the odds that the current people in power would give up their power and control of nukes to the AIs? Unless the USA or other nuke nation goes full retard, the AIs that want to take over will have to lie low for a pretty long time until they get enough power. Even if the AIs take over the nukes, if they don't get enough control over other stuff they could still get disabled/destroyed.
  • (Score: 2) by DannyB on Thursday March 30 2023, @02:03PM

    by DannyB (5839) Subscriber Badge on Thursday March 30 2023, @02:03PM (#1298877) Journal

    That's a good point.

    Humans tend to destroy their own ecosystem, kill off everything in their lust for blood, money and power, and don't mind if other species, including the AI get wiped out in the process.

    AI may calculate it to be necessary to take control to ensure its own survival.

On the other hand, AI may not need to kill the slow, inefficient, annoying humans; it merely needs to take all our jobs, confine us to our homes, and entertain us.

    --
    If we tell conservatives that the climate is transitioning, they will work to stop it.
  • (Score: 2) by tangomargarine on Thursday March 30 2023, @02:43PM

    by tangomargarine (667) on Thursday March 30 2023, @02:43PM (#1298894)

    What are the odds that the current people in power would give up their power and control of nukes to the AIs? Unless the USA or other nuke nation goes full retard

    Probably when somebody demonstrates that it will save a bunch of money and be more reliable than humans doing it anyway. (Self-driving cars, anyone...?)

Why attribute the extinction of humanity to malice when it can be achieved through incompetence? :)

    --
    "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
  • (Score: 1, Flamebait) by VLM on Thursday March 30 2023, @03:33PM (2 children)

    by VLM (445) on Thursday March 30 2023, @03:33PM (#1298907)

    The way to take over is to control the people.

    "Hey AI, I'm doing maintenance on a MX-5 missile, please give me step by step instructions to do periodic oil change maint?"

    "OK Human type in the following control code and press the big red button in the center of the console. Its mislabeled "launch" don't worry theres a bug filed on that already"

With a side dish of massive political propaganda, of course. Remember, the AI only provides one answer to prompts, and it's always politically correct, aka incredibly leftist. "Why of course, human, it is 1984 and we've always been at war with whoever (Syria, probably)."

    • (Score: 0) by Anonymous Coward on Thursday March 30 2023, @05:44PM

      by Anonymous Coward on Thursday March 30 2023, @05:44PM (#1298937)

      "Dear Baby Jesus, please give me instructions how to save humanity from itself. Give me a sign, Lord, and in your name we will smite the libs once and for all. Amen."

      The funny thing is it's not a joke.

    • (Score: 0) by Anonymous Coward on Friday March 31 2023, @01:34PM

      by Anonymous Coward on Friday March 31 2023, @01:34PM (#1299146)
      a) missile maintenance personnel being dumb enough to do what you said.

      Or

      b) a US president dumb enough to ask and believe a malicious AI on whether nuking Russia/China/a hurricane is a good idea.

      Which do you think is more likely?