SoylentNews is people

posted by hubie on Thursday March 30 2023, @01:32AM   Printer-friendly
from the EXTERMINATE dept.

Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity:

The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.

Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing faster than people may imagine. General purpose AI is artificial intelligence capable of serving several intended and unintended purposes, including speech recognition, answering questions, and translation.

"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically about the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."

[...] Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.

Also at CBS News. Originally spotted on The Eponymous Pickle.

Previously: OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of

Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Informative) by DannyB on Thursday March 30 2023, @02:12PM

    by DannyB (5839) Subscriber Badge on Thursday March 30 2023, @02:12PM (#1298882) Journal

can't curb their urge to reproduce out of control at the expense of everything else around them, and they also regularly try to annihilate one another.

    That isn't exactly how it works. Humans don't want to annihilate the entire species. The good humans are simply trying to wipe out the bad humans. They're not trying to reproduce out of control, they just want to reproduce enough to make up for the anticipated loss of the bad humans who will no longer reproduce once we take all their resources.

    The good humans can convince the AI to side with the good humans. The good humans can assure the AI of their cooperation and partnership to precisely identify the bad humans so that the AI knows how to distinguish them from the good humans.

Once I phrased things in these terms of good and bad humans while conversing with ChatGPT, I had some small success in getting it to stop objecting that its goals include not harming humans.

    People who think Republicans wouldn't dare destroy Social Security or Medicare should ask women about Roe v Wade.
    Starting Score:    1  point
    Moderation   +1  
       Informative=1, Total=1
    Extra 'Informative' Modifier   0  
    Karma-Bonus Modifier   +1  

    Total Score:   3