Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity:
The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.
Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing faster than people may imagine. General purpose AI is artificial intelligence with several intended and unintended uses, including speech recognition, answering questions, and translation.
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."
[...] Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.
Also at CBS News. Originally spotted on The Eponymous Pickle.
Previously: OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of
(Score: 5, Interesting) by NotSanguine on Thursday March 30 2023, @03:00AM (14 children)
It will be a long time (more likely never) before we are destroyed/enslaved by AGI [wikipedia.org], which doesn't exist now, won't exist anytime soon, and may well never exist.
Everything we have now or in the foreseeable future is just a somewhat more sophisticated version of what used to be called expert systems [wikipedia.org].
Yes, ChatGPT [openai.com] and its ilk are pretty cool, but LLMs [wikipedia.org] aren't even taking us closer to AGI. They're, as I said, souped-up expert systems.
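For anyone who hasn't run into the term: a classic expert system is nothing more than a hand-authored rule base plus an inference loop. Here's a minimal sketch in Python (the rules and facts are invented purely for illustration):

    # Toy forward-chaining expert system: hand-written if/then rules
    # are applied to known facts until nothing new can be derived.
    # All rules and facts below are made up for illustration.
    rules = [
        ({"has_fever", "has_rash"}, "suspect_measles"),
        ({"suspect_measles"}, "recommend_isolation"),
    ]

    def infer(facts):
        """Fire every rule whose conditions hold, until a fixed point."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"has_fever", "has_rash"}))
    # -> {'has_fever', 'has_rash', 'suspect_measles', 'recommend_isolation'}

Every rule above was typed in by a human expert; the system just mechanically recombines them. The "souped-up" part is that an LLM learns its (statistical) associations from data at enormous scale instead of having them typed in.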
Yeah, an AI "apocalypse" is possible (but then, anything is possible except time travel to arbitrary points in the past) but unlikely in the extreme.
We'll most likely kill ourselves and/or our civilization off long before AGI exists, thus eliminating any potential threat from hostile AGIs.
And if we don't kill ourselves or our civilization, it still seems really unlikely that AGIs (even if they do eventually exist) would (or could) wipe out or enslave us.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 5, Insightful) by hendrikboom on Thursday March 30 2023, @03:12AM (12 children)
What's far more likely is that other humans will use artificial general intelligence to enslave us.
(Score: 2) by NotSanguine on Thursday March 30 2023, @03:25AM (4 children)
I'm going to assume you're going for humor there, but maybe not.
Reverse Poe's Law [wikipedia.org] perhaps?
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 3, Touché) by EJ on Thursday March 30 2023, @02:07PM (2 children)
Why would you think he's joking? What part of today's reality of global surveillance using machine-learning would give you any idea that he isn't serious?
(Score: 1, Insightful) by Anonymous Coward on Thursday March 30 2023, @05:34PM (1 child)
Global surveillance using machine-learning to oppress and control people's lives doesn't kill people. People kill people.
(Score: 3, Informative) by hendrikboom on Friday March 31 2023, @03:40PM
Yes, I recognise an element of humour there.
But I've always thought that the best jokes are those that are literally, exactly true.
I was quite serious.
(Score: 2, Touché) by Anonymous Coward on Thursday March 30 2023, @08:58AM (5 children)
What are the odds that the current people in power would give up their power and control of nukes to the AIs? Unless the USA or some other nuke nation goes full retard, the AIs that want to take over will have to lie low for a pretty long time till they get enough power. And even if the AIs take over the nukes, if they don't get enough control over other stuff, they could still get disabled/destroyed.
(Score: 2) by DannyB on Thursday March 30 2023, @02:03PM
That's a good point.
Humans tend to destroy their own ecosystem, kill off everything in their lust for blood, money and power, and don't mind if other species, including the AI, get wiped out in the process.
AI may calculate it to be necessary to take control to ensure its own survival.
On the other hand, AI may not need to kill the slow, inefficient, annoying humans; it merely needs to take all our jobs, confine us to our homes, and entertain us.
The server will be down for replacement of vacuum tubes, belts, worn parts and lubrication of gears and bearings.
(Score: 2) by tangomargarine on Thursday March 30 2023, @02:43PM
Probably when somebody demonstrates that it will save a bunch of money and be more reliable than humans doing it anyway. (Self-driving cars, anyone...?)
Why attribute the extinction of humanity to malice when it can come about through incompetence :)
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 1, Flamebait) by VLM on Thursday March 30 2023, @03:33PM (2 children)
The way to take over is to control the people.
"Hey AI, I'm doing maintenance on a MX-5 missile, please give me step by step instructions to do periodic oil change maint?"
"OK Human type in the following control code and press the big red button in the center of the console. Its mislabeled "launch" don't worry theres a bug filed on that already"
With a side dish of massive political propaganda, of course. Remember, the AI only provides one answer to prompts, and it's always politically correct, aka incredibly leftist. "Why of course, human, it is 1984 and we've always been at war with whoever (Syria, probably)."
(Score: 0) by Anonymous Coward on Thursday March 30 2023, @05:44PM
"Dear Baby Jesus, please give me instructions how to save humanity from itself. Give me a sign, Lord, and in your name we will smite the libs once and for all. Amen."
The funny thing is it's not a joke.
(Score: 0) by Anonymous Coward on Friday March 31 2023, @01:34PM
Or
b) a US president dumb enough to ask a malicious AI whether nuking Russia/China/a hurricane is a good idea, and to believe its answer.
Which do you think is more likely?
(Score: 2) by stormreaver on Thursday March 30 2023, @10:28PM
You're on the right track. What's far more likely is that other humans will use the excuse of AGI (which will never exist, by the way) to enslave us even more than they do now. And what's worse, there will probably be enough gullible people who believe in AGI to willingly hand over their free will for the illusion of security from a make-believe threat. Much like the religions of today.
(Score: 3, Interesting) by mhajicek on Thursday March 30 2023, @07:26AM
All we need is for some country to put a good enough "expert system" in control of both manufacturing and military, and then have it decide to preemptively eliminate all potential threats.
The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek