Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity:
The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.
Hinton, who works at Google and the University of Toronto, said that general purpose AI may arrive sooner than people imagine. General purpose AI is artificial intelligence with many intended and unintended uses, including speech recognition, answering questions and translation.
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."
[...] Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.
Also at CBS News. Originally spotted on The Eponymous Pickle.
Previously: OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of
(Score: 3, Insightful) by EJ on Thursday March 30 2023, @12:36PM (5 children)
You're missing the entire point. Perhaps you've heard of botnets that carry out DDoS attacks to bring down major company websites. The people who use those botnets didn't manufacture the hardware. They didn't NEED to. It was made for them by idiot companies with no understanding of how dangerous their products could be.
Your "smart" refrigerator could be part of a botnet right now without you even knowing it. Even your phone could be infected, sending out one or two packets every few seconds. You wouldn't notice, but the aggregate of all that is extremely powerful.
Once AI-powered cars fill the streets, they're ready to be used by bad actors. My point is that we don't need to worry about AI deciding to attack humanity. HUMANS will direct them to do it.
You need to stop using the word "magic" because it's nonsense. You're in denial if you think anyone needs to build their own doomsday devices. Those doomsday devices are already being built FOR them as consumer goods. There are already wifi-connected GAS ovens that can potentially be made to explode, and that's not even with AI or robots involved.
Even USB keys have recently been weaponized to explode when plugged in. You VASTLY underestimate the capacity for technology to be subverted.
Wait for body implants to become more commonplace. Elective brain implants to pump your Tweeter feed right into your mind will eventually become reality, and then the hackers just stroke you out dead.
(Score: 0, Troll) by khallow on Thursday March 30 2023, @01:35PM (4 children)
That's why this allegedly realistic scenario came second, after your 12 Monkeys scenario? The only reason we're talking about wifi gas ovens is that the other scenarios were so easy to dismiss. My take is that insecure IoT will collapse long before any AI apocalypse, because of how easy it is to hack.
(Score: 2) by EJ on Thursday March 30 2023, @02:04PM (3 children)
No. It isn't magic thinking. You're simply taking things too literally and thinking inside the box. The reference to 12 Monkeys was just regarding the villain. He wanted to kill everyone. The company he worked for gave him a way to do that, so he took it.
Stop being narrow-minded. The things we take for granted today would have been considered "magic thinking" a couple decades ago.
You are so stuck on what you think you know about today's technology that you aren't even willing to try to conceive of what might be possible in the future. The point of the discussion is not whether or not AI WILL kill everyone, but if it's conceivable.
My entire take on the matter is that it won't so much be AI that makes that decision. If AI develops the way those working on it expect, then it won't be the AI that needs to decide to kill people. Humans will be right there to help it along.
The only reason I'm talking about gas ovens is because you seem to lack the imagination to entertain the thought that there could be something you haven't thought of. I picked that example because I thought it might be simple enough for you to comprehend.
(Score: 2) by tangomargarine on Thursday March 30 2023, @02:35PM (1 child)
Does anybody remember that fun guy back on the Green Site who would call "space nutters" anybody who talked about manned spaceflight to other planets? "Mankind will never live on another planet. There's too much work involved. Shut up about even the idea in the far future; it's a waste of time."
Of course the AI isn't going to materialize its own killbot factories out of thin air from the server room. Not that we need something that banal to make our lives miserable anyway...like you said, IoT things (and we already know their security is atrocious), self-driving vehicles, etc. If we hit the Singularity this will all be very easy to exploit if the AI is so inclined.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 1) by khallow on Thursday March 30 2023, @05:28PM
That was Quantum Apostrophe. I see a "space nutter" post here in search, so he might have been by once. He also really hated 3D printing.
OTOH, I don't spend my time spinning fantasy scenarios to try to stop technological progress.
(Score: 1) by khallow on Thursday March 30 2023, @05:17PM
The company he owned gave him a way to do that. Right there we've turned it from a problem that anyone with some equipment and an AI telling them what to do could cause, into one limited to a very small group with very specialized knowledge.
Like what? Sorry, technology hasn't changed that much in 20 years.
How far in the future? My take is that you are speaking of technology you don't understand. And sure, it can kill us in ways we don't yet understand. My point is that an AI capable of controlling virtually all internet-linked stuff on the planet in order to kill humans would take a vast amount of computing power and a very capable AI. That is the magic I believe you keep speaking of.
We are nowhere near that, and we are likely to run into all sorts of new knowledge, problems, and corrections/changes that will render our current musings irrelevant.