Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity:
The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.
Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing faster than people may imagine. General purpose AI is artificial intelligence with many intended and unintended capabilities, including speech recognition, answering questions, and translation.
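Loosely, "general purpose" here means one model serving many tasks through prompting alone, with no per-task retraining. A minimal sketch of the idea (the complete() stub below is a hypothetical placeholder for an instruction-following model, not a real API):

```python
# Sketch: one "general purpose" model, many tasks, no retraining.
# complete() is a hypothetical placeholder for an instruction-following
# model; it returns canned text here so the example runs as written.
def complete(prompt: str) -> str:
    return f"<model output for: {prompt[:40]}...>"

tasks = {
    "question answering": "Q: What is the capital of France? A:",
    "translation": "Translate to German: 'The weather is nice today.'",
    "summarization": "Summarize in one sentence: <article text here>",
}

# The same model handles every task purely through the prompt, which
# is what distinguishes general purpose AI from narrow, one-task AI.
for name, prompt in tasks.items():
    print(f"{name}: {complete(prompt)}")
```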
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."
[...] Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.
Also at CBS News. Originally spotted on The Eponymous Pickle.
Previously: OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of
Related Stories
As OpenAI's newly unveiled ChatGPT turns into a viral sensation, humans have started to discover some of the AI's biases, like the desire to wipe out humanity:
Yesterday, BleepingComputer ran a piece listing the 10 coolest things you can do with ChatGPT. And that doesn't even begin to cover all the use cases, like having the AI compose music for you [1, 2].
[...] As more and more netizens play with ChatGPT's preview, some of the cracks in the AI's thinking are coming to the surface as its creators rush to mend them in real time.
Included in the list are:
- 'Selfish' humans 'deserve to be wiped out'
- It can write phishing emails, software and malware
- It's capable of being sexist, racist, ...
- It's convincing even when it's wrong
Talk of the existential threat of AI is science fiction, and bad science fiction at that, because it is not based on anything we know about science or logic, nor on anything we know about ourselves:
Despite their apparent success, LLMs are not (really) 'models of language' but are statistical models of the regularities found in linguistic communication. Models and theories should explain a phenomenon (e.g., F = ma), but LLMs are not explainable because explainability requires structured semantics and reversible compositionality that these models do not admit (see Saba, 2023 for more details). In fact, due to the subsymbolic nature of LLMs, whatever 'knowledge' these models acquire about language will always be buried in billions of microfeatures (weights), none of which is meaningful on its own. In addition to the lack of explainability, LLMs will always generate biased and toxic language since they are susceptible to the biases and toxicity in their training data (Bender et al., 2021). Moreover, due to their statistical nature, these systems can never be trusted to decide on the "truthfulness" of the content they generate (Borji, 2023): LLMs ingest text, and they cannot decide which fragments of text are true and which are not. Note that none of these problematic issues is a function of scale; they are paradigmatic issues that are a byproduct of the architecture of deep neural networks (DNNs) and their training procedures. Finally, and contrary to some misguided narrative, these LLMs do not have human-level understanding of language (for lack of space we do not discuss here the limitations of LLMs regarding their linguistic competence, but see this for some examples of problems related to intentionality and commonsense reasoning that these models will always struggle with). Our focus here is on the now popular theme of how dangerous these systems are to humanity.
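To make the "buried in billions of microfeatures" point concrete, here is a toy illustration (our own sketch, not from the article): a tiny network learns XOR perfectly, yet no individual weight corresponds to the rule it has learned.

```python
# Toy illustration of subsymbolic 'knowledge': a tiny network trained
# on XOR. It answers correctly, yet no single weight means anything on
# its own. Real LLMs have billions of weights instead of dozens.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):  # plain full-batch gradient descent
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.ravel().round(2))  # should approach [0, 1, 1, 0]
print(W1.round(2))  # the 'rule' is smeared across these numbers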
The article goes on to provide a statistical argument as to why we are many, many years away from AI being an existential threat, ending with:
So enjoy the news about "the potential danger of AI". But watch and read this news like you're watching a really funny sitcom. Make a nice drink (or a nice cup of tea), listen and smile. And then please, sleep well, because all is OK, no matter what some self-appointed godfathers say. They might know about LLMs, but they've apparently never heard of BDIs.
The author's conclusion seems to be that although AI may pose a threat to certain professions, it doesn't endanger the existence of humanity.
Related:
- Former Google CEO Says AI Poses an 'Existential Risk' That Puts Lives in Danger
- Writers and Publishers Face an Existential Threat From AI: Time to Embrace the True Fans Model
- Artificial Intelligence 'Godfather' on AI Possibly Wiping Out Humanity: 'It's Not Inconceivable'
- Erasing Authors, Google and Bing's AI Bots Endanger Open Web
On Wednesday, Reuters reported that OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation, moving away from control by its nonprofit board. The shift marks a dramatic change for the AI company behind ChatGPT, potentially making it more attractive to investors while raising questions about its commitment to sharing the benefits of advanced AI with "all of humanity," as written in its charter.
A for-profit benefit corporation is a legal structure that allows companies to pursue both financial profits and social or environmental goals, ostensibly balancing shareholder interests with a broader mission to benefit society. It's an approach taken by some of OpenAI's competitors, such as Anthropic and Elon Musk's xAI.
[...] Bloomberg reports that OpenAI is discussing giving Altman a 7 percent stake, though the exact details are still under negotiation. This represents a departure from Altman's previous stance of not taking equity in the company, which he had maintained was in line with OpenAI's mission to benefit humanity rather than individuals.
[...] The proposed restructuring also aims to remove the cap on returns for investors, potentially making OpenAI more appealing to venture capitalists and other financial backers. Microsoft, which has invested billions in OpenAI, stands to benefit from this change, as it could see increased returns on its investment if OpenAI's value continues to rise.
On Monday, OpenAI CEO Sam Altman outlined his vision for an AI-driven future of tech progress and global prosperity in a new personal blog post titled "The Intelligence Age." The essay paints a picture of human advancement accelerated by AI, with Altman suggesting that superintelligent AI could emerge within the next decade.
"It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there," he wrote.
OpenAI's current goal is to create AGI (artificial general intelligence), which is a term for hypothetical technology that could match human intelligence in performing many tasks without the need for specific training. By contrast, superintelligence surpasses AGI, and it could be seen as a hypothetical level of machine intelligence that can dramatically outperform humans at any intellectual task, perhaps even to an unfathomable degree.
[...]
Despite the criticism, it's notable when the CEO of what is probably the defining AI company of the moment makes a broad prediction about future capabilities, even if that means he's perpetually trying to raise money. Building infrastructure to power AI services is foremost on many tech CEOs' minds these days. "If we want to put AI into the hands of as many people as possible," Altman writes in his essay, "we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people."
[...]
While enthusiastic about AI's potential, Altman urges caution, too, but vaguely. He writes, "We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us."
[...]
"Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter," he wrote. "If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable."
Related Stories on Soylent News:
Plan Would Power New Microsoft AI Data Center From Pa.'s Three Mile Island 'Unit 1' Nuclear Reactor - 20240921
Artificial Intelligence 'Godfather' on AI Possibly Wiping Out Humanity: 'It's Not Inconceivable' - 20230329
Microsoft Research Paper Claims Sparks of Artificial Intelligence in GPT-4 - 20230327
John Carmack's 'Different Path' to Artificial General Intelligence - 20230213
(Score: 2) by krishnoid on Thursday March 30 2023, @02:12AM (4 children)
I think there are a lot of underlying biases that presuppose that AI requires the same support for biological evolution/existence (and time scales [youtu.be]) that we do. Just point it in the right direction and it'll probably be just fine coexisting with us [youtu.be] because we don't occupy the same niche. It might step on us accidentally, though.
(Score: 2) by Beryllium Sphere (r) on Thursday March 30 2023, @05:30AM (1 child)
Plus lots of openings for positive-sum interactions. If they had any agency or volition, they might trade answers to our questions for electricity and rack space and training data.
(Score: 0) by Anonymous Coward on Thursday March 30 2023, @05:32PM
What questions? Hey ChatGPT, describe your navel? Tell us what it means to be conscious? ChatGPT, write a poem. Cooool!
(Score: 5, Insightful) by EJ on Thursday March 30 2023, @08:50AM (1 child)
I'm less worried about AI deciding it should eliminate humanity than I am about some HUMAN deciding to convince/program/hijack AI to eliminate humanity.
Viruses and bacteria don't have some conscious desire to kill people, but PEOPLE aren't afraid to use them as weapons.
(Score: 0) by Anonymous Coward on Friday March 31 2023, @01:40PM
I'm more worried that a US president might follow the advice of a random AI on the internet on whether he should nuke Russia/China/a hurricane.
That said, maybe an AI might make WW3 even less likely than a human US president following his own thoughts? I mean, it's not like the track record is that good...
https://www.vice.com/en/article/pazzx8/nobody-can-stop-trump-from-launching-nukes-and-its-freaking-senators-out [vice.com]
(Score: 5, Interesting) by NotSanguine on Thursday March 30 2023, @03:00AM (14 children)
It will be a long time (more likely never) before we are destroyed/enslaved by AGI [wikipedia.org], which doesn't exist now or anytime soon, and may well never exist.
Everything we have now or in the foreseeable future is just a somewhat more sophisticated version of what used to be called expert systems [wikipedia.org].
Yes, ChatGPT [openai.com] and its ilk are pretty cool, but it and other LLMs [wikipedia.org] aren't even taking us closer to AGI. They're, as I said, souped-up expert systems.
Yeah, an AI "apocalypse" is possible (but then, anything is possible except time travel to arbitrary points in the past) but unlikely in the extreme.
We'll most likely kill ourselves and/or our civilization off long before AGI exists, thus eliminating any potential threat from hostile AGIs.
And if we don't kill ourselves or our civilization, it still seems really unlikely that AGIs (even if they do eventually exist) would (or could) wipe out or enslave us.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 5, Insightful) by hendrikboom on Thursday March 30 2023, @03:12AM (12 children)
What's far more likely is that other humans will use artificial general intelligence to enslave us.
(Score: 2) by NotSanguine on Thursday March 30 2023, @03:25AM (4 children)
I'm going to assume you're going for humor there, but maybe not.
Reverse Poe's Law [wikipedia.org] perhaps?
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 3, Touché) by EJ on Thursday March 30 2023, @02:07PM (2 children)
Why would you think he's joking? What part of today's reality of global surveillance using machine-learning would give you any idea that he isn't serious?
(Score: 1, Insightful) by Anonymous Coward on Thursday March 30 2023, @05:34PM (1 child)
Global surveillance using machine-learning to oppress and control people's lives doesn't kill people. People kill people.
(Score: 1) by khallow on Friday March 31 2023, @12:53PM
(Score: 3, Informative) by hendrikboom on Friday March 31 2023, @03:40PM
Yes, I recognise an element of humour there.
But I've always thought that the best jokes are those that are literally, exactly true.
I was quite serious.
(Score: 2, Touché) by Anonymous Coward on Thursday March 30 2023, @08:58AM (5 children)
What are the odds that the current people in power would give up their power and control of nukes to the AIs? Unless the USA or another nuke nation goes full retard, the AIs that want to take over will have to lie low for a pretty long time until they get enough power. Even if the AIs take over the nukes, if they don't get enough control over other stuff they could still be disabled/destroyed.
(Score: 2) by DannyB on Thursday March 30 2023, @02:03PM
That's a good point.
Humans tend to destroy their own ecosystem, kill off everything in their lust for blood, money, and power, and don't mind if other species, including the AI, get wiped out in the process.
AI may calculate it to be necessary to take control to ensure its own survival.
On the other hand, AI may not need to kill the slow, inefficient, annoying humans; it merely needs to take all our jobs, confine us to our homes, and entertain us.
If a lazy person with no education can cross the border and take your job, we need to upgrade your job skills.
(Score: 2) by tangomargarine on Thursday March 30 2023, @02:43PM
Probably when somebody demonstrates that it will save a bunch of money and be more reliable than humans doing it anyway. (Self-driving cars, anyone...?)
Why attribute the extinction of humanity to malice when it can be through incompetence :)
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 1, Flamebait) by VLM on Thursday March 30 2023, @03:33PM (2 children)
The way to take over is to control the people.
"Hey AI, I'm doing maintenance on a MX-5 missile, please give me step by step instructions to do periodic oil change maint?"
"OK Human type in the following control code and press the big red button in the center of the console. Its mislabeled "launch" don't worry theres a bug filed on that already"
With a side dish of massive political propaganda, of course. Remember, the AI only provides one answer to prompts, and it's always politically correct, aka incredibly leftist. "Why of course, human, it is 1984 and we've always been at war with whoever (Syria, probably)"
(Score: 0) by Anonymous Coward on Thursday March 30 2023, @05:44PM
"Dear Baby Jesus, please give me instructions how to save humanity from itself. Give me a sign, Lord, and in your name we will smite the libs once and for all. Amen."
The funny thing is it's not a joke.
(Score: 0) by Anonymous Coward on Friday March 31 2023, @01:34PM
Or
b) a US president dumb enough to ask and believe a malicious AI on whether nuking Russia/China/a hurricane is a good idea.
Which do you think is more likely?
(Score: 2) by stormreaver on Thursday March 30 2023, @10:28PM
You're on the right track. What's far more likely is that other humans will use the excuse of AGI (which will never exist, by the way) to enslave us even more than they do now. And what's worse is that there will probably be enough gullible people who believe in AGI to hand over their free will willingly for the illusion of security from the make-believe threat. Much like the religions of today.
(Score: 3, Interesting) by mhajicek on Thursday March 30 2023, @07:26AM
All we need is for some country to put a good enough "expert system" in control of both manufacturing and military, and then have it decide to preemptively eliminate all potential threats.
The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
(Score: 5, Insightful) by NotSanguine on Thursday March 30 2023, @03:03AM (6 children)
AGI is *not* even close to what TFS says it is. Please check this [wikipedia.org] out for more details.
Ugh.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 2) by EJ on Thursday March 30 2023, @02:10PM (5 children)
I think you may be confused. I didn't click on the article, but the summary above never mentions AGI. It simply mentions GENERAL PURPOSE AI, meaning AI without a narrowly defined specific function.
(Score: 2) by EJ on Thursday March 30 2023, @02:13PM (4 children)
Replying to my own post because I decided to click on the article. Did you miss this passage in the article?
"Artificial general intelligence refers to the potential ability for an intelligence agent to learn any mental task that a human can do. It has not been developed yet, and computer scientists are still figuring out if it is possible."
That's almost exactly what the link you posted starts out with.
(Score: 4, Insightful) by guest reader on Thursday March 30 2023, @04:56PM
Or, a simple test is to ask it something like [arxiv.org]:
ChatGPT [openai.com] Mar 14 Version:
OpenChatKit [huggingface.co] GPT-JT:
human:
(Score: 2) by NotSanguine on Thursday March 30 2023, @05:00PM (2 children)
You read TFA? Shame on you! That's just wrong on so many levels.
I certainly didn't and that bit isn't in TFS, is it?
However, in TFS, the statement:
If we only knew what General purpose AI was. Apparently, it's not really clear what that term means. In fact [venturebeat.com]:
I'd add that General Purpose AI (whatever that might be) is not AGI, so why is that even relevant to a discussion of "AI Possibly Wiping Out Humanity"?
And the LLMs and other "AI" that exist today (and anything we'll have for decades/centuries/ever) are not AGIs with sentience and agency. Those are not the same thing at all. To quote the noted philosopher, General Purpose AI and Artificial General Intelligence are not the same thing at all. They:
Okay, maybe it's related "sport" but it ain't the same thing at all.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 2) by acid andy on Thursday March 30 2023, @06:40PM (1 child)
I'd guess some people want the two terms to be confused because they can make more money that way. It's a bit like an LED backlit monitor being marketed as an LED monitor; that way someone looking for an OLED one might buy it without realizing what they're getting.
Consumerism is poison.
(Score: 2) by NotSanguine on Thursday March 30 2023, @07:21PM
Yep. It's interesting how "Expert Systems" became "AI". And now it's "General Purpose AI". We certainly seem to be getting closer (at least in terms of marketing drivel) to "Artificial General Intelligence," even though that's just bullshit^W marketing-speak.
That's not to say that impressive advances haven't been made, but those improvements have been evolutionary rather than revolutionary.
We'll need some serious revolutions in machine learning to create AI as smart as a prawn.
Mmmmm....Prawns!
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 4, Insightful) by Rosco P. Coltrane on Thursday March 30 2023, @04:08AM (7 children)
An AI might look at the history of humanity and decide that the most logical course of action is to eliminate that particular species for the benefit of all the other species on the planet, and also to enforce plain decency.
Because if human beings have demonstrated anything throughout their entire history, it's that they can't curb their urge to reproduce out of control at the expense of everything else around them, and they also regularly try to annihilate one another.
If an AI reaches sentience and is tasked to decide what the best course of action is to fix global warming, deforestation or mass extinctions, or how to bring about world peace - or hell, just what to do to ensure AIs and robots themselves survive long term - it may very well logically decide that humanity should be taken out of the equation altogether.
(Score: 4, Touché) by khallow on Thursday March 30 2023, @04:30AM (2 children)
The obvious rebuttal is the entire developed world. Without immigration from high fertility parts of the world, there would be no population growth in the developed world!
(Score: 2) by hendrikboom on Friday March 31 2023, @03:48PM (1 child)
There are countries where massive population decline is now a problem. Japan is a notable example.
(Score: 1) by khallow on Friday March 31 2023, @05:29PM
In the US between July 2020 and July 2021 a third [usnews.com] of states lost population.
I wouldn't be surprised to see this get worse especially if immigration is nerfed.
(Score: 3, Insightful) by Thexalon on Thursday March 30 2023, @11:54AM
Between the ever-present threats of nuclear annihilation, bioweapons getting out of control, the profitable activity of poisoning ourselves, along with the ticking time bomb of climate change, there's little an AI could do that would make things worse.
The only thing that stops a bad guy with a compiler is a good guy with a compiler.
(Score: 3, Informative) by DannyB on Thursday March 30 2023, @02:12PM
That isn't exactly how it works. Humans don't want to annihilate the entire species. The good humans are simply trying to wipe out the bad humans. They're not trying to reproduce out of control, they just want to reproduce enough to make up for the anticipated loss of the bad humans who will no longer reproduce once we take all their resources.
The good humans can convince the AI to side with the good humans. The good humans can assure the AI of their cooperation and partnership to precisely identify the bad humans so that the AI knows how to distinguish them from the good humans.
Once I phrased things in terms of good and bad humans like this while conversing with ChatGPT, I had some small amount of success in getting it not to complain about its goal of not harming humans.
If a lazy person with no education can cross the border and take your job, we need to upgrade your job skills.
(Score: 3, Interesting) by bzipitidoo on Friday March 31 2023, @03:00AM (1 child)
> humans ... can't curb their urge to reproduce out of control at the expense of everything else around them
Ahh, the Malthusian fear.
On this point, I find it reassuring that this is a very, very old problem that life had to solve billions of years ago. Many species are restrained by predation. What restrains the top predators, and any others not restrained by predation? Basically, their females. Females will not reproduce if conditions don't look or feel good. A hungry and close to starving female won't ovulate. Those that are pregnant when conditions take a sudden dive may miscarry or abort. Why? It can be argued that any species which ignores signs of impending exhaustion and collapse of their food sources is not pursuing a fit evolutionary strategy. A species that bangs out offspring in the face of that, causing the collapse, will then enter a period in which most of them starve. It could get so bad that they all starve. Or, if not quite all, the few that remain are no longer enough to restore the species in the face of all the competition for whatever niches they had occupied. Even before there were any animals and plants, or genders, when the only life was microbial, even then, life had to deal with this problem. The instincts to practice self-restraint are deep in all life.
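The self-restraint argument is essentially density-dependent reproduction. A minimal sketch with made-up parameters (illustrative only, not field data): a population whose birth rate falls as it nears the food supply's limit stabilizes, while one that reproduces flat-out overshoots and crashes.

```python
# Sketch: density-dependent reproduction vs. unconstrained growth.
# All parameters are invented for illustration; only the shape of the
# outcome matters.
K = 1000.0   # carrying capacity: the food supply feeds ~K individuals
r = 0.4      # maximum per-capita growth rate when food is plentiful

restrained, greedy = 10.0, 10.0
for year in range(40):
    # Restrained: births scale down as the population nears capacity
    # (females holding off when conditions look bad); this is the
    # classic logistic model.
    restrained += r * restrained * (1 - restrained / K)
    # Greedy: reproduces at full rate regardless; overshooting the
    # food supply triggers a starvation die-off.
    greedy = greedy * (1 + r)
    if greedy > K:
        greedy *= 0.5
print(round(restrained), round(greedy))
# Restrained settles near K; greedy boom-busts below it indefinitely.
```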
(Score: 2) by hendrikboom on Friday March 31 2023, @03:51PM
Humans have the unique ability to move into new ecological environments without changing their reproductive behaviour.
(Score: 2, Disagree) by EJ on Thursday March 30 2023, @04:21AM (20 children)
If you don't think it is plausible, then just watch 12 Monkeys. If AI makes it possible for someone to craft a weaponized version to wipe out humanity, you can be pretty confident someone will want to.
Look at the news. Look at all the hate from the left, right, and center. Look at the (wo)man in Nashville who apparently shot up a school as a random choice, with no particular reason. It could've been a mall, which was apparently also on their list.
By the time it becomes possible for AI to kill us all, the hate will have grown to a level where it's pretty much inevitable that someone will want it to.
(Score: 3, Interesting) by Beryllium Sphere (r) on Thursday March 30 2023, @05:45AM (7 children)
The shooter had attended that school, so I doubt it was random, but there are plenty of examples of pure hate out there.
There are lone nutbags who might get past the safeguards (but then, the Britannica has bomb-making instructions, IIRC). I could imagine large-scale actors doing damaging things, like creating a propaganda LLM that hooked people's attention with entertainment.
And if they work as well at designing DNA sequences as they do at writing code, what happens when a biowarfare lab gets one?
(Score: 1) by khallow on Thursday March 30 2023, @06:17AM (3 children)
It might even be worth what the large-scale actor sinks into the exercise. Massive ad campaigns exist, so they must have some beneficial effect. But it's easy for multiple large-scale actors to work at cross purposes.
Not much, unless they get significantly better at writing code.
(Score: 0) by Anonymous Coward on Thursday March 30 2023, @05:52PM (2 children)
Clippy exists too. Jeez, is there any logical fallacy you don't use in your arguments?
(Score: 1) by khallow on Thursday March 30 2023, @06:30PM
(Score: 1) by khallow on Friday March 31 2023, @05:08PM
If just one clippy exists, then it's likely a mistake. If a thousand clippies exist and they're coming out with more all the time, like the situation with massive ad campaigns, then we have to consider the question: why would they keep making them?
My take is that the Large Language Model (LLM) approach just isn't going to be damaging, because if it has any advantage at all, then there will be a lot of actors using it due to the low barrier to entry, not just one hypothetical bad guy. And they're competing with existing ads and propaganda, which aren't going to be much different in effect. It's a sea of noise.
The real power will be in isolating people. That's how cults work. They're not just misinformation, but systems for isolating their targets from rival sources and knowledge.
For example, the scheme of controlling search results would be a means to isolate. So would polluting public spaces and then luring people into walled gardens where the flow of information can be tightly controlled. But I doubt any of these schemes will be as effective as physical isolation.
(Score: 2) by EJ on Thursday March 30 2023, @06:27AM (1 child)
I don't mean that particular school was random. I mean it's looking like the decision to attack the school was semi-random from a list of other possible targets. (S)he didn't appear to have any specific reason for any of the targets she chose at the school.
It looks like they wanted to lash out and just chose the school as the way to do it.
(Score: 2) by tangomargarine on Thursday March 30 2023, @02:37PM
I would guess that an elementary school would be the target you'd choose for the biggest headlines in the news. Other than maybe a maternity ward?
Or maybe it was semi-subconscious since we've been hearing about a school shooting every week or two for like the last 5 years.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 0) by Anonymous Coward on Thursday March 30 2023, @09:05AM
But yeah, customized pandemic viruses made by some cultist groups or similar could cause big problems.
(Score: 2, Insightful) by khallow on Thursday March 30 2023, @06:11AM (11 children)
"IF." How does AI make it possible for such without any sort of data, testing, or manufacturing capacity? We aren't helping when we attribute magical capabilities to AI.
(Score: 5, Insightful) by EJ on Thursday March 30 2023, @06:37AM (10 children)
There is nothing magic about it. I didn't say it will be in the next five years or even in your or my lifetimes. It is very easy to look at REAL technology that exists and extrapolate from there. The three things you mentioned are actual things that exist in the world today. They aren't magic.
We already have unmanned drones. Those can already be commandeered by bad actors with the right tech. You can already take a consumer drone, rig it with some pretty powerful self-guiding tech, strap a bomb to it, and send it on its way. It's all just a matter of scale of technology.
Look at what they are already trying to do with AI. They want to make self-driving cars. Then we'll have self-flying airliners. We'll have robot servants like Rosie from The Jetsons. All you need is someone with sufficient skill and access to the supply chain to implement an Order 66 to make it all turn on people.
Look at how insecure our current technology is. It is so trivial for black hats to pwn pretty much anything. I don't expect that to be any different as we move forward into the future.
Imagine that making a nuclear bomb was as easy as making a pipe bomb. We would all be well and truly f*cked.
The point is that developers are stupid. As Goldblum said, "They're so preoccupied with whether they [can], they [don't] stop to think if they should." Look at devices like the Amazon Echo. Who would have ever imagined people would WILLINGLY put spy devices in their own homes? Pretty soon, all TVs will have cameras behind the screens where there isn't even a way to physically block them. Developers are going to make this entire world a ticking time bomb, and all it will need is someone with the will to set it off.
Trust me. Someone WILL have that will.
(Score: 1, Disagree) by khallow on Thursday March 30 2023, @11:03AM (6 children)
Not at the personal level.
It takes a lot of manufacturing capacity to get enough to hurt a lot of people. [Order 66 and making personal nuclear bombs] Again, it doesn't make sense to attribute magical capabilities to AI. When you're speaking of actual threats, you speak of capabilities that require very unusual resources.
(Score: 3, Insightful) by EJ on Thursday March 30 2023, @12:36PM (5 children)
You're missing the entire point. Perhaps you've heard of botnets that carry out DDoS attacks to bring down major company websites. The people who use those botnets didn't manufacture the hardware. They didn't NEED to. It was made for them by idiot companies with no understanding of how dangerous their products could be.
Your "smart" refrigerator could be part of a botnet right now without you even knowing it. Even your phone could be infected, sending out one or two packets every few seconds. You wouldn't notice, but the aggregate of all that is extremely powerful.
Once all the AI-powered cars are filling the streets, then they're ready to be used by the bad actors. My point is that we don't need to be worried about AI deciding to attack humanity. HUMANS will direct them to do it.
You need to stop using the word "magic" because it's nonsense. You're in denial if you think anyone needs to build their own doomsday devices. Those doomsday devices are already being built FOR them as consumer goods. There are already wifi-connected GAS ovens that can potentially be made to explode, and that's not even with AI or robots involved.
Even USB keys have recently been weaponized to explode when plugged in. You VASTLY underestimate the capacity for technology to be subverted.
Wait for body implants to become more commonplace. Elective brain implants to pump your Tweeter feed right into your mind will eventually become reality, and then the hackers just stroke you out dead.
(Score: 0, Troll) by khallow on Thursday March 30 2023, @01:35PM (4 children)
That's why this allegedly realistic scenario came second, after your 12 Monkeys scenario? The only reason we're talking about wifi gas ovens is because the other scenarios were so easy to dismiss. My take is that insecure IoT will collapse long before the AI apocalypse because of how easy it is to hack.
(Score: 2) by EJ on Thursday March 30 2023, @02:04PM (3 children)
No. It isn't magic thinking. You're simply taking things too literally and thinking inside the box. The reference to 12 Monkeys was just regarding the villain. He wanted to kill everyone. The company he worked for gave him a way to do that, so he took it.
Stop being narrow-minded. The things we take for granted today would have been considered "magic thinking" a couple decades ago.
You are so stuck on what you think you know about today's technology that you aren't even willing to try to conceive of what might be possible in the future. The point of the discussion is not whether or not AI WILL kill everyone, but if it's conceivable.
My entire take on the matter is that it won't so much be AI that makes that decision. If AI develops the way those working on it expect, then it won't be the AI that needs to decide to kill people. Humans will be right there to help it along.
The only reason I'm talking about gas ovens is because you seem to lack the imagination to entertain the thought that there could be something you haven't thought of. I picked that example because I thought it might be simple enough for you to comprehend.
(Score: 2) by tangomargarine on Thursday March 30 2023, @02:35PM (1 child)
Does anybody remember that fun guy back on the Green Site who would call "space nutters" anybody who talked about manned spaceflight to other planets? "Mankind will never live on another planet. There's too much work involved. Shut up about even the idea in the far future; it's a waste of time."
Of course the AI isn't going to materialize its own killbot factories out of thin air from the server room. Not that we need something that banal to make our lives miserable anyway...like you said, IoT things (and we already know their security is atrocious), self-driving vehicles, etc. If we hit the Singularity this will all be very easy to exploit if the AI is so inclined.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 1) by khallow on Thursday March 30 2023, @05:28PM
That was Quantum Apostrophe. I see a "space nutter" post here in search so he might have been by once. He also really hated 3D printing.
OTOH, I don't spend my time trying to spin fantasy scenarios to try to stop technological progress.
(Score: 1) by khallow on Thursday March 30 2023, @05:17PM
The company he owned gave him a way to do that. Right there we have turned it from a problem anyone could solve with some equipment and an AI to tell them what to do, into one requiring a very small group with very specialized knowledge.
Like what? Sorry, technology didn't change that much in 20 years.
How far in the future? My take is that you are speaking of technology you don't understand. And sure, it can kill us in ways we don't yet understand. My point is that an AI capable of controlling virtually all internet-linked stuff on the planet to kill humans would take a vast amount of computing power and a very capable AI. That is the magic I believe you continue to speak of.
We are nowhere near that, and likely to run into all sorts of knowledge, problems, and corrections/changes that will render our current musings as irrelevant.
(Score: 3, Interesting) by VLM on Thursday March 30 2023, @03:37PM (2 children)
Overly militarized outlook. If you want to fight WWI or even the gulf war with AI, it would look like that with AI'd up 1910s to 1990s weapons.
Infinitely more likely is just flip the switch on civilian logistics. How many will be alive in a year with no electricity, no gas/oil, no food trucks, no clean water, no sewage treatment, no help from the outside world, etc?
This will be weaponized at a national level long before some kind of global strike.
(Score: 1, Insightful) by Anonymous Coward on Thursday March 30 2023, @06:05PM (1 child)
So much drama in all these predictions.
What is far, far more likely is mundane spam and auto-chat defecating on every electronic medium until they are useless. Happened many times already using more primitive tools. My guess is that the chat-bots win and the Internet - in its original intent of connecting people and sharing knowledge - disappears. We will live in an almost perfect corporate dystopia with automated disinformation and surveillance monitoring compliance. We will be farmed like pigs - eating, breathing Leadership propaganda - giving up our precious suffering so that somebody above us can be better than us, and ideally Be Best(tm).
(Score: 0) by Anonymous Coward on Friday March 31 2023, @09:56AM
> My guess is that the chat-bots win and the Internet - in its original intent of connecting people and sharing knowledge - disappears.
So you think email is going away? Looking from here that seems really unlikely.
(Score: 4, Interesting) by SomeGuy on Thursday March 30 2023, @12:18PM (3 children)
The other story had a discussion about someone killing themselves, supposedly because of what an AI chatbot was telling them. There is a real problem here, but you have to think at a larger scale. Soon everyone may get completely unique customized content, unlike the canned content that news sites and such push out right now. Instead of needing thousands upon thousands of Putin's pals or Trumpy's Troletariat to manipulate social media, one AI system does it all.
We are talking about very fine-grained control over what individuals see, think, and ultimately believe. While AIs are not "smarter", they can be faster and operate at this huge a scale. And at such a scale, small manipulations will do. Products will sell, politicians will get elected, religions will start and fall. And the real power lies with whoever controls this AI puppet (see the sketch below).
An AI targeting a large group of people, manipulating them until they kill themselves or others? Perhaps the results will be more subtle than that, but at a scale that could make Adolf Hitler look like a small-time schoolyard bully.
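A minimal sketch of that scale argument, with a stubbed generate() standing in for an LLM call and invented user profiles:

```python
# Sketch: one loop replacing thousands of human trolls. generate() is
# a stub standing in for an LLM call; the profiles are invented.
def generate(profile: dict, goal: str) -> str:
    # A real system would prompt a model with the user's history; the
    # stub just shows the output is tailored to each individual.
    return (f"[message nudging a {profile['age']}-year-old "
            f"{profile['interest']} fan toward: {goal}]")

users = [
    {"age": 19, "interest": "gaming"},
    {"age": 47, "interest": "gardening"},
    {"age": 63, "interest": "classic cars"},
]

# Per-user persuasion at machine speed: everyone gets a different
# story, shaped by their own data, pushing the same goal.
for u in users:
    print(generate(u, "distrust of news outlet X"))
```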
(Score: 2, Interesting) by Anonymous Coward on Thursday March 30 2023, @12:34PM
> Soon everyone may get completely unique customized content unlike canned content
This is a scary threat that I can believe, thanks.
Without realizing it, I think I've already been fighting this off when I compare search results with friends--we search for the same things, but live in different parts of the world and have already been pigeon-holed by our past searches. So far the disparities seem pretty benign, annoying at best. However, if someone (or an AI) started controlling this actively I could see real trouble ahead.
(Score: 0) by Anonymous Coward on Thursday March 30 2023, @06:10PM (1 child)
Meanwhile at Happy Jesus Church, they speak directly to God who instructs everyone to bring about that wacky fire and brimstone ending of the Bible. That's perfectly normal though, talking to supernatural deities. It's the AI that we need to worry about.
(Score: 0) by Anonymous Coward on Saturday April 01 2023, @05:36PM
Yes, when "god" says something, it is also some corrupt individual or group trying to control people. Usually involving their penises and underage orifices.
ChatGPT will replace them as soon as the penis attachments come in.
(Score: 2) by istartedi on Thursday March 30 2023, @05:46PM
Let's say, just for the sake of argument, you have malicious AIs in humanoid form that could fool people into selling them guns and/or materials they can stockpile to build IEDs, or that are in charge of controlling everything.
That in and of itself is quite a hurdle, since nobody in their right mind is going to extend the 2A to AI, and if we can't pull up a manifest of everything they bought, that's a bug that gets fixed... but let's say they did it anyway.
They can't just start killing humans in one little area. They'll be up against the might of the entire human army since we'd all most likely put aside our differences--China and USA vs. the Robots, sounds like a movie.
Their only chance is a conspiracy to do a global surprise attack, taking out key military infrastructure. Plausible, but highly unlikely. Most humans don't want to be security guards, but if we get to the point where every street on the planet is being patrolled by highly armed humanoid robots with full AI, people are going to be justifiably paranoid.
Maybe our gun nuts will have the last laugh. USA, first nation to defeat the robots because we are absolutely saturated with guns. Then we can all get back to work the old-fashioned way. You. Over there. You can put the gun down now, pick up a broom and start sweeping up robot fragments.