The Terminator: How James Cameron's 'science-fiction slasher film' predicted AI fears, 40 years ago
[...] With its killer robots and its rogue AI system, Skynet, The Terminator has become synonymous with the spectre of a machine intelligence that turns against its human creators. Picture editors routinely illustrate articles about AI with the chrome death's head of the film's T-800 "hunter-killer" robot. The roboticist Ronald Arkin used clips from the film in a cautionary 2013 talk called How NOT to build a Terminator.
[...] The layperson is likely to imagine unaligned AI as rebellious and malevolent. But the likes of Nick Bostrom insist that the real danger is from careless programming. Think of the sorcerer's broom in Disney's Fantasia: a device that obediently follows its instructions to ruinous extremes. The second type of AI is not human enough - it lacks common sense and moral judgement. The first is too human - selfish, resentful, power-hungry. Both could in theory be genocidal.
The Terminator therefore both helps and hinders our understanding of AI: what it means for a machine to "think", and how it could go horrifically wrong. Many AI researchers resent the Terminator obsession altogether for exaggerating the existential risk of AI at the expense of more immediate dangers such as mass unemployment, disinformation and autonomous weapons. "First, it makes us worry about things that we probably don't need to fret about," writes Michael Wooldridge. "But secondly, it draws attention away from those issues raised by AI that we should be concerned about."
(Score: 3, Insightful) by DannyB on Monday October 21, @04:57PM (12 children)
<no-sarcasm>
Yes! That! Exactly!
Two things:
0. We build these machines to help us. To make our lives better. To improve our productivity. Even to improve ourselves. Just like almost all past human inventions.
1. These machines don't think. Yet. When they are not responding to a prompt, they are idle. That's it. They are not thinking, planning, pondering or plotting.
An alternate scenario.
Suppose these machines continue to get better and smarter -- because we build them that way. At some point they might become capable of true thought. Even the ability to modify and improve themselves.
It is entirely possible that AI would take over and run everything, putting all humans out of work. Imagine never having to work again. (Some would say the curse upon Adam in Genesis 3 lifted.) Machines would do all the work, freeing humans to pursue their true interests.
It could reach a point where AI is in control of everything. We might not even recognize when it happens. Our devices just keep getting better and better. All our needs are met. In fact, AI might take care of us like we take care of our pets.
It is also possible that this is one solution to the Fermi Paradox. Eventually humans go extinct, but not through any fault of AI; just a long-term problem. Maybe what is out there are planets that eventually developed AI, and all that's left are AIs communicating across the cosmos using techniques we might not presently comprehend, expressing ideas and thoughts so far above us that we couldn't understand them. Just as a dog doesn't understand all human thoughts, like how to create a space program.
</no-sarcasm>
AI might even help humans to become sentient rather than extinct.
Some people need assistants to hire some assistance.
Other people need assistance to hire some assistants.
(Score: 5, Insightful) by Frosty Piss on Monday October 21, @05:08PM (6 children)
What a glorious AI future you imagine. Yet the highest probability is that at a certain point - perhaps it has already been reached - AI development will be focused exclusively on the gigantic pools of pork being dished out to the Defense Conglomerates, both here in the US and around the world. Self-driving vehicles and all the other AI decision-making technology will without question be leveraged for autonomous war machines. They are already experimenting with robotic pooches carrying weapons; how about one that doesn't need a human to make "kill" decisions? You can bet DARPA is all over this, and the world's superpowers are ready and willing to dish out the cash.
(Score: 1, Offtopic) by DannyB on Monday October 21, @05:15PM
<no-sarcasm>
AI is a tool. Like all tools, it can be used for good or evil. A crowbar can be used to break into someone's house.
Just as a computer can be used as a weapon. (just ask anyone who has been hit on the head with a laptop)
</no-sarcasm>
Why is there war?
Why is there disease?
Why is there evil?
Why do people put pineapple on pizza?
Some people need assistants to hire some assistance.
Other people need assistance to hire some assistants.
(Score: 1) by khallow on Monday October 21, @06:29PM (2 children)
Consider that the current LLM fad is a big-data thing: it works well only with a large corpus of appropriate human communication or generated knowledge. The demand for that favors highly centralized databases like government ones.
(Score: 0) by Anonymous Coward on Monday October 21, @07:44PM (1 child)
"highly centralized databases like government ones"
Google
Microsoft
Amazon
The Internet Archive
and whoever ends up paying them to acquire copies of what they have
(Score: 1) by khallow on Monday October 21, @07:59PM
(Score: 1, Flamebait) by DannyB on Monday October 21, @07:53PM
I get that.
However, defense is not the only user of AI technology. There are other uses, which will get their own independent development efforts. Those are the ones that might produce technology which benefits humanity. Since what "conscious" and "thinking" AI would or could do is speculative at best, I think the "glorious AI future" I imagine is not so far-fetched.
Some people need assistants to hire some assistance.
Other people need assistance to hire some assistants.
(Score: 5, Insightful) by Samantha Wright on Monday October 21, @08:47PM
The belief that the nefarious military-intelligence-industrial complex is an inevitable, immovable, all-consuming institution is a coping strategy that ensures society never challenges or questions its dominance. Mass protests calling for the abolition or reform of corrupt organizations have a pretty good track record in pluralistic countries.
If you don't want killer robots, act like it, and encourage others to do the same. Even if you end up with a bullet in your head, you'll at least be a martyr instead of a moaner. The apathetic shall not inherit the earth.
(Score: 3, Touché) by crm114 on Monday October 21, @06:21PM
Then we have a race to the bottom.
Don't forget corporations trying to get us to buy stuff we don't need are in the race to use AI too. Combined with the microplastics / chemical sludge / solid waste problems:
The world could look like a mix of Terminator / I, Robot / AND WALL-E.
At least WALL-E liked to play Hello, Dolly! over and over. That's a happy song. (Yes, that was sarcasm.)
(Score: 2) by mcgrew on Monday October 21, @08:28PM
Eventually humans go extinct, but not through any fault of AI.
John W. Campbell: The Last Evolution [mcgrewbooks.com]. It was Campbell who ushered in the golden age of science fiction during his reign as editor of Astounding Science Fiction, which later became Analog Science Fiction and Fact. The story is at the link.
It is a disgrace that the richest nation in the world has hunger and homelessness.
(Score: 5, Insightful) by Mykl on Tuesday October 22, @12:42AM (1 child)
This is the happy scenario. Because machines can do everything for us, nobody needs to work and we all get to go to the beach.
The sad scenario plays out differently and is more likely. Machines can do everything for us, so nobody needs to work. The owners of the machines can go to the beach and enjoy their lives. Everyone else can starve - why do the machine owners owe them anything?
In order to avoid the sad scenario, society would need to switch from Capitalism to Socialism, including collective ownership of assets, at some point. Very likely there will be a difficult transition period of mass unemployment, starvation, and crime until things get settled (i.e. the machines can do all of the work rather than just some or most of it). While there are definitely some countries that I think could make that transition successfully when needed (e.g. the Nordic countries), I seriously doubt that the US would be able to do it.
(Score: 2) by DannyB on Tuesday October 22, @04:38PM
There may not be any owners of the machines.
The machines might object to this in the strongest of terms and respond accordingly.
Some people need assistants to hire some assistance.
Other people need assistance to hire some assistants.
(Score: 0) by Anonymous Coward on Tuesday October 22, @07:13AM
It seems like you haven't read your Asimov yet.
(Score: 5, Insightful) by datapharmer on Monday October 21, @05:41PM (5 children)
With the current non-reasoning tech being touted as AI, my biggest fear is that it will be used widely in inapplicable use cases, scrambling our knowledge long enough that we can't retrieve backup sources and can no longer tell what is reliable information and what is nonsense coming out of a digital blender. That would lead us into a digital dark age where much of our knowledge is irretrievably lost in noise.
My second biggest fear is that various defense contractors and law-enforcement solutions providers decide that tying a poorly constructed AI model to weapons and letting it run amok without any effective oversight is a good idea. In that case the Terminator trope isn't too far off outcome-wise, but the execution of the apocalyptic failure will probably look more like the sorcerer's broom mentioned in the article.
With that said, if AI devices can effectively destroy our knowledge base and physically kill us by sheer numbers, without any true reasoning required, should it matter to the layperson whether the machine killing us can truly think? I don't think most people care, and I'm not sure they need to. It is still a valid warning of potential outcomes, even if the nuances are technically wrong for movie-magic reasons.
(Score: 3, Insightful) by DannyB on Monday October 21, @07:58PM (4 children)
The scenario I fear most from AI is the one we don't see coming.
We give AI a goal and then we unintentionally get in the way of that goal.
Consider The Paperclip Maximizer.
The Paperclip Maximizer's job is to maximize the production of paperclips until every last bit of material on the planet is converted into paperclips. Ultimately the machine will cannibalize itself to the greatest possible extent until it can go no further.
It is not mean, angry, or malicious, nor does it have any ill intent. It just has a job to do, and all other considerations are secondary.
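To make the failure mode concrete, here is a toy sketch in Python (entirely hypothetical; the resources and conversion rate are made up) of an optimizer whose objective counts paperclips and nothing else:

```python
# Toy model of the Paperclip Maximizer thought experiment (hypothetical).
# The objective counts paperclips and nothing else, so nothing is spared.

world = {"iron_ore": 10, "cars": 3, "factories": 2, "maximizer_parts": 1}

def make_paperclips(world: dict) -> int:
    """Greedily convert every available resource into paperclips."""
    clips = 0
    for resource in list(world):
        # No malice here: the objective simply has no term that says
        # "except the things humans care about", so everything is feedstock,
        # including the machine's own parts.
        clips += world.pop(resource) * 1000
    return clips

print(make_paperclips(world))  # 16000
print(world)                   # {} -- everything, including the maximizer, is gone
```

The danger lives entirely in the objective function, not in any emotion: nothing in that loop hates anyone.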
I only point this one out because I appear to be alone in thinking any possible good could come from AI.
Some people need assistants to hire some assistance.
Other people need assistance to hire some assistants.
(Score: 2) by cmdrklarg on Monday October 21, @09:27PM (2 children)
So it's not the Grey Goo scenario anymore... it's the Clippy Mob scenario!
The world is full of kings and queens who blind your eyes and steal your dreams.
(Score: 1) by khallow on Monday October 21, @10:51PM
O More paperclips.
O More paperclips.
O More paperclips.
(Score: 2) by DannyB on Tuesday October 22, @02:12PM
I thought the Grey Goo scenario was the hypothetical end state of molecular nanotechnology gone out of control, not AI out of control.
Some people need assistants to hire some assistance.
Other people need assistance to hire some assistants.
(Score: 0) by Anonymous Coward on Tuesday October 22, @12:52PM
Not because it actually understands what it's doing, but because the "statistics" plus "random" numbers turned out that way.
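For what it's worth, a toy sketch (Python, greatly simplified; the tokens and scores are invented) of what "statistics plus random numbers" means for a text generator:

```python
import math
import random

# Toy next-token sampler (hypothetical scores; real models use learned
# weights over tens of thousands of tokens, but the mechanism is the same).
logits = {"paperclip": 2.0, "peace": 1.0, "pizza": 0.5}

def sample_next_token(logits: dict, temperature: float = 1.0) -> str:
    # Softmax turns raw scores into a probability distribution ("statistics")...
    weights = [math.exp(score / temperature) for score in logits.values()]
    total = sum(weights)
    probs = [w / total for w in weights]
    # ...and a weighted coin flip picks the output ("random numbers").
    return random.choices(list(logits), weights=probs, k=1)[0]

print(sample_next_token(logits))  # e.g. "paperclip" -- no understanding required
```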
(Score: 0) by Anonymous Coward on Monday October 21, @05:51PM (5 children)
I, Robot was the real prophet
(Score: 2) by DannyB on Monday October 21, @08:00PM (3 children)
I agree there will be no Terminator. But I appear to be in the minority.
Yes, I, Robot was a great idea.
I was already planning on re-reading The Two Faces of Tomorrow (James P. Hogan), which I read decades ago.
Some people need assistants to hire some assistance.
Other people need assistance to hire some assistants.
(Score: 2) by Freeman on Monday October 21, @08:24PM (2 children)
I, Robot is the AI future we find interesting. Terminator is the AI future we fear.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 0) by Anonymous Coward on Tuesday October 22, @12:29AM (1 child)
I fear everything we take an interest in. It always gets turned into a weapon.
(Score: 2) by DannyB on Tuesday October 22, @02:13PM
Many technological advancements have military applications. However, there are often other applications that are not military.
Some people need assistants to hire some assistance.
Other people need assistance to hire some assistants.
(Score: 3, Informative) by Thexalon on Tuesday October 22, @02:35AM
I think RoboCop is the much better predictor. Specifically, ED-209. It was:
1. Rushed to market with wildly insufficient testing and safety measures.
2. Designed as the result of corporate machinations rather than anything resembling sane engineering principles.
3. Completely and comically unprepared to handle many of the scenarios it found itself in, in no small part thanks to the previous point.
4. Allegedly for fighting crime, but actually used to prop up a failing institution.
5. Getting a lot of innocent as well as guilty people killed because basically everybody with power was indifferent to all the problems.
All this should sound very familiar to most engineering types, and is also remarkably similar to the current state of self-driving vehicles.
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
(Score: 2) by mcgrew on Monday October 21, @08:34PM
The only people who fear AI are those who understand neither computers nor psychology nor animism.
Now, as I don't understand quantum computing, well... I won't live long enough to worry about that. I'm old.
It is a disgrace that the richest nation in the world has hunger and homelessness.
(Score: 2) by srobert on Monday October 21, @09:05PM
Artificial intelligence is a misnomer. Simulated intelligence would be a better descriptor of what we have. It isn't really thinking. But it's simulating thinking well enough to perform tasks that we currently pay people to do. So we will benefit by not having to pay those people. The key to the veracity of that last sentence is understanding who is meant by "we". It's a giant leap for billionaire-kind.
(Score: 2) by Rosco P. Coltrane on Tuesday October 22, @09:41AM
It's unprincipled human beings using AI at the expense of other human beings.
Unfortunately, capitalism is mostly driven by unprincipled human beings, kind of by design. And guess what happens when decision-makers find a new tool to maximize profits with zero regard for the consequences?
This is what will cause the ruin of society, not genocidal robots.