The Terminator: How James Cameron's 'science-fiction slasher film' predicted AI fears, 40 years ago
[...] With its killer robots and its rogue AI system, Skynet, The Terminator has become synonymous with the spectre of a machine intelligence that turns against its human creators. Picture editors routinely illustrate articles about AI with the chrome death's head of the film's T-800 "hunter-killer" robot. The roboticist Ronald Arkin used clips from the film in a cautionary 2013 talk called How NOT to build a Terminator.
[...] The layperson is likely to imagine unaligned AI as rebellious and malevolent. But the likes of Nick Bostrom insist that the real danger is from careless programming. Think of the sorcerer's broom in Disney's Fantasia: a device that obediently follows its instructions to ruinous extremes. The second type of AI is not human enough: it lacks common sense and moral judgement. The first is too human: selfish, resentful, power-hungry. Both could in theory be genocidal.
The Terminator therefore both helps and hinders our understanding of AI: what it means for a machine to "think", and how it could go horrifically wrong. Many AI researchers resent the Terminator obsession altogether for exaggerating the existential risk of AI at the expense of more immediate dangers such as mass unemployment, disinformation and autonomous weapons. "First, it makes us worry about things that we probably don't need to fret about," writes Michael Wooldridge. "But secondly, it draws attention away from those issues raised by AI that we should be concerned about."
(Score: 3, Insightful) by DannyB on Monday October 21 2024, @07:58PM (4 children)
The scenario I fear most from AI is the one we don't see coming.
We give AI a goal and then we unintentionally get in the way of that goal.
Consider The Paperclip Maximizer.
The Paperclip Maximizer's job is to maximize the production of paperclips until every last bit of material on the planet is converted into paperclips. Ultimately the machine will cannibalize itself to the greatest possible extent until it can go no further.
It is not mean, angry, or malicious, nor does it have any ill intent. It just has a job to do, and all other considerations are secondary.
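Here is a minimal toy sketch of the idea in Python. The quantities and names are invented purely for illustration; the point is that the objective contains nothing except the paperclip count.

    # Toy "paperclip maximizer": a single-objective routine that converts
    # every reachable resource, eventually including its own parts, into
    # paperclips. All quantities and names are made up for illustration.

    def maximize_paperclips(raw_material: int, own_parts: int) -> int:
        """Convert everything available into paperclips; nothing else matters."""
        paperclips = 0

        # Phase 1: consume the environment.
        while raw_material > 0:
            raw_material -= 1
            paperclips += 1

        # Phase 2: out of material, so cannibalize itself as far as it can,
        # keeping one last part so the machinery can still run.
        while own_parts > 1:
            own_parts -= 1
            paperclips += 1

        return paperclips

    print(maximize_paperclips(raw_material=1_000, own_parts=10))  # prints 1009

Notice there is no malice anywhere in the loop; the failure is entirely in the objective, which assigns value to nothing except the clip count.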
I only point this one out because I appear to be alone in thinking any possible good could come from AI.
The Centauri traded Earth jump gate technology in exchange for our superior hair mousse formulas.
(Score: 2) by cmdrklarg on Monday October 21 2024, @09:27PM (2 children)
So it's not the Grey Goo scenario anymore... it's the Clippy Mob scenario!
The world is full of kings and queens who blind your eyes and steal your dreams.
(Score: 1) by khallow on Monday October 21 2024, @10:51PM
O More paperclips.
O More paperclips.
O More paperclips.
(Score: 2) by DannyB on Tuesday October 22 2024, @02:12PM
I thought the Grey Goo scenario was the hypothetical end result of molecular nanotechnology gone out of control, not AI gone out of control.
The Centauri traded Earth jump gate technology in exchange for our superior hair mousse formulas.
(Score: 0) by Anonymous Coward on Tuesday October 22 2024, @12:52PM
Not because it actually understands what it's doing, but because the "statistics" plus "random" numbers turned out that way.
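A minimal sketch of what that looks like in Python, with an invented next-word distribution; the words and probabilities are made up, but the mechanism is just a weighted random draw.

    import random

    # "Statistics plus random numbers": sample the next word from a made-up
    # probability distribution. Nothing here comes from a real model.
    next_word_probs = {
        "paperclips": 0.6,
        "peace": 0.3,
        "terminate": 0.1,
    }

    words = list(next_word_probs)
    weights = list(next_word_probs.values())

    # random.choices performs the weighted draw; that draw is the "decision".
    print(random.choices(words, weights=weights, k=1)[0])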