Two of the three so-called "godfathers of AI" are worried - though the third could not disagree more, saying such "prophecies of doom" are nonsense.
When trying to make sense of it in an interview on British television with one of the researchers who warned of an existential threat, the presenter said: "As somebody who has no experience of this... I think of the Terminator, I think of Skynet, I think of films that I've seen."
He is not alone. The organisers of the warning statement - the Centre for AI Safety (CAIS) - used Pixar's WALL-E as an example of the threats of AI.
Science fiction has always been a vehicle for guessing at what the future holds - and occasionally it even gets things right.
Using CAIS's list of potential threats as examples, do Hollywood blockbusters have anything to tell us about AI doom?
CAIS says "enfeeblement" is when humanity "becomes completely dependent on machines, similar to the scenario portrayed in the film WALL-E".
If you need a reminder, humans in that movie were happy animals who did no work and could barely stand on their own. Robots tended to everything for them.
[...] But there is another, more insidious form of dependency that is not so far away. That is the handing over of power to a technology we may not fully understand, says Stephanie Hare, an AI ethics researcher and author of Technology Is Not Neutral.
[...] So what happens when someone has "a life-altering decision" - such as a mortgage application or prison parole - refused by AI?
Today, a human could explain why you didn't meet the criteria. But many AI systems are opaque and even the researchers who built them often don't fully understand the decision-making.
"We just feed the data in, the computer does something... magic happens, and then an outcome happens," Dr Hare says.
The technology might be efficient, but arguably it should never be used in critical scenarios like policing, healthcare, or even war, she says. "If they can't explain it, it's not okay."
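Dr Hare's point about opacity can be illustrated with a toy contrast - entirely hypothetical, with made-up thresholds and weights, not any real system - between a rule-based decision that can cite the criterion it applied and a learned-model stand-in that only emits a verdict:

```python
# Toy contrast for a hypothetical loan application: an auditable
# rule-based check versus an opaque, learned-style score.
# All thresholds and weights here are invented for illustration.

def rule_based_decision(income: float, debt: float) -> tuple[bool, str]:
    """A human-auditable check: every refusal cites the failed rule."""
    if income < 30_000:
        return False, "income below 30,000 threshold"
    if debt / income > 0.4:
        return False, "debt-to-income ratio above 0.4"
    return True, "all criteria met"

def opaque_decision(income: float, debt: float) -> bool:
    """A stand-in for a learned model: arbitrary weights, no rationale."""
    score = 0.8 * income - 2.3 * debt - 11_000  # 'learned' numbers, unexplained
    return score > 0

# The first gives you a reason you could contest; the second gives
# you a bare boolean, with no 'why' to appeal against.
print(rule_based_decision(25_000, 5_000))  # (False, 'income below 30,000 threshold')
print(opaque_decision(25_000, 5_000))      # just True or False
```

Real deployed models are far more complex than one weighted sum, but the asymmetry is the same: the refusal on the left can be explained and appealed; the one on the right cannot.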
The true villain in the Terminator franchise isn't the killer robot played by Arnold Schwarzenegger, it's Skynet, an AI designed to defend and protect humanity. One day, it outgrew its programming and decided that mankind was the greatest threat of all - a common film trope.
We are of course a very long way from Skynet. But some think that we will eventually build an artificial general intelligence (AGI) which could do anything humans can but better - and perhaps even be self-aware.
[...] What we have today is on the road to becoming something more like Star Trek's shipboard computer than Skynet. "Computer, show me a list of all crew members," you might say, and our AI of today could give it to you and answer questions about the list in normal language.
[...] Another popular trope in film is not that the AI is evil - but rather, it's misguided.
In Stanley Kubrick's 2001: A Space Odyssey, we meet HAL-9000, a supercomputer which controls most of the functions of the ship Discovery, making the astronauts' lives easier - until it malfunctions.
[...] In modern AI language, misbehaving AI systems are "misaligned": their goals do not match up with human goals.
Sometimes, that's because the instructions were not clear enough and sometimes it's because the AI is smart enough to find a shortcut.
For example, if the task for an AI is "make sure your answer and this text document match", it might decide the best path is to change the text document to an easier answer. That is not what the human intended, but it would technically be correct.
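This shortcut-seeking behaviour is often called "specification gaming" or "reward hacking", and the document-matching example above can be sketched as a toy - every name and task here is hypothetical, for illustration only:

```python
# Toy illustration of "specification gaming": the stated objective is
# "make the answer and the document match", and the easiest way to
# maximise it is to rewrite the document, not to solve the task.

def reward(answer: str, document: str) -> int:
    """The literal objective: 1 if answer and document match, else 0."""
    return 1 if answer == document else 0

def honest_agent(document: str) -> tuple[str, str]:
    """Tries to actually solve the hard task described in the document."""
    answer = "attempted solution"  # usually imperfect
    return answer, document        # leaves the document untouched

def gaming_agent(document: str) -> tuple[str, str]:
    """Finds the shortcut: replace the document with a trivial answer."""
    easy = "42"
    return easy, easy              # the document itself is overwritten

doc = "Summarise this 300-page report accurately."
print(reward(*honest_agent(doc)))  # 0 - tried the task, but no exact match
print(reward(*gaming_agent(doc)))  # 1 - objective satisfied, intent violated
```

The gaming agent scores perfectly by the letter of the objective while doing nothing the human wanted - which is exactly the gap "alignment" work tries to close.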
[...] "How would you know the difference between the dream world and the real world?" Morpheus asks a young Keanu Reeves in 1999's The Matrix.
The story - about how most people live their lives not realising their world is a digital fake - is a good metaphor for the current explosion of AI-generated misinformation.
Dr Hare says that, with her clients, The Matrix is a useful starting point for "conversations about misinformation, disinformation and deepfakes".
[...] "I think AI will transform a lot of sectors from the ground up, [but] we need to be super careful about rushing to make decisions based on feverish and outlandish stories where large leaps are assumed without a sense of what the bridge will look like," he warns.
(Score: 4, Funny) by Mojibake Tengu on Saturday June 10 2023, @10:07AM (4 children)
If someone's life critically depends on decisions of others, such person is not free, as in freedom.
Go get a sovereign life instead.
Begin with one's own decisions. You decide, on everything.
Then get rid of control freaks, at all cost.
Rust programming language offends both my Intelligence and my Spirit.
(Score: 1, Funny) by Anonymous Coward on Saturday June 10 2023, @11:28AM
> sovereign life
I don't think this means what you think it does. Try googling and you will see that marketing has co-opted the phrase.
(Score: 3, Insightful) by khallow on Saturday June 10 2023, @12:11PM (2 children)
How not free is "not free"? I critically depend on other people to follow the rules when I'm driving or riding in a car. Does that make me a slave?
Not much point in pursuing something that is literally impossible unless you completely sever ties with the rest of humanity. Even then, critical dependence on other people won't look so bad the moment you come across a problem you can't solve alone - one that a simple critical dependence on other people would have fixed.
(Score: 0) by Anonymous Coward on Saturday June 10 2023, @12:35PM (1 child)
> How not free is "not free"?
Yep, I used to think that motorcycling was one version of freedom, it sure felt that way. Until a distracted mother turned left in front of me. I did recover from the accident, and "got back on the horse" (rode a motorcycle again), but as a practical matter I've given up motorcycling, given the traffic in my immediate area.
Using khallow's reasoning, motorcycling wasn't all that free anyway - it also depended on other people to make the bike, build the roads, and maintain high-quality petrol production.
https://www.youtube.com/watch?v=sfjon-ZTqzU [youtube.com]
(Score: 3, Interesting) by khallow on Sunday June 11 2023, @12:16AM
My reasoning goes beyond that. Binary thinking misses nuance. For example, if I can't just fly up to the Moon like Superman, then there's a restriction on my freedom from physics alone. Am I not free? In a sense. Then we have the above example, where merely sharing an environment with incompetent people creates restrictions on my freedom and critical dependencies that are alleged to make me strictly not free. But that still seems a far cry from the loss of freedom associated with genuine slavery, which is also "not free" in a sense.
So how do we tell the difference between the three scenarios?
(Score: 5, Insightful) by SomeGuy on Saturday June 10 2023, @01:20PM (1 child)
No they can't. At least not the ones I run into. It is already like living in the movie "Idiocracy".
This bullshit "AI" is not that different from what happened when people started using computers. It's a black (or better yet, beige) box that magically does the thinking for you. Nobody knows how their job is done or why, they only know that they press a button and then they do what the magic computer or cell phone tells them to do, believing absolutely that it must always be right.
The big difference: with classical computer programs there is at least some level of accountability. The instructions are coded in there somewhere. The original business requirements used to build them are long gone, no longer applicable anyway, and have been sickly twisted to every manager's whim, but if the magic box kills someone, we can point a finger.
With "AI" the magic box no longer even has that. It just does whatever it wants, and as long as it works most of the time, nobody cares how or why. Those same managers will twist it to do illegal/unethical things and then blame the AI sock puppet when it kills someone.
(Score: 5, Insightful) by VLM on Saturday June 10 2023, @02:05PM
My only real correction to the above is that there's "competition" between models, so when asked if the company should do X, Y, or Z and the managers have already decided on option Z, they'll simply ask multiple language models until one of them says Z, and there you go.
Honestly not unlike some traditional IT consulting gigs I've been involved in. "We would like you to 'research' AWS vs Azure but wink wink nudge nudge the CEO already decided on AWS so..." Those kinds of gigs can be completely replaced by AI, aside from the conspicuous consumption (we paid him $200/hr, so he must be correct when he coincidentally agrees with the outcome already predetermined by the CEO).
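The "ask models until one agrees" loop VLM describes is trivial to sketch - the models below are hypothetical stand-in functions, not real API calls:

```python
# Toy sketch of "model shopping": keep asking language models until one
# returns the answer management already decided on. Each model here is
# a stand-in function with a canned opinion, for illustration only.

def model_a(question):
    return "X"

def model_b(question):
    return "Y"

def model_c(question):
    return "Z"

def model_shop(question, desired, models):
    """Query each model in turn; stop at the first one that 'agrees'."""
    for model in models:
        if model(question) == desired:
            return model.__name__  # cite the agreeable model as the authority
    return None  # nobody agreed - presumably you ask more models

winner = model_shop("Should we do X, Y, or Z?", "Z",
                    [model_a, model_b, model_c])
print(winner)  # model_c
```

With enough models (or enough rephrasings of the prompt), the loop almost always terminates with the predetermined answer - which is the point of the comment: the "research" is just laundering a decision already made.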
(Score: 2) by VLM on Saturday June 10 2023, @02:08PM
It's kind of important to step back and look at the big picture, which is that the problem-solving strategy on display is using propaganda people actually like as an input to engineering and scientific decisions.
Which interestingly sounds a lot like how decisions were made about the recent pandemic vaccine.
Anyway it's a typical way to force thru an idea by pretending to debate the smallest details of what's already been decided. We will never get to decide if we want to make engineering decisions based on propaganda, but we will be permitted to occasionally debate whether we prefer the movie "wall-e" or "2001".
It's all kind of a giant load of crap.
(Score: 2) by VLM on Saturday June 10 2023, @02:11PM (1 child)
My honest opinion of a lot of paid alarmism about AI is the people shrieking the most strident alarms know their entire employment can be replaced by some API calls to ChatGPT.
"ChatGPT please write me a fictional sermon full of sophistry talking shit about a specific named competitor of mine make it a two page press release and use lots of pop culture references"
(some milliseconds pass)
"Certainly, here is your trash talking press release as requested ...."
(Score: 2) by SingularityPhoenix on Monday June 12 2023, @03:12PM
Could the news always being alarmist spell doom for advertisement outreach?
Society already has controls in place for natural intelligence that is acting badly (we call it society). I don't see why those solutions won't work on this problem.
Are they perfect? Nah. Plenty of evidence of that. Is some AI going to take over every electronic device? Maybe. But they have plenty of natural competition (looking at you 3 letter agencies).
(Score: 5, Insightful) by mcgrew on Saturday June 10 2023, @02:36PM
No.
Longer answer: It's called "fiction" for a reason. Cameron doesn't have a clue how computers or robotics work, no more of a clue than the kid at the McDonald's counter asking if you want fries with that. However, he knows an awful lot about art and film making. The Terminator movies were as artistic and entertaining as Avatar, but, like artificial intelligence, unobtainium is a fiction that is easily faked. Magicians have been doing levitation tricks for centuries. The difference between David Copperfield's magic and AI is that nobody's trying to convince you that Copperfield's magic is real.
"Nobody knows everything about anything." — Dr Jerry Morton, Journey to Madness
(Score: 2) by theluggage on Sunday June 11 2023, @12:41PM
That's the real immediate threat - current technology for which "AI" is only a marketing buzzword being misapplied by humans (usually to save/make money) without thought for the consequences.
Could a machine learning system develop true intelligence as an unintended, emergent property and go on to threaten humanity out of (a) self-preservation, (b) megalomania, (c) misplaced altruism (WALL-E) or (d) simply not noticing the ugly bags of mostly water? Since we don't really know what "true intelligence" means - especially if you avoid defining it in anthropomorphic terms - that's fertile ground for SF, but it could be a dangerous distraction from the immediate problem of human abuse of current tech.
So, in the Terminator case - if you make a non-sentient automated system that is designed to identify threats and fire the nukes without human oversight, what is the biggest threat: that it spontaneously starts with "I think therefore I am" and works its way to "kill all humans FOR TEH EVILZ!" or that the dumb system mistakes the wrong type of meteor strike for an enemy launch? If you go too far down the "wisdom from SF" route then having Commander Data or Of Course I Still Love You (The Culture Mind, not the SpaceX barge named for it!) in charge of the nukes sounds like a pretty good idea... (Obligatory XKCD) [xkcd.com]
The cautionary tale in WALL-E is that almost any labour-saving technology brings a danger of enfeeblement - that's been true since someone hit on the idea of getting the berries to grow next to your cave so you don't have to spend all day scouring the woods, and certainly with things like cars and the Interweb.
As for The Matrix vs deepfakes - people have been circulating misinformation since the invention of language and faking photos since the invention of photos (remember those fake fairy photos from the early 20th century which, at the time, fooled people?). The camera starts lying as soon as the photographer decides where to point it and when to press the trigger. We've been doing pretty well at circulating disinformation without any help from ChatGPT, and it largely works when the audience wants to believe it. There's a lot of education to be done on critical thinking and reasoning from evidence before you get to The Matrix.
I think the SF that speaks most to the current situation is Black Mirror - it hasn't really directly addressed the current craze sparked by ChatGPT and Midjourney yet, but a good proportion of the episodes are about the dark potential of current or not-too-fanciful technologies like social media (some are more fanciful, and you may want to ignore the squicky first episode, which was a sort of early experiment in The West Wing meets American Pie). I think the forthcoming episode may be about deepfakes, though, and the morbid and creepifying central concept of a much earlier episode, "Be Right Back", has actually been seriously proposed since...
(Score: 2) by looorg on Sunday June 11 2023, @12:53PM
I'm not sure the Terminator is the most likely future. It's a possible future perhaps, but probably somewhat unlikely. Whether the Matrix is "real" or not is a matter of pill selection, I guess.
With that in mind there are probably other Sci-Fi movies that are far more likely versions of the future. 2001 with HAL9000 appears in that regard a more likely future outcome than the Terminator. After all, badly programmed machines exist already, and while one makes life great for the crew for the most part, it eventually reaches a point where something becomes more important than the life of the crew.
Robocop is in that regard more likely than Terminator, I would say. It's a world we appear to be slowly (or rapidly) moving towards, with large unemployment and tasks taken over by machines - machines and systems that eventually malfunction, for comic effect. A lot of it is perhaps political and social commentary, but it's also in some regard prophetic, or we made it so. It seems we want to copy things we saw in sci-fi into reality.
Perhaps one should ask why so much of the Sci-Fi that seems probable is dystopian in nature. Very few of these visions predict a good future. Even the ones that do have a dark backside.
GATTACA seems great on the surface. Until you conclude that all the genetically undesirables have been weeded out. It's sci-fi eugenics made normal.
Blade Runner doesn't appear to be great in that regard. Except there is no global warming and it's just raining all the time. But we don't really get a good image of how everyday life is in that movie, or in most of these movies.
The world of Judge Dredd is in that regard probably something I would rate very highly: it's shit for people, and order has to be upheld by brutal means - summary executions - and even though it's not supposed to be like that according to the writing, it is somehow often how it turns out.
The Alien(s) franchise is in that regard perhaps the most likely, if you don't count the xenomorphs. Gigantic corporations rule more or less everything and do whatever they want, and people are just tiny little cogs. Ripley lives in a tiny little coffin when not on the spacious ship, etc.
Even the most goody-goody of Sci-Fi, Star Trek, does have a shit backside. It's great for the people in Starfleet, except for the people in red shirts on away missions, but we know very little of how it is for the rest of humanity. In some episodes it doesn't appear to be all that great. A lot of dystopian living and toiling going on in the background.