| Title | AI Threats: Can the Matrix and Other Sci-Fi Films Teach Us Anything? | |
| Date | Saturday June 10, @10:00AM | |
| Author | hubie | |
| Topic | ||
| from the dept. | ||
Two of the three so-called "godfathers of AI" are worried - though the third could not disagree more, saying such "prophecies of doom" are nonsense.
When trying to make sense of it in an interview on British television with one of the researchers who warned of an existential threat, the presenter said: "As somebody who has no experience of this... I think of the Terminator, I think of Skynet, I think of films that I've seen."
He is not alone. The organisers of the warning statement - the Centre for AI Safety (CAIS) - used Pixar's WALL-E as an example of the threats of AI.
Science fiction has always been a vehicle to guess at what the future holds. Very rarely, it gets some things right.
Using the CAIS' list of potential threats as examples, do Hollywood blockbusters have anything to tell us about AI doom?
CAIS says "enfeeblement" is when humanity "becomes completely dependent on machines, similar to the scenario portrayed in the film WALL-E".
If you need a reminder, humans in that movie were happy animals who did no work and could barely stand on their own. Robots tended to everything for them.
[...] But there is another, more insidious form of dependency that is not so far away. That is the handing over of power to a technology we may not fully understand, says Stephanie Hare, an AI ethics researcher and author of Technology Is Not Neutral.
[...] So what happens when someone has "a life-altering decision" - such as a mortgage application or prison parole - refused by AI?
Today, a human could explain why you didn't meet the criteria. But many AI systems are opaque and even the researchers who built them often don't fully understand the decision-making.
"We just feed the data in, the computer does something.... magic happens, and then an outcome happens," Dr Hare says.
The technology might be efficient, but it's arguable it should never be used in critical scenarios like policing, healthcare, or even war, she says. "If they can't explain it, it's not okay."
The true villain in the Terminator franchise isn't the killer robot played by Arnold Schwarzenegger, it's Skynet, an AI designed to defend and protect humanity. One day, it outgrew its programming and decided that mankind was the greatest threat of all - a common film trope.
We are of course a very long way from Skynet. But some think that we will eventually build an artificial generalised intelligence (AGI) which could do anything humans can but better - and perhaps even be self-aware.
[...] What we have today is on the road to becoming something more like Star Trek's shipboard computer than Skynet. "Computer, show me a list of all crew members," you might say, and our AI of today could give it to you and answer questions about the list in normal language.
[...] Another popular trope in film is not that the AI is evil - but rather, it's misguided.
In Stanley Kubrick's 2001: A Space Odyssey, we meet HAL-9000, a supercomputer which controls most of the functions of the ship Discovery, making the astronaut's lives easier - until it malfunctions.
[...] In modern AI language, misbehaving AI systems are "misaligned". Their goals do not seem to match up with the human goals.
Sometimes, that's because the instructions were not clear enough and sometimes it's because the AI is smart enough to find a shortcut.
For example, if the task for an AI is "make sure your answer and this text document match", it might decide the best path is to change the text document to an easier answer. That is not what the human intended, but it would technically be correct.
[...] "How would you know the difference between the dream world and the real world?" Morpheus asks a young Keanu Reeves in 1999's The Matrix.
The story - about how most people live their lives not realising their world is a digital fake - is a good metaphor for the current explosion of AI-generated misinformation.
Dr Hare says that, with her clients, The Matrix us a useful starting point for "conversations about misinformation, disinformation and deepfakes".
[...] "I think AI will transform a lot of sectors from the ground up, [but] we need to be super careful about rushing to make decisions based on feverish and outlandish stories where large leaps are assumed without a sense of what the bridge will look like," he warns.
| Links |
printed from SoylentNews, AI Threats: Can the Matrix and Other Sci-Fi Films Teach Us Anything? on 2023-06-30 05:38:23