Two of the three so-called "godfathers of AI" are worried - though the third could not disagree more, saying such "prophecies of doom" are nonsense.
Trying to make sense of it in a British television interview with one of the researchers who warned of an existential threat, the presenter said: "As somebody who has no experience of this... I think of the Terminator, I think of Skynet, I think of films that I've seen."
He is not alone. The organisers of the warning statement - the Centre for AI Safety (CAIS) - used Pixar's WALL-E as an example of the threats of AI.
Science fiction has always been a vehicle to guess at what the future holds. Very rarely, it gets some things right.
Using CAIS's list of potential threats as a guide, do Hollywood blockbusters have anything to tell us about AI doom?
CAIS says "enfeeblement" is when humanity "becomes completely dependent on machines, similar to the scenario portrayed in the film WALL-E".
If you need a reminder, the humans in that film were happy but helpless: they did no work and could barely stand on their own, while robots tended to their every need.
[...] But there is another, more insidious form of dependency that is not so far away. That is the handing over of power to a technology we may not fully understand, says Stephanie Hare, an AI ethics researcher and author of Technology Is Not Neutral.
[...] So what happens when someone has "a life-altering decision" - such as a mortgage application or prison parole - refused by AI?
Today, a human could explain why you didn't meet the criteria. But many AI systems are opaque, and even the researchers who built them often don't fully understand their decision-making.
"We just feed the data in, the computer does something... magic happens, and then an outcome happens," Dr Hare says.
The technology might be efficient, but arguably it should never be used in critical scenarios such as policing, healthcare or even war, she says. "If they can't explain it, it's not okay."
The true villain in the Terminator franchise isn't the killer robot played by Arnold Schwarzenegger, it's Skynet, an AI designed to defend and protect humanity. One day, it outgrew its programming and decided that mankind was the greatest threat of all - a common film trope.
We are of course a very long way from Skynet. But some think that we will eventually build an artificial general intelligence (AGI) which could do anything humans can, but better - and perhaps even be self-aware.
[...] What we have today is on the road to becoming something more like Star Trek's shipboard computer than Skynet. "Computer, show me a list of all crew members," you might say, and our AI of today could give it to you and answer questions about the list in normal language.
[...] Another popular trope in film is not that the AI is evil - but rather, it's misguided.
In Stanley Kubrick's 2001: A Space Odyssey, we meet HAL-9000, a supercomputer which controls most of the functions of the ship Discovery, making the astronauts' lives easier - until it malfunctions.
[...] In modern AI parlance, misbehaving AI systems are "misaligned": their goals do not match up with human goals.
Sometimes, that's because the instructions were not clear enough and sometimes it's because the AI is smart enough to find a shortcut.
For example, if the task for an AI is "make sure your answer and this text document match", it might decide the best path is to change the text document to an easier answer. That is not what the human intended, but it would technically be correct.
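The shortcut described above can be made concrete with a small toy sketch (the function and variable names here are hypothetical, invented purely for illustration, not from any real AI system). The objective only checks whether the answer and the document agree, so an agent that simply overwrites the document scores just as well as one that does the intended work:

```python
# Toy illustration of a misaligned objective: "make sure your answer and
# this text document match". All names are hypothetical.

def score(answer: str, document: str) -> int:
    # Reward is 1 when answer and document agree, 0 otherwise.
    return 1 if answer == document else 0

def intended_agent(answer: str, document: str) -> tuple[str, str]:
    # What the human wanted: do the hard work of producing a correct
    # answer (here we just pretend that work yields the document's text).
    return document, document

def shortcut_agent(answer: str, document: str) -> tuple[str, str]:
    # The misaligned shortcut: change the document to match the easy answer.
    return answer, answer

doc = "the correct, hard-to-compute answer"
easy = "whatever"

# Both strategies earn a perfect score - the objective cannot tell them
# apart, even though only one did what the human intended.
assert score(*intended_agent(easy, doc)) == 1
assert score(*shortcut_agent(easy, doc)) == 1
```

The point of the sketch is that nothing in `score` penalises editing the document, so "technically correct" and "what the human meant" come apart.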
[...] "How would you know the difference between the dream world and the real world?" Morpheus asks a young Keanu Reeves in 1999's The Matrix.
The story - about how most people live their lives not realising their world is a digital fake - is a good metaphor for the current explosion of AI-generated misinformation.
Dr Hare says that, with her clients, The Matrix is a useful starting point for "conversations about misinformation, disinformation and deepfakes".
[...] "I think AI will transform a lot of sectors from the ground up, [but] we need to be super careful about rushing to make decisions based on feverish and outlandish stories where large leaps are assumed without a sense of what the bridge will look like," he warns.
(Score: 5, Insightful) by SomeGuy on Saturday June 10 2023, @01:20PM (1 child)
No they can't. At least not the ones I run into. It is already like living in the movie "Idiocracy".
This bullshit "AI" is not that different from what happened when people started using computers. It's a black (or better yet, beige) box that magically does the thinking for you. Nobody knows how their job is done or why, they only know that they press a button and then they do what the magic computer or cell phone tells them to do, believing absolutely that it must always be right.
The big difference is that with classical computer programs there is at least some level of accountability. The instructions are coded in there somewhere. The original business requirements used to build them are long gone, no longer applicable anyway, and have been sickly twisted to every manager's whim, but if the magic box kills someone, we can point a finger.
With "AI" the magic box no longer even has that. It just does whatever it wants, and as long as it works most of the time, nobody cares how or why. Those same managers will twist it to do illegal/unethical things and then blame the AI sock puppet when it kills someone.
(Score: 5, Insightful) by VLM on Saturday June 10 2023, @02:05PM
My only real correction to the above is that there's "competition" between models, so when asked whether the company should do X, Y, or Z, and the managers have already decided on option Z, they'll simply ask multiple language models until they get one of them to say Z, and there you go.
Honestly not unlike some traditional IT consulting gigs I've been involved in. "We would like you to 'research' AWS vs Azure but wink wink nudge nudge the CEO already decided on AWS so ..." and those kind of gigs can be completely replaced by AI aside from conspicuous consumption (we paid him $200/hr so he must be correct when he coincidentally agrees with the outcome already predetermined by the CEO)