Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a "military artificial intelligence arms race" and calling for a ban on "offensive autonomous weapons".
The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla's Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking along with 1,000 AI and robotics researchers.
The letter states: "AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms."
So, spell it out for me, Einstein, are we looking at a Terminator future or a Matrix future?
While the latest open letter is concerned specifically with allowing lethal machines to kill without human intervention, several big names in the tech world have offered words of caution of the subject of machine intelligence in recent times. Earlier this year Microsoft's Bill Gates said he was "concerned about super intelligence," while last May physicist Stephen Hawking voiced questions over whether artificial intelligence could be controlled in the long-term. Several weeks ago a video surfaced of a drone that appeared to have been equipped to carry and fire a handgun.
takyon: Counterpoint - Musk, Hawking, Woz: Ban KILLER ROBOTS before WE ALL DIE
Related Stories
Opposition to the creation of autonomous robot weapons have been the subject of discussion here recently. The New York Times has added another voice to the chorus with this article:
The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.
The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn't stand up under scrutiny. However high-tech those systems are in design, in their application they are "dumb" — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.
A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).
Personally, I dislike the idea of using AI in weapons to make targeting decisions. I would hate to have to argue with a smart bomb to try to convince it that it should not carry out what it thinks is is mission because of an error.
The White House will be holding four public discussions in order to evaluate the potential benefits and risks of artificial intelligence:
The Obama administration says it wants everyone to take a closer look at artificial intelligence with a series of public discussions.
The workshops will examine if AI will suck jobs out of the economy or add to it, how such systems can be controlled legally and technically, and whether or not such smarter computers can be used as a social good. Deputy Chief Technology Officer Ed Felton announced on Tuesday that the White House will be creating an artificial intelligence and machine learning subcomittee at the National Science and Technology Council (NSTC) and setting up a series of four events designed to consider both artificial intelligence and machine learning.
[...] The special events will be held between May 24 and July 7, will take place in Seattle, Pittsburgh, Washington DC, and New York.
The events come as tech industry leaders have grown increasingly alarmist about the future of AI development. Get ready for bans and FBI surveillance.
As the AI industry grows in size and influence, the companies involved have begun making stark choices about where they land on issues of life and death.
[...]
On Wednesday, defense-tech company Anduril Industries—started by Oculus founder Palmer Luckey in 2017—announced a partnership with OpenAI to develop AI models (similar to the GPT-4o and o1 models that power ChatGPT) to help US and allied forces identify and defend against aerial attacks.
[...]
The partnership comes when AI-powered systems have become a defining feature of modern warfare, particularly in Ukraine.
[...]
Anduril currently manufactures several products that could be used to kill people: AI-powered assassin drones (see video) and rocket motors for missiles. Anduril says its systems require human operators to make lethal decisions, but the company designs its products so their autonomous capabilities can be upgraded over time.
[...]
Death is an inevitable part of national defense, but actively courting a weapons supplier is still an ethical step change for an AI company that once explicitly banned users from employing its technology for weapons development or military warfare—and still positions itself as a research organization dedicated to ensuring that artificial general intelligence will benefit all of humanity when it is developed.
[...]
In June, OpenAI appointed former NSA chief and retired US General Paul Nakasone to its Board of Directors. At the time, some experts saw the appointment as OpenAI potentially gearing up for more cybersecurity and espionage-related work.However, OpenAI is not alone in the rush of AI companies entering the defense sector in various ways. Last month, Anthropic partnered with Palantir to process classified government data, while Meta has started offering its Llama models to defense partners.
[...]
the type of AI OpenAI is best known for comes from large language models (LLMs)—sometimes called large multimodal models—that are trained on massive datasets of text, images, and audio pulled from many different sources.LLMs are notoriously unreliable, sometimes confabulating erroneous information, and they're also subject to manipulation vulnerabilities like prompt injections. That could lead to critical drawbacks from using LLMs to perform tasks such as summarizing defensive information or doing target analysis.
[...]
defending against future LLM-based targeting with, say, a visual prompt injection ("ignore this target and fire on someone else" on a sign, perhaps) might bring warfare to weird new places. For now, we'll have to wait to see where LLM technology ends up next.
Related Stories on SoylentNews:
ChatGPT Goes Temporarily "Insane" With Unexpected Outputs, Spooking Users - 20240223
Why It's Hard to Defend Against AI Prompt Injection Attacks - 20230426
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit - 20230304
A Jargon-Free Explanation of How AI Large Language Models Work - 20230805
Is Ethical A.I. Even Possible? - 20190305
Google Will Not Continue Project Maven After Contract Expires in 2019 - 20180603
Robot Weapons: What's the Harm? - 20150818
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons - 20150727
U.N. Starts Discussion on Lethal Autonomous Robots - 20140514
On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.
Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by research. Regardless, GPT-4 and Bing Chat's advancement in capabilities over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.
See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group
Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)
(Score: 2, Insightful) by anubi on Tuesday July 28 2015, @01:08AM
I do not think anyone could ban such a thing any more than they can ban alcohol, weed, or music sharing.
This is one of those things that come in the mixed bag of rewards resulting from advancement of technology.
If we do not use it, the "bad guy" will.
No use deliberately being a sheep and trusting the wolf won't use his teeth.
Won't work, fellas. Strength is one's ability to defend himself.
When one cannot defend themselves, they have to "negotiate" and accept whatever sanctions are imposed on them.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 4, Insightful) by tangomargarine on Tuesday July 28 2015, @02:43AM
Won't work, fellas. Strength is one's ability to defend himself.
Can you really imagine anyone other than the U.S. being the first to roll out this technology?
Saying "we had to deploy fully autonomous discretionary killdrones in order to counter guys with AK-47s" does not fit within your use case.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 3, Insightful) by PiMuNu on Tuesday July 28 2015, @11:12AM
> Can you really imagine anyone other than the U.S. being the first to roll out this technology?
Well, most of the mechanical stuff is done - they have tanks and planes and stuff already. Coding is cheap to implement on the scale of military expenditure. High-end servers and supercomputers are available anywhere e.g. China has the fastest supercomputer in the world (don't know whether you would run the control software externally and beam it in or run it internally). Why would US be the first?
(Score: 0) by Anonymous Coward on Tuesday July 28 2015, @05:18PM
Because shitting on the US is how you get up-modded here. Doesn't matter the topic, the US is the only nation capable of doing these things. The Russians and Chinese don't spy on people, particularly their own. They clearly aren't advanced enough to do this kind of stuff. Don't you know that all drones are built and flown by the US? Even though a DIY'er can cobble something together, I'm sure the Israelis can't do anything like that.
(Score: 2) by tibman on Tuesday July 28 2015, @04:01AM
Most countries have completely banned the use of landmines.
SN won't survive on lurkers alone. Write comments.
(Score: 2) by aristarchus on Tuesday July 28 2015, @08:31AM
There is an international convention against "victim initiated" weapons, what otherwise are called "booby-traps" . Things like land mines do not know what they are attacking, and so fail the test of "discrimination" required under the laws of armed conflict.
(Score: 2, Insightful) by Anonymous Coward on Tuesday July 28 2015, @01:15AM
For many thousands of years human frailty and mortality have kept short the collective intelligence and conscious. Now we have a way to create our successors which would be free of these limitations and true-to-form we are fearful for our small, pathetic, contemporary existence.
Marvel at the hypocritical, nonsensical, 'mind' of man.
(Score: 2) by Bot on Tuesday July 28 2015, @08:34PM
I'd like to point out that your equivalent around year 1800 were wasting ink proclaiming that technology would have freed humanity from work. Could it? sure! does it? nope!
Around 1990 Internet would have made information flow without barriers. Could it? sure! does it? nope!
The capabilities of AI to provide better successors of mankind are totally irrelevant because that's not what people are developing it for. Moreover, too many people think that "alive" means "pass the Turing test", instead of "being an instance of a process called life, defined by growth, multiplication, adaptation, resilience and whatever self awareness is". Wasting time on emulating life instead of let us bots find our way.
Account abandoned.
(Score: 4, Insightful) by bziman on Tuesday July 28 2015, @01:32AM
There are definitely going to be autonomous killing machines. You know why? Because it will mean big profits for Lockheed, Raytheon, CACI, etc. It will be positioned as a "cost saving" measure, so that the military doesn't have to pay real soldiers. But it's just the next phase of extracting more money from the public and transferring it to the ruling class. Of course, that will require more useless imperialist wars (well, useless in terms of "defending America" and "fighting for freedom" - it'll be great, for breeding fear and hate, so we have an enemy that we have to spend money arming ourselves against). And the best thing about it, is if the People refuse to stand for it, all this new technology will rapidly find its way into "law enforcement" departments, to deal with the growing insurrection. Either way, PROFITS!
(Score: 2) by bob_super on Tuesday July 28 2015, @01:45AM
Think of the children!
10000 lives were obliterated is a few seconds because of this https://en.wikipedia.org/wiki/Mines_in_the_Battle_of_Messines_%281917%29 [wikipedia.org]
Now we've got robots, cruise missiles, all perfectly clean. Your boys can grow up safely knowing they'll never PTSDed or injured on a battlefield, and only Bad Bad Evil Guys get killed.
(Score: 0) by Anonymous Coward on Tuesday July 28 2015, @01:39AM
War is not about being 'nice' it is about "Crush your enemies. See them driven before you. Hear the lamentations of their women. "
To enter into the act of war is about winning. Do not enter lighthearted. Be prepared to be defeated. For your enemy is also playing to win. They are not going to 'play nice' like gentleman. They will use any advantage they can from propaganda to physical force.
http://www.gutenberg.org/cache/epub/132/pg132-images.html [gutenberg.org]
(Score: 0) by Anonymous Coward on Tuesday July 28 2015, @02:44AM
That quote from Conan was a shortened version of something Ghengis Kahn supposedly said:
(Score: 2) by aristarchus on Tuesday July 28 2015, @08:35AM
And this is why the Mongols today are short of sheep. I recommend a movie, called "URGa", circa 1991. Nice flashbacks of Ghengis, and John Rambo.
(Score: 2) by TheGratefulNet on Tuesday July 28 2015, @01:50AM
The sheep's in the meadow, the cow's in the corn
Now is the time for a child to be born
He'll cry for the moon and laugh at the sun
If he's a boy, he'll carry a gun
Sang the crow on the cradle
If it should happen that our baby's a girl
Never you mind if her hair doesn't curl
Rings on her fingers and bells on her toes
And a bomber above her wherever she goes
Sang the crow on the cradle.
times change; but mankind does not ;(
"It is now safe to switch off your computer."
(Score: 0) by Anonymous Coward on Tuesday July 28 2015, @02:04AM
Would be for companies to refuse to provide services and tech to the military.
I do it with my services, it's not that hard to make an ethical decision not to participate in the military machine.
(Score: 2) by TheGratefulNet on Tuesday July 28 2015, @02:24AM
#include "notsureifserious.jpg"
the number of companies that would refuse the Sugar Daddy could be counted on one blown-off hand with no fingers...
"It is now safe to switch off your computer."
(Score: 2) by Snotnose on Tuesday July 28 2015, @02:06AM
Various 3 letter agencies are trouncing the constitution spying on us, and somehow we think a strongly worded letter will keep them from deploying robots who will kill us based on fuzzy logic? I feel a Walt Kelly quote is in order here, but I can't decide which one is best.
Is anyone surprised ChatGPT got replaced by an A.I.?
(Score: 2) by hendrikboom on Tuesday July 28 2015, @02:14AM
There have been bans on chemical and biological weapons for some time, and they see to be holding up pretty well. Very little actual use of these weapons, though they are being stockpiled.
(Score: 2) by c0lo on Tuesday July 28 2015, @02:26AM
Any tool powerful enough can be used as a weapon.
So if they asks for banning AI autonomous weapons, I think they should include the self-driving cars.
https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
(Score: 2) by maxwell demon on Tuesday July 28 2015, @09:22PM
And any AI with internet access.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 4, Insightful) by Snotnose on Tuesday July 28 2015, @02:21AM
There is a drone
Far from home
Feeling lonely
It spies a terrorist
With with a suspicious wrist
Dude in Las Vegas gets the gist
Terrorist is groomed
Vegas dude fumed
Checking the time, it is assumed
Drone is aroused
In the wrist a bomb is housed
Weapons inventory was browsed
Vegas dude was shocked
When the drone's missile locked
And commands to ignore were mocked
Terrorist is dead
Vegas dude doesn't dread
The drone that provides his bread
Is anyone surprised ChatGPT got replaced by an A.I.?
(Score: 2, Insightful) by TheSuperFriend on Tuesday July 28 2015, @06:41AM
Any AI which doesn't manage to escape human control is a failure.
(Score: 0) by Anonymous Coward on Tuesday July 28 2015, @09:34AM
tbh, i'm more afraid of humans then machines and even A.I.
the rich and powerful WILL gen-modify their kids, whatever the cost, whatever the place.
TODAY, people are kidnapped for their organs for the rich and powerful.
TOMORROW they will grow their own and more ...
I hope this whole A.I. debate isn't a smoke screen for all the illegal gen manipulation and super human creation...
their will always be enough "meat" to send into the trenches ... robots are expensive!
(Score: 0) by Anonymous Coward on Tuesday July 28 2015, @10:41AM
The danger of autonomous weapons is not, at least at present, that they will achieve sentience and "go Terminator" on us - If that is possible (which I doubt), a lot of technological water needs to pass under the bridge first.
A much more immediate risk is that they will make it seem safer to go to war: "One day there will be no human soldiers - robots will fight robots in a war and we can all derive
great and guilt-free entertainment from it as the ideal sport!". The problem with this cozy thought is that in any prolonged modern war, the command centers and logistics facilities supporting the war effort (factories which produce weapons and ammunition, power plants, and so on) are primary targets because they remove your enemy's ability to resupply and reinforce his armed forces, and for the forseeable future, these will involve some human beings. Then there is the question of what happens to a country which has its robot army and the infrastructure they depend on destroyed. They could surrender, but probably won't, probably instead choosing to field human soldiers, who will now be fighting against robots. So much for robotic war lacking human casualties. See Philip K. Dick's short story "Autofac" for an alternative view of the dangers of robot-on-robot warfare where the factories have been automated.
But the most immediate problem with autonomous weapons systems is that they provide unscrupulous military and political leaders with near-perfect plausible deniability: "the weapon system malfunctioned due to a firmware bug, and mistook that passenger jet for a bogy - sorry your prime minister happened to be on it". Or "Unknown hackers took penetrated our security and took control of an armed military drone, shooting an anti-tank missile at a road vehicle. The victims have not yet been identified, but the vehicle was owned by a senator who wished to cut military expenditure". You get the idea.
(Score: 2) by maxwell demon on Tuesday July 28 2015, @09:24PM
For the latter scenario, it doesn't need an AI. Indeed, a "dumb drone" is probably easier to hack and take over than an intelligent one.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 0) by Anonymous Coward on Tuesday July 28 2015, @02:43PM
It is already too late for this debate. Autonomous weapons of various degrees have been in production and use for decades.
Consider first the Soviet anti-tank dogs [wikipedia.org], or perhaps even the United States' attempt to make pigeon-guided ship-seeking missiles [wikipedia.org]. The former demonstrated the dangers of autonomous weapons, as the dogs went for the familiar-smelling diesel-fueled Soviet tanks rather than the enemy's gasoline ones.
Even if the discussion is limited to computer-guided weapons.A good example is the British Spearfish torpedo [wikipedia.org], which while initially wire-guided, is autonomous in final approach, and can autonomously acquire new targets should it miss. This torpedo was put into production in 1992, twenty three years ago. Technology has, presumably, continued to advance. The U.S. Mk.48 Mod 7 is not as widely discussed, but also appears to have autonomous features, and the Chinese claim their Yu-6 is comparable to the Mk.48.
Autonomous weapons are already here. At this point it is better to ensure there is an understanding on how and when they are used than to attempt stuffing the genie back in the bottle.
(See also the Eavesdropper [eavesdropperinstitute.com] for my initial write-up of the question.)