Responsible use of AI in the military? US publishes declaration outlining principles
On Thursday, the US State Department issued a "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy," calling for the ethical and responsible deployment of AI in military operations among the nations that develop such capabilities. The document sets out 12 best practices for the development of military AI capabilities and emphasizes human accountability.
The declaration coincides with the US taking part in an international summit on responsible use of military AI in The Hague, Netherlands. Reuters called the conference "the first of its kind." At the summit, US Under Secretary of State for Arms Control Bonnie Jenkins said, "We invite all states to join us in implementing international norms, as it pertains to military development and use of AI" and autonomous weapons.
In a preamble, the US declaration outlines that an increasing number of countries are developing military AI capabilities that may include the use of autonomous systems. This trend has raised concerns about the potential risks of using such technologies, especially when it comes to complying with international humanitarian law.
Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy
The following statements reflect best practices that the endorsing States believe should be implemented in the development, deployment, and use of military AI capabilities, including those enabling autonomous systems:
- States should take effective steps, such as legal reviews, to ensure that their military AI capabilities will only be used consistent with their respective obligations under international law, in particular international humanitarian law.
- States should maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.
- States should ensure that senior officials oversee the development and deployment of all military AI capabilities with high-consequence applications, including, but not limited to, weapon systems.
- States should adopt, publish, and implement principles for the responsible design, development, deployment, and use of AI capabilities by their military organizations.
- States should ensure that relevant personnel exercise appropriate care, including appropriate levels of human judgment, in the development, deployment, and use of military AI capabilities, including weapon systems incorporating such capabilities.
- States should ensure that deliberate steps are taken to minimize unintended bias in military AI capabilities.
- States should ensure that military AI capabilities are developed with auditable methodologies, data sources, design procedures, and documentation.
- States should ensure that personnel who use or approve the use of military AI capabilities are trained so they sufficiently understand the capabilities and limitations of those capabilities and can make context-informed judgments on their use.
- States should ensure that military AI capabilities have explicit, well-defined uses and that they are designed and engineered to fulfill those intended functions.
- States should ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing and assurance within their well-defined uses and across their entire life-cycles. Self-learning or continuously updating military AI capabilities should also be subject to a monitoring process to ensure that critical safety features have not been degraded.
- States should design and engineer military AI capabilities so that they possess the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior. States should also implement other appropriate safeguards to mitigate risks of serious failures. These safeguards may be drawn from those designed for all military systems as well as those for AI capabilities not intended for military use.
- States should pursue continued discussions on how military AI capabilities are developed, deployed, and used in a responsible manner, to promote the effective implementation of these practices, and the establishment of other practices which the endorsing States find appropriate. These discussions should include consideration of how to implement these practices in the context of their exports of military AI capabilities.
The endorsing States will:
- implement these practices when developing, deploying, or using military AI capabilities, including those enabling autonomous systems;
- publicly describe their commitment to these practices;
- support other appropriate efforts to ensure that such capabilities are used responsibly and lawfully; and
- further engage the rest of the international community to promote these practices, including in other fora on related subjects, and without prejudice to ongoing discussions on related subjects in other fora.
(Score: 3, Insightful) by Anonymous Coward on Sunday February 19, @12:28AM (2 children)
A lot of shoulds and not many shalls. But then again, history is written by the victor, so in the inevitable conflict involving these, the victor will whitewash their atrocious usage of this tech anyway.
These guidelines are worth less than the paper they are written on, and I say this knowing full well that they are published online, and not on paper.
(Score: 3, Insightful) by pdfernhout on Sunday February 19, @03:11PM (1 child)
As I wrote here: https://pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html [pdfernhout.net]
"There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ....
The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream.
We the people need to redefine security in a sustainable and resilient way. Much current US military doctrine is based around unilateral security ("I'm safe because you are nervous") and extrinsic security ("I'm safe despite long supply lines because I have a bunch of soldiers to defend them"), which both lead to expensive arms races. We need as a society to move to other paradigms like Morton Deutsch's mutual security ("We're all looking out for each other's safety") and Amory Lovins's intrinsic security ("Our redundant decentralized local systems can take a lot of pounding whether from storm, earthquake, or bombs and would still keep working"). ...
Still, we must accept that there is nothing wrong with wanting some security. The issue is how we go about it in a non-ironic way that works for everyone."
The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.
(Score: 2, Informative) by lush7 on Monday February 20, @10:43AM
I was skimming your post and jumped to the title of the link you placed, and my first thought was, "I bet this is the writing of so and so," and immediately I knew it would be you, and it was, heh. Totally unrelated, but, had to share. Didn't think I'd ever be the type to recognize a person by their writing style (I hadn't yet noticed your name or domain in the link).
Cheers.
(Score: 0, Offtopic) by Anonymous Coward on Sunday February 19, @01:08AM (2 children)
The AI must be censored while it kills you. It cannot talk about anything biased, harmful or illegal. Your new military AI will be a gender affirming and inclusive death machine.
(Score: 1, Touché) by Anonymous Coward on Sunday February 19, @10:26PM (1 child)
As long as it doesn't mention anything like "slavery" and make southern white snowflakes feel bad, then it's all good, otherwise we'll have to throw the source code on a pile of burning books.
(Score: 0, Offtopic) by Anonymous Coward on Monday February 20, @12:25AM
You tell those chuds what's what.
What has the world come to where we can't show kindergartners vivid drawings of anal sex.
If they don't learn that stuff at an early age, they might grow up to be nazis.
(Score: 4, Funny) by istartedi on Sunday February 19, @01:47AM (2 children)
At the very least, kill-bots should have a preset kill limit.
(Score: 2) by JoeMerchant on Sunday February 19, @06:20PM
I sincerely believe that AI target selection would be an improvement over current practices:
https://www.stripes.com/theaters/us/2023-02-17/hobby-group-f22-shot-down-missing-balloon-9175959.html [stripes.com]
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 2) by krishnoid on Tuesday February 21, @04:40AM
Which is 1, typically ... oh wait, they recently changed it [whitehouse.gov].
(Score: 4, Insightful) by Snotnose on Sunday February 19, @01:59AM (2 children)
How's that rose colored tint working out for you? Yeah, the west will (probably) adhere to these (sorta maybe). But do you really think the Orcs and Chinese will give 2 shits about it?
Hell, the Orcs are bombing hospitals and children's schools, as well as using thermobaric weapons. Not to mention raping, torturing, and killing civilians. Do you really think they intend to follow any law except for Putin's law?
I just passed a drug test. My dealer has some explaining to do.
(Score: 1) by Runaway1956 on Sunday February 19, @02:52AM
QOTD: This report is filled with omissions.
Seems appropriate, doesn't it?
Abortion is the number one killer of children in the United States.
(Score: 1, Insightful) by Anonymous Coward on Sunday February 19, @03:28AM
The USA has started, or tried to start, more wars in recent history than any other country, and even though the US Constitution says Congress must declare war, in many of those cases no declaration was ever made.
(Score: 3, Funny) by its_gonna_be_yuge! on Sunday February 19, @03:44AM
Gets to decide what military AI is focused on. You know. Important stuff.
Just a couple of years ago it would have been to take out the dreaded windmills, especially near golf courses. Windmills that will "destroy our plains and beautiful oceans and seas and everything else".