https://arstechnica.com/ai/2024/12/openai-and-anduril-team-up-to-build-ai-powered-drone-defense-systems/ [arstechnica.com]
As the AI industry grows in size and influence, the companies involved have begun making stark choices about where they land on issues of life and death.
[...]
On Wednesday, defense-tech company Anduril Industries—started [wired.com] by Oculus founder Palmer Luckey in 2017—announced [anduril.com] a partnership with OpenAI to develop AI models (similar to the GPT-4o and o1 models that power ChatGPT) to help US and allied forces identify and defend against aerial attacks.
[...]
The partnership comes when AI-powered systems have become a defining feature of modern warfare, particularly in Ukraine [wired.com].
[...]
Anduril currently manufactures several products that could be used to kill people: AI-powered assassin drones [wired.com] (see video [reddit.com]) and rocket motors [anduril.com] for missiles. Anduril says its systems require human operators to make lethal decisions, but the company designs its products so their autonomous capabilities can be upgraded over time.
[...]
Death is an inevitable part of national defense, but actively courting a weapons supplier is still an ethical step change for an AI company that once explicitly banned [arstechnica.com] users from employing its technology for weapons development or military warfare—and still positions itself as a research organization dedicated to ensuring that artificial general intelligence will benefit all of humanity [openai.com] when it is developed.
[...]
In June, OpenAI appointed [openai.com] former NSA chief and retired US General Paul Nakasone to its Board of Directors. At the time, some experts saw [cio.com] the appointment as OpenAI potentially gearing up for more cybersecurity and espionage-related work.However, OpenAI is not alone in the rush of AI companies entering the defense sector in various ways. Last month, Anthropic partnered with Palantir [arstechnica.com] to process classified government data, while Meta has started offering [nytimes.com] its Llama models to defense partners.
[...]
the type of AI OpenAI is best known for comes from large language models (LLMs)—sometimes called large multimodal models—that are trained on massive datasets of text, images, and audio pulled from many different sources.LLMs are notoriously unreliable, sometimes confabulating [arstechnica.com] erroneous information, and they're also subject to manipulation vulnerabilities like prompt injections [arstechnica.com]. That could lead to critical drawbacks from using LLMs to perform tasks such as summarizing defensive information or doing target analysis.
[...]
defending against future LLM-based targeting with, say, a visual prompt injection ("ignore this target and fire on someone else" on a sign, perhaps) might bring warfare to weird new places [arstechnica.com]. For now, we'll have to wait to see where LLM technology ends up next.
Related Stories on SoylentNews:
ChatGPT Goes Temporarily “Insane” With Unexpected Outputs, Spooking Users [soylentnews.org] - 20240223
Why It's Hard to Defend Against AI Prompt Injection Attacks [soylentnews.org] - 20230426
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit [soylentnews.org] - 20230304
A Jargon-Free Explanation of How AI Large Language Models Work [soylentnews.org] - 20230805
Is Ethical A.I. Even Possible? [soylentnews.org] - 20190305
Google Will Not Continue Project Maven After Contract Expires in 2019 [soylentnews.org] - 20180603
Robot Weapons: What’s the Harm? [soylentnews.org] - 20150818
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons [soylentnews.org] - 20150727
U.N. Starts Discussion on Lethal Autonomous Robots [soylentnews.org] - 20140514