Governments also have no theory on how nefarious groups might behave using the tech:
The proliferation of AI in weapon systems among non-state actors such as terrorist groups or mercenaries would be virtually impossible to stop, according to a hearing before UK Parliament.
The House of Lords' AI in Weapon Systems Committee yesterday heard how the software-based nature of AI models that may be used in a military context made them difficult to contain and keep out of nefarious hands.
When we talk about non-state actors, that conjures images of violent extremist organizations, but it should include large multinational corporations, which are very much at the forefront of developing this technology.
Speaking to the committee, James Black, assistant director of defense and security research group RAND Europe, said: "A lot of stuff is very much going to be difficult to control from a non-proliferation perspective, due to its inherent software-based nature. A lot of our export controls and non-proliferation regimes that exist are very much focused on old-school traditional hardware: it's missiles, it's engines, it's nuclear materials."
An added uncertainty was that there was no established "war game" theory of how hostile non-state actors might behave using AI-based weapons.
[...] Black said: "On the question about escalation: in general, we don't have particularly good theory for understanding how to deter non-state actors. A lot of the deterrence theory [has] evolved out of Cold War nuclear deterrence in the USSR, USA and the West. It is not really configured the same way to think about non-state actors, particularly those which have very decentralized, loose non-hierarchical network command structures, which don't lend themselves to influencing in the same way as a traditional top-down military adversary."
The situation with AI-enhanced weapons differs from earlier military planning in that the private sector is well ahead of government research, which was not the case with physical threats, he said.
[...] Last month, hundreds of computer scientists, tech industry leaders, and AI experts signed an open letter calling for a pause for at least six months in the training of AI systems more powerful than GPT-4. Signatories included Apple co-founder Steve Wozniak, SpaceX CEO Elon Musk, and IEEE computing pioneer Grady Booch.
But the prospect of a pause was wholly unrealistic, Payne said. "It reflects the degree of societal unease about the rapid pace of change that people feel is coming down the tracks towards them. But I don't think it is a realistic proposition."
Related Stories
Microsoft cofounder Bill Gates says he's "scared" about artificial intelligence falling into the wrong hands, but unlike some fellow experts who have called for a pause on advanced A.I. development, he argues that the technology may already be on a runaway train:
The latest advancements in A.I. are revolutionary, Gates said in an interview with ABC published Monday, but the technology comes with many uncertainties. U.S. regulators are failing to stay up to speed, he said, and with research into human-level artificial intelligence advancing fast, over 1,000 technologists and computer scientists, including Twitter and Tesla CEO Elon Musk, signed an open letter in March calling for a six-month pause on advanced A.I. development until "robust A.I. governance systems" are in place.
But for Gates, A.I. isn't the type of technology you can just hit the pause button on.
"If you just pause the good guys and you don't pause everyone else, you're probably hurting yourself," he told ABC, adding that it is critical for the "good guys" to develop more powerful A.I. systems.
[...] "We're all scared that a bad guy could grab it. Let's say the bad guys get ahead of the good guys, then something like cyber attacks could be driven by an A.I.," Gates said.
The competitive nature of A.I. development means that a moratorium on new research is unlikely to succeed, he argued.
Originally spotted on The Eponymous Pickle.
Previously: Fearing "Loss of Control," AI Critics Call for 6-Month Pause in AI Development
Related: AI Weapons Among Non-State Actors May be Impossible to Stop
(Score: 2, Interesting) by Anonymous Coward on Sunday April 23, @01:08PM (5 children)
>no theory on how nefarious groups might behave using the tech:
I don't know about groups, but for individuals: hiring a human agent to carry out an assassination is both expensive and fraught with the risk of your agent turning you in to the authorities, or even blackmailing you after the act with the threat of exposure while they reside in an unknown non-extradition country...
Meanwhile, the hardware required to carry a sniper rifle with remote trigger and video scope to a clandestine perch and clear out quickly after taking a shot can be had for something on the order of $10k, and you can resell the parts after the deed is (/ deeds are) done.
"AI" can help with facial recognition, target tracking, autopilot, even target selection if the shooter is trying to effect maximum political impact for the minimum risk of exposure.
In theory, the mechanics were all possible with radio control systems 50+ years ago. In practice, the skills required were so rare that an RC assassin would likely be identified just from their piloting skills. Today, that bar is much lower to where most of the population could pull off the feat without their neighbors even knowing they had the capability.
(Score: 0) by Anonymous Coward on Sunday April 23, @01:59PM (3 children)
AI drones with face recognition + C4 and some shrapnel. Activate and launch a bunch of drones remotely, send them to the target zone; the drones locate the target (bonus points if they communicate with each other to find it faster), head to it, and blow up on or very close to it.
They don't have to be pure quadcopter-style drones, which have much lower range. You could do a drone with wings for more range and speed for a given payload.
(Score: 2) by JoeMerchant on Sunday April 23, @08:35PM (2 children)
If you are well funded and in a war zone, sure, kamikaze drones are very effective.
AC describes a system that delivers ordinary bullets from ordinary rifles, like you might have someone buy for you at a Walmart, paying cash in a faraway location.
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 2) by DannyB on Monday April 24, @03:47PM (1 child)
Sometimes bad guys want you to pay by purchasing, say, Apple gift cards, then giving the numbers over the phone (or some other medium) to the other party in another country. Of course, that doesn't work if the receiving party wanted Nike shoes.
How often should I have my memory checked? I used to know but...
(Score: 3, Interesting) by JoeMerchant on Monday April 24, @05:21PM
Fun thing about today's environment: the bad guys can contact ANYBODY 1:1 from anywhere in the world, communicate virtually undetectably (steganography + strong encryption), pay them in all sorts of basically untraceable ways (NOT Bitcoin, but Apple gift cards and many other similar things), and provide them with tools and instructions.
If I were the FBI/CIA/NSA, I'd be trawling the ChatGPT queries looking for stuff like: "Which politicians, if they suddenly retired, would most likely be replaced by other politicians with maximally differing voting records on X, Y, or Z?"
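A toy sketch of what such a trawl might look like, in Python. Everything here is hypothetical: the query log, the pattern list, and the flagging rule are invented purely for illustration, and a real system would presumably use semantic classifiers rather than regexes.

    import re

    # Hypothetical patterns, invented for illustration only.
    SUSPICIOUS_PATTERNS = [
        r"if they suddenly (retired|died|disappeared)",
        r"maximally differing voting records",
    ]

    def flag_queries(queries):
        """Return (query, matched pattern) pairs for queries hitting any pattern."""
        hits = []
        for q in queries:
            for pat in SUSPICIOUS_PATTERNS:
                if re.search(pat, q, re.IGNORECASE):
                    hits.append((q, pat))
                    break  # one match is enough to flag a query
        return hits

    log = [
        "Which politicians, if they suddenly retired, would most likely be "
        "replaced by politicians with maximally differing voting records?",
        "What's a good recipe for banana bread?",
    ]
    for query, pattern in flag_queries(log):
        print(f"FLAGGED ({pattern}): {query}")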
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 2, Informative) by khallow on Monday April 24, @04:38AM
If you're reselling parts instead of destroying evidence, then you're doing it wrong. Those parts are some of the most damning physical evidence tying the assassination to you. The more completely it is lost or destroyed, the better your odds.
(Score: 5, Interesting) by Rosco P. Coltrane on Sunday April 23, @01:30PM (8 children)
You can't run a GPT-4-level AI with a full training set on commodity hardware, and even a GPT-3-level one is a stretch. State militaries have access to vast resources, and can enlist the help of private cloud providers to run their killer AIs. Terrorists typically don't have access to high tech, and if they were to run a client on their nefarious low-tech hardware, all the US would have to do is ask Microsoft to disable their accounts.
So no. As things stand today, rogue AIs are perfectly stoppable.
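For a rough sense of why, here is a back-of-envelope sketch. The parameter counts are public figures; the 2-bytes-per-parameter (fp16) assumption and the hardware notes are mine.

    # Memory needed just to hold model weights at inference time.
    # 2 bytes per parameter assumes fp16; training takes several times
    # more once gradients and optimizer state are added.
    def weight_memory_gb(params_billions, bytes_per_param=2):
        # params_billions * 1e9 params * bytes / 1e9 bytes-per-GB
        return params_billions * bytes_per_param

    for name, params in [("GPT-3 class", 175), ("13B open model", 13),
                         ("7B open model", 7)]:
        print(f"{name} ({params}B): ~{weight_memory_gb(params):.0f} GB at fp16")

    # GPT-3 class (175B): ~350 GB  (a rack of datacenter GPUs)
    # 13B open model:      ~26 GB  (one or two high-end consumer GPUs)
    # 7B open model:       ~14 GB  (a single high-end consumer GPU)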
(Score: 1, Informative) by Anonymous Coward on Sunday April 23, @02:14PM (6 children)
Deploy a bunch and maybe one of them would succeed. Flying and/or ground.
(Score: 4, Insightful) by Rosco P. Coltrane on Sunday April 23, @03:09PM (5 children)
You don't need AI at all for that.
But the article would generate fewer clicks if it didn't plug the word "AI". So... ya know, AI this, AI that... AI is the new blockchain.
(Score: 0) by Anonymous Coward on Sunday April 23, @05:34PM (4 children)
(Score: 1) by khallow on Sunday April 23, @07:13PM (3 children)
Keep in mind that we want would-be terrorists to pursue harder and less effective ways to kill people and destroy stuff rather than the easier ways: exotic computer hacks rather than shooting out insulators at a substation; designer chemical and biological weapons rather than the stuff we already know works (no serious testing required) and is easy to produce; sophisticated AI-driven psychological attacks rather than a few dozen (or few thousand!) bomb threats. The more techy and sexy the means of terrorism, the fewer people will succeed at it.
(Score: 0) by Anonymous Coward on Monday April 24, @12:39AM (2 children)
(Score: 1) by khallow on Monday April 24, @01:15AM (1 child)
How much building-blowing-up actually happens? There have been two effective cases [wikipedia.org] in the past fifty years in the US, for example: the Oklahoma City bombing and the 9/11 attacks. Al Qaeda wouldn't have been satisfied with a few people.
(Score: 0) by Anonymous Coward on Monday April 24, @09:40AM
OK, you and/or the CIA can help them kill more people.
(Score: 5, Informative) by RamiK on Sunday April 23, @04:40PM
You can already run GPT-3.5 grade models on (high-end) commodity hardware using Colossal-AI [github.com]: https://medium.com/@yangyou_berkeley/colossalchat-an-open-source-solution-for-cloning-chatgpt-with-a-complete-rlhf-pipeline-5edf08fb538b [medium.com]
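For anyone who wants to try something similar without Colossal-AI's stack, here is a minimal local-inference sketch using the Hugging Face transformers API instead (my substitution, not what the links above use; "facebook/opt-1.3b" is just a small, openly downloadable stand-in model):

    # Minimal local-inference sketch using Hugging Face transformers,
    # not Colossal-AI's own RLHF pipeline (see the links above for that).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
    model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

    inputs = tokenizer("Export controls on software are hard because",
                       return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))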
compiling...
(Score: 0) by Anonymous Coward on Sunday April 23, @01:56PM
... not a guidebook
Let's avoid all that shall we?
(Score: 1, Redundant) by Mojibake Tengu on Sunday April 23, @04:07PM (1 child)
What does that mean, exactly?
The edge of 太玄 (the Great Mystery) cannot be defined, for it is beyond every aspect of design
(Score: 2, Disagree) by RamiK on Sunday April 23, @04:44PM
(Putting on the Captain Obvious cap) "virtual" is defined as "almost or nearly as described" so "virtually impossible" means "nearly impossible".
compiling...
(Score: 2) by takyon on Sunday April 23, @10:52PM
Let's live in exciting times.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 3, Disagree) by looorg on Sunday April 23, @10:57PM
At the moment you probably don't have to worry too much about it. But as things progress that might change: specialized AIs that don't have to be generic and available to everyone will require less hardware, making them more affordable. Eventually you'll get AI products that you can run yourself without having to be connected to the cloud.
That said, for a lot of criminal enterprises (terrorists, drug cartels, etc.) money isn't really a problem; many of them earn billions per year. After all, small companies and various universities can run these models, and they don't make drug-cartel money. So it's more a question of what a criminal enterprise would actually want one for. Perhaps they could have it design new and interesting drugs or something. Otherwise their businesses are probably AI-proof, in the sense that they don't really have much need for one.
Non-state actor deterrence is probably still about tracking them down and killing them, preferably by dropping Hellfire missiles or something similar on them. It seems to have worked "great" so far when it comes to various terrorist leaders, if you don't count all the collateral damage. Deterrence doesn't work very well for state actors either; after all, once they have X (like nukes), it's very hard to put it back in the bag. Perhaps in some regard nuclear deterrence has worked, though: not that many states have their own nukes.
(Score: 2) by istartedi on Sunday April 23, @11:22PM
Slaughterbots [youtube.com]. Hopefully sci-fi doesn't become fact.