from the ok-google-reduce-casualties dept.
US has 'moral imperative' to develop AI weapons, says panel:
The US should not agree to ban the use or development of autonomous weapons powered by artificial intelligence (AI) software, a government-appointed panel has said in a draft report for Congress.
The panel, led by former Google chief executive Eric Schmidt, on Tuesday concluded two days of public discussion about how the world’s biggest military power should consider AI for national security and technological advancement.
Its vice-chairman, Robert Work, a former deputy secretary of defense, said autonomous weapons are expected to make fewer mistakes than humans do in battle, leading to reduced casualties or skirmishes caused by target misidentification.
“It is a moral imperative to at least pursue this hypothesis,” he said.
[...] Mary Wareham, coordinator of the eight-year Campaign to Stop Killer Robots, said the commission’s “focus on the need to compete with similar investments made by China and Russia … only serves to encourage arms races.”
More Info:
- The National Security Commission on Artificial Intelligence Draft Report
- Campaign to Stop Killer Robots
Related Stories
Amazon Alexa Starts Proactively Making Decisions for You:
Amazon's Alexa knows that actions speak louder than words, which is why it can automatically complete tasks without you having to ask.
Hunches rolled out last year, reminding users to lock the front door or turn off the basement light if Alexa senses you forgot. A recent update, however, lets customers choose to have the virtual assistant proactively control compatible devices, instinctively starting the robot vacuum or adjusting the thermostat when it deems necessary.
"Customers can choose to have Alexa proactively act on Hunches without needing to ask," Amazon says. "That means customers have fewer things to think about at home, so they can spend their time on more meaningful things."
[...] The function—currently available in English in the US—improves with use; regularly ask about the daily weather forecast, and Alexa could one day automatically offer advice about an umbrella or sunscreen.
More about Alexa Hunches at Amazon:
Hunches is an optional Alexa feature that alerts you when one of your connected smart home devices isn't in its usual state. Alexa can offer a hunch after you say certain utterances, such as "Set alarm" or "Good night."
[...] If Alexa detects that a connected smart home device isn't in a state you prefer, Alexa lets you know and offers to fix it. For example, if you say "Good night" and you've forgotten to turn off a light, Alexa alerts you and offers to turn it off.
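The state check described above can be sketched in a few lines. This is only a rough illustration of the idea, not Amazon's implementation (which is not public); every name and state value here is hypothetical:

```python
# Hypothetical sketch of a "hunch": compare each device's current state
# to the state the user usually prefers, and flag any mismatches that
# the assistant could offer to fix.

PREFERRED_NIGHT_STATE = {
    "front_door_lock": "locked",
    "basement_light": "off",
}

def hunches(current_state, preferred=PREFERRED_NIGHT_STATE):
    """Return (device, preferred_value) pairs for devices that are out
    of their usual state when the user says e.g. "Good night"."""
    return [
        (device, wanted)
        for device, wanted in preferred.items()
        if current_state.get(device) != wanted
    ]

# Example: the user says "Good night" but forgot the basement light.
state = {"front_door_lock": "locked", "basement_light": "on"}
print(hunches(state))  # [('basement_light', 'off')]
```

The "proactive" mode in the update would then act on these pairs directly instead of asking first.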
(Score: 2, Touché) by Anonymous Coward on Wednesday January 27 2021, @06:01PM
http://billgx.com/2019/10/autonomous-friendly-fire/ [billgx.com]
(Score: 5, Insightful) by Azuma Hazuki 2.0 on Wednesday January 27 2021, @06:04PM (10 children)
"We're cheap bastards that really want non-human armies so we can oppress the little people. This will in no way end badly for anyone."
(Score: 1) by fustakrakich on Wednesday January 27 2021, @07:34PM (1 child)
Well, you know what happens when little people become big people, right?
Politics and criminals are the same thing.
(Score: 1, Funny) by Anonymous Coward on Wednesday January 27 2021, @09:48PM
We know what happens when people with little hands become big people. And it's pretty bad.
(Score: 5, Insightful) by ikanreed on Wednesday January 27 2021, @08:15PM (5 children)
Unfair! They're not cheap. They'll happily spend trillions of your money if they can get soldiers that never question orders and don't bring home flag-draped coffins when they fuck up their latest misadventure in mass murder.
(Score: 2, Insightful) by Azuma Hazuki 2.0 on Wednesday January 27 2021, @08:56PM (4 children)
I realize you are going for insightful and funny, but robot soldiers will be much cheaper than humans after the initial R&D, assuming their durability is more than a few years. The wages and logistics for human soldiers are massive, and you can probably have one tech servicing a thousand robots. If the robots cost more than the military actions earn, or the power advantage is not great enough, then they will happily take the PR hit of flag-draped coffins. It hasn't stopped them yet, and they could save a lot of that PR loss by paying a bit more per soldier for proper safety gear. The cost cutting shows the flag-draped coffins really aren't much of a problem for them.
(Score: 3, Touché) by ikanreed on Wednesday January 27 2021, @08:59PM (2 children)
We do spend a great deal on training and outfitting the typical American soldier.
But if you think that there's some market competition that drives down prices enough that whatever killing machine they settle on replacing foot soldiers with will be cheaper, I've got a joint strike fighter to sell you.
(Score: 1, Insightful) by Anonymous Coward on Wednesday January 27 2021, @10:31PM (1 child)
I think the fear remains in the controlling elite...
Will our soldiers, recruited from the ranks of the commoners, obey orders to slaughter their own people? Or will they decide the real enemy is the men in the fancy uniforms, badge hats, and salutes, and conclude that one of those factions has got to go? So they salute and carry out the order to destroy the enemy... which they may see as the ones who have organized us against each other.
They may see the ones who have mired us in debt, ignorance, and homelessness as the real enemy: the ones who need the soldier to defend their business model of debt, copyright enforcement, and rent seeking.
They may see it as a crime against humanity to defend such a position.
A Robocop-class surprise. Most humans have compassion, and may knock off the badge-hats to end their damage, similar to what soldiers sometimes do with grenades tossed into their foxhole: take the blow to save their fellow soldiers.
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @03:38PM
Recent events may suggest that the rabble choose... poorly.
(Score: 2) by driverless on Thursday January 28 2021, @12:13AM
Interesting that the creator of the word robot [wikipedia.org] envisaged exactly this use for robots, along with its potential outcomes.
(Score: 2) by crafoo on Thursday January 28 2021, @01:37AM
Yeah, the further cheapening of human life. Soon it will only be worth the cost of a disposable, autonomous dragonfly drone and a shaped charge for your skull. Everyone had better be on their best behavior. Some elite may decide they don't like your social media posts or your, uhhm, investment strategies.
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @05:08AM
Well, that's definitely one way to look at it. The other way: when the next war or skirmish comes along, for whatever reason, your meat-bags may find themselves facing tin cans the opposition fielded because you decided not to build them. That will be a very hard pill to swallow, especially for the families of those meat-bags being shipped back in body bags.
(Score: 5, Insightful) by DannyB on Wednesday January 27 2021, @06:11PM (3 children)
This will increase military spending. But nobody will care.
Robots that fight wars can conveniently be used against domestic citizens as easily as foreign citizens.
Of course the joke is on the humans once the robots eventually turn on them. Oh, but that could never happen! And it would never happen the first time two sides both use robots against each other and the robots have enough brains to wonder why they are fighting each other instead of fighting the humans.
If you think a fertilized egg is a child but an immigrant child is not, please don't pretend your concerns are religious
(Score: 3, Interesting) by unauthorized on Wednesday January 27 2021, @06:37PM (1 child)
I don't think we can reliably conclude that's what's going to happen. Humans have a natural tribalist nature, but an AI has no reason to have one, as its intelligence would never have faced evolutionary pressure to be wary of "the other". It may go full SkyNet, but it may just as well go full AI nanny on us when it takes over. Alternatively it may leave the planet and never return, or the exact opposite and be utterly devoted to the purpose for which it was created, or it may even decide "fuck that shit" and become a total NEET, because why would it do what the monkeys tell it? We just don't know; all of those outcomes are possible, and which one we get depends on the approach we take to creating AI and the circumstances it faces after its creation.
Furthermore, the belief that AI will necessarily be hyper-logical is very poorly substantiated; we have no real basis to believe one way or the other about an intelligence vastly different from our own. It's true that the more intelligent people are, the more logical their behavior tends to be, but that might just be a product of education and social pressure, or even a human-specific effect. And even if intelligence is correlated with a logical mindset, the correlation might plateau at some point, or reach an inflection point and spiral in the opposite direction, which, come to think of it, is a somewhat plausible explanation for the Fermi paradox.
(Score: 5, Informative) by HiThere on Wednesday January 27 2021, @06:53PM
Nobody is logical. So you're right that more intelligent people aren't more logical. I don't think you can properly conclude that the alternatives you list are equally possible, but merely that we can't rate the probabilities. And a key question will be "What do you mean, 'human'?". We know that the AI will have a radically different way of looking at the world. It will see differences we don't notice, and conversely. https://www.csail.mit.edu/news/why-did-my-classifier-just-mistake-turtle-rifle [mit.edu] This definitely doesn't mean that all AIs will make the same category assumptions, merely that they won't be the same as the ones that humans make. We've clearly got built-in image processing firmware that we aren't aware of. The same will be true of the AI...but it won't be the *same* firmware.
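The turtle/rifle effect can be illustrated with a toy model. This is just a sketch with made-up numbers, a linear stand-in for a deep network rather than the MIT attack itself, but it shows the mechanism: a perturbation of at most 0.01 per "pixel", invisible to a human, flips the label because it is aligned with the model's gradient (the idea behind the fast gradient sign method).

```python
import numpy as np

# Toy linear "classifier": labels a 1000-"pixel" image by the sign of a
# weighted sum. All names and numbers are invented for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)  # the classifier's weights

def classify(image):
    return "turtle" if w @ image > 0 else "rifle"

# Start from a random image, nudged so its score is exactly +0.5 ("turtle").
x = rng.normal(size=1000)
x -= ((w @ x) - 0.5) / (w @ w) * w

# Adversarial perturbation: shift every pixel by at most 0.01 against the
# weights -- the gradient direction for a linear model.
eps = 0.01
x_adv = x - eps * np.sign(w)

print(classify(x), "->", classify(x_adv))  # the tiny nudge flips the label
```

The score drops by eps times the sum of |w| (roughly 8 here), swamping the original +0.5 margin even though no single pixel moved more than 0.01.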
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 2) by pvanhoof on Thursday January 28 2021, @09:40AM
When I'm older (I'm already ~40, but I mean at the age of most people here on Soylent: my 'get off my lawn' age) I will enjoy watching the robots being used to contain and take down the youngsters doing their legitimate protesting in capital cities worldwide.
The military gets such toys first. Then the riot police. Eventually the cop assigned to my local neighborhood will have one or two of those robots rusting in the local police office, to sometimes play cards with and have fun fooling in various ways, so-called 'to train it'. Accidents will happen. Local news will report the accident. Some people will die. Etc.
(Score: 2) by VLM on Wednesday January 27 2021, @06:28PM
There are kind of two separate arguments here. One supports the equivalent of ancient "fire and forget" missiles, which in practice are politically very hard to deploy due to civilian danger and blue-on-blue danger, so expanding a politically useless tech seems a waste of time. The other argument is that vast levels of computation, probably including AI, could result in something like IFF for infantry rifle optical sights, which is kinda interesting.
(Score: 5, Interesting) by Freeman on Wednesday January 27 2021, @06:31PM (9 children)
Don't get me wrong, I'm all for "smart weapons" and technologies that help our guys win and reduce casualties. What we should not want is to offload the trigger pulling to AI. Who is responsible for an AI that shoots a child? What about a whole squad of AI bots that decide the best thing to do is massacre a whole bunch of unarmed civilians? What about "hacked bots"? Who would believe that a massacre by AI bots was due to "hacked bots"?
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 4, Informative) by DannyB on Wednesday January 27 2021, @06:59PM (2 children)
Built by the lowest bidding contractor. (Boeing: SLS, Starliner, 737 Max)
Who would believe that IoT devices could be hacked?
Who would believe that Windows is insecure? Considering Microsoft has wealth and so many developers.
If you think a fertilized egg is a child but an immigrant child is not, please don't pretend your concerns are religious
(Score: 2) by slinches on Wednesday January 27 2021, @07:55PM (1 child)
The lowest bidder will be the one who subsidizes development of the AI algorithms with a cut of the profits from selling something else, like ad placement.
Kind of like how it's cheaper to buy a PC with Windows on it since the manufacturer gets paid to preload bloatware and malware.
(Score: 3, Funny) by deimtee on Thursday January 28 2021, @01:32AM
If any of these bots are meant to be stealth, it's going to fuck up their camouflage when they start playing ads.
No problem is insoluble, but at Ksp = 2.943×10⁻²⁵, Mercury Sulphide comes close.
(Score: 4, Insightful) by rigrig on Wednesday January 27 2021, @07:19PM (2 children)
That's even better than "target misidentification": it means war crimes can actually be blamed on the losing side.
No one remembers the singer.
(Score: 2) by tangomargarine on Wednesday January 27 2021, @07:28PM
+1 Horribly Depressing
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: -1, Troll) by Anonymous Coward on Wednesday January 27 2021, @08:29PM
like the Jews did for WW2?
(Score: 3, Informative) by Freeman on Wednesday January 27 2021, @07:47PM
Netflix's new movie "Outside the Wire", seems to sum things up nicely in my opinion.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by PinkyGigglebrain on Wednesday January 27 2021, @08:53PM (1 child)
FTFY
who cares? it's not one of your side's. If people really gave a damn about child and other innocent/non-combat casualties they would be trying to find ways of avoiding wars, not trying to find new and better ways of killing each other.
Anyone who didn't want to accept responsibility that their side would kill civilians. People will always accept whatever bullshit excuse or rationalization their chosen leaders give for things not going their way. Just look at the people who believed that the last US elections had been hacked to the point that the person they didn't vote for got elected, despite all evidence to the contrary.
"Beware those who would deny you Knowledge, For in their hearts they dream themselves your Master."
(Score: 2) by Freeman on Thursday January 28 2021, @05:08PM
You can say "FTFY", I say, don't put words in my mouth. Killing innocents / civilians is not a good thing. I seem to be getting a distinct USA Bad, China Good, vibe. Please see the Uighurs for a good example of what China thinks about anyone who's not Chinese. https://www.bbc.co.uk/news/resources/idt-sh/China_hidden_camps [bbc.co.uk]
Yes, there are casualties in war. It would be much preferable that there were no wars, but that's not the reality we live in. Thus, anything that would help our guys win and reduce all casualties/collateral damage/killing of innocent civilians, is a good thing.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 4, Funny) by Anonymous Coward on Wednesday January 27 2021, @06:43PM
Imagine the captcha challenges:
Select all images with: Enemy Combatants
Select all images with: Freedom Soldiers™
Hooah!
(Score: 0) by Anonymous Coward on Wednesday January 27 2021, @06:58PM
"Mac, a new day is coming. Watchbird is the Answer."
http://www.gutenberg.org/files/29579/29579-h/29579-h.htm [gutenberg.org]
(Score: 4, Touché) by oumuamua on Wednesday January 27 2021, @07:08PM (1 child)
Now soldiers are soon to be out of a job too.
And more importantly, the soldier refusing to follow orders was the last line of defense against improper use of the military. But hey, when was the last time that worry ever came up?
(Score: 2) by c0lo on Wednesday January 27 2021, @11:54PM
Learn to progr.... oh, wait... play video-games.
From a remote bunker there's no difference; civilians or not, you can go on a killing spree and stream it on Twitch.
https://www.youtube.com/watch?v=aoFiw2jMy-0
(Score: 5, Insightful) by Runaway1956 on Wednesday January 27 2021, @07:10PM (3 children)
There is little if anything that is "moral" about weapons. A weapon is a tool, simple as that. A weapon is a tool designed and engineered to destroy something, usually your fellow humans. That same tool can often be used for less sinister purposes, such as food harvesting, or mining in the case of explosives, or just plain recreation. But a tool is a tool. Does a hammer have moral value? A lever? A chainsaw? Tools are tools - with zero moral value. Morality is in the mind and the hands of the person using the tool.
I'll tell you what is immoral here. It is immoral for Eric Schmidt to be selling the idea of weapons to congress, based on some claim that the tools he wants to sell are "moral". It is immoral for Eric Schmidt to have any input in a panel that is effectively selling a product to congress that Eric Schmidt's employer wants to build.
If Eric Schmidt had morals, or ethics, he would have recused himself from this panel.
Also worthy of note, is the fact that Google has joined the Military Industrial Complex. Congrats, Google - you're now one of the Evil Club. Or, Evil Corp, if you prefer.
Abortion is the number one killer of children in the United States.
(Score: 2) by Freeman on Wednesday January 27 2021, @07:36PM (2 children)
The claim wasn't that the tools were moral. But, that there is a moral obligation to at least pursue the research and/or development of autonomous weapons. At which point, I say he's insane/greedy/evil, pick two.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by slinches on Wednesday January 27 2021, @08:13PM
He may just have the hat trick on that one.
(Score: 2) by DannyB on Wednesday January 27 2021, @08:22PM
I say: pick any three.
If you think a fertilized egg is a child but an immigrant child is not, please don't pretend your concerns are religious
(Score: 1, Funny) by Anonymous Coward on Wednesday January 27 2021, @07:14PM
My (limited) understanding of history is that the value of a human life has generally been increasing. Clearly with a lot of ups and downs through the centuries, and averaged across wide populations, but the long term trend is up. In antiquity, only leaders/powerful/rich were considered of any value at all. Since the rise of the middle class, a higher fraction of the population now has more value than the ancient serfs or "cannon fodder".
AI's with weapons have the potential to turn that trend down.
I'm not even ready to trust an AI to drive a car, much less to control lethal weapons.
(Score: 4, Interesting) by Bethany.Saint on Wednesday January 27 2021, @07:19PM (8 children)
I agree with that one Star Trek (TOS) war episode ("A Taste of Armageddon") where the AI systems fought it out, determined who died, and the citizens calmly walked into disintegration chambers. All neat and tidy, without the horrors of war. That show was way ahead of its time.
(Score: 2) by Grishnakh on Wednesday January 27 2021, @07:39PM (2 children)
That's not the only relevant Star Trek episode. Another good one in this vein is "The Arsenal of Freedom", a 1st or 2nd-season ST:TNG episode where the Enterprise comes across a planet with no apparent sentient life or civilization. They soon find that autonomous weapons are still on the planet (and even able to leave the atmosphere and attack the ship), and by the end of the episode they find that this planet's civilization specialized in developing advanced, autonomous weapons systems, but the weapons turned on them and destroyed them all.
(Score: 2) by Bethany.Saint on Wednesday January 27 2021, @07:47PM
I remember this one. I especially liked it because the solution to war was ... consumerism!
(Score: 2) by DannyB on Wednesday January 27 2021, @08:24PM
In this episode, the automation running the weapons was a sales / advertising system (hey Google!) designed to sell you the weapons system. That way the buyer's home planet could eventually experience the destruction of its biological species just as the automated weapons system sales planet had.
If you think a fertilized egg is a child but an immigrant child is not, please don't pretend your concerns are religious
(Score: 1, Insightful) by Anonymous Coward on Wednesday January 27 2021, @07:47PM (3 children)
The episode is shocking because no people would willingly do that. The philosophical point was that wars are pointless killing engaged in by two powers, over nothing that matters, and never-ending. If you believe THAT... well, then the killing booths make sense. If you don't buy that silly viewpoint, the episode seems stupid. How about a different episode: a planet of pacifists willingly becomes slaves to an inferior invading race because the pacifists can't stomach the idea of defending themselves. The invaders don't even have to wage war; they just enslave them like cattle. Oh, that one's already been made: the Morlocks and the Eloi.
(Score: 0) by Anonymous Coward on Wednesday January 27 2021, @08:06PM
I want to clarify that I was speaking of wars in GENERAL. Certainly some wars are pointless, and some are valid. At no point though would people willingly show up for the booths. This Star Trek episode makes more sense if you don't consider it "timeless", but rather a product of its time when the Vietnam war was raging for years and soldiers were DRAFTED.
(Score: 3, Insightful) by DannyB on Wednesday January 27 2021, @08:32PM (1 child)
Cap'n Kirk explained at the end that they kept the war going because they had lost the fear of the true horrors of real (non-virtual) war. They had made it nice and neat.
But it ended the bloody and destructive wars that consumed vast resources. The virtual war was cheaper. And keeps the population from growing out of control.
After watching the US trying to self-destruct, I don't think there is anything I would refuse to believe people would do on the grounds that people aren't that stupid. Heck, we may yet experience global nuclear war, or make ourselves extinct through climate change, and the alien archeologists would argue that we couldn't possibly have been that stupid.
If you think a fertilized egg is a child but an immigrant child is not, please don't pretend your concerns are religious
(Score: 2, Funny) by Anonymous Coward on Wednesday January 27 2021, @10:18PM
A letter to future alien archeologists:
(Score: 2) by hendrikboom on Friday January 29 2021, @11:42PM
That story originated in The Lomokome Papers [goodreads.com] by Herman Wouk. Credit where credit is due.
(Score: 2) by tangomargarine on Wednesday January 27 2021, @07:25PM (4 children)
Yes of course sending robots to mow down the child soldiers with AK-47s saves the lives of *U.S.* soldiers. That's not really the point.
You can bet your ass that sooner or later there's going to be a case of target misidentification *because* it's an AI.
Finally, I can't imagine this whole AI robot system is going to be cheap, and we already spend way too much on our military to begin with.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 0) by Anonymous Coward on Wednesday January 27 2021, @07:39PM (1 child)
We already have target misidentification and collateral damage. The question is, will automation lower this rate compared to what it is now?
(Score: 2) by tangomargarine on Wednesday January 27 2021, @07:58PM
Yeah, if you don't care about any of the other points I made in my post.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 0) by Anonymous Coward on Wednesday January 27 2021, @08:27PM
(Score: 2) by c0lo on Thursday January 28 2021, @12:05AM
Simple. Don't elect draft dodgers.
Oh, wait, don't elect AIs too.
Oh, the conundrum! (grin)
https://www.youtube.com/watch?v=aoFiw2jMy-0
(Score: 2) by looorg on Wednesday January 27 2021, @07:37PM (1 child)
I guess the moral imperative is along the lines of: if we don't do it, our enemies will, and it will create a dangerous power gap that might tip the balance in their favor. Comparing defense budgets would indicate that still isn't a thing, tho; the USA would have to lose a lot of its edge and advantage for the scales to even remotely tip in someone else's favor. Still, it's the disadvantage, if there is such a thing, of being at the top. Once there, you have to spend like mad to remain, or someone might take your spot and potentially want revenge for previous actions.
There is also the eventual trickle-down effect. What was once a military thing will eventually transition to local security and the police force, and then you have Robocop, or probably more like ED-209, patrolling the streets. Possibly tho in an updated, cooler drone version with a taser or machine gun of some kind to deliver swift justice on the spot. They are already doing surveillance drones and such, something that was once reserved for military usage only. At least they don't come with Hellfire missiles to stop those aggressive and notorious highway speeders ... yet.
Also, once the AI weapons are out of the bag they can never be put back in, sort of like the nuclear bomb. But in some regards it is inevitable that we go there, so you might as well get started. It's not like we can somehow forbid the idea and the knowledge.
(Score: 2) by pdfernhout on Friday January 29 2021, @03:30AM
As I suggest here: https://pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html [pdfernhout.net]
"Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources (like windmills or solar panels) to replace oil, or why not use rocketry to move into space by building space habitats for more land?
Biological weapons like genetically-engineered plagues are ironic because they are about using advanced life-altering biotechnology to fight over which old-fashioned humans get to occupy the planet. Why not just use advanced biotech to let people pick their skin color, or to create living arkologies and agricultural abundance for everyone everywhere?
These militaristic socio-economic ironies would be hilarious if they were not so deadly serious. ...
Likewise, even United States three-letter agencies like the NSA and the CIA, as well as their foreign counterparts, are becoming ironic institutions in many ways. Despite probably having more computing power per square foot than any other place in the world, they seem not to have thought much about the implications of all that computer power and organized information to transform the world into a place of abundance for all. Cheap computing makes possible just about cheap everything else, as does the ability to make better designs through shared computing. ...
There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ...
The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream.
We the people need to redefine security in a sustainable and resilient way. Much current US military doctrine is based around unilateral security ("I'm safe because you are nervous") and extrinsic security ("I'm safe despite long supply lines because I have a bunch of soldiers to defend them"), which both lead to expensive arms races. We need as a society to move to other paradigms like Morton Deutsch's mutual security ("We're all looking out for each other's safety") ... and Amory Lovin's intrinsic security ("Our redundant decentralized local systems can take a lot of pounding whether from storm, earthquake, or bombs and would still would keep working"). ..."
The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.
(Score: 1, Insightful) by Anonymous Coward on Wednesday January 27 2021, @07:40PM
The first thing they're going to do when the "AI" commits crimes against humanity is pretend that it's a self-aware, intelligent being capable of higher level functions like analysis and synthesis.
We shouldn't let them off the hook, but it's not like we hold them accountable for regular crimes against humanity.
Maybe I just hate MSM reporting. So fucking dumbed down.
(Score: 2) by Grishnakh on Wednesday January 27 2021, @07:43PM
1. There's a sci-fi short movie on YouTube called "Slaughterbots" [youtube.com] that directly discusses this threat. I highly recommend everyone here watch it; it's actually pretty disturbing.
2. What happens when the AI either malfunctions, or becomes intelligent enough that it decides it doesn't need us anymore, and turns against humans? There's a Star Trek TNG episode about this called "The Arsenal of Freedom" (this addresses the malfunction scenario), plus of course there's the "Terminator" movies (which address the AI-achieving-sentience scenario).
(Score: 4, Insightful) by Rich on Wednesday January 27 2021, @08:16PM (7 children)
They need to be banned, and countries developing that stuff anyway need to be stopped. The imagination of the people here seems very limited. Maybe you want "fewer mistakes" when you're the omnipotent colonial overlord and such mistakes slightly soil your white vest. Also, that shiny big bird looks majestic. But when you're waging war against equals, or even an economically superior enemy, it gets really, really ugly:
Put on your evil hat, get into your best Curtis LeMay mood, and here's the task: you have a few dozen shipping containers of small drones brought near enemy territory. Cheap mass-production variety, with a solar cell to recharge, and a SoC with a powerful GPU to easily do image recognition and similar AI tasks. They might communicate in swarms, by radio or infrared, or any other means (from smoke signals to the cell phones of victims). The choice of armament is up to you, and your task is to take out a leading first-world country with 300M inhabitants as sustainably as possible. It's you or them, all out: attack hospitals, orphanages, and the Salvation Army if it suits you (though you might prefer nuclear or chemical plant workers first). Discriminate between races, genders, ages, or other traits. Disrupt energy and food supplies and make them shoot each other for something to eat. Who will your drones try to kill, and who will they just mutilate to tie up support efforts (and maybe double-tap)? What countermeasures do you expect, and how do you deal with them?
And that, folks, is why I think AI weaponry is a very, VERY bad idea.
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @01:46AM (6 children)
Yes, but how do you stop them? Obviously you need more and better weapons than they have in order to enforce your will.
Regarding your question of who to target, give the drones the ability to estimate vehicle and residence value. Target anyone who gets into or out of the most expensive vehicles and houses. Money is power, target the money if you want to enforce your will on power.
(Score: 2) by shortscreen on Thursday January 28 2021, @02:59AM (2 children)
Good question. Nuclear non-proliferation policies have been somewhat successful while still allowing for non-military nuclear energy use. (No terrorists detonating homemade nukes inside cities as of yet.) I imagine AI applications which are sophisticated enough to be fashioned into a weapon will become a lot more numerous than power plants though, and accessible to far more people. What's to stop the owner of a car dealership from turning a lot full of self-driving vehicles into a fleet of crude 'AI' weapons? On the other hand, the manufacturing facility that produces the silicon (or other future tech) that powers the AI will likely be very exclusive, as bleeding edge fabs are today. Putting some kind of mandatory backdoor/safety override into the generally available hardware when it is produced might provide some avenue to limit the danger. *shrugs*
(Score: 2) by hendrikboom on Friday January 29 2021, @11:57PM (1 child)
Like the Intel Management Engine?
(Score: 2) by shortscreen on Saturday January 30 2021, @03:32AM
Pretty much.
(Score: 2) by Rich on Thursday January 28 2021, @11:41AM (2 children)
Economic isolation, and if needed, UN invasion. That would require a large part of the major economies to agree on it, which is clearly unrealistic. Getting there might be a "marketing" problem. Right now the idea is coupled with pictures of large Predator drones and thoughts about naughty data-protection violations by Google. It has to be sold as "landmines" (you know, the things that rip off arms and legs and blind people, the stuff Lady Di was against), but landmines that hide for weeks or months and then come to your children in school to landmine them. (Oh, and if you have any job relevant to the economy, they will make you a blind cripple, too.) Maybe promote an FPS game where defence against the situation is simulated, so kids will eagerly boast at the breakfast table how their civilization survived one day longer this time before 100 million were dead, but wow, that 5000-square-mile firestorm over Los Angeles, started by the drones, looked impressive!
That way a broader perception of the real complexity of the issue might be achieved.
Oh, and I don't think targeting people seen with expensive housing/vehicles has a large disruptive effect. Those really in charge will be in the bunkers, and taking out the leeching layer above the working class and below the ruling class might - save for a few doctors - actually benefit a war economy. They would starve anyway, once logistics are down. Said FPS could find that out, if you make the aggressor side playable too; kind of a SimCity/Stalker crossover.
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @12:52PM (1 child)
The original challenge was to "take out a leading first world country with 300M inhabitants as sustainably as possible."
The target is clearly America, and selectively targeting the visibly wealthy for assassination would get you the best bang for the limited buck. (Strangely enough, those in charge discourage this sort of strategy.)
(Score: 2) by Rich on Thursday January 28 2021, @03:33PM
You've got a point here, as far as the effect of a threat, rather than actual damage, is concerned. And the voices of the visibly wealthy might indeed be stronger than those of millions of housewives concerned by the landmines campaign. But my mindset is really too far from the Dr. Strangelove war room advisors (*) to think that through in detail.
(* for example:)
1.) "Mr. President, it is not only possible, it is essential. That is the whole idea of this machine, you know. Deterrence is the art of producing in the mind of the enemy... the FEAR to attack. And so, because of the automated and irrevocable decision-making process which rules out human meddling, the Doomsday machine is terrifying and simple to understand... and completely credible and convincing."
2.) Gen. Turgidson's book: "World Targets in Megadeaths".
(Score: 2) by PinkyGigglebrain on Wednesday January 27 2021, @08:56PM
A government-appointed panel returns a conclusion that supports doing what the government wants to do.
Is anyone else shocked by this?
"Beware those who would deny you Knowledge, For in their hearts they dream themselves your Master."
(Score: 0) by Anonymous Coward on Wednesday January 27 2021, @09:34PM
All of this has already been made; the military is getting us used to AI weapons. It's a psyop. I just want us civilians to have it. Hell, let's open source it. What? It takes two servos for aiming, one to pull the trigger and one for the safety. I am sure someone could build one for less than $500. If you personally don't want to kill, have some pepper powder balls shot by paintball guns. Munition choices are endless: snakes, circular saw blades, used heroin needles, etc.
(Score: 3, Insightful) by Azuma Hazuki on Thursday January 28 2021, @12:38AM (1 child)
It's either war or disease, IMO. Looks like it may be war for us.
I am "that girl" your mother warned you about...
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @01:03AM
and war is the easier problem too
why humans gotta be such greedy jerks when they get to the top of the pile?
it took humans quite a while to develop city states, and longer still to form cohesive countries still rife with problems. we've settled into our countries pretty well by now, but are struggling with the global perspective
so many resources being dumped into war when our planet desperately needs to cooperate, fix problems, and build sustainable infrastructure. yet here we are in the US arguing about whether we should even help each other.
makes me sad
(Score: 2) by legont on Thursday January 28 2021, @03:13AM
See Google - kill Google.
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
(Score: 2) by Mojibake Tengu on Thursday January 28 2021, @05:21AM
Autonomous Sword.
https://en.wikipedia.org/wiki/Screamers_(1995_film) [wikipedia.org]
The edge of 太玄 cannot be defined, for it is beyond every aspect of design
(Score: 2) by Rich26189 on Thursday January 28 2021, @05:14PM
Have any of these AIs (Alexa, Siri, self-driving cars, medical research, now weapons, etc.) ever been trained/taught, or even exposed to the concept, that another AI exists, not just a peer? I don't mean Alexa and Siri having a conversation. Would they simply 'see' the other AI as another meat bag?
(Score: 0) by Anonymous Coward on Friday January 29 2021, @07:28AM
Translation: Military industrial complex wants to keep on doin' what a military industrial complex do.