Submitted via IRC for TheMightyBuzzard
Recently, Russian arms manufacturer Kalashnikov Concern unveiled its work on a fully automated combat machine. It looks like a drone, but the neural network that controls it gives it a degree of autonomy, which is going to make for some very interesting conversation at the upcoming ARMY-2017 forum. Did somebody say war robots?
For that matter, now that neural networks are basically being weaponized, I'm sure there will be some important moral debates about their use on the battlefield. Not the least of which will be: "Isn't this exactly what Skynet wants?"
But, and we've said this many times before, technology is a tool.
It isn't inherently good or bad; that depends entirely on the intentions of the user. In this case, the technology is a weapon, but weapons are the purview of militaries, and I think we can judge them by their actions rather than their tech.
Plus, the robot is really freaking cool. We'd be doing it a disservice by ignoring that. Let's take a closer look.
We all know that drones are already used in combat, but this robot is no drone.
Drones require operators, and while modern drones do have systems that can acquire targets without human control, they aren't fully autonomous. Controlling the weapon with a neural network makes full autonomy possible.
So far, there's no word on whether the module will fire without human authorization. What information we do have suggests that the neural network is intended to acquire many targets quickly, something well within the capabilities of modern AI technology.
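To make that last claim concrete: off-the-shelf object detectors can already flag many objects in a single pass over a video frame. The sketch below is purely illustrative and has nothing to do with Kalashnikov's actual system; it assumes a Python environment with PyTorch, torchvision, and Pillow installed, and uses a generic COCO-pretrained detector to count high-confidence detections in one (hypothetical) image file.

```python
# Illustrative only: a generic multi-object detector, not anything from the
# Kalashnikov system. Assumes torch, torchvision, and Pillow are installed.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained Faster R-CNN: detects ~80 everyday object classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(image_path, score_threshold=0.8):
    """Return boxes, class labels, and scores for all detections above a threshold."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([image])[0]          # single-image batch
    keep = pred["scores"] >= score_threshold
    return pred["boxes"][keep], pred["labels"][keep], pred["scores"][keep]

boxes, labels, scores = detect("frame.jpg")   # hypothetical input frame
print(f"{len(boxes)} objects flagged in one pass over the frame")
```

The point is only that a single forward pass yields every candidate at once, which is what makes "acquire many targets quickly" plausible; whether anything fires on them is a separate question that, per the article, remains unanswered.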
Source: https://edgylabs.com/war-robots-automated-kalashnikov-neural-network-gun/
(Score: 2) by looorg on Tuesday July 18 2017, @07:26PM (3 children)
While it could be both a legal and a moral question, as it is now, as far as I know, it's not legal for drones to decide for themselves where to drop the Hellfire; a human must make the kill command or decision -- as noted in the submission. How about if you feed the killbot (or whatever we like to call them) a list of targets? Would that be ok? Or will Clippy-the-angel-of-death have to pop up as some kind of window to ask permission before wasting someone in a hail of bullets? Perhaps legal grounds can be found in the ban on landmines (or at least anti-personnel landmines), but then neither the USA, Russia, nor China signed the Ottawa Treaty on land mines either, so they could just not give a fuck. After all, landmines are an autonomous weapon of sorts, dropped or planted somewhere to act on their own; sure, they don't have some fancy AI neural network to make decisions, but usually just require pressure to work.
I guess until ED-209 comes along there might not be much talk about it. Let's just hope they have slightly better prototype testing for this warbot.
(Score: 0) by Anonymous Coward on Tuesday July 18 2017, @07:36PM (1 child)
Does the human player need to know the drone is killing real people, or can we get some Call of Duty trolls to work the controller remotely?
(Score: 3, Funny) by looorg on Tuesday July 18 2017, @07:57PM
I'm sure the Pentagon can afford to hire one or two philosophy majors to argue about what is real and what is not.
(Score: 0) by Anonymous Coward on Wednesday July 19 2017, @06:52PM
Hahahaha... legal? You do realize that the laws are written by those who have won the war, don't you? And besides, when has legality ever been an issue if the State really wants to fuck someone up?