from the what-could-possibly-go-wrong dept.
Submitted via IRC for TheMightyBuzzard
Recently, Russian arms manufacturer Kalashnikov Concern unveiled its work on a fully automated combat machine. It looks like a drone, but the neural network that controls it allows for some autonomous ability, which is going to make for some very interesting conversation at the upcoming ARMY-2017 forum. Did somebody say war robots?
For that matter, now that neural networks are effectively being weaponized, I'm sure there will be some important moral debates about their use on the field of battle. Not the least of which will be: "Isn't this exactly what Skynet wants?"
But, and we've said this many times before, technology is a tool.
It isn't inherently good or bad; that depends entirely on the intentions of the user. In this case, the technology is a weapon, but that is the purview of a military, and I think we can judge them according to their actions instead of their tech.
Plus, the robot is really freaking cool. We'd be doing it a disservice by ignoring that. Let's take a closer look.
We all know that drones are already used in combat, but this robot is no drone.
Drones require operators, and while modern drones do have elements that can acquire targets without human control, they aren't fully autonomous. With a neural network controlling the machine, full autonomy becomes possible.
So far, there's no word on whether the module will fire without human authorization. What information we do have suggests that the neural network is intended to quickly acquire many targets, something well within the capabilities of modern AI technology.