
posted by Fnord666 on Tuesday July 18 2017, @05:19PM   Printer-friendly
from the what-could-possibly-go-wrong dept.

Submitted via IRC for TheMightyBuzzard

Recently, Russian arms manufacturer Kalashnikov Concern unveiled its work on a fully automated combat machine. It looks like a drone, but the neural network that controls it allows for some autonomous operation, which should make for some very interesting conversation at the upcoming ARMY-2017 forum. Did somebody say war robots?

For that matter, now that neural networks are effectively being weaponized, I'm sure there will be some important moral debates about their use on the field of battle. Not the least of which will be: "Isn't this exactly what Skynet wants?"

But, and we've said this many times before, technology is a tool.

It isn't inherently good or bad; that depends entirely on the intentions of the user. In this case, the technology is a weapon, but that is the purview of a military, and I think we can judge them according to their actions instead of their tech.

Plus, the robot is really freaking cool. We'd be doing it a disservice by ignoring that. Let's take a closer look.

We all know that drones are already used in combat, but this robot is no drone.

Drones require operators, and while modern drones do have elements that can acquire targets without human control, they aren't fully autonomous. By using a neural network to control the drone, full autonomy is possible.

So far, there's no word on whether the module will fire without human authorization. What information we do have suggests that the neural network is intended to quickly acquire many targets at once, something well within the capabilities of modern AI technology.
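The article gives no details of Kalashnikov's system, so as a purely illustrative sketch of what "quickly acquiring many targets" means in software terms: a detection network scores every candidate region independently, and everything above a confidence threshold is kept. The `mock_network` function below is an invented stand-in, not anything from the source.

```python
# Illustrative sketch only: a stand-in "network" scores candidate regions,
# and target acquisition is just thresholding those scores. A real system
# would use a trained object detector; nothing here reflects the actual
# Kalashnikov module.

def mock_network(region):
    """Stand-in for a trained detector: returns a confidence in [0, 1]."""
    # Pretend signal strength correlates with "target-ness" in this toy.
    return min(1.0, region["signal"] / 100.0)

def acquire_targets(regions, threshold=0.5):
    """Score every candidate in one pass and keep the high-confidence ones.

    Handling many targets at once is cheap because each region is scored
    independently; with a real network this is one batched forward pass.
    """
    scored = [(r, mock_network(r)) for r in regions]
    return [r for r, conf in scored if conf >= threshold]

candidates = [
    {"id": "A", "signal": 90},
    {"id": "B", "signal": 20},
    {"id": "C", "signal": 75},
]
targets = acquire_targets(candidates)
print([t["id"] for t in targets])  # A and C clear the 0.5 threshold
```

The point is that the per-target cost is constant, so "many targets" is a batching problem rather than an AI breakthrough.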

Source: https://edgylabs.com/war-robots-automated-kalashnikov-neural-network-gun/


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by JoeMerchant on Tuesday July 18 2017, @08:23PM (2 children)

    by JoeMerchant (3937) on Tuesday July 18 2017, @08:23PM (#541141)

    The AI decides what you allow it to decide. If mercy killing or population culling are highly undesirable options, they must be costed as such in the evaluation.
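    The commenter's point can be sketched in a few lines: an AI "decides" by maximizing a score, so a forbidden outcome only stays off the table if its cost is built into that score. All numbers and action names below are invented for illustration.

```python
# Toy sketch: prohibited actions must be costed into the evaluation
# so heavily that no achievable mission value can outweigh them.
# Everything here is hypothetical; it illustrates the argument only.

PROHIBITED_PENALTY = -1_000_000  # dominates any achievable mission value

def evaluate(action):
    """Mission value minus penalties for undesirable outcomes."""
    score = action["mission_value"]
    if action["prohibited"]:
        score += PROHIBITED_PENALTY
    return score

actions = [
    {"name": "hold_fire",      "mission_value": 10, "prohibited": False},
    {"name": "engage_target",  "mission_value": 50, "prohibited": False},
    {"name": "engage_unknown", "mission_value": 80, "prohibited": True},
]
best = max(actions, key=evaluate)
print(best["name"])  # the highest-raw-value action is costed out
```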

    When the first AI drone mistakes your wife or child for an enemy combatant and "neutralizes the threat," you'll start to place more importance on the responsibility assigned to single-unit operators, man or machine.

    --
    🌻🌻 [google.com]
  • (Score: 2) by meustrus on Tuesday July 18 2017, @08:49PM (1 child)

    by meustrus (4961) on Tuesday July 18 2017, @08:49PM (#541156)

    I said it's still scary. The point is that it isn't Skynet.

    --
    If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
    • (Score: 2) by JoeMerchant on Wednesday July 19 2017, @02:29AM

      by JoeMerchant (3937) on Wednesday July 19 2017, @02:29AM (#541281)

      No, we won't need Skynet; the coming destruction of financial markets by quant algorithms should be enough to make life seriously painful in the nearer term.

      --
      🌻🌻 [google.com]