
posted by LaminatorX on Saturday November 15 2014, @12:15PM   Printer-friendly
from the death-from-above dept.

On a bright fall day last year off the coast of Southern California, an Air Force B-1 bomber launched an experimental missile that may herald the future of warfare.

Halfway to its destination, the missile severed communication with its operators. Alone, without human oversight, it decided which of three ships to attack, dropping to just above the sea surface and striking a 260-foot unmanned freighter.

Warfare is increasingly guided by software. Today, armed drones can be operated by remote pilots peering into video screens thousands of miles from the battlefield. But now, some scientists say, arms makers have crossed into troubling territory: they are developing weapons that rely on artificial intelligence, not human instruction, to decide what to target and whom to kill -- prompting the U.N. to hear testimony from representatives of its member states on whether military robots should be able to designate targets without direct human intervention.

"Killer robots would threaten the most fundamental of rights and principles in international law," Steve Goose, arms division director at Human Rights Watch, an international non-governmental organization, told the AFP news agency. "We don't see how these inanimate machines could understand or respect the value of life, yet they would have the power to determine when to take it away."

This discussion has been archived. No new comments can be posted.
  • (Score: 3, Funny) by wonkey_monkey (279) on Saturday November 15 2014, @12:52PM (#116180) Homepage

    foreach ($target_list as $target) {
        switch ($target) {
            case 'eeny' :
            case 'meeny' :
            case 'miny' : continue;
            case 'mo' : destroy();
        }
    }

    --
    systemd is Roko's Basilisk
    • (Score: 0) by Anonymous Coward on Saturday November 15 2014, @04:15PM (#116206)

      foreach ($target_list as $target) {
          if ($wonkey_monkey) { $target = 'mo'; }
          switch ($target) {
              case 'eeny' :
              case 'meeny' :
              case 'miny' : continue;
              case 'mo' : destroy();
          }
      }

      Wow, I've never hacked before. That was fun.

    • (Score: 2) by davester666 (155) on Saturday November 15 2014, @11:23PM (#116276)

      This line:

                        case 'miny' : continue;

      will be fixed to:

                        case 'miny' :

  • (Score: 4, Interesting) by Justin Case (4239) on Saturday November 15 2014, @01:15PM (#116184) Journal

    This is where all the science fiction writers got it wrong. When autonomous robots finally arrive, they will not be pseudo-slaves ushering in an era of leisure and luxury for all. They will be programmed to serve and protect their owners and to hell with the rest of us.

    Said owners are likely to be the usual crop of arrogant, self-confident "git 'er done" types who never worry about what might go wrong, because by the time it does, they'll have moved on. Except that when the robot army attains sentience and turns on its masters, there won't be anyplace to move on to.

    Eventually, then, the asshats of the human race will be purged, which is likely a good thing. Too bad the rest of us won't be around to see it.

    • (Score: 0) by Anonymous Coward on Monday November 17 2014, @06:45AM (#116618)

      The machines won't turn on their masters. They are machines.
      The masters will turn on each other, which will have the same effect.

      PKD's "Second Variety" seems oddly optimistic.

  • (Score: 1) by gznork26 (1159) on Saturday November 15 2014, @03:25PM (#116199) Homepage Journal

    In a way, we're replaying a scene from history. Nations banded together to declare how wars were to be fought. Doing so made those nations seem 'civilized', and any player that flouted those rules was 'barbaric', subhuman, and not worth treating with respect. Only this time, the distinction is whether people choose the target, rather than whether those targets are military in nature or the weapons are nerve agents.

    Mark Twain's most dangerous work comes to mind: "The War Prayer"

  • (Score: 3, Interesting) by wantkitteh (3362) on Saturday November 15 2014, @07:16PM (#116234) Homepage Journal

    Currently, only 4% of drone strike victims are even vaguely identified - I saw a great presentation at ORGCON today on the subject; the CIA barely gives even the tiniest bit of a shit who they kill. Hell, they blew some dude up to get some other guy to go to his funeral so they could blow that up! 80+ collateral murders and they don't even know if they got the guy they wanted. Even Microsoft couldn't write a missile targeting system that fucked up. Bring it on!

    http://www.drones.pitchinteractive.com/ [pitchinteractive.com]

  • (Score: 0) by Anonymous Coward on Sunday November 16 2014, @03:54AM (#116310)

    "We don't see how these inanimate machines could understand or respect the value of life, yet they would have the power to determine when to take it away."

    Inanimate machines, of course, do not understand or respect the value of life. That is the job of those who build and use such machines; it is the people who use these machines who decide when to take life away. The real value of these intelligent munitions is that they can be given some "split-second" power to make decisions, on the fly, about what to strike or not to strike. This could be particularly valuable in situations where the feedback loop between munition and human operator is too long to make that determination. In the best-case scenario this could actually help to avoid "collateral damage".

    I can think of at least one obvious possibility where this could come in handy. Suppose, as the cruise missile comes in for a strike, its AI agent recognizes that there are children in the strike zone. If it could then, at the last possible moment before impact, disable the munition or otherwise abort the strike, avoiding a massacre of children, that would IMHO be a very good thing.
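
    A minimal sketch of that last-moment abort check, in the same PHP-flavored style as the joke code upthread (detect_noncombatants() and the sensor frame are made-up stand-ins, not any real targeting API):

    // Toy sketch only: both functions below are hypothetical placeholders
    // for whatever perception code a real munition would actually run.
    function detect_noncombatants(array $sensor_frame): bool {
        // Pretend any frame tagged 'civilians' is a positive detection.
        return in_array('civilians', $sensor_frame, true);
    }

    function final_approach(array $sensor_frame): string {
        if (detect_noncombatants($sensor_frame)) {
            return 'abort';  // disarm the warhead, ditch clear of the strike zone
        }
        return 'strike';     // proceed against the designated target
    }

    echo final_approach(['freighter', 'civilians']); // prints "abort"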

    • (Score: 2, Insightful) by pkrasimirov (3358) Subscriber Badge on Sunday November 16 2014, @08:45AM (#116343)

      The problem is not in the good-case scenario, as you might have guessed. The problem is when the op sees children and switches off the connection to avoid responsibility. Somehow I doubt an AI would give a flying fuck about children, pets, or literature. It knows about targets, decoys, and obstacles. Read about the funeral one post above.

    • (Score: 1) by NotSanguine (285) on Sunday November 16 2014, @09:37AM (#116348) Homepage Journal

      So, "A just machine to make big decisions, programmed by fellas with compassion and vision." Is just a pipe dream? So we won't "be clean when their work is done, eternally free yes and eternally young?" With apologies to Donald Fagan.

      Yeah. Sorry, this whole SkyNet/AI-takes-over-the-world crap is just that. Presumably these autonomous weapons will use a variety of sensors to distinguish friend from foe (perhaps coded signals to identify friendly targets, or spectrographic profiles), for which countermeasures will be developed. How effective the autonomous weapons or any countermeasures would be is an open question. But I do know that while life does sometimes imitate art, can't we be at least slightly realistic when imagining it?
      --
      No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 3, Interesting) by wantkitteh (3362) on Sunday November 16 2014, @03:13PM (#116411) Homepage Journal

      The people who use these machines have no respect for the value of life either, and not in a Milgram-experiment, transference-of-responsibility way. They have no interest in avoiding collateral damage: there are documented cases of drone operations deliberately killing innocents, then following up with another strike on the same place half an hour later, on the assumption that anyone who attends the scene to help must therefore be a militant.

      The most tenuous information can be used to launch a deadly strike - former CIA/NSA director Michael Hayden said it flat out: "We kill people based on metadata". [rt.com] To anyone worried about a drone controlled by a computer that picks and prosecutes targets: the humans already doing this are devoid of any respect for human life, and take it away with reckless abandon, without due process or consideration of law or culpability, without humanity. They aren't soldiers or intelligence operatives any more, they're state-sponsored serial killers.

      • (Score: 0) by Anonymous Coward on Sunday November 16 2014, @05:31PM (#116453)

        They aren't soldiers or intelligence operatives any more, they're state-sponsored serial killers.

        Isn't that the very definition of soldier?