
posted by azrael on Thursday November 13 2014, @08:26AM   Printer-friendly
from the reverse-polarity dept.

A United Nations commission is meeting in Geneva, Switzerland today to begin discussions on placing controls on the development of weapons systems that can target and kill without the intervention of humans, the New York Times reports. The discussions come a year after a UN Human Rights Council report called for a ban (pdf) on “Lethal autonomous robotics” and as some scientists express concerns that artificially intelligent weapons could potentially make the wrong decisions about who to kill.

SpaceX and Tesla founder Elon Musk recently called artificial intelligence potentially more dangerous than nuclear weapons.

Peter Asaro, the cofounder of the International Committee for Robot Arms Control (ICRAC), told the Times, “Our concern is with how the targets are determined, and more importantly, who determines them—are these human-designated targets? Or are these systems automatically deciding what is a target?”

Intelligent weapons systems are intended to reduce the risk to both innocent bystanders and friendly troops, focusing their lethality on carefully—albeit artificially—chosen targets. The technology in development now could allow unmanned aircraft and missile systems to avoid and evade detection, identify a specific target from among a clutter of others, and destroy it without communicating with the humans who launched them.

Related Stories

Elon Musk: "We are Summoning the Demon" 94 comments

Elon Musk was recently interviewed at an MIT symposium. An audience member asked for his views on artificial intelligence (AI). Musk turned very serious and urged extreme caution and national or international regulation to avoid "doing something stupid," as he put it.

"With artificial intelligence we are summoning the demon", said Musk. "In all those stories where there's the guy with the pentagram and the holy water, it's like, 'Yeah, he's sure he can control the demon.' Doesn't work out."

Read the story and see the full interview here.

Robot Weapons: What’s the Harm? 33 comments

Opposition to the creation of autonomous robot weapons has been the subject of discussion here recently. The New York Times has added another voice to the chorus with this article:

The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.

The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn't stand up under scrutiny. However high-tech those systems are in design, in their application they are "dumb" — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.

A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).

Personally, I dislike the idea of using AI in weapons to make targeting decisions. I would hate to have to argue with a smart bomb to try to convince it that it should not carry out what it thinks is its mission because of an error.


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 2) by RobotMonster on Thursday November 13 2014, @08:34AM

    by RobotMonster (130) on Thursday November 13 2014, @08:34AM (#115474) Journal

    It's almost like they don't want my army of self-replicating robot spiders to take over the world :-(

  • (Score: 4, Interesting) by takyon on Thursday November 13 2014, @09:42AM

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday November 13 2014, @09:42AM (#115490) Journal

    Forget autonomous drones.

    Do existing remote-controlled lethal drones with human-designated targets comply with international law?

    Do drone attacks comply with international law? [politifact.com]

    Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism [ohchr.org]

    Resolution adopted by the Human Rights Council. 25/22. Ensuring use of remotely piloted aircraft or armed drones in counterterrorism and military operations in accordance with international law, including international human rights and humanitarian law [un.org]

    1. Urges all States to ensure that any measures employed to counter terrorism, including the use of remotely piloted aircraft or armed drones, comply with their obligations under international law, including the Charter of the United Nations, international human rights law and international humanitarian law, in particular the principles of precaution, distinction and proportionality;

    United States of America Practice Relating to Rule 14. Proportionality in Attack [icrc.org]

    The US Air Force Commander’s Handbook (1980) states that “a weapon is not unlawful simply because its use may cause incidental or collateral casualties to civilians, as long as those casualties are not foreseeably excessive in light of the expected military advantage”.

    The US Instructor’s Guide (1985) states: "In attacking a military target, the amount of suffering or destruction must be held to the minimum necessary to accomplish the mission. Any excessive destruction or suffering not required to accomplish the objective is illegal as a violation of the law of war."

    The US Naval Handbook (2007) states: "The principle of proportionality is directly linked to the principle of distinction. While distinction is concerned with focusing the scope and means of attack so as to cause the least amount of damage to protected persons and property, proportionality is concerned with weighing the military advantage one expects to gain against the unavoidable and incidental loss to civilians and civilian property that will result from the attack. The principle of proportionality requires the commander to conduct a balancing test to determine if the incidental injury, including death to civilians and damage to civilian objects, is excessive in relation to the concrete and direct military advantage expected to be gained. Note that the principle of proportionality under the law of armed conflict is different than the term proportionality as used in self-defense. It is not unlawful to cause incidental injury to civilians, or collateral damage to civilian objects, during an attack upon a legitimate military objective. The principle of proportionality requires that the anticipated incidental injury or collateral damage must not, however, be excessive in light of the military advantage expected to be gained. A military objective within a city, town, or village may, however, be bombarded if required for the submission of the enemy with the minimum expenditure of time, life, and physical resources. The anticipated incidental injury to civilians, or collateral damage to civilian objects, must not be excessive in light of the military advantage anticipated by the attack."

    Drones pose no risk (other than PTSD [nytimes.com]) to the life of the drone operator. It is known that drone bombings, however precisely targeted they are claimed to be, inevitably result in civilian casualties [theguardian.com]. But drones can act in ways that would be risky for human-piloted aircraft. They can linger or hover in dangerous areas without risking the life of a pilot, and they may be less noisy and more stealthy due to their lack of a cockpit. Therefore, drones that kill are convenient, yet not necessarily proportional, given that they may be engineered to subdue targets using nonlethal capabilities at no risk to a nation's soldiers and pilots.

    What kinds of nonlethal capabilities? Rubber bullets or chemical weapons are a possibility [texasobserver.org], but probably not very effective at stopping targets from fleeing. Sticky foam [defensetech.org] could make a comeback given some R&D. A flying Active Denial System [wikipedia.org] (microwave weapon) probably wouldn't work due to size and power requirements. Sonic weaponry [wikipedia.org]? Better to burst a target's eardrums than kill them.

    What do you do with an incapacitated target? You work with the host country. We've conducted strikes in Pakistan, Yemen, and recently "Iraq" again. They can provide boots on the ground to pick up the targets. If that can't be done, perhaps it's time to end U.S. involvement in these conflict zones.

    Good luck convincing the Obama administration or a future neoconservative administration [nytimes.com] to ditch the $50k+ Hellfire missiles and convenient lethal drone strike capabilities that don't necessarily make America safe [nytimes.com].

    • (Score: 1) by BK on Thursday November 13 2014, @04:46PM

      by BK (4868) on Thursday November 13 2014, @04:46PM (#115591)

      Forget autonomous drones.

      Do existing remote-controlled lethal drones with human-designated targets comply with international law?

      ...yet not necessarily proportional, given that they may be engineered...

      You seem to be suggesting that the use of remote-controlled weapon systems is against "international law" because it might be possible for someone, somewhere, someday, to design and use one that would be better [youtube.com]/more accurate/more precise/less lethal/painted with rainbows. I mean logically, that's great I guess... but taken to its natural conclusion you are suggesting that virtually everything, or at least every weapons system, is against international law because it could be better in some way.

      I don't think that international law works like that.

      --
      ...but you HAVE heard of me.
    • (Score: 1) by khallow on Thursday November 13 2014, @04:50PM

      by khallow (3766) Subscriber Badge on Thursday November 13 2014, @04:50PM (#115593) Journal
      That's a pretty thin argument. Nonlethal weapons and those "boots on the ground" are both notoriously unreliable. I think the real problem here is the lack of accountability due to the secrecy of the operations. For all I know, those drone strikes protect someone's heroin operation.
  • (Score: 0) by Anonymous Coward on Thursday November 13 2014, @10:43AM

    by Anonymous Coward on Thursday November 13 2014, @10:43AM (#115497)

    The fact that they are even debating this at all is scary.
    There is NO debate.

    "Intelligent weapons systems are intended to reduce the risk to both innocent bystanders..."

    Gotta love the word "intended".

    The road to hell is paved with good intentions.

    • (Score: 0) by Anonymous Coward on Thursday November 13 2014, @11:59AM

      by Anonymous Coward on Thursday November 13 2014, @11:59AM (#115507)

      Good intentions indeed. You try to host a corporate news site for your jolly club of illiterate idiots, and a bunch of free thinking antichrists post such disagreeable things to your comments section! You try to ban them, but you know they'll be bach.

      The trolls are out there! They can't be bargained with. They can't be reasoned with. They don't feel pity, or remorse, or fear. And they absolutely will not stop!

      Due to excessive bad posting from this IP or Subnet, comment posting has temporarily been disabled.

      HAHAHAHAHAHAHA

    • (Score: 1) by albert on Thursday November 13 2014, @06:30PM

      by albert (276) on Thursday November 13 2014, @06:30PM (#115621)

      Oddly, I get the feeling you don't agree with the conclusion that such systems are obviously very important to deploy in great numbers ASAP.

      The world is populated by countries with good weapons because other countries cease to exist, not counting those that have a slave-like relationship to a powerful ally and those that have nothing of value.

      It's like evolution, but with countries. You will live in a country protected by these weapons. The only questions: Will your government be a direct descendant of your current government? Will these weapons be under direct control of your government, or will they be controlled by a government that supports the existence of your country in exchange for something? (you aren't likely to be in a zero-value country if you can post to soylentnews)

      • (Score: 0) by Anonymous Coward on Friday November 14 2014, @12:17AM

        by Anonymous Coward on Friday November 14 2014, @12:17AM (#115712)

        Didn't even think of this.
        It is even scarier now that you mention it.

        Who will be future curators of said AI weapons?

      • (Score: 1) by anubi on Friday November 14 2014, @02:32AM

        by anubi (2828) on Friday November 14 2014, @02:32AM (#115751) Journal

        I get the idea the world's governments will be controlled by the banking elite, who need unlimited force to back their claims to owning the world. The whole world is in debt to them to repay interest on the dollars which only the bankers have the charter to print from thin air.
         
        Now, how one can rightfully demand usury on that which they never had in the first place escapes me, but the inevitable outcome of allowing selected entities to print currency also means they will end up owning everything.

        This has happened before. From all I can tell, all of our wars and social uprisings are the result of the wealth balance shifting too much to too few - not as reward for work, but as rights transferred through financial manipulation.
         
        The elite are getting so few yet so wealthy that they have few people they can trust, so they need machines - with the soul of a machine - to back up their pens.

        That way, they can be 'nice', knowing that anyone who disagrees with them can simply be cancelled.

        --
        "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
      • (Score: 2) by Magic Oddball on Friday November 14 2014, @10:09AM

        by Magic Oddball (3847) on Friday November 14 2014, @10:09AM (#115838) Journal

        The only questions: Will your government be a direct descendant of your current government? Will these weapons be under direct control of your government, or will they be controlled by a government that supports the existence of your country in exchange for something?

        I don't think those are (or should be) the only questions by a longshot.

        I see a much more crucial question as a civilian: will the weapons my government acquires be used elsewhere in the world in actual battles, elsewhere to subjugate inconvenient civilians, or will they be used domestically to subjugate our own civilians? Will the weapons be only available to the military, or will the National Guard be allowed to (ab)use them, or will it be like so much other military tech and end up in the hands of the same militarized police forces we already have good reason to fear?

        • (Score: 1) by albert on Saturday November 15 2014, @08:21AM

          by albert (276) on Saturday November 15 2014, @08:21AM (#116157)

          Of course these weapons will be used.

          Either your current government (or protector government) does this, or it is replaced by one that does.

          I suppose it is possible to use these weapons only between nations instead of within a nation, but that seems unlikely.

  • (Score: 1, Flamebait) by WizardFusion on Thursday November 13 2014, @12:51PM

    by WizardFusion (498) Subscriber Badge on Thursday November 13 2014, @12:51PM (#115515) Journal

    Intelligent weapons systems are intended to reduce the risk to both innocent bystanders and friendly troops

    Will it do a better job than the trigger-happy yanks? They kill more friendly troops than anyone.

  • (Score: 3, Insightful) by bzipitidoo on Thursday November 13 2014, @01:56PM

    by bzipitidoo (4388) on Thursday November 13 2014, @01:56PM (#115540) Journal

    We don't yet have the ability to make completely autonomous weapons, but we're close. We're nearly to the point where anyone could mount weaponry on a self-driving car, or on a cute robot doggie. These things will get smarter. Or perhaps some kid's experiment could unleash Grey Goo.

    Discussing problems like this is one of the best uses the UN could make of its time. If no one has considered the problems, we could be in for some ugly surprises. Imagine a major power thinking to gain a decisive military advantage by employing weaponized robots against powers that have not developed such capabilities. At least mines just lie in the dirt. Having hordes of autonomous, replicating robots still wandering around and killing after the war is over would force big changes on everyone.

    One relevant Star Trek episode: The Doomsday Machine.

    • (Score: 0, Troll) by albert on Thursday November 13 2014, @06:18PM

      by albert (276) on Thursday November 13 2014, @06:18PM (#115617)

      Imagine a major power thinking to gain a decisive military advantage by employing weaponized robots against powers that have not developed such capabilities.

      OK, I imagined that. I'm liking it an awful lot. What, you expected differently? This would be of tremendous benefit to my country. We wouldn't have so many soldiers dying. We could better deal with the horrible uncivilized places.

  • (Score: 2) by mcgrew on Thursday November 13 2014, @02:12PM

    by mcgrew (701) <publish@mcgrewbooks.com> on Thursday November 13 2014, @02:12PM (#115547) Homepage Journal

    Robots say "I'll be back."

    This shit never goes away.

    --
    Free Martian whores! [mcgrewbooks.com]
  • (Score: 2) by Thexalon on Thursday November 13 2014, @02:40PM

    by Thexalon (636) on Thursday November 13 2014, @02:40PM (#115558)

    The problem in a nutshell is that a lot of militaries, including the supposed good guys, would love to have one of these:

    DeSedasky: "The doomsday machine. A device which will destroy all human and animal life on earth. It is not a thing a sane man would do. The doomsday machine is designed to trigger itself automatically. It is designed to explode if any attempt is ever made to untrigger it."
    Muffley: "Automatically? ... But, how is it possible for this thing to be triggered automatically, and at the same time impossible to untrigger?"
    Strangelove: "Mr. President, it is not only possible, it is essential. That is the whole idea of this machine, you know. Deterrence is the art of producing in the mind of the enemy... the fear to attack. And so, because of the automated and irrevocable decision making process which rules out human meddling, the doomsday machine is terrifying. It's simple to understand. And completely credible, and convincing."

    I could definitely also imagine military guys wanting something like ED-209.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.