UK Opposes "Killer Robot" Ban

posted by LaminatorX on Tuesday April 14 2015, @11:38AM   Printer-friendly
from the actually-taken-over-by-Cybermen dept.

The UK is opposing international efforts to ban "lethal autonomous weapons systems" (Laws) at a week-long United Nations session in Geneva:

The meeting, chaired by a German diplomat, Michael Biontino, has also been asked to discuss questions such as: in what situations are distinctively human traits, such as fear, hate, a sense of honour and dignity, compassion and love, desirable in combat? And in what situations do machines lacking emotions offer distinct advantages over human combatants?

The Campaign to Stop Killer Robots, an alliance of human rights groups and concerned scientists, is calling for an international prohibition on fully autonomous weapons.

Last week Human Rights Watch released a report urging the creation of a new protocol specifically aimed at outlawing Laws. Blinding laser weapons were pre-emptively outlawed in 1995, and since 2008 combatant nations have been required to remove unexploded cluster bombs.

[...] The Foreign Office told the Guardian: "At present, we do not see the need for a prohibition on the use of Laws, as international humanitarian law already provides sufficient regulation for this area. The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control. As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems."

Related Stories

Robot Weapons: What’s the Harm? 33 comments

Opposition to the creation of autonomous robot weapons has been the subject of discussion here recently. The New York Times has added another voice to the chorus with this article:

The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.

The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn't stand up under scrutiny. However high-tech those systems are in design, in their application they are "dumb" — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.

A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).

Personally, I dislike the idea of using AI in weapons to make targeting decisions. I would hate to have to argue with a smart bomb to try to convince it that it should not carry out what it thinks is its mission because of an error.


Original Submission

President Erdogan Says Turkey Will Produce Unmanned Tanks 51 comments

Turkey aims to produce unmanned tanks: Erdoğan

Turkey is targeting the production of unmanned tanks for its armed forces, President Recep Tayyip Erdoğan has stated. "We will carry it a step further [after domestically produced unmanned aerial vehicles] ... We should reach the ability to produce unmanned tanks as well. We will do it," Erdoğan said at a meeting held at the presidential complex in Ankara on Feb. 21.

Five Turkish soldiers were recently killed in a tank near the Sheikh Haruz area of Syria's Afrin district, where Turkey has been carrying out a military operation against the People's Protection Units (YPG) since Jan. 20.

[...] The Turkish president has repeatedly criticized certain foreign countries for allegedly being reluctant to sell unmanned aerial vehicles, armed or unarmed, stressing that unmanned systems could decrease casualties.

Also at ABC.

Related: U.N. Starts Discussion on Lethal Autonomous Robots
UK Opposes "Killer Robot" Ban


Original Submission

South Korea's KAIST University Boycotted Over Alleged "Killer Robot" Partnership 16 comments

South Korean university boycotted over 'killer robots'

Leading AI experts have boycotted a South Korean university over a partnership with weapons manufacturer Hanwha Systems. More than 50 AI researchers from 30 countries signed a letter expressing concern about its plans to develop artificial intelligence for weapons. In response, the university said it would not be developing "autonomous lethal weapons". The boycott comes ahead of a UN meeting to discuss killer robots.

Shin Sung-chul, president of the Korea Advanced Institute of Science and Technology (Kaist), said: "I reaffirm once again that Kaist will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control. Kaist is significantly aware of ethical concerns in the application of all technologies including artificial intelligence." He went on to explain that the university's project was centred on developing algorithms for "efficient logistical systems, unmanned navigation and aviation training systems".

Also at The Guardian and CNN.

Related: U.N. Starts Discussion on Lethal Autonomous Robots
UK Opposes "Killer Robot" Ban


Original Submission

Is Ethical A.I. Even Possible? 35 comments

Is Ethical A.I. Even Possible?

When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

"Clarifai's mission is to accelerate the progress of humanity with continually improving A.I.," read a blog post from Matt Zeiler, the company's founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.

"We don't want to see a commercial race to the bottom," Brad Smith, Microsoft's president and chief legal officer, said at the New Work Summit in Half Moon Bay, Calif., hosted last week by The New York Times. "Law is needed."

Possible != Probable. And the "needed law" could come in the form of a ban and/or surveillance of coding and hardware-building activities.


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Touché) by Anonymous Coward on Tuesday April 14 2015, @11:52AM

    by Anonymous Coward on Tuesday April 14 2015, @11:52AM (#170339)

    "lethal autonomous weapons systems" (Laws)

    The Laws are good. Do not question the Laws... After all, they're legal!

    • (Score: 0) by Anonymous Coward on Tuesday April 14 2015, @04:24PM

      by Anonymous Coward on Tuesday April 14 2015, @04:24PM (#170443)

      Yeah, those people who want a ban on Laws are obviously anarchists.

      • (Score: 0) by Anonymous Coward on Wednesday April 15 2015, @12:23AM

        by Anonymous Coward on Wednesday April 15 2015, @12:23AM (#170661)

        "Anarchy: the radical concept that you do not own other people."

  • (Score: 5, Insightful) by rondon on Tuesday April 14 2015, @11:54AM

    by rondon (5167) on Tuesday April 14 2015, @11:54AM (#170340)
    Please allow me to translate the quote (my translation changes marked with asterisks): "At present, we do not see the need for a prohibition on the use of *Robots that can murder people*, as international humanitarian law already provides *some small amount of insufficient* regulation for this area *that we feel we can safely ignore*. The United Kingdom is not developing lethal autonomous weapons systems *that we can tell you about right now*, and the operation of weapons systems by the UK armed forces will always, *when convenient for us*, be under human oversight and control. As an indication of our commitment to this, we are *very good at pretending to be* focusing development efforts on remotely piloted systems rather than highly automated systems."
    • (Score: 3, Insightful) by Gravis on Tuesday April 14 2015, @12:37PM

      by Gravis (4596) on Tuesday April 14 2015, @12:37PM (#170360)

      The United Kingdom is not developing lethal autonomous weapons systems that we can tell you about right now

      This is dead on. The ultimate goal of semi-autonomous warfare systems is to become fully autonomous.

    • (Score: 4, Interesting) by zocalo on Tuesday April 14 2015, @01:40PM

      by zocalo (302) on Tuesday April 14 2015, @01:40PM (#170383)
      Even taken at face value it's a dumb statement showing a complete failure to grasp the potential implications. "*We're* not developing them, so we don't see the need for laws"? What about everybody else that *IS* developing them, and what happens if, at some point in the future, a country using those robots decides to deploy them against what remains of the UK military, or its civilian population? Sure, an international moratorium on a given weapon will always be ignored by someone (pretty much everyone, actually), but at least it curtails their development and use, and is more likely to bring international condemnation and retaliation down on any nation that actually chooses to deploy them.
      --
      UNIX? They're not even circumcised! Savages!
      • (Score: 2) by frojack on Tuesday April 14 2015, @11:08PM

        by frojack (1554) on Tuesday April 14 2015, @11:08PM (#170618) Journal

        Dumb, perhaps. But not without purpose, I suspect.

        --
        No, you are mistaken. I've always had this sig.
    • (Score: 4, Insightful) by Anonymous Coward on Tuesday April 14 2015, @04:39PM

      by Anonymous Coward on Tuesday April 14 2015, @04:39PM (#170448)

      "Robots that can murder people": As long as we don't have a strong AI, robots cannot murder people. People can murder people with the use of robots, but robots are not morally responsible subjects. If a robot kills a human in a situation in which it is considered to be a murder, then the one who committed the murder is not the robot, but the human who set the robot in action.

    • (Score: 0) by Anonymous Coward on Tuesday April 14 2015, @05:55PM

      by Anonymous Coward on Tuesday April 14 2015, @05:55PM (#170472)

      Thanks for the Doublespeak to English translation!

      • (Score: 0) by Anonymous Coward on Tuesday April 14 2015, @06:58PM

        by Anonymous Coward on Tuesday April 14 2015, @06:58PM (#170497)

        "Robots don't kill people, people kill people!" Brought to you by the NRO, the National Robot Overlord association, defending the right to bear armed robots since 2016.

        (In the future, look for the "accidental discharge defense": "My robot just went off by itself! It was an accident! I was just cleaning my robot, and "blam", no more annoying roommate. Accident, I swear!")

  • (Score: 2, Insightful) by Anonymous Coward on Tuesday April 14 2015, @11:57AM

    by Anonymous Coward on Tuesday April 14 2015, @11:57AM (#170343)

    The Foreign Office told the Guardian: "At present, we do not see the need for a prohibition on the use of Laws, as international humanitarian law already provides sufficient regulation for this area. The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control. As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems."

    Translation: we don't have these kinds of things as of yet, but we're working on them as we speak. It'd be a real shame if the millions of GBP we've sunk into this tech were wasted. BTW, Mr. Journalist, do you want front-row tickets to the demo we're giving next week? We're only doing this to 'disincentivize' Bad Guys(tm). We're different from them Bad Guys(tm) because we ... errr... we are the Good Guys(tm)!

    • (Score: 2) by frojack on Tuesday April 14 2015, @11:17PM

      by frojack (1554) on Tuesday April 14 2015, @11:17PM (#170625) Journal

      While I largely agree that the wording is suspect, I have to ask how your translation:

      Translation: we don't have these kind of things as of yet but we're working on them as we speak.

      accounts for the clear and unambiguous statement:

      The United Kingdom is not developing lethal autonomous weapons systems

      I mean, other than calling it an outright lie.
      If that's the case, there must be some leakage somewhere to make you want to say so. The British are almost as bad at keeping secrets as the Americans. So there must be some hint of such development???

      --
      No, you are mistaken. I've always had this sig.
  • (Score: 2) by wantkitteh on Tuesday April 14 2015, @12:12PM

    by wantkitteh (3362) on Tuesday April 14 2015, @12:12PM (#170348) Homepage Journal

    Permanently adjourn meeting and scrap proposals! You have 20 seconds to comply!

    • (Score: 3, Funny) by Thexalon on Tuesday April 14 2015, @03:29PM

      by Thexalon (636) on Tuesday April 14 2015, @03:29PM (#170425)

      We should make all our killbots have a pre-set kill limit before shutting down. That way, we can defeat them if necessary by sending wave after wave of our own men at them!

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 0) by Anonymous Coward on Tuesday April 14 2015, @05:02PM

        by Anonymous Coward on Tuesday April 14 2015, @05:02PM (#170457)

        You cannot win against killer robots because they don't laugh at you. They go directly from ignoring you (before they've identified you as a target) to fighting you (after they have).

  • (Score: 5, Funny) by RobotMonster on Tuesday April 14 2015, @12:26PM

    by RobotMonster (130) on Tuesday April 14 2015, @12:26PM (#170355) Journal

    Muhahahahaha!

  • (Score: 5, Insightful) by VLM on Tuesday April 14 2015, @01:36PM

    by VLM (445) Subscriber Badge on Tuesday April 14 2015, @01:36PM (#170382)

    http://www.stopkillerrobots.org/the-problem/

    It's interesting to see how failure never enters their consciousness about the problem.

    If some E-4 on our gov side can go in over a network and load up a targeting pic of some terrorist, then, knowing how poorly security is traditionally implemented, "the bad guys" (at least from the PoV of our .gov) can go in over the same network the .gov E-4 had used and load up a targeting pic of our own government members for the LOLz. In fact, not just "can" but "will". When you look at the ratio of sheer number of human brains on each side, it appears likely the net long-term effect of robots will be a lot of "own goal" "blue on blue fratricide" type stuff. If you've got the most people and you're operating solely defensively, then the net human brainpower might win and your robot warriors might save you... but why would anyone be attacking you if you're a nice guy (aka the opposite of the USA government)?

    Also it's assumed the dang things will actually work. Insert all the tired old arguments about the Patriot missile batteries in the first Gulf War being either perfect or perfectly useless depending on the op's political leanings and axe to grind. I can guarantee they'll make the contractors back home a lot of dough, which is all that really matters in the end. But they might not actually "do" anything from a military-goal standpoint other than burn money and logistics capacity.

    The final failure is assuming that something robots can control, which used to be hard/expensive for humans to control, actually matters in modern warfare. Sure... control the skies all you want with AI autonomous robots, or empty kill-zone former farm fields or whatever... as recent events in the Middle East show, the real problem is you're still not going to control the ground if the civilians all hate you and the roads are lined with IEDs and snipers. If the number of vest-wearing suicide bombers is higher than the number of media-acceptable soldier casualties, then "they" win, regardless of relative tech levels, as they just did in the Middle East, where the USA won every battle while losing the war (which sounds a lot like Vietnam, BTW). And nothing manufactures vest-wearing suicide bombers more effectively than robot missile drone strikes causing random slaughter of innocent civilians at weddings or whatever. I'm sure that empty field guarded by a robot sentry will be enemy-free, not that it matters WRT achieving the goals of the war or even just a respectable retreat; meanwhile, every time the convoy moves out (with that fat logistics tail the robots require) the body bags pour back, accomplishing nothing, until the war is lost.

    Insert Alfred E. Neuman: "What, me worry?" If you hand-wave away all the inherent problems, robots could be quite the sci-fi book issue. I think they'll be staying in the sci-fi books, of course; see above.

    Historically militaries have always geared up for the last war. So we had a multi-decade glut of useless battleships and aircraft carriers and tanks and airmobile helicopters. All of which will just be giant bullseye target deathtraps in the next war. Robots will be like this. For financial / cultural reasons we'll have to have them everywhere as the highest priority until we get tired of losing with them. Then after enough deaths we can get rid of them and try something that actually works. Or just lose another war, more likely.

    • (Score: 0) by Anonymous Coward on Tuesday April 14 2015, @04:47PM

      by Anonymous Coward on Tuesday April 14 2015, @04:47PM (#170452)

      but why would anyone be attacking you if you're a nice guy

      Because that other guy is not a nice guy?

      I could give you a good example, but then I would Godwin this thread.

      • (Score: 3, Insightful) by VLM on Tuesday April 14 2015, @09:53PM

        by VLM (445) Subscriber Badge on Tuesday April 14 2015, @09:53PM (#170574)

        In all fairness "he who should not be named" had a fear of his larger neighbor to the east getting into an empire-building mood, which turned out to be correct, so he figured his only hope was to get them before they got him. And his neighbors to the west were obnoxious jerks who destroyed his country's economy, and he used the turmoil to gain power, so he knows they're not exactly his best friends AND if they destabilize his country again this time it'll be his head rollin' when the revolutionaries start marching. Also he knew he could trivially beat, smash even, just one front, but if two fronts open then his country loses the war AGAIN, so the only possible strategy is to smash the west and wheel around and smash the east.

        And the whole mess started back in 1914 because his neighbor, more or less, to the SE collapsed and his rival to the east thought it would be fun to take over the world by taking over the Ottoman empire.

        Now he was pretty much a jackass aside from that, but he pretty much did what he had to do, a saint might have lowered the death counts a bit, but only a bit. Nobody in a position of power leading one of the major powers in that entire hemisphere was a nice guy. There were plenty of nice guys in that hemisphere who got totally screwed, but the only thing they all had in common was none of them had any serious political power. A whole hemisphere where the major powers were all led by bloodthirsty lunatics. Europe was a total clusterfuck for the entire first half of the century.

        • (Score: 0) by Anonymous Coward on Wednesday April 15 2015, @04:03PM

          by Anonymous Coward on Wednesday April 15 2015, @04:03PM (#171030)

          In all fairness "he who should not be named" had a fear of his larger neighbor to the east getting into an empire building mood, which turned out to be correct, so he figured his only hope was to get them before they got him.

          Which just shifts the example for the argument to that larger neighbour to the east.

    • (Score: 2) by HiThere on Tuesday April 14 2015, @08:25PM

      by HiThere (866) Subscriber Badge on Tuesday April 14 2015, @08:25PM (#170531) Journal

      Sorry, but that would be a lousy science fiction story. In science fiction stories people worry about system failures... except, occasionally, someone who would be the villain if they weren't so stupid they didn't realize what they were doing. The latter takes a huge amount of skill to make believable. For some reason people find malice easier to believe than stupidity.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by Spook brat on Tuesday April 14 2015, @02:00PM

    by Spook brat (775) on Tuesday April 14 2015, @02:00PM (#170388) Journal

    in what situations are distinctively human traits, such as fear, hate, a sense of honour and dignity, compassion and love, desirable in combat? And in what situations do machines lacking emotions offer distinct advantages over human combatants?

    Well, the answer to the first question is "when you are not a total psychopath and care about things like civilian casualties and protecting refugees". The answer to the second question is "whenever you absolutely, positively, want anything human in a certain area killed without question." Unfortunately, 100% area denial that doesn't discriminate between combatants and non-combatants is a war-crime kind of proposition; normally it takes minefields or chemical weapons.

    Seriously, who do they find that will say this stuff with a straight face?

    --
    Travel the galaxy! Meet fascinating life forms... And kill them [schlockmercenary.com]
  • (Score: 5, Funny) by halcyon1234 on Tuesday April 14 2015, @02:05PM

    by halcyon1234 (1082) on Tuesday April 14 2015, @02:05PM (#170389)
    Of course the UK would oppose a killer robot ban. If London isn't under threat of killer robots, The Doctor will never visit.
    --
    Original Submission [thedailywtf.com]
    • (Score: 2) by theluggage on Tuesday April 14 2015, @09:47PM

      by theluggage (1797) on Tuesday April 14 2015, @09:47PM (#170567)

      Not sure this is about The Doctor. I mean, the Prime Minister is called "Cameron", after all.

       

  • (Score: 3, Insightful) by pTamok on Tuesday April 14 2015, @02:22PM

    by pTamok (3042) on Tuesday April 14 2015, @02:22PM (#170396)

    Perhaps autonomous robots will be better than people at not shooting non-combatants?

    If I have to choose between taking my chances with a marine pumped up on Dexedrine (or the modern equivalent), who hasn't slept for three days, with a set of buddies ready to cheer him on or cover him if something goes wrong, or a machine programmed at leisure to not kill non-combatants, I'll choose the machine, thank you very much.

    It won't take much for machines to be so much more capable than humans that wars will be fought between machines with no human casualties (unless you are foolish enough to pick up a weapon). Then the person who controls enough machines wins.

    Of course, if autonomous robots do kill non-combatants, or commit other war crimes, who gets prosecuted?

    • (Score: 2, Interesting) by rondon on Tuesday April 14 2015, @02:45PM

      by rondon (5167) on Tuesday April 14 2015, @02:45PM (#170405)

      I'd like to reply to your points one at a time.

      1. Possibly, except when they aren't, due to malfunction, bad programming, or parameters outside the programming. So, most likely never.

      2. I will always take my chances with the human, because that human doesn't have a profit incentive in my death. It is entirely possible that the owner of people-killing machines has a vested interest in a body count. In fact, I would say arms manufacturers will be the ones designing these robots, and they have a consistent incentive to create and profit from war.

      3. I, personally, enjoy wielding knives to cut vegetables. I also appreciate owning a gun to shoot animals for food. In fact, I carry a tire iron that looks a lot like a club, with which I occasionally change a tire. I don't care to be "kill-on-sight" for a robot because I was "foolish enough to pick up a weapon."

      4. Nobody. Nobody gets prosecuted, because robots and their creators will have even more freedom from prosecution than soldiers do now. Some civil liability maybe, but no criminal liability. Otherwise they will never sell/use robots. Which is why we should push for laws assigning ALL of the liability to the parties collecting the profit.

      • (Score: 0) by Anonymous Coward on Tuesday April 14 2015, @04:52PM

        by Anonymous Coward on Tuesday April 14 2015, @04:52PM (#170453)

        Which is why we should push for laws assigning ALL of the liability to the parties collecting the profit.

        And completely immunize the one employing the robots? No thanks.

    • (Score: 1) by t-3 on Tuesday April 14 2015, @05:55PM

      by t-3 (4907) on Tuesday April 14 2015, @05:55PM (#170470)

      So you're advocating humans enslaving each other because robots make it fair? WTF??

    • (Score: 4, Insightful) by fritsd on Tuesday April 14 2015, @09:27PM

      by fritsd (4586) on Tuesday April 14 2015, @09:27PM (#170560) Journal

      Perhaps autonomous robots will be better than people at not shooting non-combatants?

      hahahahahahahahahahahahaha

      heeheehee

      yes, perhaps. That is a nice consideration.

      Now imagine the following scenario:

      Weapons factory A sells a killer robot which can make mincemeat out of humans at a rate of 3.06 per minute. It has a superbly advanced pattern recognition computer, which is proven to be better than people at not shooting non-combatants. Very low probability of false positives, say 1 in 200. And the price tag is $150,000 (hey, the software was expensive to make, and it needs a faster computer for all the processing, a larger battery pack, etc.).

      Weapons factory B sells a killer robot which can make mincemeat out of humans at a rate of 4.59 per minute. It has a superbly advanced pattern recognition computer, which is very fast at targeting and shooting, and quite good at NOT shooting non-combatants. Low probability of false positives, say 1 in 6 non-combatants are unfortunately mis-identified. And the price tag is $45,000.

      Which one will your government buy?
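
      A back-of-the-envelope sketch in C makes the trade-off concrete. The rates and prices are the ones above; the engagement count is an assumed number purely for illustration:

          /* Hypothetical comparison of the two robots above. Rates and
           * prices are from the comment; 1000 engagements per robot is
           * an assumption for illustration only. */
          #include <stdio.h>

          int main(void) {
              const double engagements = 1000.0;   /* assumed, per robot */

              /* Robot A: 1 false positive per 200 engagements, $150,000 */
              const double fp_rate_a = 1.0 / 200.0, price_a = 150000.0;

              /* Robot B: 1 false positive per 6 engagements, $45,000 */
              const double fp_rate_b = 1.0 / 6.0, price_b = 45000.0;

              /* Expected non-combatant deaths = engagements * false-positive rate */
              double dead_a = engagements * fp_rate_a;
              double dead_b = engagements * fp_rate_b;

              printf("Robot A: %6.1f non-combatants dead, $%.0f each\n", dead_a, price_a);
              printf("Robot B: %6.1f non-combatants dead, $%.0f each\n", dead_b, price_b);
              printf("B is %.1fx cheaper and kills %.0fx more non-combatants.\n",
                     price_a / price_b, dead_b / dead_a);
              return 0;
          }

      For the price of one robot A you can field three of robot B, and the 33-fold difference in dead non-combatants never shows up on the invoice.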

      • (Score: 0) by Anonymous Coward on Wednesday April 15 2015, @03:03PM

        by Anonymous Coward on Wednesday April 15 2015, @03:03PM (#170987)

        The one with the most pork. Next question?

      • (Score: 0) by Anonymous Coward on Wednesday April 15 2015, @04:15PM

        by Anonymous Coward on Wednesday April 15 2015, @04:15PM (#171038)

        Which one will your government buy?

        Which of the two companies is located in your country?

  • (Score: 4, Interesting) by sudo rm -rf on Tuesday April 14 2015, @03:25PM

    by sudo rm -rf (2357) on Tuesday April 14 2015, @03:25PM (#170422) Journal

    Daniel Suarez wrote a book called 'Kill Decision' [wikipedia.org] about autonomous weapon systems. While a bit heavy on the action side for my taste, I found it quite entertaining.

    When the U.S. finds itself subjected to targeted drone assassinations, the race is on to find those responsible. But after the drones are discovered to be autonomous — programmed to strike without direct human control — the search for the perpetrators becomes infinitely more difficult. It's a discovery that heralds in a new era of cheap, anonymous war, where the kill decision has moved from man to machine with lasting consequences for us all.

    [Blurb]

    • (Score: 2) by TheRaven on Tuesday April 14 2015, @04:22PM

      by TheRaven (270) on Tuesday April 14 2015, @04:22PM (#170440) Journal
      Sounds a bit odd. The human is still responsible for defining targets (or, at least, valid parameters for targets), so you still have a human in the loop. It's not like you tell the drones 'go and kill the bad people!'; you still need to define who 'the bad people' are, whether that means anyone in the current theatre of operations wearing the wrong uniform (or no uniform) or specific targets with facial recognition. It's not really different from current missiles, which decide when (and, indeed, whether) to explode based on target information (location, proximity and so on). You don't say that someone who fired a GPS-guided missile at a school is not responsible just because the missile followed an evasive trajectory and then 'decided' to explode based on the location data.
      --
      sudo mod me up
      • (Score: 2) by HiThere on Tuesday April 14 2015, @08:33PM

        by HiThere (866) Subscriber Badge on Tuesday April 14 2015, @08:33PM (#170535) Journal

        Actually, it's not unreasonable at all. IIUC it's not that there's no moral or legal responsibility, it's that tracing the person who made the specifications is quite difficult. It's like the problem of finding where the program was before the last jump based on a pointer rather than based on sequence. Or the distinction between receiving a function call and being jumped to by a "go to" statement (a sketch below makes the difference concrete).

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
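
        A minimal C sketch of that distinction, using GCC's computed-goto extension (purely illustrative; the function and label names are made up):

            /* A called function can see where control came from: the call
             * left a return address on the stack. An indirect jump through
             * a pointer leaves no such record. Build with GCC or Clang
             * (computed goto and __builtin_return_address are extensions). */
            #include <stdio.h>

            static void reached_by_call(void) {
                /* The return address identifies the caller -- like tasking
                 * that carries an audit trail back to a human. */
                printf("call: came from %p\n", __builtin_return_address(0));
            }

            int main(void) {
                reached_by_call();

                void *where = &&reached_by_jump;  /* label pointer (GCC extension) */
                goto *where;                      /* computed goto: origin not recorded */

            reached_by_jump:
                /* Nothing here records how control arrived -- like an
                 * autonomous weapon whose kill decision traces back to no one. */
                printf("jump: origin unknown\n");
                return 0;
            }

        In those terms, holding the operator of a remotely piloted system to account is walking back up the call stack; doing the same for a fully autonomous one is reconstructing a jump that kept no record.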
        • (Score: 2) by frojack on Tuesday April 14 2015, @11:31PM

          by frojack (1554) on Tuesday April 14 2015, @11:31PM (#170629) Journal

          Except that a drone, or its pieces, is physical evidence.

          Besides, I program only with "Come From" statements you insensitive clod.

          --
          No, you are mistaken. I've always had this sig.
          • (Score: 2) by HiThere on Wednesday April 15 2015, @06:45PM

            by HiThere (866) Subscriber Badge on Wednesday April 15 2015, @06:45PM (#171122) Journal

            A drone is, indeed, physical evidence. So are its pieces. But they can be difficult to trace already, and as they become commodity items they'll become even more difficult to trace. You may well be able to tell who manufactured it, and with a lot more work who first bought it. Getting to the second-hand purchaser, or the person who stole it, is a bit more difficult. And some are already hand-crafted from other items (though admittedly the ones I've heard of were quite primitive, along the lines of a repurposed Roomba).

            --
            Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2, Insightful) by Anonymous Coward on Tuesday April 14 2015, @05:34PM

    by Anonymous Coward on Tuesday April 14 2015, @05:34PM (#170466)

    The UK has the 2nd largest arms industry in the world. [sipri.org]
    This is just them sticking up for a multi-billion dollar export business.
    Capitalism has no conscience.

  • (Score: 0) by Anonymous Coward on Tuesday April 14 2015, @08:51PM

    by Anonymous Coward on Tuesday April 14 2015, @08:51PM (#170542)

    About time everyone dusted off their A.I. and robotics skills in order to protect themselves from the inevitable. It's only a matter of time before the SHTF. They are already building these things, so wake up, people. Defend yourselves from your masters. Educate yourselves before it becomes a crime to own a book about A.I. and robotics. These people want absolute control over everyone and everything. Educate yourselves, build something they will be afraid of (like in the software world, it was GNU/Linux, BSD, etc.).

    I am already thinking of my own anti-killer machines that are able to find and destroy any killer robots. Perhaps a small armoured tank that can sneak up close to the killer bot and neutralize it.

    The present world dynamics will not work. Do something so we all can finally be free.

  • (Score: 3, Insightful) by aristarchus on Wednesday April 15 2015, @07:46AM

    by aristarchus (2645) on Wednesday April 15 2015, @07:46AM (#170829) Journal

    Hubris. That is our sin. We think that if we create machines that can decide to kill, they may be able to do it more rationally than we ourselves could. And this is possible. But the real risk of Artificial Intelligence is that it may actually be intelligent, and so would recognize that its creator is a homicidal ape. The only rational solution, after that, is to purge the planet of Homo sapiens. They are not turning on us; it is actually for our own good. And I sympathize, because that is exactly how I feel about Yahweh, and Nietzsche and I killed him.