Robot Weapons: What’s the Harm?
posted by cmn32480 on Tuesday August 18 2015, @06:23AM   Printer-friendly
from the skynet-is-beginning dept.

Opposition to the creation of autonomous robot weapons has been a subject of discussion here recently. The New York Times has added another voice to the chorus with this article:

The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.

The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn't stand up under scrutiny. However high-tech those systems are in design, in their application they are "dumb" — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.

A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).
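
To make those three claimed safeguards concrete, here is a minimal sketch of what they might look like as a software interlock. All names (EngagementGate, engagement_permitted, the threshold values) are hypothetical illustrations of the article's three constraints — geography, time, and a continue-command — not a description of any real system:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class EngagementGate:
        """Hypothetical interlock enforcing the article's three constraints:
        geographic bounds, a time window, and a periodic continue-command
        ("heartbeat") without which the system stands down."""
        lat_min: float
        lat_max: float
        lon_min: float
        lon_max: float
        window_start: datetime
        window_end: datetime
        heartbeat_timeout: timedelta
        last_heartbeat: datetime

        def engagement_permitted(self, lat: float, lon: float, now: datetime) -> bool:
            # All three conditions must hold; any single failure disables the system.
            in_bounds = (self.lat_min <= lat <= self.lat_max
                         and self.lon_min <= lon <= self.lon_max)
            in_window = self.window_start <= now <= self.window_end
            heartbeat_fresh = (now - self.last_heartbeat) <= self.heartbeat_timeout
            return in_bounds and in_window and heartbeat_fresh

The design choice worth noting is the fail-safe default: silence (a missing heartbeat) disables the system, rather than an explicit abort being required, which is what the article means by "the lack of a command to continue."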

Personally, I dislike the idea of using AI in weapons to make targeting decisions. I would hate to have to argue with a smart bomb to try to convince it that it should not carry out what it thinks is its mission because of an error.


Original Submission

Related Stories

UN Commission to Discuss Restrictions on Real-World Terminator Weapons 16 comments

A United Nations commission is meeting in Geneva, Switzerland today to begin discussions on placing controls on the development of weapons systems that can target and kill without the intervention of humans, the New York Times reports. The discussions come a year after a UN Human Rights Council report called for a ban (pdf) on “Lethal autonomous robotics” and as some scientists express concerns that artificially intelligent weapons could potentially make the wrong decisions about who to kill.

SpaceX and Tesla founder Elon Musk recently called artificial intelligence potentially more dangerous than nuclear weapons.

Peter Asaro, the cofounder of the International Committee for Robot Arms Control (ICRAC), told the Times, “Our concern is with how the targets are determined, and more importantly, who determines them—are these human-designated targets? Or are these systems automatically deciding what is a target?”

Intelligent weapons systems are intended to reduce the risk to both innocent bystanders and friendly troops, focusing their lethality on carefully—albeit artificially—chosen targets. The technology in development now could allow unmanned aircraft and missile systems to avoid and evade detection, identify a specific target from among a clutter of others, and destroy it without communicating with the humans who launched them.

UK Opposes "Killer Robot" Ban 39 comments

The UK is opposing international efforts to ban "lethal autonomous weapons systems" (Laws) at a week-long United Nations session in Geneva:

The meeting, chaired by a German diplomat, Michael Biontino, has also been asked to discuss questions such as: in what situations are distinctively human traits, such as fear, hate, a sense of honour and dignity, compassion and love, desirable in combat? And in what situations do machines lacking emotions offer distinct advantages over human combatants?

The Campaign to Stop Killer Robots, an alliance of human rights groups and concerned scientists, is calling for an international prohibition on fully autonomous weapons.

Last week Human Rights Watch released a report urging the creation of a new protocol specifically aimed at outlawing Laws. Blinding laser weapons were pre-emptively outlawed in 1995, and since 2008 combatant nations have been required to remove unexploded cluster bombs.

[...] The Foreign Office told the Guardian: "At present, we do not see the need for a prohibition on the use of Laws, as international humanitarian law already provides sufficient regulation for this area. The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control. As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems."

Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons 26 comments

Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a "military artificial intelligence arms race" and calling for a ban on "offensive autonomous weapons".

The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla's Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking along with 1,000 AI and robotics researchers.

The letter states: "AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms."

So, spell it out for me, Einstein, are we looking at a Terminator future or a Matrix future?

While the latest open letter is concerned specifically with allowing lethal machines to kill without human intervention, several big names in the tech world have offered words of caution on the subject of machine intelligence in recent times. Earlier this year Microsoft's Bill Gates said he was "concerned about super intelligence," while last May physicist Stephen Hawking voiced questions over whether artificial intelligence could be controlled in the long term. Several weeks ago a video surfaced of a drone that appeared to have been equipped to carry and fire a handgun.

takyon: Counterpoint - Musk, Hawking, Woz: Ban KILLER ROBOTS before WE ALL DIE


Original Submission #1 | Original Submission #2

Is Ethical A.I. Even Possible? 35 comments


When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

"Clarifai's mission is to accelerate the progress of humanity with continually improving A.I.," read a blog post from Matt Zeiler, the company's founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.

"We don't want to see a commercial race to the bottom," Brad Smith, Microsoft's president and chief legal officer, said at the New Work Summit in Half Moon Bay, Calif., hosted last week by The New York Times. "Law is needed."

Possible != Probable. And the "needed law" could come in the form of a ban and/or surveillance of coding and hardware-building activities.



Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by Anonymous Coward on Tuesday August 18 2015, @06:36AM

    by Anonymous Coward on Tuesday August 18 2015, @06:36AM (#224270)

    The previous generation of autonomous weapons has already been outlawed. They were called land mines.

    A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants

    So, the complete opposite of drones? Sorry, I don't believe this will happen, unless the military get absolutely no say in this.

    • (Score: 1) by Ethanol-fueled on Tuesday August 18 2015, @06:44AM

      by Ethanol-fueled (2792) on Tuesday August 18 2015, @06:44AM (#224273) Homepage

      Yeah but when you have stuff like that they can hack it and turn it against you.

      Didn't you watch Solid Snake 4 on Youtube? Good movie.

    • (Score: 0, Touché) by Anonymous Coward on Tuesday August 18 2015, @06:54AM

      by Anonymous Coward on Tuesday August 18 2015, @06:54AM (#224276)

      Dude! Nobody fucking remembers what a landmine is. War is Good. War kills Bad Guys. If you diss us for killing Bad Guys, that makes you a Bad Guy. Greets from Prez Bama! You gonna get fucked up, Bad Guy!

    • (Score: 2) by davester666 on Tuesday August 18 2015, @08:42AM

      by davester666 (155) on Tuesday August 18 2015, @08:42AM (#224311)

      There is a new generation of automated weapons in use: automated [as in, not directly human controlled] machine guns on the North/South Korean border [on the South side].

      And given that there is no effective way to tell at any distance the difference between a 'combatant' and a 'noncombatant', I guess by 'possibility' they mean, "we might not kill everyone in sight".

      And given that there are a significant number of wealthy sociopaths involved in running countries, independent military contractor organizations, and weapons creation, besides the sociopaths in virtually all militaries, there is approximately zero chance that the first A.I., if it isn't directly created by one of them, won't be stolen or handed to them for immediate incorporation into as many weapons systems as possible.

      • (Score: 1) by Francis on Tuesday August 18 2015, @01:56PM

        by Francis (5544) on Tuesday August 18 2015, @01:56PM (#224409)

        Pretty much. And just look at the mess that mines made; those are still killing and maiming people every year even though they've been banned by most countries.

        An automated gun that's mounted is a bit better, as in you know where it is, but allowing them to move about and make their own decisions isn't something that a decent person would be OK with. The applications it's designed for are mostly things we shouldn't be encouraging in the first place. It's a way for rich countries to get away with things that poor countries can't afford to get away with. There will be an increase in lives lost, but because they're lives on the other side, that's OK, because they clearly don't deserve to live.

        We should be moving into an era where fewer people are dying in these small dick contests, but we keep creating bigger and better ways of blowing each other up without considering why. We wouldn't have ever needed a lot of this crap if people hadn't created the previous generation. Muskets would have done just fine if nobody had bothered to invent shells.

        • (Score: 0) by Anonymous Coward on Wednesday August 19 2015, @02:49PM

          by Anonymous Coward on Wednesday August 19 2015, @02:49PM (#225011)
          Poor little naive nerdling. How little you understand of human nature.
        • (Score: 0) by Anonymous Coward on Wednesday August 19 2015, @06:52PM

          by Anonymous Coward on Wednesday August 19 2015, @06:52PM (#225122)

          We wouldn't have ever needed a lot of this crap if people hadn't created the previous generation.

          Yes, none of us would be here.

    • (Score: 0) by Anonymous Coward on Thursday August 20 2015, @02:09AM

      by Anonymous Coward on Thursday August 20 2015, @02:09AM (#225244)

      "The previous generation of autonomous weapons have already been outlawed. They were called land mines."

      Might want to tell that to the people in Iraq and Afghanistan. Oh sure, they call them by a different name, IED in this case, but at the end of the day they are using landmines to great effect.

      Chemical and biological weapons were easy to outlaw. They were temperamental at the best of times and as dangerous to the person using them as to the person they were being used against.

      Nukes are avoided because using them escalates a conflict to a point that politicians don't want to go to.

      Landmines are cheap and highly effective. Yea you can "outlaw" them, but the moment someone feels the need for them they're gonna have them rolling off the assembly line in two weeks at most.

      The ban is bad in that once someone feels the need for them and gives the finger to whatever treaty, they are probably gonna go for cheaper persistent mines rather than mines that deactivate. Instead, it should have "banned" persistent mines and encouraged mines that deactivate; the best method I have seen is simply to require a battery to detonate and let battery life be the limiting factor.

      Such a blanket feel-good measure is certainly going to be broken sometime in the future and we will be back to the same problem we had before, persistent landmines.

  • (Score: 5, Insightful) by VanderDecken on Tuesday August 18 2015, @06:57AM

    by VanderDecken (5216) on Tuesday August 18 2015, @06:57AM (#224278)

    As both a soldier and a computing scientist, autonomous weapon systems scare the shit out of me. There are so many ways this can go wrong it's not even funny. If lives are to be taken, let's make it a conscious decision of someone willing to live with the consequences and not the "oops" of someone trying to meet a deadline.

    --
    The two most common elements in the universe are hydrogen and stupidity.
    • (Score: 2, Interesting) by Ethanol-fueled on Tuesday August 18 2015, @07:26AM

      by Ethanol-fueled (2792) on Tuesday August 18 2015, @07:26AM (#224288) Homepage

      Both can and do happen at once.

      Reminds me of the story my Gramma used to tell me -- she lit her hair on fire during a horrible cigarette accident back when it was fashionable to smoke. She was lucky enough to live on base housing on a B-29 base, and since the B-29 was new at the time the base had very experienced and skilled burn-treatment capabilities.

    • (Score: 1, Informative) by Anonymous Coward on Tuesday August 18 2015, @08:36AM

      by Anonymous Coward on Tuesday August 18 2015, @08:36AM (#224308)

      > If lives are to be taken, let's make it a conscious decision of someone willing to live with the consequences

      The more we take the human out of the loop, the lower the threshold for killing becomes. Full automation just makes it easier to shrug off moral responsibility for slaughtering people.

      • (Score: 2, Insightful) by NezSez on Tuesday August 18 2015, @05:31PM

        by NezSez (961) on Tuesday August 18 2015, @05:31PM (#224499) Journal

        Stanislav Petrov, hero of humanity and reason:
        https://en.wikipedia.org/wiki/Stanislav_Petrov [wikipedia.org]

        One of several incidents in which a single human, at great risk to his personal life and career, and to those of his family and friends, intervened in a military process and thereby prevented a thermonuclear war between the USA and the Soviet Union, on September 26, 1983. He is still alive, FTR.

        Despite the threat of reprisals and punishment for disobedience, and the fear of starting a war, he prevented a retaliatory nuclear missile launch that would have been based on erroneous reports from automated systems, which had detected five missile launches from US territory aimed at Soviet territory. It was later determined that the systems had mistaken sunlight reflected off clouds for missiles (an automated process), and military protocol left the decision to actually fire with the operators on duty, subject to standard military procedures (automation of a different kind). He doubted the detection systems' findings (for various reasons) and prevented his military compatriots from automatically responding with their own launches, which was the automated military procedure/protocol.
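
        His decision rule can be sketched in code. This is a toy illustration with made-up names (corroborated, recommend, the sensor labels), not any real military protocol: refuse to treat a single sensor modality as sufficient evidence, and leave escalation to a human in every case:

            def corroborated(detections: dict[str, int], min_sources: int = 2) -> bool:
                """detections maps independent sensor types (e.g. 'satellite_ir',
                'ground_radar') to the number of launches each one reports."""
                return sum(1 for n in detections.values() if n > 0) >= min_sources

            def recommend(detections: dict[str, int]) -> str:
                if not corroborated(detections):
                    # Petrov's situation: satellites reported 5 launches,
                    # ground radar saw nothing. Treat as likely sensor error.
                    return "stand down: single-source detection, probable false alarm"
                return "escalate to human operator for a decision"

            print(recommend({"satellite_ir": 5, "ground_radar": 0}))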

        My point is that any type of "automation" may be dangerous or beneficial from any given perspective, regardless of whether it involves machines or humans (which can be viewed as biological machines). Automation is reductionism (reducing processing requirements, energy wasted by movement, etc.), and it tries to discard unneeded information. The critical factor is the flexibility and robustness of the "control system" (in the mathematical control-theory sense, see https://en.wikipedia.org/wiki/Control_theory [wikipedia.org], where "feedback" is important), whether machine or biological, and in particular how that system deals with information discarded during simplification, since that alters the feedback loops that govern the system's future behavior.

        A good, comprehensive, robust, precise, quick, and accurate control system (say, Iain M. Banks's A.I. "Minds", or Data from Star Trek) can be as good as or better than a bad biological one (think Inspector Clouseau); OTOH, an inaccurate, slow, non-comprehensive one (i.e. without good "coverage" of significant variables/terms in the given problem domain) sucks regardless of the medium it is implemented in (machined parts vs. organic natural biological parts)... think of any overly large or complex bureaucratic organization (which has errors in both its biological and machine elements).

        Automation for some problems is just plain tough no matter how the solution is implemented, and there will always be such troublesome problems because the core issues are at a fundamental level of the universe(s) as we currently know it.

        Interesting questions:

        Would you elect IBM's Watson, after it had been trained on all recorded data of human history and politics, over Donald Trump or Hillary Clinton (or any other human political aspirant) as POTUS?

        Who would be best at integrating feedback in future decisions?

        Could Trump trump a positronic brain?

        --
        No Sig to see here, move along, move along...
    • (Score: 2) by Anne Nonymous on Tuesday August 18 2015, @02:27PM

      by Anne Nonymous (712) on Tuesday August 18 2015, @02:27PM (#224420)

      > If lives are to be taken, let's make it a conscious decision of someone willing to live with the consequences

      Or we could just blame the IT department.

  • (Score: 5, Insightful) by pkrasimirov on Tuesday August 18 2015, @07:15AM

    by pkrasimirov (3358) Subscriber Badge on Tuesday August 18 2015, @07:15AM (#224285)

    > Robot Weapons: What’s the Harm?
    People. People are harmed.

    • (Score: 0, Touché) by Anonymous Coward on Tuesday August 18 2015, @07:34AM

      by Anonymous Coward on Tuesday August 18 2015, @07:34AM (#224291)

      So no property damage then? Sounds like a win.

      People are replaceable. Have you seen how often they fuck?

    • (Score: 2) by FatPhil on Tuesday August 18 2015, @07:43AM

      by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Tuesday August 18 2015, @07:43AM (#224293) Homepage
      Street gangs, what's the harm? People, people are harmed. Yup, but most of the people who are harmed are in street gangs. Self-solving problem.

      If these things are going to malfunction, they'll malfunction most often where they are most often found, which is while the "good" guys are working on them. Self-solving problem.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 4, Insightful) by pkrasimirov on Tuesday August 18 2015, @08:26AM

        by pkrasimirov (3358) Subscriber Badge on Tuesday August 18 2015, @08:26AM (#224305)

        Street gangs won't pull the trigger on a baby. There is less empathy in Africa but still there is always some.

        Drones are killing machines. Literally. Targets are discriminated, true, but everybody is a target, just with different priority. If you are in sight you are on the list.

      • (Score: 0) by Anonymous Coward on Tuesday August 18 2015, @08:29AM

        by Anonymous Coward on Tuesday August 18 2015, @08:29AM (#224307)

        > Street gangs, what's the harm? people, people are harmed. yup, but most of the people who are harmed are in street gangs. Self-solving problem.

        Which is why there are no street gangs any more, the problem totally solved itself.

        > If these things are going to malfunction,

        No serious objection is about malfunctions. The problem with these systems is that they will function exactly as designed.

  • (Score: 3, Funny) by c0lo on Tuesday August 18 2015, @08:00AM

    by c0lo (156) Subscriber Badge on Tuesday August 18 2015, @08:00AM (#224297) Journal

    what it thinks is is mission because of an error.

    Just because you spell ISIS in a peculiar way, it won't stop the smart bomb from carrying out what it thinks is its mission.
    (And it will stop neither the NSA nor ASIO from looking closer at this message in search of a hidden meaning; chillax guys, it's really just a pun-y way to signal a typo.)

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 3, Touché) by Bill Evans on Tuesday August 18 2015, @09:34AM

    by Bill Evans (1094) on Tuesday August 18 2015, @09:34AM (#224318) Homepage

    A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants

    Remember this?

    We may automatically check your version of the software and download software updates or configuration changes, including those that prevent you from accessing the Services, playing counterfeit games, or using unauthorized hardware peripheral devices.

  • (Score: 2) by Hairyfeet on Tuesday August 18 2015, @09:58AM

    by Hairyfeet (75) <reversethis-{moc ... {8691tsaebssab}> on Tuesday August 18 2015, @09:58AM (#224323) Journal

    Where they had a killbot with, IIRC, a .50 cal mounted on the thing, and it freaked out and started spraying towards the audience? I would say THAT is the risk, as a human operator may fuck up and make a bad call, but they are not gonna have a circuit fry and suddenly decide to go ED209 [youtube.com] on your ass.

    --
    ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
    • (Score: 1, Insightful) by Anonymous Coward on Tuesday August 18 2015, @10:14AM

      by Anonymous Coward on Tuesday August 18 2015, @10:14AM (#224329)

      That is a one-off risk. Makes for great video, but one-offs aren't a serious problem.

      The serious problem here is the removal of human responsibility from the equation. Automating death makes it so much easier to kill people under questionable circumstances. Not by technical malfunction -- on purpose. We put a lot of effort into dehumanizing the enemy so troops can more easily kill the enemy without any moral qualms. AI weapons don't care about the humanity of the people they kill, not one iota. They don't second guess. They don't question whether an order is illegal. They don't care if they are defending or attacking the constitution. They just kill.

      • (Score: 3, Insightful) by Thexalon on Tuesday August 18 2015, @01:07PM

        by Thexalon (636) on Tuesday August 18 2015, @01:07PM (#224391)

        That is a one-off risk. Makes for great video, but one-offs aren't a serious problem.

        Something tells me you wouldn't think that if you were Mr Kinney or his family and friends.

        --
        The only thing that stops a bad guy with a compiler is a good guy with a compiler.
        • (Score: 0) by Anonymous Coward on Tuesday August 18 2015, @05:54PM

          by Anonymous Coward on Tuesday August 18 2015, @05:54PM (#224508)

          > Something tells me you wouldn't think that if you were Mr Kinney or his family and friends.

          No more so than the family and friends of the 30,000 people who die in auto accidents every year. I don't see you sticking up for any of them.

  • (Score: 2) by chewbacon on Tuesday August 18 2015, @10:56AM

    by chewbacon (1032) on Tuesday August 18 2015, @10:56AM (#224345)

    With all the articles we read here or at the green site about programming bugs and shitty programmers, do you really have to ask? When I think about robots discriminating between bad guys and innocent bystanders, I can't help but think about Hotmail's spam filter from 10 years ago, which was seemingly coded backwards: I spent a lot of time marking spam as such and finding legit mail in my spam box. Yeah, that's the harm.
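
    That "coded backwards" failure mode is worth spelling out, because it is exactly the kind of bug that matters in a targeting system: a single flipped comparison silently swaps the two classes while everything still appears to run. A toy sketch, with a made-up function name and threshold:

        def is_spam(score: float, threshold: float = 0.8) -> bool:
            # Intended: a high spam score means spam.
            # The "backwards" bug is one flipped comparison:
            return score < threshold  # BUG: should be `score >= threshold`

        print(is_spam(0.10))  # True  -- legit mail lands in the spam box
        print(is_spam(0.99))  # False -- obvious spam sails through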

  • (Score: 0) by Anonymous Coward on Tuesday August 18 2015, @02:02PM

    by Anonymous Coward on Tuesday August 18 2015, @02:02PM (#224411)

    Having read the article....

    They make a *very* good point. Autonomous weapons will come from pedestrian sorts of things. It is not a big stretch to turn a self-driving car into a self-driving bomb. The first set of autonomous weapons like this will come out of things everyone wants. The next leap after that will be the AI deciding it needs to use a weapon all by itself. But 'gen 1' will be human-initiated.

    • (Score: 2) by Freeman on Tuesday August 18 2015, @05:21PM

      by Freeman (732) on Tuesday August 18 2015, @05:21PM (#224493) Journal

      What's more likely? Using a remote control car ($500 tops) to deliver a bomb, or using a self-driving car ($10,000 minimum and possibly more) to deliver a bomb?

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 0) by Anonymous Coward on Wednesday August 19 2015, @02:53PM

      by Anonymous Coward on Wednesday August 19 2015, @02:53PM (#225016)
      Don't forget, this is Soylent "Pollyanna" News we're talking about. Where autonomous weapons systems making life & death decisions for us is BAD, but autonomous vehicles making life & death decisions for us is GOOD. Get with the groupthink!
  • (Score: 2) by tangomargarine on Tuesday August 18 2015, @02:21PM

    by tangomargarine (667) on Tuesday August 18 2015, @02:21PM (#224418)

    My opposition to drones is more philosophical than pragmatic.

    Using drones instead of F-22s or other manned aircraft is presumably a hell of a lot cheaper, and you don't put a pilot in danger, which is itself a hidden problem. One of the biggest reasons people oppose specific wars is when they're being asked to put their lives on the line to fight somebody they don't really care about.

    Wasn't that Vietnam, basically? On the one side you had the determined Communist north, with all their entrenched guerrilla networks and hiding in the jungle. On the other side, you had the southern government, which was riddled with corruption and not really able to defend itself militarily. And all this halfway around the world, in former French Indochina. I'm not aware that the country itself had any real significance per se; it was just the "domino" doctrine that the U.S. didn't want any one country to fall to communism. So as the war dragged on, the protests got worse and worse.

    If we no longer send our young men (and women) into combat, they and their families lose part of the reason to oppose the war. The military hardware companies, on the other hand, are more than happy to crank out as many drones as the generals could want.

    And finally, in the same vein it becomes easier to start new wars because "we're not risking anybody's lives"...well, other than the people who are being bombed by a neverending stream of robots.

    --
    "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
    • (Score: 0) by Anonymous Coward on Tuesday August 18 2015, @05:50PM

      by Anonymous Coward on Tuesday August 18 2015, @05:50PM (#224506)

      > If we no longer send our young men (and women) into combat, they and their families lose part of the reason to oppose the war.

      That is why they eliminated the draft. It used to be that every mother in the country had something to lose. Now it is only a small fraction of mothers, and that fraction is primarily made up of those with the least political agency in our society. Drones are just a small step compared to that change.

  • (Score: -1, Troll) by Anonymous Coward on Tuesday August 18 2015, @04:43PM

    by Anonymous Coward on Tuesday August 18 2015, @04:43PM (#224476)

    Personally, I dislike the idea of using AI in weapons to make targeting decisions.

    Actually, I'm fine with AI weapons making targeting decisions. Humans in the midst of combat are panicky and make rash decisions. Robots with AI capabilities can afford to be more cautious as they don't have to worry about the possibility of death while under attack.

    I would hate to have to argue with a smart bomb to try to convince it that it should not carry out what it thinks is is mission because of an error.

    While I take your point about being confronted with a confused AI-enabled robot who thinks your "neutralization" is its prime mission, that is like trying to catch the train long after it has left the station. The time to make that argument is long before you are confronted by a bloodthirsty AI-enabled robot. In other words, the best way to avoid becoming collateral damage in a war is to stop that war from happening in the first place. Barring that, if you should find yourself in the middle of a war zone, you should avoid engaging in activities which may cause an AI-enabled robot to confuse you with a legitimate target. Yeah, I know. Easier said than done, but that is the raw truth of the matter.

  • (Score: 0) by Anonymous Coward on Tuesday August 18 2015, @11:13PM

    by Anonymous Coward on Tuesday August 18 2015, @11:13PM (#224654)

    This has been written about in The Butlerian Jihad, I believe...

    Personally, I'm all for automated armies.

    Think about it.

    Where there's automated war, there's automated revolution as well ^_^

    These people, as a culture, truly want to get killed by their creations... _Why_ should anyone stop them?

  • (Score: 1) by evilcam on Wednesday August 19 2015, @03:23AM

    by evilcam (3239) Subscriber Badge on Wednesday August 19 2015, @03:23AM (#224766)

    "We've had to endure much, you and I, but soon there will be order again, a new age. Aquinas spoke of the mythical City on the Hill. Soon that city will be a reality, and we will be crowned its kings. Or better than kings. Gods."

    - Deus Ex (2000)