
posted by cmn32480 on Tuesday August 18 2015, @06:23AM
from the skynet-is-beginning dept.

Opposition to the creation of autonomous robot weapons has been the subject of discussion here recently. The New York Times has added another voice to the chorus with this article:

The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.

The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn't stand up under scrutiny. However high-tech those systems are in design, in their application they are "dumb" — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.

A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).
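To make that claim concrete, the gating the authors describe amounts to checking geography, time, and a live command channel before every engagement. Here is a toy sketch in Python (my own illustration, with entirely made-up names, coordinates, and thresholds, not any real weapon system's interface):

    from datetime import datetime, timedelta, timezone

    # All names and bounds below are hypothetical, for illustration only.
    GEOFENCE_LAT = (34.0, 34.5)          # permitted latitude range
    GEOFENCE_LON = (43.0, 43.5)          # permitted longitude range
    WINDOW_START = datetime(2015, 8, 18, 6, 0, tzinfo=timezone.utc)
    WINDOW_END   = datetime(2015, 8, 18, 18, 0, tzinfo=timezone.utc)
    COMMAND_TTL  = timedelta(minutes=5)  # a stale "continue" means stand down

    def engagement_permitted(lat, lon, now, last_continue_cmd):
        in_fence  = (GEOFENCE_LAT[0] <= lat <= GEOFENCE_LAT[1]
                     and GEOFENCE_LON[0] <= lon <= GEOFENCE_LON[1])
        in_window = WINDOW_START <= now <= WINDOW_END
        cmd_fresh = (now - last_continue_cmd) <= COMMAND_TTL
        # Fail closed: if any condition is not met, the system stands down.
        return in_fence and in_window and cmd_fresh

The design choice doing the work is the last line: operation ceases by default unless every condition, including a fresh command to continue, holds.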

Personally, I dislike the idea of using AI in weapons to make targeting decisions. I would hate to have to argue with a smart bomb to convince it not to carry out what it thinks is its mission because of an error.


Original Submission

 
  • (Score: 2, Insightful) by NezSez (961) on Tuesday August 18 2015, @05:31PM (#224499) Journal

    Stanislav Petrov, hero of humanity and reason:
    https://en.wikipedia.org/wiki/Stanislav_Petrov [wikipedia.org]

    One of several incidents in which a single human, at great risk to his life, his career, and those of his family and friends, intervened in a military process and thereby prevented a thermonuclear war between the USA and the Soviet Union, on September 26, 1983. He is still alive, for the record.

    Risking reprisals and punishment for disobedience, and despite the fear of starting a war, he prevented a retaliatory nuclear missile launch that would have been based on erroneous reports from automated systems, which had detected five missile launches from US territory aimed at Soviet territory. It was later determined that the systems had mistakenly registered sunlight reflecting off clouds as missiles (an automated process), and military protocol left the decision to actually fire with the operators on duty, subject to standard military procedures (automation of a different kind). He doubted the detection system's findings (for various reasons) and kept his military compatriots from automatically responding with launches of their own, which was the automated military procedure/protocol.

    My point is that any type of "automation" may be dangerous or beneficial from any given perspective, regardless of whether it involves machines or humans (which can be viewed as biological machines). Automation is reductionism (reducing processing requirements, energy wasted by movement, etc.), and in doing so it tries to discard unneeded information. The flexibility and robustness of the "control system" (in the mathematical control-theory sense, see https://en.wikipedia.org/wiki/Control_theory [wikipedia.org], where "feedback" is important), whether machine or biological, is the critical factor, along with how that system handles the information discarded during simplification, since discarding it alters the feedback loops that shape the system's future behavior.
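    To see how discarding information alters a feedback loop, consider this toy example (my own sketch, nothing to do with weapons): a simple proportional controller converges on its setpoint, but the same loop with a quantized sensor, i.e. with detail thrown away during "simplification", settles short of it:

        def run_loop(setpoint, gain, steps, sensor=lambda x: x):
            """Discrete-time proportional control loop."""
            state = 0.0
            for _ in range(steps):
                error = setpoint - sensor(state)  # feedback: measure and compare
                state += gain * error             # actuate proportionally
            return state

        coarse = lambda x: float(round(x))        # quantized sensor discards detail

        print(run_loop(10.0, 0.5, 20))            # ~10.0: full feedback converges
        print(run_loop(10.0, 0.5, 20, coarse))    # 9.5: coarse feedback leaves an offset

    Same controller, same plant; only the information fed back changed, and so did the behavior.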

    A good, comprehensive, robust, precise, quick, and accurate control system (say, like Iain M. Banks's A.I. "Minds", or Data from Star Trek) can be as good as or better than a bad biological one (think Inspector Clouseau); OTOH, an inaccurate, slow, non-comprehensive one (i.e., without good "coverage" of the significant variables/terms in the given problem domain) sucks regardless of the medium it is implemented in (machined parts vs. natural biological parts)... think of any overly large or complex bureaucratic organization (which has errors in both its biological and machine elements).

    Automation for some problems is just plain tough no matter how the solution is implemented, and there will always be such troublesome problems because the core issues are at a fundamental level of the universe(s) as we currently know it.

    Interesting questions:

    Would you elect IBM's Watson, after it had been trained on all recorded data of human history and politics, over Donald Trump or Hillary Clinton (or any other human political aspirant) as POTUS?

    Who would be best at integrating feedback in future decisions?

    Could Trump trump a positronic brain?

    --
    No Sig to see here, move along, move along...