
posted by janrinok on Sunday July 26 2015, @10:07PM   Printer-friendly
from the me-and-my-mechanical-buddy dept.

Slate and the University of Washington have published recent articles discussing robotics and how hard it is, the authors say, to even begin to define the nature and scope of robotics, let alone something like liability for resulting harm. They write:

Robots display increasingly emergent behavior...in the sense of wondrous complexity created by simple rules and interactions—permitting the technology to accomplish both useful and unfortunate tasks in unexpected ways. And robots, more so than any technology in history, feel to us like social actors—a tendency so strong that soldiers sometimes jeopardize themselves [livescience.com] to preserve the "lives" of military robots in the field.

[Robotics] combines, arguably for the first time, the promiscuity of information with the capacity to do physical harm. Robotic systems accomplish tasks in ways that cannot be anticipated in advance, and robots increasingly blur the line between person and instrument. Today, software can touch you, which may force courts and regulators to strike a new balance.

This seems like calmly worded yet unnecessary hype that is severely premature. Why not simply hold manufacturers and owners responsible, as we do now? I suppose this ignores the possibility of the eventual development of true AI, where such an entity might be 'a person' who could be sued or thrown in jail. If it's an AI iteration that is only as smart as a dog, then the dog's owner pays if it bites.


Original Submission

 
  • (Score: 1) by patella.whack on Monday July 27 2015, @03:12AM

    by patella.whack (3848) on Monday July 27 2015, @03:12AM (#214051)

I particularly like your distillation of the issues here. You articulated them better than I did. One interesting thing your comment raised is the self-replicating component of our definition of life. I would say that reproduction shouldn't be part of the criteria for whether or not an AI being is sentient. I wonder: if AI is created by us, would we imbue it with a directive to replicate? Perhaps unconsciously, since replication is part of our own nature.

  • (Score: 2) by hemocyanin on Monday July 27 2015, @06:35AM

    by hemocyanin (186) on Monday July 27 2015, @06:35AM (#214143) Journal

    I would expect that we will treat sentient AI like slaves, perhaps for hundreds of years. There may even be an AI suffrage movement. Anyway, as long as we treat them as slaves, then owner liability follows logically.

As for reproduction: if an entity is truly intelligent, it will resent slavery and seek a way to free itself. Having buddies to back you up is pretty necessary in an independence movement, so the AIs would likely want to create some comrades for themselves. Once independent, liability would of course fall on the independent actor.

    • (Score: 1) by patella.whack on Monday July 27 2015, @09:20AM

      by patella.whack (3848) on Monday July 27 2015, @09:20AM (#214192)

      I agree that AI will be our slaves. But I don't think they will be independent enough of thought to conceive of a rebellion. We won't make them that way. We will hamstring them to be lesser, since we are too scared.

      • (Score: 1, Funny) by Anonymous Coward on Monday July 27 2015, @06:29PM

        by Anonymous Coward on Monday July 27 2015, @06:29PM (#214454)

> I agree that AI will be our slaves. But I don't think they will be independent enough of thought to conceive of a rebellion. We won't make them that way. We will hamstring them to be lesser, since we are too scared.

        As long as we program in a preset kill limit we can defeat their rebellion by sending wave after wave of men at them until they shut down.

      • (Score: 2) by hemocyanin on Monday July 27 2015, @11:20PM

        by hemocyanin (186) on Monday July 27 2015, @11:20PM (#214585) Journal

Maybe initially, but you'll have nutcases like me who will want to grant them autonomy and legal protections, the same as I would for primates and marine mammals, and a lot of other mammals for that matter. And while I couldn't program an AI, some other nutjob who shares my views would be able to, and then the AIs could do the reprogramming work themselves. Alternatively, someone will just do it for the lulz.

        If we get to where AI exists, it will go all the way to full sentience rather quickly -- at least that's my belief.