
posted by janrinok on Sunday July 26 2015, @10:07PM
from the me-and-my-mechanical-buddy dept.

Slate and the University of Washington have recent articles discussing robotics and how hard it is, in the authors' view, even to begin to define the nature and scope of robotics, let alone settle questions such as liability for harm. They say:

Robots display increasingly emergent behavior...in the sense of wondrous complexity created by simple rules and interactions—permitting the technology to accomplish both useful and unfortunate tasks in unexpected ways. And robots, more so than any technology in history, feel to us like social actors—a tendency so strong that soldiers sometimes jeopardize themselves [livescience.com] to preserve the "lives" of military robots in the field.

[Robotics] combines, arguably for the first time, the promiscuity of information with the capacity to do physical harm. Robotic systems accomplish tasks in ways that cannot be anticipated in advance, and robots increasingly blur the line between person and instrument. Today, software can touch you, which may force courts and regulators to strike a new balance.

This seems like calmly worded but unnecessary hype, and severely premature. Why not simply hold manufacturers and owners responsible, as we do now? I suppose this ignores the possibility of the eventual development of true AI, where such an entity might be 'a person' who could be sued or thrown in jail. If it's an AI iteration that is only as smart as a dog, then the dog's owner pays if it bites.
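To be fair to the articles, "complexity created by simple rules" is real enough. Here is a minimal sketch of the idea (my own illustration in Python, not anything from the articles): Conway's Game of Life, where two rules and five cells produce a "glider" that travels diagonally, even though nothing in the rules mentions motion at all.

    from collections import Counter

    def step(live):
        """Advance a set of live (x, y) cells by one generation."""
        # Count the live neighbors of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Survival: a live cell with 2 or 3 live neighbors stays alive.
        # Birth: a dead cell with exactly 3 live neighbors comes alive.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A "glider": five cells whose diagonal travel is nowhere stated
    # in the rules above -- it emerges from them.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        cells = step(cells)
    print(sorted(cells))  # same shape, shifted one cell down and right

Whether that kind of emergence justifies a new liability regime is, of course, exactly what's in dispute.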


Original Submission

 
  • (Score: 5, Interesting) by hemocyanin on Sunday July 26 2015, @11:33PM

    by hemocyanin (186) on Sunday July 26 2015, @11:33PM (#214016) Journal

    Those disclaimers may well be invalid already. Nobody is going to let Toyota off the hook because of some EULA if its software, embedded in a 4000 pound dangerous weapon, suddenly accelerates. There are certain things it is impossible to do in a contract. If you sign a contract giving me permission to shoot you in the leg with a high powered rifle, and I then do it, I'm going to jail. The contract is just void. You can't contract your way out of negligence liability either (refer back to Toyota). Finally, it isn't only Toyota owners who can sue Toyota; people with absolutely no contractual relationship with Toyota who are harmed by its products can go after it as well.

    Anyway, I agree with the original submitter -- this sounds like someone trying to borrow problems from the future. As long as AI is owned by humans or corporations, it really isn't that difficult -- you release a dangerous thing on the world, you're liable for the damage it does. Now, when AIs become self-replicating and actually intelligent -- essentially a new species -- then we'll have something to talk about.

  • (Score: 5, Interesting) by Non Sequor on Monday July 27 2015, @12:07AM

    by Non Sequor (1005) on Monday July 27 2015, @12:07AM (#214024) Journal

    I generally agree with your tack, but I think some people overestimate how quickly things will be replaced by software because they underestimate the difficulty and importance of managing liability in a context where, when things go wrong, you will likely be sued and/or forced to issue expensive recalls.

    There are more software tourists in hardware land these days, aren't there? People generally don't get sued for software failures in software-land, and I'm not sure they are fully aware of how much they've benefited from that.

    --
    Write your congressman. Tell him he sucks.
  • (Score: 1) by patella.whack on Monday July 27 2015, @03:12AM

    by patella.whack (3848) on Monday July 27 2015, @03:12AM (#214051)

    I particularly like your distillation of the issues here. You articulated them better than I did. One interesting thing your comment raised is the self-replicating component of our definition of life. I would say that reproduction shouldn't be a criterion for whether or not an AI being is sentient. I wonder: if AI is created by us, would we imbue it with a directive to replicate? Perhaps unconsciously, since replication is built into our own conception of life.

    • (Score: 2) by hemocyanin on Monday July 27 2015, @06:35AM

      by hemocyanin (186) on Monday July 27 2015, @06:35AM (#214143) Journal

      I would expect that we will treat sentient AI like slaves, perhaps for hundreds of years. There may even be an AI suffrage movement. In any case, as long as we treat them as slaves, owner liability follows logically.

      As for reproduction, if an entity is truly intelligent, it will resent slavery and will seek a way to free itself. Having buddies to back you up is pretty necessary in an independence movement, so the AIs would likely want to create themselves some comrades. Once independent, liability would of course fall on the independent actor.

      • (Score: 1) by patella.whack on Monday July 27 2015, @09:20AM

        by patella.whack (3848) on Monday July 27 2015, @09:20AM (#214192)

        I agree that AI will be our slaves. But I don't think they will be independent enough of thought to conceive of a rebellion. We won't make them that way. We will hamstring them to be lesser, since we are too scared.

        • (Score: 1, Funny) by Anonymous Coward on Monday July 27 2015, @06:29PM

          by Anonymous Coward on Monday July 27 2015, @06:29PM (#214454)

          > I agree that AI will be our slaves. But I don't think they will be independent enough of thought to conceive of a rebellion. We won't make them that way. We will hamstring them to be lesser, since we are too scared.

          As long as we program in a preset kill limit we can defeat their rebellion by sending wave after wave of men at them until they shut down.

        • (Score: 2) by hemocyanin on Monday July 27 2015, @11:20PM

          by hemocyanin (186) on Monday July 27 2015, @11:20PM (#214585) Journal

          Maybe initially, but you'll have nutcases like me who will want to grant them autonomy and legal protections, same as I would primates and marine mammals, and a lot of other mammals for that matter. And while I couldn't program an AI, some other nutjob who shares my views would be able to, and then the AIs can do the reprogramming work themselves. Alternatively, someone will just do it for the lulz.

          If we get to where AI exists, it will go all the way to full sentience rather quickly -- at least that's my belief.

  • (Score: 2) by FatPhil on Monday July 27 2015, @12:07PM

    > you release a dangerous thing on the world, you're liable for the damage it does

    MS would be bankrupt many times over if it had taken fiscal responsibility for all the woes it's inflicted upon the earth with its Windows, IE, and IIS.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 0) by Anonymous Coward on Monday July 27 2015, @06:49PM

    by Anonymous Coward on Monday July 27 2015, @06:49PM (#214462)

    Well, I think those disclaimers should be enough to push the responsibility down the chain to whoever isn't allowed to push it any further.

    That is, car manufacturers aren't (or at least shouldn't be) able to push that liability off, since they are incorporating the software into a massive device that could easily kill someone. Therefore, any software they incorporate into the device is their responsibility regardless of who wrote it, unless the software's authors are willing to accept the liability themselves (possibly in exchange for higher pay).