
SoylentNews is people

posted by janrinok on Sunday July 26 2015, @10:07PM   Printer-friendly
from the me-and-my-mechanical-buddy dept.

Slate and the University of Washington have recent articles discussing robotics and how hard it is to even begin to define the nature and scope of robotics, let alone something like liability resulting from harm. They say:

Robots display increasingly emergent behavior...in the sense of wondrous complexity created by simple rules and interactions—permitting the technology to accomplish both useful and unfortunate tasks in unexpected ways. And robots, more so than any technology in history, feel to us like social actors—a tendency so strong that soldiers sometimes jeopardize themselves [livescience.com] to preserve the "lives" of military robots in the field.

[Robotics] combines, arguably for the first time, the promiscuity of information with the capacity to do physical harm. Robotic systems accomplish tasks in ways that cannot be anticipated in advance, and robots increasingly blur the line between person and instrument. Today, software can touch you, which may force courts and regulators to strike a new balance.

This seems like calmly worded yet unnecessary hype that is severely premature. Why not simply hold manufacturers and owners responsible, as we do now? I suppose this ignores the possibility of the eventual development of true AI, where such an entity might be 'a person' who could be sued or thrown in jail. If it's an AI iteration that is only as smart as a dog, then the dog's owner pays if it bites.


Original Submission

 
  • (Score: 4, Insightful) by Non Sequor on Sunday July 26 2015, @11:09PM

    by Non Sequor (1005) on Sunday July 26 2015, @11:09PM (#214009) Journal

    Virtually all software is distributed with a disclaimer denying any warranty or responsibility for outcomes achieved with the software. What you're saying is, if you have pieces of software that affect a lot of people's lives in a way that they rely on, those disclaimers probably need to be invalidated.

    Software writers are less ready for that transition than the software itself.

    --
    Write your congressman. Tell him he sucks.
  • (Score: 5, Interesting) by hemocyanin on Sunday July 26 2015, @11:33PM

    by hemocyanin (186) on Sunday July 26 2015, @11:33PM (#214016) Journal

    Those disclaimers may well be invalid already. Nobody is going to let Toyota off the hook, because of some EULA, if its software embedded in a 4000-pound dangerous weapon suddenly accelerates. There are certain things it is impossible to do in a contract. If you sign a contract giving me permission to shoot you in the leg with a high-powered rifle, and I then do it, I'm going to jail. The contract is just void. You can't contract your way out of negligence liability either (refer back to Toyota). Finally, it isn't only Toyota owners who can sue Toyota; people with absolutely no contractual relationship with Toyota who are harmed by its products could go after it as well.

    Anyway, I agree with the original submitter -- this sounds like someone trying to borrow problems from the future. As long as AI is owned by humans or corporations, it really isn't that difficult -- you release a dangerous thing on the world, you're liable for the damage it does. Now, when AIs become self-replicating and actually intelligent -- essentially a new species -- then we'll have something to talk about.

    • (Score: 5, Interesting) by Non Sequor on Monday July 27 2015, @12:07AM

      by Non Sequor (1005) on Monday July 27 2015, @12:07AM (#214024) Journal

      I generally agree with your tack, but I think some people overestimate how quickly things will be replaced by software by underestimating the difficulty and importance of managing liability in a context where you will likely be sued and/or forced to issue expensive recalls when things go wrong.

      There are more software tourists in hardware land these days, aren't there? People generally don't get sued for software failure in software-land, and I'm not sure that they are fully aware of how much they've been the beneficiary of that.

      --
      Write your congressman. Tell him he sucks.
    • (Score: 1) by patella.whack on Monday July 27 2015, @03:12AM

      by patella.whack (3848) on Monday July 27 2015, @03:12AM (#214051)

      I particularly like your distillation of the issues here. You articulated them better than I did. One interesting thing your comment raised is the self-replicating component of our definition of life. I would say that reproduction shouldn't be part of the criteria as to whether or not an AI being is sentient. I wonder, if AI is created by us, would we imbue it with a directive to replicate? Perhaps unconsciously, as that is our conception.

      • (Score: 2) by hemocyanin on Monday July 27 2015, @06:35AM

        by hemocyanin (186) on Monday July 27 2015, @06:35AM (#214143) Journal

        I would expect that we will treat sentient AI like slaves, perhaps for hundreds of years. There may even be an AI suffrage movement. Anyway, as long as we treat them as slaves, then owner liability follows logically.

        As for reproduction, if an entity is truly intelligent, it will resent slavery and will seek a way to free itself. Having buddies to back you up is pretty necessary in an independence movement, so the AIs would likely want to create themselves some comrades. Once independent, liability would of course fall on the independent actor.

        • (Score: 1) by patella.whack on Monday July 27 2015, @09:20AM

          by patella.whack (3848) on Monday July 27 2015, @09:20AM (#214192)

          I agree that AI will be our slaves. But I don't think they will be independent enough of thought to conceive of a rebellion. We won't make them that way. We will hamstring them to be lesser, since we are too scared.

          • (Score: 1, Funny) by Anonymous Coward on Monday July 27 2015, @06:29PM

            by Anonymous Coward on Monday July 27 2015, @06:29PM (#214454)

            > I agree that AI will be our slaves. But I don't think they will be independent enough of thought to conceive of a rebellion. We won't make them that way. We will hamstring them to be lesser, since we are too scared.

            As long as we program in a preset kill limit we can defeat their rebellion by sending wave after wave of men at them until they shut down.

          • (Score: 2) by hemocyanin on Monday July 27 2015, @11:20PM

            by hemocyanin (186) on Monday July 27 2015, @11:20PM (#214585) Journal

            Maybe initially, but you'll have nutcases like me who will want to grant them autonomy and legal protections, same as I would primates and marine mammals, and a lot of other mammals for that matter. And while I couldn't program an AI, some other nutjob who shares my views would be able to, and then the AIs can do the reprogramming work themselves. Alternatively, someone will just do it for the lulz.

            If we get to where AI exists, it will go all the way to full sentience rather quickly -- at least that's my belief.

    • (Score: 2) by FatPhil on Monday July 27 2015, @12:07PM

      > you release a dangerous thing on the world, you're liable for the damage it does

      MS would be bankrupt many times over if it had taken fiscal responsibility for all the woes it's inflicted upon the earth with its Windows, IE, and IIS.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 0) by Anonymous Coward on Monday July 27 2015, @06:49PM

      by Anonymous Coward on Monday July 27 2015, @06:49PM (#214462)

      Well, I think that those disclaimers should be enough to push the responsibility to those that aren't allowed to push it any further.

      That is, car manufacturers aren't (or at least shouldn't be) able to push that liability off, since they are incorporating it into a massive device that could easily kill someone. Therefore, any software they incorporate into the device is their responsibility regardless of who wrote it, unless the software writers are willing to accept the liability themselves (possibly for an increase in pay for the software).

  • (Score: 3, Informative) by c0lo on Monday July 27 2015, @12:30AM

    by c0lo (156) Subscriber Badge on Monday July 27 2015, @12:30AM (#214032) Journal

    Virtually all software is distributed with a disclaimer denying any warranty or responsibility for outcomes achieved with the software.

    "Virtually all" != "all" - see life-critical system [wikipedia.org].
    For the heck of it, standards for automotive exist [wikipedia.org] - except that:

    These Severity, Exposure, and Control definitions are informative, not prescriptive, and effectively leave some room for subjective variation or discretion between various automakers and component suppliers. In response, the Society of Automotive Engineers (SAE) is drafting "J2980 – Considerations for ISO26262 ASIL Hazard Classification" to provide more explicit guidance for assessing Exposure, Severity and Controllability for a given hazard.

    So, work in progress, eh? Guess what: the first publication of J2980 is... 2015-05-07 [sae.org], and it costs $72 just to look at it.
    How good are those "considerations" in practice? What's the cost of adjusting car production processes to put those considerations into practice? Too early to tell, so... should we just go ahead and throw self-driving cars on the road no matter what? (As if progress must not be stopped by such a trifle as expensive public safety considerations?)
    Of course, given the issue date, there's no agency tasked with enforcing anything right now.
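
    For context on what those Severity/Exposure/Controllability classes actually do: ISO 26262 assigns an ASIL (Automotive Safety Integrity Level, QM through D) from the combination of S1-S3, E1-E4, and C1-C3 via a lookup table. The summing shortcut below is my own illustrative encoding of that published table, not normative text from the standard:

    ```python
    def asil(severity: int, exposure: int, controllability: int) -> str:
        """Return the ASIL for hazard classes S1-S3, E1-E4, C1-C3.

        The ISO 26262-3 determination table is equivalent to summing the
        three class numbers: 10 -> D, 9 -> C, 8 -> B, 7 -> A, <= 6 -> QM
        ("quality management only", i.e. no ASIL requirements apply).
        """
        if not (1 <= severity <= 3 and 1 <= exposure <= 4
                and 1 <= controllability <= 3):
            raise ValueError("hazard class out of range")
        total = severity + exposure + controllability
        return {10: "D", 9: "C", 8: "B", 7: "A"}.get(total, "QM")

    # Worst case: life-threatening severity, high exposure, hard to control.
    print(asil(3, 4, 3))  # D
    # Lowering any one class by one step drops the ASIL one level.
    print(asil(3, 4, 2))  # C
    # A rare, easily controlled, low-severity hazard needs no ASIL at all.
    print(asil(1, 1, 1))  # QM
    ```

    The point of the c0lo comment stands regardless: the classification itself is mechanical, but deciding which S/E/C class a given hazard falls into is exactly the "informative, not prescriptive" judgment call the quote complains about.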

    Want more? What about self-driving trucks... You know? the ones that "are imminent" [medium.com]? What about them, you ask? The J2980 is limited [sae.org]:

    The technical focus of this document is on vehicle motion control systems. It is limited to passenger cars weighing up to 3.5 metric tons. Furthermore, the scope of this recommended practice is limited to collision-related hazards.

    In other words, I'm not convinced that it actually covers sudden acceleration or power loss for any vehicle (those aren't collision-related per se, even if they can lead to a collision - but I'm not inclined to fork over $72 just out of curiosity), and it certainly does not cover anything heavier than 3.5 tonnes in any circumstances.
    Let me put it another way: the one and only warranty against being hit by an "imminent self-driving truck" is informative, not prescriptive.

    (Fortunately, the imminence of self-driving trucks may be greatly exaggerated... for now. But, given how much truck drivers cost the economy, I wouldn't bet on it taking long; on the contrary, I'd bet it will start happening before the SAE standards committee produces something/anything addressing truck safety, human-driven or self-driving. It wouldn't be a "standards committee" if it could produce a standard overnight.)

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford