El Reg reports
The family of a repair technician killed in an auto parts factory accident is suing five robotics companies they say are responsible.
In a suit [PDF] filed in the Western Michigan US District Court this week, the family of Wanda Holbrook claims that the companies that built, installed, and maintained the robotics at a trailer hitch assembly plant should be held liable for her fatal accident at the plant in 2015.
According to the lawsuit, Holbrook, a journeyman technician, was performing routine maintenance on one of the robots on the trailer hitch assembly line when the unit unexpectedly activated and attempted to load a part into the unit being repaired, crushing Holbrook's head.
Now Holbrook's estate is suing the three companies that built the robots (Fanuc America, Nachi Robotic, and Lincoln Electric) for failing to design adequate safeguards and protections into the robots. They're also suing two other companies that installed and maintained the unit (Flex-N-Gate, Prodomax) for failing to prevent an accident they say would have been avoided had safety been a higher priority.
(Score: 4, Insightful) by gidds on Tuesday March 14 2017, @01:52PM (2 children)
"1. A robot may not injure a human being or, through inaction, allow a human being to come to harm."
(Can't believe no-one's posted that yet...)
In the years since Asimov wrote that, real-life robots have been created — mostly industrial ones like in this story. But I'm not aware of any serious attempt to implement such Laws of Robotics in them.
The main reasons have been lack of need, and lack of ability:
• No need, because robots are not for use by the general public: they're restricted to certain areas in factories and suchlike, where access is protected and any people nearby will be properly trained. — This story gives a strong counterexample to that!
• And no ability, because our robots don't yet 'think' in the way that Asimov envisioned, and aren't capable of sensing, understanding, reasoning, and acting in a way that could fulfil the Laws. — But AI is improving all the time, and if we can create autonomous cars that can drive safely, then surely we do have the ability!
So the excuses are running out.
Will those Laws (or something roughly equivalent) start to find a place in our robots (industrial or otherwise)? Will they be enforced by legislation (and, if so, how could they be tested)?
Or, as automation spreads further and further, will we see more and more stories like this?
(Score: 2) by Aiwendil on Tuesday March 14 2017, @03:25PM (1 child)
No, and it is a good thing.
The laws only work for human-esque robots.
To take an example: we know it is harmful to sit still for too long, and we know that tv and computers (and electrical lighting) cause us to sit still for too long - does this mean that the automated parts of the electricity grid (same tech and logic as in industrial robots) should power down?
Also - does this mean that escalators should stop working? Cars refusing to drive less than a few miles?
And then we need to define harm (most exercise causes minor tears of muscles and minor fractures in the bones - however it normally heals to a stronger state - so, should it try to prevent short-term harm?)
And my favorite - should the computer kill the two schmucks who violate laws/procedures or the one who follows laws/procedures? (The runaway-train thought "problem")
Or something that might strike closer to home for some - most beer is industrially brewed by robots...
(Score: 0) by Anonymous Coward on Tuesday March 14 2017, @07:41PM
Also - does this mean that escalators should stop working? Cars refusing to drive less than a few miles?
Maybe the AI can allow transportation this way for disabled or elderly people, but may I fantasize about the AI doing exactly this for everybody else who is able-bodied??
I am now fantasizing. The world is quieter, the air is cleaner, the pace of life is slower, and it's beautiful.