
posted by Woods on Tuesday May 13 2014, @11:49PM
from the ethical-quandaries dept.

It happens quickly, more quickly than you, being human, can fully process. A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there's too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall. Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn't be simpler.

This, roughly speaking, is the problem presented by Patrick Lin, an associate philosophy professor and director of the Ethics + Emerging Sciences Group at California Polytechnic State University. In a recent opinion piece for Wired, Lin explored one of the most disturbing questions in robot ethics: If a crash is unavoidable, should an autonomous car choose who it slams into?

  • (Score: 1) by Ja'Achan (2318) on Wednesday May 14 2014, @08:13AM (#43092)

    This comment is why the whole debate is pointless. Who would install an AI, or buy a car with one, that goes "oh, I'll sacrifice your life for others if necessary"? People already buy overly sturdy cars because of a "rather them than me" attitude.