posted by janrinok on Wednesday October 12 2016, @05:43AM
from the no-more-heroes dept.

The technology is new, but the moral conundrum isn't: A self-driving car identifies a group of children running into the road. There is no time to stop. To swerve around them would drive the car into a speeding truck on one side or over a cliff on the other, bringing certain death to anybody inside.

To anyone pushing for a future of autonomous cars, this question has become the elephant in the room, argued over incessantly by lawyers, regulators, and ethicists; it has even been the subject of a study of human attitudes published in Science. Happy to have their names kept in the background of the life-or-death drama, most carmakers have let Google take the lead while making passing reference to ongoing research, investigations, or discussions.

But not Mercedes-Benz. Not anymore.

The world's oldest carmaker no longer sees the problem, a modern version of the ethical thought experiment known since 1967 as the Trolley Problem, as unanswerable. Rather than tying itself into moral and ethical knots in a crisis, Mercedes-Benz simply intends to program its self-driving cars to save the people inside the car. Every time.

Is it really a decision based on morality, or is it simply that choosing to save the pedestrians is much harder to code?
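
As a rough illustration of why an "occupants first" rule is so easy to state in code (all names, maneuvers, and risk numbers below are hypothetical, not anything Mercedes-Benz has published), a policy that always minimizes risk to the people inside might look like this sketch:

    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        occupant_risk: float  # estimated probability of serious harm to the occupants
        other_risk: float     # estimated probability of serious harm to people outside

    def choose_maneuver(options):
        # Occupants-first policy: minimize risk to the people inside the car,
        # using risk to others only as a tie-breaker.
        return min(options, key=lambda m: (m.occupant_risk, m.other_risk))

    options = [
        Maneuver("brake in lane",       occupant_risk=0.05, other_risk=0.90),
        Maneuver("swerve toward truck", occupant_risk=0.95, other_risk=0.05),
        Maneuver("swerve toward cliff", occupant_risk=0.99, other_risk=0.01),
    ]

    print(choose_maneuver(options).name)  # -> brake in lane

The sketch only shows that a fixed priority ordering is trivial to implement; weighing harms to people outside the car against those inside is where the coding (and the ethics) gets hard.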


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @01:43PM (#413426)

    The framing of the problem also ignores the real possibility of the car misidentifying something in the road as a pedestrian. If the conditions are such that there isn't enough time to avoid the accident, then there may not be enough time to properly identify what is in the road.

    Should a self-driving car kill its occupants when it encounters a false positive? What threshold is acceptable (e.g. a 1% chance of killing the occupants to avoid what turns out to be a dog), and who is responsible?
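
    One back-of-the-envelope way to frame that threshold question (all probabilities and harm weights below are made up for illustration): swerving is only worth it when the classifier's confidence that the obstacle really is a pedestrian pushes the expected harm of staying in the lane above the fixed risk to the occupants.

        def expected_harms(p_pedestrian,
                           p_kill_occupants_if_swerve=0.01,  # assumed 1% risk to occupants
                           p_kill_pedestrian_if_stay=0.90):  # assumed risk if it really is a person
            # Expected harm of each option; a false positive (a dog, a cardboard box)
            # contributes no pedestrian harm if the car stays in its lane.
            harm_stay = p_pedestrian * p_kill_pedestrian_if_stay
            harm_swerve = p_kill_occupants_if_swerve
            return harm_stay, harm_swerve

        for confidence in (0.001, 0.01, 0.1, 0.9):
            stay, swerve = expected_harms(confidence)
            choice = "swerve" if swerve < stay else "stay in lane"
            print(f"P(pedestrian)={confidence:.3f}  stay={stay:.4f}  swerve={swerve:.4f}  -> {choice}")

    Under these made-up numbers the crossover sits at roughly a 1.1% confidence that the obstacle is a person, which is exactly the kind of threshold someone would have to own.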

  • (Score: 2) by theluggage (1797) on Thursday October 13 2016, @12:28PM (#413856)

    and who is responsible?

    ...the maker of the car, unless the "driver" was interfering in some way. If it's a hands-off self-driving car (as opposed to today's "cruise control plus"), then that's the only way it could work. That's not to say that you, the owner, aren't indirectly paying some sort of insurance premium, either as part of the purchase price or via a lease/licensing fee, but the policyholder - the one bearing the risk, the one whose track record affects the premiums - has to be the manufacturer. Any other solution shouldn't be touched with a bargepole: you'd be mad to take responsibility for something you couldn't control.

    NB: I suspect we'll also see the end of car "ownership" - at the very least, they will be leased. It's already getting to be the case that the only financially sensible way of getting an EV, let alone an autonomous one, is by leasing, even if you have the cash (I don't like that, but suspect it will be the case...)