The technology is new, but the moral conundrum isn't: A self-driving car identifies a group of children running into the road. There is no time to stop. To swerve around them would drive the car into a speeding truck on one side or over a cliff on the other, bringing certain death to anybody inside.
To anyone pushing for a future of autonomous cars, this question has become the elephant in the room, argued over incessantly by lawyers, regulators, and ethicists; it has even been the subject of a study published in Science. Most carmakers, happy to keep their names out of the life-or-death drama, have let Google take the lead while making only passing reference to ongoing research, investigations, or discussions.
But not Mercedes-Benz. Not anymore.
The world's oldest carmaker no longer sees the problem, a variant of the 1967 thought experiment known as the Trolley Problem, as unanswerable. Rather than tying itself into moral and ethical knots in a crisis, Mercedes-Benz simply intends to program its self-driving cars to save the people inside the car. Every time.
Is it really a decision based on morality, or is it simply that choosing to save the pedestrians is much harder to code?
(Score: 2, Interesting) by tftp on Thursday October 13 2016, @05:41AM
There is another direction of thought. The computer is in control of the car and of the passenger. If the car does something, it will happen; it is deterministic. However, the behavior of the other party (such as a pedestrian) is not deterministic. The car cannot be sure that if it swerves left and hits a tree, the pedestrian will not run forward (or back) and get hit anyway. The probability of everyone dying is not zero. However, if the car ignores the unpredictable pedestrian and always saves the occupant, the probability of both dying is zero, because the passenger survives. (Well, give or take.) For that reason it is statistically advantageous to optimize for saving the occupant: the car has a better chance of success.
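The commenter's statistical argument can be sketched as an expected-value comparison. The probabilities below are purely illustrative assumptions invented for this sketch, not data from any study; the point is only that a deterministic outcome (the occupant's) can be controlled, while the pedestrian's outcome stays uncertain under either policy:

```python
# Hypothetical illustration of the comment's argument. All probabilities are
# made-up assumptions, chosen only to show the shape of the comparison.

def expected_deaths(p_occupant_dies: float, p_pedestrian_dies: float) -> float:
    """Expected number of deaths, treating the two outcomes as independent."""
    return p_occupant_dies + p_pedestrian_dies

# Policy A: swerve to protect the pedestrian. The occupant likely dies in the
# resulting crash, and the pedestrian may still be hit anyway, because their
# movement is the non-deterministic part of the scenario.
swerve = expected_deaths(p_occupant_dies=0.9, p_pedestrian_dies=0.3)

# Policy B: always save the occupant. The occupant's fate is (nearly)
# deterministic, but the pedestrian is likely struck.
protect_occupant = expected_deaths(p_occupant_dies=0.05, p_pedestrian_dies=0.8)

print(f"swerve:           {swerve:.2f} expected deaths")
print(f"protect occupant: {protect_occupant:.2f} expected deaths")
```

Under these invented numbers the occupant-protecting policy yields fewer expected deaths, but the ranking obviously flips if the assumed probabilities change; the comment's claim rests on which party's behavior the car can actually control.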