
posted by janrinok on Wednesday October 12 2016, @05:43AM
from the no-more-heroes dept.

The technology is new, but the moral conundrum isn't: A self-driving car identifies a group of children running into the road. There is no time to stop. To swerve around them would drive the car into a speeding truck on one side or over a cliff on the other, bringing certain death to anybody inside.

To anyone pushing for a future for autonomous cars, this question has become the elephant in the room, argued over incessantly by lawyers, regulators, and ethicists; it has even been at the center of a study published in Science. Happy to have their names kept in the background of the life-or-death drama, most carmakers have let Google take the lead while making passing reference to ongoing research, investigations, or discussions.

But not Mercedes-Benz. Not anymore.

The world's oldest carmaker no longer sees the problem, a variant of the question first posed in 1967 and known as the Trolley Problem, as unanswerable. Rather than tying itself into moral and ethical knots in a crisis, Mercedes-Benz simply intends to program its self-driving cars to save the people inside the car. Every time.

Is this really a decision based on morality, or is it just that choosing to save the pedestrians would be much harder to code?
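
In case the distinction sounds abstract, here is a minimal sketch (Python, with invented names and numbers; nothing here comes from Mercedes-Benz or any real vehicle codebase) of why a fixed "save the occupants" rule is trivial to implement, while weighing everyone's lives depends on split-second risk estimates:

    from dataclasses import dataclass

    @dataclass
    class Action:                      # hypothetical candidate manoeuvre
        name: str
        occupant_risk: float           # estimated chance of killing an occupant
        pedestrian_risk: float         # estimated chance of killing each pedestrian
        pedestrians: int               # how many pedestrians are involved

    def protect_occupants(actions):
        # The rule described above: save the people inside the car, every time.
        return min(actions, key=lambda a: a.occupant_risk)

    def weigh_everyone(actions):
        # The alternative: minimise expected deaths overall. Every input is a
        # noisy, split-second estimate, which is where the coding difficulty lives.
        return min(actions, key=lambda a: a.occupant_risk + a.pedestrian_risk * a.pedestrians)

    options = [
        Action("brake hard", occupant_risk=0.05, pedestrian_risk=0.6, pedestrians=3),
        Action("swerve toward cliff", occupant_risk=0.95, pedestrian_risk=0.0, pedestrians=0),
    ]
    print(protect_occupants(options).name)   # brake hard
    print(weigh_everyone(options).name)      # swerve toward cliff, with these made-up numbers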


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Insightful) by theluggage on Wednesday October 12 2016, @10:50AM

    by theluggage (1797) on Wednesday October 12 2016, @10:50AM (#413386)

    The "trolley problem" is flawed because it assumes that you have oracular knowledge of the consequences of each possible action and only have to contemplate the ethical angle. It has been simplified into meaninglessness. In reality, the problem is not counting the cost of each hypothetical outcome, it is the doubt over the actual outcome of each choice, not to mention the lack of time for philosophical debate. The dangerous people are the ones who are utterly convinced that they are doing the right thing.

    To swerve around them would drive the car into a speeding truck on one side or over a cliff on the other, bringing certain death to anybody inside.

    ...also killing the driver of the truck, which jackknifes and wipes out the kids anyway, while the wreckage lands on the rail track at the bottom of the cliff and causes a train crash, killing the surgeon who was on the way to save the life of the biologist who was about to cure cancer. Oh, and the one kid who survives grows up to be a genocidal tyrant. OK, that's kind of a worst-case scenario, but the first couple of possibilities are quite feasible and hard to predict.

    The real priority of the self-driving car should be to maintain control of the vehicle and stop safely because few situations are made safer by a ton of randomly tumbling steel shedding lithium-ion grenades. Leave the three-laws stuff to science fiction and the "theology for agnostics" to philosophers.

    Meanwhile, the self-driving car is unlikely to be traveling too fast for the conditions and won't be checking its Facebook page when the kids jump out in front, so it's far more likely simply to stop in time. A big issue with self-driving cars is going to be getting their owners to put up with their safe, if not over-cautious, driving style.
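
    As a back-of-the-envelope illustration of that last point (the reaction times and deceleration figure below are assumptions, not measurements from any real vehicle), reaction distance dominates stopping distance at urban speeds:

        # Rough stopping-distance sketch; all figures are assumptions.
        def stopping_distance(speed_kmh, reaction_s, decel_ms2=7.0):
            v = speed_kmh / 3.6                     # km/h to m/s
            return v * reaction_s + v * v / (2 * decel_ms2)

        # ~0.1 s of sensing/actuation latency vs ~1.5 s for a distracted human, at 50 km/h
        print(round(stopping_distance(50, 0.1), 1))   # about 15.2 m
        print(round(stopping_distance(50, 1.5), 1))   # about 34.6 m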

  • (Score: 0) by Anonymous Coward on Wednesday October 12 2016, @01:43PM

    by Anonymous Coward on Wednesday October 12 2016, @01:43PM (#413426)

    The framing of the problem also ignores the real possibility of the car misidentifying something in the road as a pedestrian. If the conditions are such that there isn't enough time to avoid the accident, then there may not be enough time to properly identify what is in the road.

    Should a self-driving car kill its occupants when it encounters a false positive? What threshold is acceptable (e.g., a 1% chance of killing the occupants to avoid what turns out to be a dog), and who is responsible?
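
    One way to make the threshold question concrete (every probability below is invented purely for illustration) is to compare the expected harm of braking in lane against the expected harm of swerving, as a function of how confident the classifier is that the object really is a person:

        # Invented figures: at what classifier confidence does swerving "win"?
        def expected_deaths_stay(p_really_pedestrian, p_fatal_if_hit=0.5):
            # Harm only occurs if the detection was a true positive and the impact is fatal.
            return p_really_pedestrian * p_fatal_if_hit

        def expected_deaths_swerve(p_occupant_fatality=0.01, occupants=1):
            # Swerving risks the occupants regardless of what was actually in the road.
            return p_occupant_fatality * occupants

        for confidence in (0.01, 0.1, 0.9):
            print(confidence, expected_deaths_stay(confidence), expected_deaths_swerve())
        # With these made-up numbers, swerving has the lower expected harm once the
        # detector is more than about 2% confident the object is a person, which shows
        # how much the answer depends on probabilities nobody actually knows in advance.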

    • (Score: 2) by theluggage on Thursday October 13 2016, @12:28PM

      by theluggage (1797) on Thursday October 13 2016, @12:28PM (#413856)

      and who is responsible?

      ...the maker of the car, unless the "driver" was interfering in some way. If it's a hands-off self-driving car (as opposed to today's "cruise control plus") then that's the only way it could work. That's not to say that you, the owner, aren't indirectly paying some sort of insurance premium, either as part of the purchase price or via a lease/licensing fee, but the policyholder - the one carrying the risk, the one whose track record affects the premiums - has to be the manufacturer. Any other solution shouldn't be touched with a bargepole: you'd be mad to take responsibility for something you couldn't control.

      NB: I suspect we'll also see the end of car "ownership" - at the very least they will be leased. It's already getting to be the case that the only financially sensible way of getting an EV, let alone an autonomous one, is by leasing, even if you have the cash (I don't like that, but suspect it will be the case...)