Should a self-driving car kill the baby or the grandma? Depends on where you're from.
In 2014 researchers at the MIT Media Lab designed an experiment called Moral Machine. The idea was to create a game-like platform that would crowdsource people's decisions on how self-driving cars should prioritize lives in different variations of the "trolley problem." In the process, the data generated would provide insight into the collective ethical priorities of different cultures.
The researchers never predicted the experiment's viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.
A new paper published in Nature presents the analysis of that data and reveals how much moral preferences diverge on the basis of culture, economics, and geographic location.
[...] Awad hopes the results will also help technologists think more deeply about the ethics of AI beyond self-driving cars. "We used the trolley problem because it's a very good way to collect this data, but we hope the discussion of ethics don't stay within that theme," he said. "The discussion should move to risk analysis—about who is at more risk or less risk—instead of saying who's going to die or not, and also about how bias is happening." How these results could translate into the more ethical design and regulation of AI is something he hopes to study more in the future.
"In the last two, three years more people have started talking about the ethics of AI," Awad said. "More people have started becoming aware that AI could have different ethical consequences on different groups of people. The fact that we see people engaged with this—I think that that's something promising."
Journal Reference:
Edmond Awad, Sohan Dsouza, Richard Kim, et al. The Moral Machine experiment, Nature (DOI: 10.1038/s41586-018-0637-6)
(Score: 2) by sjames on Friday January 29 2021, @08:00PM (3 children)
The fact that there are auto accidents where only one driver is found to be at fault demonstrates otherwise. We can presume that the AI can obey all traffic laws and so not be at fault in an accident, but circumstances beyond its control (most, but not all, caused by a human driver in another car) may put it in a situation where a collision with something or someone is inevitable. Unlike a human driver, who may freeze up, simply not have time to react, or just stomp the brake and hope for the best because they don't have time to make a real decision, the AI will at least sometimes have a choice to make. That choice will be constrained by physics rather than time to think.
In many ways it will be more akin to a human piloting a large ship: plenty of time to think about it, but with responses constrained by physical possibility. Maritime law is written with that assumption; traffic laws are not. A license to pilot a supertanker is also a lot harder to get than a driver's license. Of course, most of the time a supertanker operates on open ocean with miles of visibility. They move much slower in other conditions.
Likewise, imagine if a train's top speed were constrained by its ability to stop short of the next road bed should it be blocked by road traffic at any point.
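The constraint in that analogy is just kinematics: stopping distance grows with the square of speed. A minimal sketch, using illustrative numbers rather than real train or car braking figures:

```python
def stopping_distance(speed_mps: float, decel_mps2: float) -> float:
    """Distance (m) needed to brake from speed_mps to zero at a
    constant deceleration of decel_mps2, from v^2 = 2*a*d."""
    return speed_mps ** 2 / (2 * decel_mps2)

# Illustrative values only: a train at 30 m/s (~67 mph) braking at 1 m/s^2
# needs 450 m to stop -- which is why "stop short of the next crossing"
# would cap train speeds far below what's practical.
print(stopping_distance(30.0, 1.0))  # 450.0
```

The quadratic term is the whole point of the analogy: doubling speed quadruples the distance the vehicle is committed to before physics lets it stop.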
Perhaps a ferinstance will help. One night I was traveling just below the speed limit (45 MPH) on a 4-lane road with good conditions, and someone ran a stop sign, then stopped, blocking both lanes in my direction of travel. By pure luck there was no traffic in the other direction, so I managed to get into a lane for the other direction and slide past the offending pickup truck (missed it by about 2 feet), then regain control and get back in my lane before approaching traffic got too close. Note that it was sheer luck that the oncoming traffic was far away at that instant; otherwise a collision was inevitable. To repeat, I was obeying all traffic laws in good conditions at the time, and it was only by luck that the opposing lanes were open. It was a judgment call that I would be able to get back in my lane in time. What is it you think I was doing wrong?
(Score: 2) by slinches on Friday January 29 2021, @10:39PM (2 children)
If you couldn't see the vehicle approaching the intersection clearly enough to tell that he wasn't going to stop in time, then the intersection should be reviewed: do hedges need to be cleared, should different signage be installed, and should the speed limit be lower in that area to ensure sufficient stopping distance? It may be that your area considers the risk of accidents there acceptable as-is, but that doesn't mean driverless cars should be programmed to slalom through if presented with the same situation you were. As you said, it was luck that there wasn't oncoming traffic. It's also luck that the other driver didn't realize his mistake and try to accelerate. You could also have flipped, or spun out into a pole or into oncoming traffic that then couldn't avoid you. So even if you came out of the situation cleanly this time, that doesn't mean it was necessarily the best decision to make. Either way, in your scenario, it doesn't require knowing anything beyond whether there is a clear alternative path and whether the maneuver is safe to perform. If both of those are true, avoid the collision. If either is not true, then brake and try to minimize the damage.
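The rule in those last two sentences is simple enough to write down. A hedged sketch with hypothetical names (this is the comment's proposed policy, not any real autopilot API):

```python
def evasive_action(clear_alternative_path: bool, maneuver_is_safe: bool) -> str:
    """Decision rule proposed above: swerve only when an alternative path
    exists AND the maneuver itself is safe; otherwise brake and try to
    minimize damage. No valuation of who is in the way is needed."""
    if clear_alternative_path and maneuver_is_safe:
        return "avoid"   # take the clear alternative path
    return "brake"       # brake and minimize damage

print(evasive_action(True, True))    # avoid
print(evasive_action(True, False))   # brake -- path exists but maneuver unsafe
print(evasive_action(False, True))   # brake -- no clear path
```

Note that the rule deliberately takes no input about the people involved; that is the poster's point about avoiding baby-vs-grandma valuations.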
(Score: 2) by sjames on Saturday January 30 2021, @02:15AM (1 child)
The point is, neither I nor a future AI have any ability to control the other driver, the maintenance on the hedges, the design of blind side streets, oncoming traffic, etc. I had to make a decision based on what I could see then and there based on the already far from ideal decision the other driver made.
It's fortunate that I know how to take a car into and out of a slide, and lucky that the traffic allowed it. Had I T-boned the truck, it would have been a very bad accident involving the light truck flipping up and over my hood (I was driving a '77 Olds). Mindlessly braking would probably have led to 3 fatalities (mine included). My next best option, had the oncoming traffic been closer, would have been to slide sideways into the truck, hitting it with my passenger side (I had no passengers).
You can't have it both ways. You have now argued both that speed limits don't need to be lowered and that they do need to be lowered. Show me a stretch of road with no side streets and I'll show you a truck losing its load, a child riding a bicycle where he shouldn't, a jaywalking deer, and reality otherwise laughing maniacally and saying hold my beer.
(Score: 2) by slinches on Saturday January 30 2021, @06:32AM
My point was that if everyone is following the rules of the road, the laws already in place are pretty effective at preventing the need to decide whether to hit the baby or the grandma. The occurrence is rare enough that programming in different values based on characteristics of the people is not necessary. At most it should be deciding between the bushes and a pedestrian. That's a complicated enough problem without bringing in the morally questionable concept of placing the value of one person's life over another's.