Should a self-driving car kill the baby or the grandma? Depends on where you're from.
In 2014 researchers at the MIT Media Lab designed an experiment called Moral Machine. The idea was to create a game-like platform that would crowdsource people's decisions on how self-driving cars should prioritize lives in different variations of the "trolley problem." In the process, the data generated would provide insight into the collective ethical priorities of different cultures.
The researchers never predicted the experiment's viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.
A new paper published in Nature presents the analysis of that data and reveals how much ethical preferences diverge on the basis of culture, economics, and geographic location.
[...] Awad hopes the results will also help technologists think more deeply about the ethics of AI beyond self-driving cars. "We used the trolley problem because it's a very good way to collect this data, but we hope the discussion of ethics doesn't stay within that theme," he said. "The discussion should move to risk analysis—about who is at more risk or less risk—instead of saying who's going to die or not, and also about how bias is happening." How these results could translate into the more ethical design and regulation of AI is something he hopes to study more in the future.
"In the last two, three years more people have started talking about the ethics of AI," Awad said. "More people have started becoming aware that AI could have different ethical consequences on different groups of people. The fact that we see people engaged with this—I think that that's something promising."
Journal Reference:
Edmond Awad, Sohan Dsouza, Richard Kim, et al. The Moral Machine experiment. Nature (2018). DOI: 10.1038/s41586-018-0637-6
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @05:00PM (22 children)
Yes, yes, very cute. It's possible to use cleverness to avoid having to think about morality. +10 Internet Points
The Trolley Problem is meant to be an examination of morality, ethics, values, and everything else (including but not limited to religion). If somebody doesn't want to engage in that, then I personally disagree with their moral shortsightedness, but that's my opinion. (They could equally criticize me for being caught up in useless pontificating.)
It could also be expanded upon into a two-paragraph description of the circumstances (it was rainy, somebody sabotaged the cameras, etc., etc.). But that's missing the forest for the trees.
(Score: 2) by slinches on Thursday January 28 2021, @05:36PM (21 children)
What my other post (and I think the GP) is saying is that trying to apply the trolley problem this way is misguided from the start. The trolley problem is a thought experiment meant to explore various aspects of morality and personal responsibility. The way this "study" is formulated, it is just using the trolley problem as a veneer to disguise that they are really just asking which people's lives are more or less valuable. I find that morally objectionable in itself, as that data can only be used to reinforce stereotypes and further escalate the dehumanization of entire classes of people (and reinforce the privileges of others).
(Score: 1, Insightful) by Anonymous Coward on Thursday January 28 2021, @05:45PM
SOMEBODY is using real critical thinking. Congrats on looking one level above the problem being presented.
(Score: 2) by sjames on Thursday January 28 2021, @06:29PM (17 children)
That might be true, except that somebody has to train an AI that will actually have to make such a decision. We avoid the question when a human is driving by arguing that the speed at which events unfold exceeds the speed of human thought, leaving the human driver without meaningful agency (and so it falls under act of God).
Somebody has to decide which simulated result constitutes a passing grade for the AI in training, and so will affect the decisions that AI will make in the real world with real people. It is not the classic trolley problem, but it sure looks like a variation on the theme.
And as Rush said, if you choose not to decide you still have made a choice.
(Score: 2) by bzipitidoo on Thursday January 28 2021, @08:32PM (1 child)
The dilemma is too contrived. Look at it this way. How many times do we face such a dilemma? Once, or many times? That makes a huge difference.
If the Trolley Problem happens repeatedly, even routinely, we should ask ourselves why, and what can we do to avoid ever having to face this dilemma in the first place? Bust the criminals who tied people to the tracks so that doesn't happen any more. Make trolleys safer, perhaps by installing an emergency stop system that could be as simple as an anchor under the trolley, to be dropped to hook onto the cross ties, for an emergency stop. It'll tear up track, but if it stops the trolley in time, it's worth it. Place cameras to watch the track, etc. There will be things that can be done well in advance, to ensure that such dilemmas are never routine.
When something of that sort happens with automobiles, it often results in a recall. Or reforms. Same with planes and ships. And with pretty much any industrial accident. The post mortem inevitably turns up a series of mistakes and flaws that all combined to create a tragedy, most of which were easily preventable. In many cases, the designs were undermined by management and operator neglect, trying to save a few pennies or a little effort by skimping on safety measures and checks. Some cases are a sort of groupthink, in which everyone is on the same side and there is no adversary to wreck the illusions of safety everyone spun. For example, the infamous "unsinkable" Titanic. More recently, the Boeing 737 Max crashes may have ultimately been a regulatory failure, in which Boeing had become dependent upon safety agencies to keep it from making safety mistakes, and when that pressure was removed, it didn't handle the regulatory relief well.
(Score: 2) by sjames on Thursday January 28 2021, @10:56PM
The solution is to restrict the speed limit to 25 MPH, but as a society we have chosen to do that only in some cases where a trolley problem is most likely, and most likely to include children. The AI designers don't have the authority to change that.
The real world gets messy as well. Throw down the anchor and tear up the track, and then 15 passengers die from the injuries that result from the sudden deceleration...
(Score: 2) by slinches on Thursday January 28 2021, @08:40PM (14 children)
No one has to train AI to make these decisions because real world problems don't follow the rules of a thought experiment. If there's time to classify the various objects on the path and determine their relative moral value or social utility, there should have been time to avoid killing anyone in the first place. Managing the circumstances and making sure your speed is slow enough to prevent such problems is a key part of safe driving. If any assignment of weighting the value of objects is needed, it shouldn't go any further than assigning a higher priority for avoidance of things that are likely to be pedestrians.
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @10:19PM
^
Although for a fast computer it may be possible to classify objects and make a value judgment. However, it should never be a trolley problem; the AI should simply try to evade and slow down, even if it means colliding with another vehicle.
(Score: 2) by sjames on Thursday January 28 2021, @10:52PM (12 children)
Not necessarily. Speeding up the classification and judgement doesn't change the physics. Since we are doing train analogies, the engineer SEES the tanker truck stop on the tracks, but no matter how quickly he applies the brakes, the train will not stop before it hits the truck. The time to hit the brakes to stop short of the roadbed passed before the truck even got there. The correct "decision" to avoid collision is for the truck to not break down over the track, but that's not on offer. There really are cases where the AI can't avoid hitting something or someone, it's just a matter of what it decides to hit. It has more freedom to maneuver than the train, but it still can't escape momentum vs. the coefficient of friction.
Kinda like those problems in high school physics where you must decide to accelerate or brake to clear the train tracks and it turns out there are no real roots for the resulting quadratic equation.
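That high-school exercise can be sketched numerically. A minimal model, assuming constant acceleration/deceleration (the function name and all numbers are illustrative, not from the article): the car can brake if its braking distance v²/(2b) fits before the tracks, and can clear the crossing if full throttle covers the clearing distance before the train arrives. When both come back false, that's the "no real roots" situation.

```python
def crossing_options(d_stop, d_clear, v, a_max, b_max, t_train):
    """Decide which escapes are physically available at a rail crossing.

    d_stop:  distance (m) from the car to the near rail
    d_clear: distance (m) the car must cover to fully clear the crossing
    v:       current speed (m/s)
    a_max:   maximum acceleration (m/s^2)
    b_max:   maximum braking deceleration (m/s^2)
    t_train: time (s) until the train reaches the crossing
    """
    # Braking distance at constant deceleration: v^2 / (2*b)
    can_brake = v**2 / (2 * b_max) <= d_stop
    # Distance covered at full throttle in the time available
    reach = v * t_train + 0.5 * a_max * t_train**2
    can_accelerate = reach >= d_clear
    return can_brake, can_accelerate
```

With, say, 20 m/s of speed and 8 m/s² of braking, the car needs 25 m to stop; give it 30 m and braking works, give it 20 m and a train only 1 s away, and neither option does.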
(Score: 2) by slinches on Friday January 29 2021, @12:16AM (10 children)
My point was that any driver (including an AI) should be driving at a speed where any unexpected obstacle in the path is either avoidable without collateral damage, or appears so suddenly that no driver could avoid the collision at all. It doesn't matter how fast you can evaluate the value of the objects. You should never be in a situation that calls for that.
(Score: 2) by sjames on Friday January 29 2021, @12:31AM (9 children)
So a universal 25 MPH speed limit? Not likely to happen. Society has decided otherwise and the AI designers don't have the power to change that, so they're stuck with the trolley problem.
(Score: 2) by slinches on Friday January 29 2021, @01:00AM (8 children)
You don't need a 25mph speed limit everywhere. That's why there are different types of roadways with different designs, access, sight lines and speed limits.
(Score: 2) by sjames on Friday January 29 2021, @01:13AM (7 children)
You do if you want a guarantee that the AI can ALWAYS avoid a serious collision. If slamming on the brakes will not bring the car to a stop nearly instantly, there is a non-zero chance of an unavoidable collision that could result in serious injury or fatality, including cases where a choice related to the trolley problem arises.
(Score: 2) by slinches on Friday January 29 2021, @03:10AM (6 children)
I didn't say it had to go slow enough to prevent any serious collision. Although, if it's going fast enough that it can make a choice of what to hit but not be able to stop in time, it's going too fast for the conditions.
(Score: 2) by sjames on Friday January 29 2021, @07:14AM (5 children)
Yes, so 25 MPH for a human driver. Any faster and it is going fast enough that it may be unable to avoid a collision. Your reaction to the idea of a universal 25 MPH limit is exactly why, as a society, we have decided to accept some risk for expediency. While an AI might be safe at more like 30, that's about it.
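The arithmetic behind that kind of claim is simple kinematics: total stopping distance is reaction distance plus braking distance. A sketch, with the reaction time and deceleration chosen as illustrative assumptions (alert driver, dry pavement), not measured values:

```python
MPH_TO_MS = 0.44704  # miles per hour -> metres per second

def stopping_distance(speed_mph, reaction_s=1.5, decel=7.0):
    """Total stopping distance in metres.

    Distance travelled during the reaction time (v * t) plus the
    braking distance at constant deceleration (v^2 / (2*b)).
    reaction_s and decel are assumed, illustrative values.
    """
    v = speed_mph * MPH_TO_MS
    return v * reaction_s + v**2 / (2 * decel)
```

Under these assumptions a human at 25 MPH needs roughly 26 m to stop, while at 45 MPH the figure more than doubles; an AI with a much shorter reaction time (say 0.2 s) stops in less distance at 30 MPH than the human does at 25, which is roughly the shape of the trade-off being argued here.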
(Score: 2) by slinches on Friday January 29 2021, @03:44PM (4 children)
There's always a risk of an unavoidable collision based purely on physics. If an object enters the path so near in front of a moving vehicle that the vehicle cannot avoid hitting it, there's nothing to do except brake as hard as you can to minimize the damage. That is a fundamental risk of vehicle travel in public rights of way, one that is understood and mitigated to a socially acceptable degree where speed limits are set by the design of the road and the various traffic and access laws. So speed limits already account for those risks in balance with the efficiency and expediency of transportation.
What I was saying about the trolley-problem-like scenarios is that they should never be an issue if you are following the existing laws and adjusting speed for adverse conditions (including poor sight lines that would obscure objects entering the roadway).
(Score: 2) by sjames on Friday January 29 2021, @08:00PM (3 children)
The fact that there are auto accidents where only one driver is found to be at fault demonstrates otherwise. We can presume that the AI can obey all traffic laws and so not be at fault in an accident, but circumstances beyond its control (most, but not all, caused by a human driver in another car) may put it in a situation where a collision with something or someone is inevitable. Unlike a human driver, who may freeze up, simply not have time to react, or just stomp the brake and hope for the best because they don't have time to make a real decision, the AI will at least sometimes have a choice to make. That choice will be constrained by physics rather than time to think.
In many ways it will be more akin to a human piloting a large ship: plenty of time to think about it, but with responses constrained by physical possibility. Maritime law is written with that assumption; traffic laws are not. A license to pilot a supertanker is also a lot harder to get than a driver's license. Of course, most of the time a supertanker operates on open ocean with miles of visibility. They move much slower in other conditions.
Likewise, imagine if a train's top speed were constrained by its ability to stop short of the next road bed if it gets blocked by road traffic at any point.
Perhaps a ferinstance will help. One night I was traveling just below the speed limit (45 MPH) on a 4 lane road with good conditions and someone ran a stop sign, then stopped blocking both lanes in the direction I was traveling. By pure luck there was no traffic in the other direction so I managed to get into a lane for the other direction and slide past the offending pickup truck (missed it by about 2 feet), then regain control and get back in my lane before approaching traffic got too close. Note that it was sheer luck that the oncoming traffic was far away at that instant. Otherwise a collision was inevitable. To repeat, I was obeying all traffic laws in good conditions at the time and it was just by luck that the opposing lanes were open. It was a judgement call that I would be able to get back in my lane in time. What is it you think I was doing wrong?
(Score: 2) by slinches on Friday January 29 2021, @10:39PM (2 children)
If you couldn't see the vehicle approaching the intersection clearly enough to tell that he wasn't going to stop in time, then the intersection should be looked at to determine whether some hedges need to be cleared, whether different signage should be installed, and whether the speed limit should be lower in that area to ensure sufficient stopping distance. It may be that your area considers the risk of accidents there acceptable as-is, but that doesn't mean driverless cars should be programmed to slalom through if presented with the same situation you were.
As you said, it was luck that there wasn't oncoming traffic. It's also luck that the other driver didn't realize his mistake and try to accelerate. You could have also flipped or spun out into a pole or into oncoming traffic that then couldn't avoid you. So even if you came out of the situation cleanly this time, that doesn't mean it was necessarily the best decision to make. Either way, in your scenario, it doesn't require knowing anything beyond whether there is a clear alternative path and whether the maneuver is safe to perform. If both of those are true, avoid the collision. If either is not true, then brake and try to minimize the damage.
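The rule proposed here reduces to a two-input decision that never needs to weigh who is in the way. A minimal sketch (the function and its names are mine, purely to make the rule concrete):

```python
def avoidance_policy(clear_alternative_path: bool, maneuver_is_safe: bool) -> str:
    """Two-input collision rule: swerve only when an alternative path
    exists AND taking it is safe; otherwise brake hard to minimize
    damage. No classification of the obstacles is required."""
    if clear_alternative_path and maneuver_is_safe:
        return "swerve"
    return "brake"
```

The point of writing it this way is that the moral-valuation question never enters: the only inputs are geometric and dynamic, not "who is the obstacle."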
(Score: 2) by sjames on Saturday January 30 2021, @02:15AM (1 child)
The point is, neither I nor a future AI have any ability to control the other driver, the maintenance on the hedges, the design of blind side streets, oncoming traffic, etc. I had to make a decision based on what I could see then and there based on the already far from ideal decision the other driver made.
It's fortunate that I know how to take a car into and out of a slide, and lucky that the traffic facilitated that. Had I T-boned the truck, it would have been a very bad accident involving the light truck flipping up and over my hood (I was driving a '77 Olds). Mindlessly braking would probably have led to 3 fatalities (mine included). My next best option, had the oncoming traffic been closer, would have been to slide sideways into the truck, hitting it with my passenger side (I had no passengers).
You can't have it both ways. You have now argued both that speed limits don't need to be lowered and that they do need to be lowered. Show me a stretch of road with no side streets and I'll show you a truck losing its load, a child riding a bicycle where he shouldn't, a jaywalking deer, and reality otherwise laughing maniacally and saying "hold my beer."
(Score: 2) by slinches on Saturday January 30 2021, @06:32AM
My point was that if everyone is following the rules of the road, the laws already in place are pretty effective at preventing the need to decide whether to hit the baby or the grandma. The occurrence is rare enough that programming in different values based on characteristics of the people is not necessary. At most it should be deciding between the bushes and a pedestrian. That's a complicated enough problem without bringing in the morally questionable concept of placing the value of one person's life over another's.
(Score: 2) by hendrikboom on Friday January 29 2021, @08:09PM
When I took my driving lessons in the 1960's I learned that in Manitoba it is against the law to change gears while crossing a railway track. I guess they were trying to prevent such problems.
(Score: 4, Informative) by aristarchus on Thursday January 28 2021, @07:29PM (1 child)
Exactly. The "trolley problem" was invented by the philosopher Philippa Foot:
https://en.wikipedia.org/wiki/Trolley_problem [wikipedia.org]
The point of a Gedankenexperiment is to highlight certain aspects of an issue to allow us to examine our intuitions. They are legion in analytic philosophy. Judith Jarvis Thomson's (she gave the trolley problem its name) "Dying Violinist", the "Lifeboat Ethics", "Drowning child scenario", or the "Innocent Aggressor". What they are not meant to do is become a survey of opinions held by humans, as this study seems to be using it for.
As Wikipedia continues:
All this begs a damp squid. Should the majority opinion on choosing between the Baby or the Grandma be considered the proper and correct moral position, to the point of being encased in an AI system? Madness. The point of the thought experiment is to challenge intuitions, to make them explicit and demand that they either be rationalized or discarded. So the posters above are exactly correct. This is a misuse of the trolley problem.
On the other hand, the treatment in "The Good Place" (S2E6) [wikipedia.org] is pretty insightful.
(Score: 2) by slinches on Thursday January 28 2021, @09:16PM
True, what these sorts of thought experiments are for is to explore and reach a greater understanding of how context can change where your personal philosophies stand with respect to the value of life, utilitarianism, and doing vs. allowing harm. That gives one a much better perspective on how to break down one's own conceptions of morality and understand where they come from.
One of the comparisons that I like is between the traditional trolley problem set up as one person vs. five compared with the scenario where a surgeon has the option to kill and harvest the organs of one healthy person to save 5 others who need transplants. Most of the people who are exposed to these for the first time will have different answers and that opens up a lot of avenues to explore why they think they are different.