Should a self-driving car kill the baby or the grandma? Depends on where you're from.
In 2014 researchers at the MIT Media Lab designed an experiment called Moral Machine. The idea was to create a game-like platform that would crowdsource people's decisions on how self-driving cars should prioritize lives in different variations of the "trolley problem." In the process, the data generated would provide insight into the collective ethical priorities of different cultures.
The researchers never predicted the experiment's viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.
A new paper published in Nature presents the analysis of that data and reveals how widely ethical preferences diverge across societies, shaped by culture, economics, and geographic location.
[...] Awad hopes the results will also help technologists think more deeply about the ethics of AI beyond self-driving cars. "We used the trolley problem because it's a very good way to collect this data, but we hope the discussion of ethics don't stay within that theme," he said. "The discussion should move to risk analysis—about who is at more risk or less risk—instead of saying who's going to die or not, and also about how bias is happening." How these results could translate into the more ethical design and regulation of AI is something he hopes to study more in the future.
"In the last two, three years more people have started talking about the ethics of AI," Awad said. "More people have started becoming aware that AI could have different ethical consequences on different groups of people. The fact that we see people engaged with this—I think that that's something promising."
Journal Reference:
Edmond Awad, Sohan Dsouza, Richard Kim, et al., "The Moral Machine experiment," Nature (2018). DOI: 10.1038/s41586-018-0637-6
(Score: 2) by slinches on Thursday January 28 2021, @04:40PM (2 children)
The trolley problem isn't the issue here. It's using it out of context and for purposes it was never meant for. In the classic formulation you are a bystander near a switch between two tracks, and differing people or consequences are placed on each track to tease out a threshold for active participation. Pulling the lever to save lives with no collateral damage is always ethical. However, because you are a bystander and not the trolley driver, the ethics become more complicated once it becomes a choice between two bad outcomes. The question plays the difference in value between the two outcomes against the act of choosing to be the cause of someone's death, and asks how large that difference must be to justify acting. If you are acting as the trolley driver (or the self-driving car AI), you are already responsible for the outcome and it's just a choice of what you value more.
Of course, the study would have been seen as rather objectionable by many if it just posed the question as "which would you rather kill?" instead of pretending it has anything to do with driverless cars or AI.
(Score: 2) by theluggage on Thursday January 28 2021, @07:44PM
Yes, regardless of my doubts about the trolley problem itself, I can get behind that. However...
...but only because the statement of the problem assures you that there will be no collateral damage.
...but in the trolley problem you're not a bystander because a bystander wouldn't magically know for certain the consequences of their choices. So my gut reaction to the trolley problem as written is that, with the knowledge you have, you're just as responsible as the driver and should probably pull the lever. However, in the vanishingly unlikely event that I really stumble across a speeding trolley and a large, friendly lever, I don't think that pulling it and hoping would be a very good idea (if nothing else, it might mean that the fat guy who some bright spark threw off the bridge is now lying broken on the wrong track :-)) - morally, it is pretty important to remember that a calamity can easily be caused by acting on false information.
The danger with some thought experiments like this is that they look like a real-world exemplification/justification/application of a problem but are really totally unrealistic "code" for an abstract concept - so any one person's response depends on whether they're trying to put themselves into the hypothetical situation, or ignoring the situation because they learned the abstractions behind it in Philosophy 101. (Which seems to be the problem with TFA here, although they do make the point that this is really a job for risk analysis). In any remotely realistic situation, even the driver isn't going to be choosing between two certainties - it's going to be risk analysis with unknown risks, plus split-second instinct.
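The risk-analysis framing can be sketched as a toy expected-harm comparison - every number and action name below is invented for illustration and has nothing to do with the Moral Machine study; the point is only that realistic choices weigh probabilities, not certainties:

```python
# Toy sketch of the risk-analysis framing: rank actions by expected harm
# under uncertain outcomes (all probabilities and harm values are made up).

def expected_harm(outcomes):
    """Sum of probability * harm over the possible outcomes of one action."""
    return sum(p * harm for p, harm in outcomes)

# Each hypothetical action maps to (probability, harm) pairs - no certainties.
actions = {
    "stay_on_course": [(0.7, 5.0), (0.3, 0.0)],  # 70% chance of harming 5 people
    "swerve":         [(0.2, 1.0), (0.8, 0.0)],  # 20% chance of harming 1 person
}

best = min(actions, key=lambda a: expected_harm(actions[a]))
print(best)  # the action with the lower expected harm
```

Unlike the trolley problem's guaranteed outcomes, the probabilities here are themselves estimates, which is exactly where the "unknown risks" part bites.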
Arriving at abstract moral principles is easy. Agreeing on abstract moral principles has a few bumps in the road, but there's a surprising amount of consensus, considering. Applying those principles to a complex, chaotic world in which there is little certainty and a terrible shortage of unbiased data is really, really hard, and changing all the time. Fake, unrealistic applications don't help. Nor does (to the other poster) trying to deny the complexity and uncertainty to make the simple rules work: of course we can't see the woods for the trees - we're in the woods, so all we've got to go on is in the trees, and a map of the woods that doesn't show the trees.
(Score: 2) by hendrikboom on Friday January 29 2021, @08:04PM