Should a self-driving car kill the baby or the grandma? Depends on where you're from.
In 2014 researchers at the MIT Media Lab designed an experiment called Moral Machine. The idea was to create a game-like platform that would crowdsource people's decisions on how self-driving cars should prioritize lives in different variations of the "trolley problem." In the process, the data generated would provide insight into the collective ethical priorities of different cultures.
The researchers never predicted the experiment's viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.
A new paper published in Nature presents the analysis of that data and reveals how much ethical preferences diverge on the basis of culture, economics, and geographic location.
[...] Awad hopes the results will also help technologists think more deeply about the ethics of AI beyond self-driving cars. "We used the trolley problem because it's a very good way to collect this data, but we hope the discussion of ethics don't stay within that theme," he said. "The discussion should move to risk analysis—about who is at more risk or less risk—instead of saying who's going to die or not, and also about how bias is happening." How these results could translate into the more ethical design and regulation of AI is something he hopes to study more in the future.
"In the last two, three years more people have started talking about the ethics of AI," Awad said. "More people have started becoming aware that AI could have different ethical consequences on different groups of people. The fact that we see people engaged with this—I think that that's something promising."
Journal Reference:
Edmond Awad, Sohan Dsouza, Richard Kim, et al., "The Moral Machine experiment," Nature (2018). DOI: 10.1038/s41586-018-0637-6
(Score: 3, Interesting) by looorg on Thursday January 28 2021, @11:58AM (3 children)
Why isn't there an option for the car to just sacrifice itself? Instead of hitting grandma or the baby, turn the car and hit the brick wall, or drive off the cliff, or whatnot. Perhaps that would be the lesser evil in the scenario.
Some version of Asimov's laws of robotics (and hence AI) should put its own preservation below that of any human life.
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @12:16PM
Yup, self-destruct is the right option.
(Score: 2) by krishnoid on Thursday January 28 2021, @05:12PM
That is a choice in some cases. If you visit the Moral Machine [moralmachine.net] site, you can actually put the car through various scenarios and make the choices yourself. Hey, if humans can't make these decisions, what are the odds that a car can?
There's also a similar site for evaluating your perspectives on charitable donations [my-goodness.net].
(Score: 2) by tangomargarine on Thursday January 28 2021, @07:09PM
Because good luck trying to sell that product? "In case of emergency, the vehicle will choose to kill you first"
This is America, the land of Fuck You Got Mine.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"