
posted by Fnord666 on Thursday January 28 2021, @10:04AM   Printer-friendly
from the dealer's-choice? dept.

Should a self-driving car kill the baby or the grandma? Depends on where you're from:

In 2014 researchers at the MIT Media Lab designed an experiment called Moral Machine. The idea was to create a game-like platform that would crowdsource people's decisions on how self-driving cars should prioritize lives in different variations of the "trolley problem." In the process, the data generated would provide insight into the collective ethical priorities of different cultures.

The researchers never predicted the experiment's viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.

A new paper published in Nature presents the analysis of that data and reveals how much cross-cultural ethics diverge on the basis of culture, economics, and geographic location.

[...] Awad hopes the results will also help technologists think more deeply about the ethics of AI beyond self-driving cars. "We used the trolley problem because it's a very good way to collect this data, but we hope the discussion of ethics don't stay within that theme," he said. "The discussion should move to risk analysis—about who is at more risk or less risk—instead of saying who's going to die or not, and also about how bias is happening." How these results could translate into the more ethical design and regulation of AI is something he hopes to study more in the future.

"In the last two, three years more people have started talking about the ethics of AI," Awad said. "More people have started becoming aware that AI could have different ethical consequences on different groups of people. The fact that we see people engaged with this—I think that that's something promising."

Journal Reference:
Edmond Awad, Sohan Dsouza, Richard Kim, et al. The Moral Machine experiment, Nature (DOI: 10.1038/s41586-018-0637-6)



  • (Score: 2) by Grishnakh on Friday January 29 2021, @06:07PM

    by Grishnakh (2831) on Friday January 29 2021, @06:07PM (#1106652)

    If the grandma were going to grow up to be a serial killer, she probably already would have. The baby has insanely higher odds of becoming a serial killer in what remains of its life, so it's the logical choice to sqoosh.

    While you're certainly correct that the baby has much higher odds of becoming a serial killer, how do you know the grandma isn't a serial killer? Some serial killers go for years or even decades without being caught.

    However, this is one place where you could argue in favor of outright sexism: if it's a choice between a boy of any age and a girl of any age, and you want to take out the one most likely to be a serial killer, definitely go for the boy. Female serial killers are almost non-existent. This is also a place where bald-faced racism can be used: if it's a choice between a black person and a white person, definitely take out the white person, because white men are *by far* the most likely to become serial killers. I'm not sure there's ever been a recorded case of a black serial killer.
