
posted by Fnord666 on Thursday January 28 2021, @10:04AM   Printer-friendly
from the dealer's-choice? dept.

Should a self-driving car kill the baby or the grandma? Depends on where you're from:

In 2014 researchers at the MIT Media Lab designed an experiment called Moral Machine. The idea was to create a game-like platform that would crowdsource people's decisions on how self-driving cars should prioritize lives in different variations of the "trolley problem." In the process, the data generated would provide insight into the collective ethical priorities of different cultures.

The researchers never predicted the experiment's viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.

A new paper published in Nature presents the analysis of that data and reveals how much cross-cultural ethics diverge on the basis of culture, economics, and geographic location.
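At its core, this kind of analysis means aggregating millions of binary choices by respondent locale to surface cultural differences in preference. A minimal sketch of that idea in Python, using invented example data and a hypothetical `preference_rate` helper (not the actual Moral Machine dataset or code):

```python
from collections import defaultdict

# Hypothetical records: (country, chose_to_spare) for a single scenario
# type, e.g. "spare the younger pedestrian". Illustrative data only.
decisions = [
    ("US", True), ("US", True), ("US", False),
    ("JP", False), ("JP", False), ("JP", True),
]

def preference_rate(records):
    """Fraction of respondents per country who chose to spare."""
    counts = defaultdict(lambda: [0, 0])  # country -> [spared, total]
    for country, spared in records:
        counts[country][1] += 1
        if spared:
            counts[country][0] += 1
    return {c: spared / total for c, (spared, total) in counts.items()}

print(preference_rate(decisions))
```

Comparing such per-country rates (with far more data and demographic controls) is roughly how cross-cultural divergence in moral preferences can be quantified.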

[...] Awad hopes the results will also help technologists think more deeply about the ethics of AI beyond self-driving cars. "We used the trolley problem because it's a very good way to collect this data, but we hope the discussion of ethics don't stay within that theme," he said. "The discussion should move to risk analysis—about who is at more risk or less risk—instead of saying who's going to die or not, and also about how bias is happening." How these results could translate into the more ethical design and regulation of AI is something he hopes to study more in the future.

"In the last two, three years more people have started talking about the ethics of AI," Awad said. "More people have started becoming aware that AI could have different ethical consequences on different groups of people. The fact that we see people engaged with this—I think that that's something promising."

Journal Reference:
Edmond Awad, Sohan Dsouza, Richard Kim, et al. The Moral Machine experiment, Nature (DOI: 10.1038/s41586-018-0637-6)


Original Submission

  • (Score: 2) by slinches (5049) on Thursday January 28 2021, @09:16PM (#1106288)

    True, the point of these sorts of philosophical and moral thought experiments is to explore how context can shift your personal philosophy with respect to the value of life, utilitarianism, and doing versus allowing harm. That gives you a much better vantage point from which to break down your own conceptions of morality and understand where they come from.

    One of the comparisons I like is between the traditional trolley problem, set up as one person vs. five, and the scenario where a surgeon has the option to kill one healthy person and harvest their organs to save five others who need transplants. Most people encountering these for the first time will answer the two scenarios differently, and that opens up a lot of avenues to explore why they think the cases differ.
