
posted by Fnord666 on Thursday January 28 2021, @10:04AM   Printer-friendly
from the dealer's-choice? dept.

Should a self-driving car kill the baby or the grandma? Depends on where you're from:

In 2014 researchers at the MIT Media Lab designed an experiment called Moral Machine. The idea was to create a game-like platform that would crowdsource people's decisions on how self-driving cars should prioritize lives in different variations of the "trolley problem." In the process, the data generated would provide insight into the collective ethical priorities of different cultures.

The researchers never predicted the experiment's viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.

A new paper published in Nature presents the analysis of that data and reveals how much cross-cultural ethics diverge on the basis of culture, economics, and geographic location.

[...] Awad hopes the results will also help technologists think more deeply about the ethics of AI beyond self-driving cars. "We used the trolley problem because it's a very good way to collect this data, but we hope the discussion of ethics don't stay within that theme," he said. "The discussion should move to risk analysis—about who is at more risk or less risk—instead of saying who's going to die or not, and also about how bias is happening." How these results could translate into the more ethical design and regulation of AI is something he hopes to study more in the future.

"In the last two, three years more people have started talking about the ethics of AI," Awad said. "More people have started becoming aware that AI could have different ethical consequences on different groups of people. The fact that we see people engaged with this—I think that that's something promising."

Journal Reference:
Edmond Awad, Sohan Dsouza, Richard Kim, et al. The Moral Machine experiment, Nature (DOI: 10.1038/s41586-018-0637-6)


Original Submission

  • (Score: 2) by slinches on Friday January 29 2021, @10:39PM (2 children)

    by slinches (5049) on Friday January 29 2021, @10:39PM (#1106727)

    What is it you think I was doing wrong?

    If you couldn't see the vehicle approaching the intersection clearly enough to tell that he wasn't going to stop in time, then the intersection should be examined to determine whether hedges need to be cleared, different signage should be installed, or the speed limit should be lowered to ensure sufficient stopping distance. It may be that your area considers the risk of accidents there acceptable as-is, but that doesn't mean driverless cars should be programmed to slalom through if presented with the same situation you were. As you said, it was luck that there wasn't oncoming traffic. It's also luck that the other driver didn't realize his mistake and try to accelerate. You could also have flipped, or spun out into a pole or into oncoming traffic that couldn't avoid you. So even if you came out of the situation cleanly this time, that doesn't mean it was necessarily the best decision to make. Either way, in your scenario, it doesn't require knowing anything beyond whether there is a clear alternative path and whether the maneuver is safe to perform. If both are true, avoid the collision. If either is not true, then brake and try to minimize the damage.
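    The rule in that last sentence can be sketched in a few lines. This is purely an illustrative toy, not code from any real autonomous-driving stack; the `Situation` fields and `collision_response` function are hypothetical names for the two conditions the comment describes:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Situation:
        clear_alternative_path: bool  # is there an open path around the obstacle?
        maneuver_is_safe: bool        # can the evasive maneuver be performed safely?

    def collision_response(s: Situation) -> str:
        """Decide the action for an imminent-collision scenario.

        Swerve only when a clear alternative path exists AND the maneuver
        is safe; in every other case, brake and try to minimize damage.
        """
        if s.clear_alternative_path and s.maneuver_is_safe:
            return "avoid"
        return "brake"

    print(collision_response(Situation(True, True)))    # avoid
    print(collision_response(Situation(True, False)))   # brake
    ```

    Note the rule never weighs who would be hit; it only checks whether a safe evasion exists, which is the point being argued.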

    Starting Score:    1  point
    Karma-Bonus Modifier   +1  

    Total Score:   2  
  • (Score: 2) by sjames on Saturday January 30 2021, @02:15AM (1 child)

    by sjames (2882) on Saturday January 30 2021, @02:15AM (#1106783) Journal

    The point is, neither I nor a future AI has any ability to control the other driver, the maintenance on the hedges, the design of blind side streets, oncoming traffic, etc. I had to make a decision based on what I could see then and there, given the already far-from-ideal decision the other driver had made.

    It's fortunate that I know how to take a car into and out of a slide, and lucky that the traffic allowed it. Had I T-boned the truck it would have been a very bad accident, with the light truck flipping up and over my hood (I was driving a '77 Olds). Mindlessly braking would probably have led to 3 fatalities (mine included). My next best option, had the oncoming traffic been closer, would have been to slide sideways into the truck, hitting it with my passenger side (I had no passengers).

    You can't have it both ways. You have now argued both that speed limits don't need to be lowered and that they do need to be lowered. Show me a stretch of road with no side streets and I'll show you a truck losing its load, a child riding a bicycle where he shouldn't, a jaywalking deer, and reality otherwise laughing maniacally and saying "hold my beer."

    • (Score: 2) by slinches on Saturday January 30 2021, @06:32AM

      by slinches (5049) on Saturday January 30 2021, @06:32AM (#1106824)

      My point was that if everyone is following the rules of the road, the laws already in place are pretty effective at preventing the need to decide whether to hit the baby or the grandma. The occurrence is rare enough that programming in different values based on characteristics of the people is not necessary. At most, the car should be deciding between the bushes and a pedestrian. That's a complicated enough problem without bringing in the morally questionable concept of placing the value of one person's life over another's.