
SoylentNews is people

posted by Fnord666 on Thursday January 28 2021, @10:04AM   Printer-friendly
from the dealer's-choice? dept.

Should a self-driving car kill the baby or the grandma? Depends on where you're from:

In 2014 researchers at the MIT Media Lab designed an experiment called Moral Machine. The idea was to create a game-like platform that would crowdsource people's decisions on how self-driving cars should prioritize lives in different variations of the "trolley problem." In the process, the data generated would provide insight into the collective ethical priorities of different cultures.

The researchers never predicted the experiment's viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.

A new paper published in Nature presents the analysis of that data and reveals how much cross-cultural ethics diverge on the basis of culture, economics, and geographic location.

[...] Awad hopes the results will also help technologists think more deeply about the ethics of AI beyond self-driving cars. "We used the trolley problem because it's a very good way to collect this data, but we hope the discussion of ethics don't stay within that theme," he said. "The discussion should move to risk analysis—about who is at more risk or less risk—instead of saying who's going to die or not, and also about how bias is happening." How these results could translate into the more ethical design and regulation of AI is something he hopes to study more in the future.

"In the last two, three years more people have started talking about the ethics of AI," Awad said. "More people have started becoming aware that AI could have different ethical consequences on different groups of people. The fact that we see people engaged with this—I think that that's something promising."

Journal Reference:
Edmond Awad, Sohan Dsouza, Richard Kim, et al. The Moral Machine experiment, Nature (DOI: 10.1038/s41586-018-0637-6)


Original Submission

 
  • (Score: 2) by slinches on Thursday January 28 2021, @08:40PM (14 children)

    by slinches (5049) on Thursday January 28 2021, @08:40PM (#1106276)

    No one has to train AI to make these decisions because real world problems don't follow the rules of a thought experiment. If there's time to classify the various objects on the path and determine their relative moral value or social utility, there should have been time to avoid killing anyone in the first place. Managing the circumstances and making sure your speed is slow enough to prevent such problems is a key part of safe driving. If any assignment of weighting the value of objects is needed, it shouldn't go any further than assigning a higher priority for avoidance of things that are likely to be pedestrians.

  • (Score: 0) by Anonymous Coward on Thursday January 28 2021, @10:19PM

    by Anonymous Coward on Thursday January 28 2021, @10:19PM (#1106312)

    ^

    Although with a fast enough computer it may be possible to classify objects and make a value judgment, it should never be a trolley problem: the AI should simply try to evade and slow down, even if that means colliding with another vehicle.

  • (Score: 2) by sjames on Thursday January 28 2021, @10:52PM (12 children)

    by sjames (2882) on Thursday January 28 2021, @10:52PM (#1106327) Journal

    Not necessarily. Speeding up the classification and judgement doesn't change the physics. Since we are doing train analogies, the engineer SEES the tanker truck stop on the tracks, but no matter how quickly he applies the brakes, the train will not stop before it hits the truck. The time to hit the brakes to stop short of the roadbed passed before the truck even got there. The correct "decision" to avoid collision is for the truck to not break down over the track, but that's not on offer. There really are cases where the AI can't avoid hitting something or someone, it's just a matter of what it decides to hit. It has more freedom to maneuver than the train, but it still can't escape momentum vs. the coefficient of friction.

    Kinda like those problems in high school physics where you must decide to accelerate or brake to clear the train tracks and it turns out there are no real roots for the resulting quadratic equation.
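    That "no real roots" case can be sketched numerically. All the numbers below (braking and acceleration limits, crossing width, warning time) are illustrative assumptions, not measured values; the point is just that there is a band of starting distances where the car can neither stop short of the tracks nor clear them in time.

    ```python
    # Dilemma-zone sketch with made-up numbers: a car at distance d (m)
    # from a level crossing of width L (m), moving at v (m/s), with T seconds
    # before the crossing is blocked.  Assumed limits: 7 m/s^2 braking,
    # 3 m/s^2 acceleration.

    def can_stop(v, d, a_brake=7.0):
        # Stopping distance v^2 / (2*a) must fit before the tracks.
        return v**2 / (2 * a_brake) <= d

    def can_clear(v, d, L, T, a_acc=3.0):
        # Distance covered in T seconds at full throttle must pass d + L.
        return v * T + 0.5 * a_acc * T**2 >= d + L

    v, L, T = 20.0, 10.0, 1.5
    for d in (20.0, 25.0, 35.0):
        print(f"d={d:4.0f} m  stop: {can_stop(v, d)}  clear: {can_clear(v, d, L, T)}")
    # At d=25 m both options fail -- the band where the quadratic has no real roots.
    ```

    From far enough away you can stop; from close enough you can clear; in between, neither inequality holds and physics has already decided there will be a collision.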

    • (Score: 2) by slinches on Friday January 29 2021, @12:16AM (10 children)

      by slinches (5049) on Friday January 29 2021, @12:16AM (#1106365)

      My point was that any driver (including an AI) should be driving at a speed where any unexpected obstacle in its path is either avoidable without collateral damage or the collision with the intruding obstacle is unavoidable. It doesn't matter how fast you can evaluate the value of the objects; you should never be in a situation that calls for that.

      • (Score: 2) by sjames on Friday January 29 2021, @12:31AM (9 children)

        by sjames (2882) on Friday January 29 2021, @12:31AM (#1106373) Journal

        So a universal 25 MPH speed limit? Not likely to happen. Society has decided otherwise, and the AI designers don't have the power to change that, so they're stuck with the trolley problem.

        • (Score: 2) by slinches on Friday January 29 2021, @01:00AM (8 children)

          by slinches (5049) on Friday January 29 2021, @01:00AM (#1106395)

          You don't need a 25mph speed limit everywhere. That's why there are different types of roadways with different designs, access, sight lines and speed limits.

          • (Score: 2) by sjames on Friday January 29 2021, @01:13AM (7 children)

            by sjames (2882) on Friday January 29 2021, @01:13AM (#1106404) Journal

            You do if you want a guarantee that the AI can ALWAYS avoid a serious collision. If slamming on the brakes will not bring the car to a stop nearly instantly, there is a non-zero chance of an unavoidable collision that could result in serious injury or fatality, including cases where a choice related to the trolley problem arises.

            • (Score: 2) by slinches on Friday January 29 2021, @03:10AM (6 children)

              by slinches (5049) on Friday January 29 2021, @03:10AM (#1106444)

              I didn't say it had to go slow enough to prevent any serious collision. Although, if it's going fast enough that it can make a choice of what to hit but not be able to stop in time, it's going too fast for the conditions.

              • (Score: 2) by sjames on Friday January 29 2021, @07:14AM (5 children)

                by sjames (2882) on Friday January 29 2021, @07:14AM (#1106509) Journal

                Yes, so 25 MPH for a human driver. Any faster and it is going fast enough that it may be unable to avoid a collision. Your reaction to the idea of a universal 25 MPH limit is exactly why, as a society, we have decided to accept some risk for expediency. While an AI might be safe at more like 30, that's about it.
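                The 25 MPH intuition can be checked against textbook stopping-distance physics. The 1.5 s reaction time and 0.7 friction coefficient below are common rule-of-thumb assumptions, not measurements:

                ```python
                # Total stopping distance: reaction-time travel plus braking
                # distance v^2 / (2 * mu * g).  Assumed: 1.5 s reaction,
                # mu = 0.7 (dry pavement).
                G = 9.81           # gravitational acceleration, m/s^2
                MPH_TO_MS = 0.44704

                def stopping_distance_m(speed_mph, reaction_s=1.5, mu=0.7):
                    v = speed_mph * MPH_TO_MS
                    return v * reaction_s + v**2 / (2 * mu * G)

                for mph in (25, 30, 45):
                    print(f"{mph} mph -> about {stopping_distance_m(mph):.0f} m to stop")
                # 25 mph needs roughly 26 m; 45 mph roughly 60 m.
                ```

                A near-zero reaction time only trims the first term, which is why an AI buys a few extra MPH at best rather than changing the picture.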

                • (Score: 2) by slinches on Friday January 29 2021, @03:44PM (4 children)

                  by slinches (5049) on Friday January 29 2021, @03:44PM (#1106611)

                  There's always a risk of an unavoidable collision based purely on physics. If an object enters the path so close in front of a moving vehicle that the vehicle cannot avoid hitting it, there's nothing to do except brake as hard as you can to minimize the damage. That is a fundamental risk of vehicle travel on public rights of way, one that is understood and mitigated to a socially acceptable degree by setting speed limits according to the design of the road and by the various traffic and access laws. So speed limits already account for those risks in balance with the efficiency and expediency of transportation.

                  What I was saying about the trolley-problem-like scenarios is that they should never be an issue if you are following the existing laws and adjusting speed for adverse conditions (including poor sight lines that would obscure objects entering the roadway).

                  • (Score: 2) by sjames on Friday January 29 2021, @08:00PM (3 children)

                    by sjames (2882) on Friday January 29 2021, @08:00PM (#1106681) Journal

                    The fact that there are auto accidents where only one driver is found to be at fault demonstrates otherwise. We can presume that the AI can obey all traffic laws and so not be at fault in an accident, but circumstances beyond its control (most, but not all, caused by a human driver in another car) may put it in a situation where a collision with something or someone is inevitable. Unlike a human driver, who may freeze up, simply not have time to react, or just stomp the brake and hope for the best because they don't have time to make a real decision, the AI will at least sometimes have a choice to make. That choice will be constrained by physics rather than time to think.

                    In many ways it will be more akin to a human piloting a large ship: plenty of time to think about it, but responses constrained by physical possibility. Maritime law is written with that assumption; traffic laws are not. A license to pilot a supertanker is also a lot harder to get than a driver's license. Of course, most of the time a supertanker operates on the open ocean with miles of visibility. They move much slower in other conditions.

                    Likewise, imagine if a train's top speed were constrained by its ability to stop short of the next road bed if it gets blocked by road traffic at any point.

                    Perhaps a ferinstance will help. One night I was traveling just below the speed limit (45 MPH) on a 4 lane road with good conditions and someone ran a stop sign, then stopped blocking both lanes in the direction I was traveling. By pure luck there was no traffic in the other direction so I managed to get into a lane for the other direction and slide past the offending pickup truck (missed it by about 2 feet), then regain control and get back in my lane before approaching traffic got too close. Note that it was sheer luck that the oncoming traffic was far away at that instant. Otherwise a collision was inevitable. To repeat, I was obeying all traffic laws in good conditions at the time and it was just by luck that the opposing lanes were open. It was a judgement call that I would be able to get back in my lane in time. What is it you think I was doing wrong?

                    • (Score: 2) by slinches on Friday January 29 2021, @10:39PM (2 children)

                      by slinches (5049) on Friday January 29 2021, @10:39PM (#1106727)

                      What is it you think I was doing wrong?

                      If you couldn't see the vehicle approaching the intersection clearly enough to tell that he wasn't going to stop in time, then the intersection should be examined to determine whether some hedges need to be cleared, whether different signage should be installed, and whether the speed limit should be lower in that area to ensure sufficient stopping distance. It may be that your area considers the risk of accidents there acceptable as-is, but that doesn't mean driverless cars should be programmed to slalom through if presented with the same situation you were. As you said, it was luck that there wasn't oncoming traffic. It's also luck that the other driver didn't realize his mistake and try to accelerate. You could also have flipped or spun out into a pole or into oncoming traffic that then couldn't avoid you. So even if you came out of the situation cleanly this time, that doesn't mean it was necessarily the best decision to make. Either way, in your scenario it doesn't require knowing anything beyond whether there is a clear alternative path and whether the maneuver is safe to perform. If both of those are true, avoid the collision. If either is not true, then brake and try to minimize the damage.

                      • (Score: 2) by sjames on Saturday January 30 2021, @02:15AM (1 child)

                        by sjames (2882) on Saturday January 30 2021, @02:15AM (#1106783) Journal

                        The point is, neither I nor a future AI has any ability to control the other driver, the maintenance of the hedges, the design of blind side streets, oncoming traffic, etc. I had to make a decision based on what I could see then and there, given the already far-from-ideal decision the other driver made.

                        It's fortunate that I know how to take a car into and out of a slide, and lucky that the traffic allowed it. Had I T-boned the truck, it would have been a very bad accident involving the light truck flipping up and over my hood (I was driving a '77 Olds). Mindlessly braking would probably have led to 3 fatalities (mine included). My next best option, had the oncoming traffic been closer, would have been to slide sideways into the truck, hitting it with my passenger side (I had no passengers).

                        You can't have it both ways. You have now argued both that speed limits don't need to be lowered and that they do. Show me a stretch of road with no side streets and I'll show you a truck losing its load, a child riding a bicycle where he shouldn't, a jaywalking deer, and reality otherwise laughing maniacally and saying hold my beer.

                        • (Score: 2) by slinches on Saturday January 30 2021, @06:32AM

                          by slinches (5049) on Saturday January 30 2021, @06:32AM (#1106824)

                          My point was that if everyone is following the rules of the road, the laws already in place are pretty effective at preventing the need to decide whether to hit the baby or the grandma. The occurrence is rare enough that programming in different values based on characteristics of the people is not necessary. At most it should be deciding between the bushes and a pedestrian. That's a complicated enough problem without bringing in the morally questionable concept of placing the value of one person's life over another's.

    • (Score: 2) by hendrikboom on Friday January 29 2021, @08:09PM

      by hendrikboom (1125) on Friday January 29 2021, @08:09PM (#1106684) Homepage Journal

      The correct "decision" to avoid collision is for the truck to not break down over the track, but that's not on offer.

      When I took my driving lessons in the 1960s I learned that in Manitoba it is against the law to change gears while crossing a railway track. I guess they were trying to prevent such problems.