Should a self-driving car kill the baby or the grandma? Depends on where you're from.
In 2014 researchers at the MIT Media Lab designed an experiment called Moral Machine. The idea was to create a game-like platform that would crowdsource people's decisions on how self-driving cars should prioritize lives in different variations of the "trolley problem." In the process, the data generated would provide insight into the collective ethical priorities of different cultures.
The researchers never predicted the experiment's viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.
A new paper published in Nature presents the analysis of that data and reveals how much ethical preferences diverge across cultures, economies, and geographic regions.
[...] Awad hopes the results will also help technologists think more deeply about the ethics of AI beyond self-driving cars. "We used the trolley problem because it's a very good way to collect this data, but we hope the discussion of ethics doesn't stay within that theme," he said. "The discussion should move to risk analysis—about who is at more risk or less risk—instead of saying who's going to die or not, and also about how bias is happening." How these results could translate into the more ethical design and regulation of AI is something he hopes to study more in the future.
"In the last two, three years more people have started talking about the ethics of AI," Awad said. "More people have started becoming aware that AI could have different ethical consequences on different groups of people. The fact that we see people engaged with this—I think that that's something promising."
Journal Reference:
Edmond Awad, Sohan Dsouza, Richard Kim, et al. The Moral Machine experiment. Nature (2018). DOI: 10.1038/s41586-018-0637-6
(Score: 5, Insightful) by bradley13 on Thursday January 28 2021, @02:37PM (10 children)
Honestly, I think this is all a non-issue. I understand that people like to philosophize about it, and I'm sure it makes ambulance-chasing lawyers salivate. But consider just how far down the priority list this really is:
How many of you have had an accident? Of those, how many involved serious bodily injury? Of those, how many were simply avoidable - i.e., were caused by human error? That's where the big win is, with self-driving cars.
How many people have ever been in an accident where they had a "trolley-problem" choice? I.e., an accident was unavoidable, someone was going to get hurt, and there was actually a chance to pick the victim? The number is vanishingly small, and that's not going to change with self-driving cars.
Everyone is somebody else's weirdo.
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @03:51PM (5 children)
You are right. This has nothing to do with tech and is just navel gazing, but look at the source:
"researchers at the MIT Media Lab".
The MIT Media Lab is an island of fluff and pontification in an otherwise hard science/engineering university. I think it's called the Media Lab because all their "work" is purposed towards getting attention from the popular media.
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @04:15PM (3 children)
> "researchers at the MIT Media Lab".
Media Lab just announced a new director, https://finance.yahoo.com/news/mit-media-lab-names-dava-190447292.html [yahoo.com]
Among other things, she designed one of the new generations of space suit.
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @04:44PM (2 children)
AND she's a woman! There really was never going to be another man picked to head the woke Media Lab.
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @07:04PM (1 child)
Aaaaand that explains the other quote
palpable disdain
(Score: 0) by Anonymous Coward on Friday January 29 2021, @04:20AM
Well we all know "women and children first" is not really how it works. That's what rich dicks say about once a year to prove they are gentlemen and the rest of the time it's underage rape on private islands am I even joking no.
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @07:01PM
Oh so I guess that explains the quote above
what a mellifluous turlingdrome
(Score: 2) by legont on Thursday January 28 2021, @11:23PM (2 children)
People who actually know how to drive - and I know and asked some - behave like this. I am just giving an example here:
Somebody or something human-sized runs across the way. When they are alone, they brake hard. If they have their children in the back seats, they don't brake at all but go for a controlled crash using the softest obstacle available. If it is a pedestrian, so be it. They don't want to be hit from behind and their children injured.
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
(Score: 2) by arslan on Friday January 29 2021, @04:44AM (1 child)
Realistically though, situations like these are almost always unexpected and sudden enough that most people will be driven by reflex, no matter what they would hypothetically do on paper.
Such thought experiments are fun, but they should never really be a prerequisite for self-driving cars. The default for a self-driving car should always be to brake in as short a time as possible while maintaining passenger safety - if it can't find an alternate course free of collision.
The minimum bar should also be that the computer's processing power surpasses that of a human in those split-second scenarios, and that its sensing of the surroundings surpasses that of a human with normal eyesight in all conditions. Otherwise, no self-driving car should be on the road if it has worse reaction and sensory capability than a normal healthy human.
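That default can be sketched in a few lines (a toy sketch only; the function name and the 9.0 m/s² figure are assumptions for illustration, not anything from a real system):

```python
def emergency_response(collision_free_courses, max_brake_decel=9.0):
    """Default policy: steer onto a collision-free course if one exists,
    otherwise brake as hard as passenger safety allows."""
    if collision_free_courses:
        return ("steer", collision_free_courses[0])
    return ("brake", max_brake_decel)

print(emergency_response([]))             # ('brake', 9.0)
print(emergency_response(["left lane"]))  # ('steer', 'left lane')
```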
(Score: 2) by The Mighty Buzzard on Friday January 29 2021, @04:12PM
See, you've shot your own argument down there. Reflex does not require much of any processing power, and you don't have much of any time to think during the moment. Which is why good drivers do not rely on thinking in the moment. They think ahead of time. They assume everyone else around them is an inattentive moron and act/plan accordingly.
My rights don't end where your fear begins.
(Score: 2) by The Mighty Buzzard on Friday January 29 2021, @04:06PM
No, but accountability will. There will be none unless the AI malfunctioned. And even then there will be no chance of anyone going to jail unless they knew it was defective ahead of time.
My rights don't end where your fear begins.
(Score: 2, Funny) by Anonymous Coward on Thursday January 28 2021, @02:41PM (1 child)
... the car needs to scan the facebook pages of the grandma and the baby. Whoever has more kitten videos wins.
(Score: 2) by looorg on Thursday January 28 2021, @02:57PM
So it will become sort of like a deathmatch then, they could stream it for profit!
(Score: 3, Interesting) by legont on Thursday January 28 2021, @02:42PM (3 children)
A driving AI sees a small animal running across the street. It estimates a 99% chance that it is a bear cub and a 1% chance that it is a human child. Emergency braking, it estimates, would also carry a 1% chance of killing the passengers (should it consider their ages as well?).
It decides not to brake and, as it happens, kills a child.
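The scenario reduces to a toy expected-harm comparison (every probability and harm weight below is invented for illustration; no real system is specified this way):

```python
# Toy expected-harm comparison for the scenario above.
# All probabilities and harm weights are invented for illustration.

def expected_harm_of_braking(p_kill_passengers, harm_passengers=1.0):
    # Emergency braking risks the passengers with some probability.
    return p_kill_passengers * harm_passengers

def expected_harm_of_not_braking(p_human, harm_pedestrian=1.0):
    # Not braking harms whatever is crossing; the bear-cub branch is
    # weighted at zero here, so only the human-child branch contributes.
    return p_human * harm_pedestrian

p_human = 0.01            # classifier: 1% chance the animal is a child
p_kill_passengers = 0.01  # braking: 1% chance of killing the passengers

brake = expected_harm_of_braking(p_kill_passengers) < expected_harm_of_not_braking(p_human)
print(brake)  # False -- the expected harms tie, and the tie-break goes against braking
```

With equal weights the two risks cancel out, which is exactly why the decision ends up hinging on how the harms are weighted rather than on the probabilities themselves.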
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @04:03PM
An old person who can hardly see any more plows into a bus stop, killing 20 people. Sad.
In other news, AR-15 sales are up 200% amid growing claims of stolen election.
(Score: 3, Informative) by Common Joe on Friday January 29 2021, @01:14PM (1 child)
Probabilities would be nice, but unfortunately they won't be there. Some types of AI work on probabilities and can give you reasons for their decisions, but neural networks cannot. Neural networks are trained on specific scenarios and cannot give a reason as to why they chose one thing over another. They simply know, from their previous training scenarios, that they should make a decision similar to the training they've had.
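A minimal sketch of the point: a network's softmax layer emits normalized scores that look like probabilities, but nothing in them explains why one class outranked another (the logits below are made-up numbers, not outputs of any real model):

```python
import math

def softmax(logits):
    # Normalize raw network outputs into scores that sum to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw outputs for the classes ["bear cub", "human child"].
logits = [4.6, 0.0]
scores = softmax(logits)
print(scores)  # roughly [0.99, 0.01] -- looks like a probability, carries no rationale
```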
(Score: 2) by legont on Friday January 29 2021, @10:26PM
No, that's not the case.
In general, a probability in real life arises as the result of averaging many deterministic outcomes. In the same way, neural network decisions may look like they are determined by training, but in reality they are a matter of chance over that training.
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
(Score: 3, Touché) by inertnet on Thursday January 28 2021, @03:14PM
What if grandma is the only caretaker for the baby? If Mr/Ms AI kills grandma, her grandchild will die as well.
(Score: 2) by Runaway1956 on Thursday January 28 2021, @03:35PM (6 children)
Just curious, really. Who has been there, and done that?
Abortion is the number one killer of children in the United States.
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @04:07PM
Well it's not the same but one time I had the choice to bag yo momma or yo sista. Tough call but yo momma promised anal so no choice really.
(Score: 2) by DannyB on Thursday January 28 2021, @05:59PM (4 children)
Exceedingly few people if any.
But the question reveals things about the ethical character of the person. Similar to asking whether they use vi or emacs.
If you think a fertilized egg is a child but an immigrant child is not, please don't pretend your concerns are religious
(Score: 1, Funny) by Anonymous Coward on Thursday January 28 2021, @06:32PM (2 children)
No, just no!
> Similar to asking whether they use vi or emacs.
We do car analogies here, text editor analogies are NOT welcome.
(Score: 2) by DannyB on Thursday January 28 2021, @06:53PM (1 child)
Trump, Jonestown, Hitler illustrate that people can become irrationally locked into a view which cannot be questioned. All rational thought is abandoned. The true way is the only way. You will observe this if you try talking to an emacs user.
If you think a fertilized egg is a child but an immigrant child is not, please don't pretend your concerns are religious
(Score: 0) by Anonymous Coward on Friday January 29 2021, @01:26AM
> You will observe this if you try talking to an emacs user.
Tried the rest,
stuck with the best,
...
...
Burma Shave!
(Started using emacs c.1976, lots to memorize, lots of productivity after that.)
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @08:36PM
But do you use vi, nvi, vim, neovim, or vis? PCRE? What about tabwidth?
(Score: 2) by DannyB on Thursday January 28 2021, @05:58PM (1 child)
Should you honk the horn to warn them? Or not?
Maybe they wouldn't be able to get out of the way, so why honk.
If you think a fertilized egg is a child but an immigrant child is not, please don't pretend your concerns are religious
(Score: 2) by tangomargarine on Thursday January 28 2021, @06:57PM
Because of the chance that they can?
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 1, Funny) by Anonymous Coward on Thursday January 28 2021, @06:17PM
Multi-track drifting!
(Score: 0) by Anonymous Coward on Thursday January 28 2021, @06:28PM
Wanna know which regions go which way? I RTFA so you don't have to.
The case for killing grandma is obvious, but RAH laid it out better than I can:
France, and to a lesser extent Greece, uphold that standard, with a strong preference to save the baby.
And the case for ambivalence -- i.e., deciding these cases on other factors -- is equally clear; you think those other factors, adornment, luxury or folly though they may be, are more relevant here. After all, we're talking about one car wreck, not a racial existential crisis; there's no call to dump them just yet.
You'll find most European countries (and the US, Canada, etc.) clustered around zero.
But what kind of insectoid culture would actively prefer killing babies to save the elderly?
I mean, when you put it like that it becomes pretty obvious, but yeah. The answer is asian bugmen.
Singapore less so, China and Taiwan more, but all the Asian countries listed are solidly on the save-grandma side, sacrificing the future for the past.
(Score: 2) by srobert on Friday January 29 2021, @01:47AM
Back in 1976, my 13-year-old self played quite a bit of Death Race. It was an arcade video game. In my experience it should kill the Grandma first, and then go back for the baby for the extra points.
(Score: 1, Touché) by Anonymous Coward on Friday January 29 2021, @07:51PM
They'll do whatever their manufacturers think will cost them the least. Look what happened with the 737 Max because Boeing wanted to save money.
The safe easy thing for a self-driving car to do is to try to stop "when stuff happens". Yeah, that means bad stuff can happen if it stops at a railway crossing for too long, but in most scenarios the occupants should be able to get out.
If the road is detected to be slippery, limit the speed and increase the braking-distance estimates. If the road is too icy for the tyres (wrong tyres for icy conditions), then don't even start moving. Getting sued for not moving is usually less costly than getting sued for moving and killing someone.
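That conservative policy can be sketched as a couple of rules (the friction thresholds and the 50 km/h cap are invented for illustration):

```python
# Sketch of the conservative least-liability policy described above.
# Friction thresholds and the speed cap are invented for illustration.

def drive_decision(road_friction, desired_speed_kmh):
    """Return (allowed_speed_kmh, braking_margin_factor), or None to refuse to move."""
    if road_friction < 0.15:           # too icy for the tyres: don't even start
        return None
    if road_friction < 0.5:            # slippery: cap speed, pad braking estimates
        return (min(desired_speed_kmh, 50), 2.0)
    return (desired_speed_kmh, 1.0)    # dry road: normal margins

print(drive_decision(0.1, 100))   # None -- refuse to move
print(drive_decision(0.3, 100))   # (50, 2.0) -- limited speed, doubled braking margin
print(drive_decision(0.8, 100))   # (100, 1.0)
```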