
SoylentNews is people

posted by martyb on Monday May 29 2017, @06:55AM   Printer-friendly
from the deadly-trolleys dept.

A new study suggests that smartphone users may be more apt to employ utilitarian reasoning in resolving moral problems, rather than adhering to absolute moral principles.

The study, which is published in Computers in Human Behavior, is one of the first studies into the impact of the digital age on moral judgments, and suggests that moral judgments depend on the digital context in which a dilemma is presented and could have significant implications for how we interact with computers.

To investigate how moral judgments are affected by smartphones and PCs, the researchers recruited 1,010 people and presented them with a classic moral dilemma known as the 'Trolley Problem'.

The Trolley Problem typically involves a runaway trolley that will kill a certain number of people on the tracks, unless some action is taken. (It has recently come to broader attention in discussions of the ethics of autonomous vehicles.) In the original version, a switch is present that will allow the trolley to be diverted; but in doing so, it will kill an otherwise innocent bystander who is on the diversion track. In the so-called "fat man" variant, the dilemma allows the possibility of pushing an obese man in front of the trolley to stop it and save a larger number of people down the line.

Before reading further, stop for a moment to think of what you would do.

Studies generally show that many people use utilitarian reasoning and flip the switch in the first scenario to save the larger number of people. But fewer people in studies are generally willing to push the fat man onto the tracks. Philosophers consider this latter response to be a type of deontological reasoning, which values a moral principle above utilitarian calculations (i.e., it is wrong to murder someone, even to save others).

In the new study, participants were required to have both a smartphone and PC to participate. They were randomly assigned to use one or the other for the experiment. There was no statistically significant difference between their responses for the "switch" scenario to the trolley problem (80.9% for the smartphone users vs. 76.9% for the PC users), but a significantly larger number of smartphone users were willing to sacrifice the fat man (33.5% vs. 22.3% for PC users). When under time pressure in a follow-up experiment with 250 new participants, the fat man scenario difference increased (45.7% for smartphone users vs. 20% for PC users).
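For readers curious what "statistically significant" likely means here, a standard way to compare two proportions like these is a two-proportion z-test. The sketch below is illustrative only: the summary does not state the per-group sizes, so a roughly even split of the 1,010 participants (~505 per group) is an assumption, and the counts are back-computed from the reported percentages.

```python
# Hedged sketch: a two-proportion z-test of the kind that could underlie the
# "statistically significant" claims above. Group sizes (~505 each) are an
# assumption, not the study's reported numbers.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: p1 == p2, using the pooled estimate."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion under H0
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Counts back-computed from the reported percentages, assuming ~505 per group.
z_fat, p_fat = two_proportion_z(round(0.335 * 505), 505, round(0.223 * 505), 505)
z_switch, p_switch = two_proportion_z(round(0.809 * 505), 505, round(0.769 * 505), 505)
print(f"fat man: z = {z_fat:.2f}, p = {p_fat:.4f}")
print(f"switch:  z = {z_switch:.2f}, p = {p_switch:.4f}")
```

Under these assumed group sizes, the "fat man" gap (33.5% vs. 22.3%) comes out clearly significant while the "switch" gap (80.9% vs. 76.9%) does not, which matches the pattern the summary describes.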

Dr Albert Barque-Duran, a researcher from the Department of Psychology at City, University of London and lead author of the study, said:

"What we found in our study is that when people used a smartphone to view classic moral problems, they were more likely to make more unemotional, rational decisions when presented with a highly emotional dilemma. This could be due to the increased time pressures often present with smartphones and also the increased psychological distance which can occur when we use such devices compared to PCs.

"Due to the fact that our social lives, work and even shopping takes place online, it is important to think about how the contexts where we typically face ethical decisions and are asked to engage in moral behaviour have changed, and the impact this could have on the hundreds of millions of people who use such devices daily."

Perhaps due to the lead author's characterization of utilitarian reasoning as "rational," a number of news outlets have portrayed the study as concluding that smartphone users are "more rational." (See, for example, coverage at The Daily Mail and Engadget.) However, the conclusion of the full study challenges that idea, noting that the enhanced distinction for smartphone users under time pressure does not accord with the theory that avoiding killing the fat man is only a quick "gut reaction" governed by emotions.

Alternatively, in the past some have argued that trolley problem research is flawed anyway because many respondents find the scenarios silly and may not take them seriously.

Link to original study


Original Submission

 
  • (Score: 3, Insightful) by theluggage on Monday May 29 2017, @01:48PM (3 children)

    by theluggage (1797) on Monday May 29 2017, @01:48PM (#517129)

    Alternatively, in the past some have argued that trolley problem research is flawed anyway because many respondents find the scenarios silly and may not take them seriously.

    You don't say?

    Silly because you are given oracular knowledge of the situation and the outcome of your actions. To take the action that saves the maximum number of lives is an absolute no-brainer when you have perfect knowledge of what the outcome would be. In real life, the problem will almost always be whether you're sure of the facts - what if you're wrong?

    Heck, the problem is literally "on rails" - what better metaphor for absolute determinism could you wish for?

    (In the autonomous vehicle case, the *correct* answer is probably always going to be "keep control of the vehicle at all costs", because the computer has very limited understanding of the surroundings and a ton of tumbling, burning metal is guaranteed to make any situation worse).

  • (Score: 2) by AthanasiusKircher on Monday May 29 2017, @03:22PM (2 children)

    by AthanasiusKircher (5291) on Monday May 29 2017, @03:22PM (#517169) Journal

    To take the action that saves the maximum number of lives is an absolute no-brainer when you have perfect knowledge of what the outcome would be.

    And yet, people make very different choices depending on how the problem is framed, e.g., studies consistently show a large majority tend to say to throw the switch, but only a small minority say deliberately killing a person is okay (even in more realistic scenarios than the "fat man"). Even though the outcomes are the same, a large number of people seem to perceive a moral difference.

    • (Score: 2) by theluggage on Monday May 29 2017, @04:30PM (1 child)

      by theluggage (1797) on Monday May 29 2017, @04:30PM (#517201)

      Although Mr Logic says pulling the switch and sending the train to flatten the innocent bystander would be just as much "deliberately killing someone" as pushing the fat man, I don't think it's particularly profound or surprising that people claim to be more reluctant to kill someone with their own hands than to kill someone by proxy.

      Heck, I wouldn't be surprised if a court decided that the two cases represented different "degrees" of homicide!

      However, my argument is that in any "real world" situation, all these considerations are going to be swamped by the uncertainty of what the outcome of each action is likely to be and the fear of killing an innocent person in vain.

      The Autonomous Car versions are usually dreamed up by someone who has read way too much Asimov and has a completely unrealistic view of the ability of a simple machine learning system to "understand" a situation and predict consequences.

      • (Score: 2) by AthanasiusKircher on Monday May 29 2017, @11:31PM

        by AthanasiusKircher (5291) on Monday May 29 2017, @11:31PM (#517371) Journal

        I'm not disagreeing that uncertainty is a significant issue. I think part of the problem is that these scenarios are taken out of the context of a much larger argument that was trying to home in on how our moral "intuitions" work in different contexts that seem somewhat "equivalent" on the surface. It isn't just the fear of potentially killing someone "in vain" that's the issue here. If you really want to know the context, here's one of the original articles [ucsd.edu] that proposed these "thought experiments."