
posted by Fnord666 on Thursday October 06 2022, @08:27AM
from the we-can't-mod-up-together-with-suspicious-minds dept.

A person's distrust in humans predicts they will have more trust in artificial intelligence's ability to moderate content online:

A person's distrust in humans predicts they will have more trust in artificial intelligence's ability to moderate content online, according to a recently published study. The findings, the researchers say, have practical implications for both designers and users of AI tools in social media.

"We found a systematic pattern of individuals who have less trust in other humans showing greater trust in AI's classification," said S. Shyam Sundar, the James P. Jimirro Professor of Media Effects at Penn State. "Based on our analysis, this seems to be due to the users invoking the idea that machines are accurate, objective and free from ideological bias."

The study, published in the journal New Media & Society, also found that "power users," experienced users of information technology, had the opposite tendency: they trusted the AI moderators less because they believe machines lack the ability to detect the nuances of human language.

[...] "One of the reasons why some may be hesitant to trust content moderation technology is that we are used to freely expressing our opinions online. We feel like content moderation may take that away from us," said Maria D. Molina, an assistant professor of communication arts and sciences at Michigan State University, and the first author of this paper. [...]

"A major practical implication of the study is to figure out communication and design strategies for helping users calibrate their trust in automated systems," said Sundar, who is also director of Penn State's Center for Socially Responsible Artificial Intelligence. "Certain groups of people who tend to have too much faith in AI technology should be alerted to its limitations and those who do not believe in its ability to moderate content should be fully informed about the extent of human involvement in the process."

Journal Reference:
Molina, M. D., & Sundar, S. S. (2022). Does distrust in humans predict greater trust in AI? Role of individual differences in user responses to content moderation. New Media & Society. https://doi.org/10.1177/14614448221103534


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Insightful) by RedGreen (888) on Thursday October 06 2022, @05:46PM (#1275284)

    "Until I encountered an automated support agent for the first time. Since then human beings don't look so bad after all..."

    Indeed, the first thing I do on encountering those things is see if I can find the "talk to a human" option, because I know my issue will never be resolved by some fucking useless machine. At least with a person you have a slim hope in hell of getting it done.

    --
    "I modded down, down, down, and the flames went higher." -- Sven Olsen