
posted by Fnord666 on Thursday October 06 2022, @08:27AM   Printer-friendly
from the we-can't-mod-up-together-with-suspicious-minds dept.

A person's distrust in humans predicts they will have more trust in artificial intelligence's ability to moderate content online, according to a recently published study. The findings, the researchers say, have practical implications for both designers and users of AI tools in social media.

"We found a systematic pattern of individuals who have less trust in other humans showing greater trust in AI's classification," said S. Shyam Sundar, the James P. Jimirro Professor of Media Effects at Penn State. "Based on our analysis, this seems to be due to the users invoking the idea that machines are accurate, objective and free from ideological bias."

The study, published in the journal New Media & Society, also found that "power users," who are experienced users of information technology, had the opposite tendency. They trusted the AI moderators less because they believe that machines lack the ability to detect the nuances of human language.

[...] "One of the reasons why some may be hesitant to trust content moderation technology is that we are used to freely expressing our opinions online. We feel like content moderation may take that away from us," said Maria D. Molina, an assistant professor of communication arts and sciences at Michigan State University, and the first author of this paper. [...]

"A major practical implication of the study is to figure out communication and design strategies for helping users calibrate their trust in automated systems," said Sundar, who is also director of Penn State's Center for Socially Responsible Artificial Intelligence. "Certain groups of people who tend to have too much faith in AI technology should be alerted to its limitations and those who do not believe in its ability to moderate content should be fully informed about the extent of human involvement in the process."

Journal Reference:
Molina, M. D., & Sundar, S. S. (2022). Does distrust in humans predict greater trust in AI? Role of individual differences in user responses to content moderation. New Media & Society. https://doi.org/10.1177/14614448221103534


Original Submission

  • (Score: 5, Insightful) by coolgopher on Thursday October 06 2022, @08:56AM (9 children)

    by coolgopher (1157) on Thursday October 06 2022, @08:56AM (#1275193)

    “But, at the end of the day, it’s about how we can help users calibrate their trust toward AI due to the actual attributes of the technology, rather than being swayed by those individual differences.”

    "We know the system is at least bad as humans, but you should trust it more anyway".

    Thanks, I'll pass.

    • (Score: 2) by RamiK on Thursday October 06 2022, @11:31AM (4 children)

      by RamiK (1813) on Thursday October 06 2022, @11:31AM (#1275219)

      An AI system that's as bad as humans today can be made better tomorrow at problems that would take years of retraining, or possibly even tens of thousands of years of evolution, to fix in humans. E.g., try to imagine how many driving decisions involve consecutive guess attempts; Monty Hall-style problems give us the impression it's all random when in fact we should be switching doors. An AI trained on those datasets doesn't fall into that mistake. It develops the switching strategy almost instantly and sticks to it in situations humans can't even identify. Another example is how reinforcement-learning systems will probably manage to pick, or possibly even develop, sorting algorithms better than humans in most cases, simply because we're too lazy to bother thinking through each scenario and optimizing appropriately.
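
      To make the Monty Hall point concrete, here's a minimal simulation (a Python sketch; the function name and trial count are arbitrary, not from the comment or the study) showing that an always-switch policy wins roughly twice as often as staying put:

          import random

          def monty_hall(trials=100_000):
              # Simulate Monty Hall: 3 doors, 1 prize; the host always
              # opens a door that is neither the player's pick nor the prize.
              stay_wins = switch_wins = 0
              for _ in range(trials):
                  prize = random.randrange(3)
                  choice = random.randrange(3)
                  # Host opens a door hiding no prize that the player didn't pick.
                  opened = next(d for d in range(3) if d != choice and d != prize)
                  # Switching means taking the one remaining unopened door.
                  switched = next(d for d in range(3) if d != choice and d != opened)
                  stay_wins += (choice == prize)      # staying wins iff first pick was right
                  switch_wins += (switched == prize)  # switching wins iff first pick was wrong
              print(f"stay:   {stay_wins / trials:.3f}")   # ~0.333
              print(f"switch: {switch_wins / trials:.3f}")  # ~0.667

          monty_hall()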

      TL;DR AI systems have more potential for improvement than humans.

      --
      compiling...
      • (Score: 3, Insightful) by Freeman on Thursday October 06 2022, @04:08PM (1 child)

        by Freeman (732) on Thursday October 06 2022, @04:08PM (#1275259) Journal

        General AI is an NP-hard problem, as it were. You can train a kid to think critically and they will be able to make assumptions that are correct, where a program can't. Drinking the AI Kool-Aid is a very bad thing to do. AI is a nice tool and can be useful in some ways, like how a chainsaw is more efficient than a two-man saw from the 1800s.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
        • (Score: 2) by RamiK on Thursday October 06 2022, @06:42PM

          by RamiK (1813) on Thursday October 06 2022, @06:42PM (#1275293)

          General AI is an NP-hard problem, as it were.

          No, it's not. Well, not unless you mean it in the metaphorical sense of "hard to achieve"... But then, what does general AI have to do with AI systems like language models, reinforcement learning, or self-driving? Those are perfectly achievable, with many of them already baked into real-world products that outperform humans in many tasks.

          You can train a kid to think critically and they will be able to make assumptions that are correct, where a program can't.

          And how does the fact that humans can think (critically, whatever that's supposed to mean...) where computers can't justify not using computers and/or machine learning? Does the fact that a human can count mean accountants shouldn't use spreadsheets?

          --
          compiling...
      • (Score: 0) by Anonymous Coward on Friday October 07 2022, @03:45AM (1 child)

        by Anonymous Coward on Friday October 07 2022, @03:45AM (#1275367)

        Found the one who distrusts other humans. If you don't trust other humans, how can you trust yourself?

        • (Score: 2) by RamiK on Friday October 07 2022, @09:05PM

          by RamiK (1813) on Friday October 07 2022, @09:05PM (#1275467)

          Trust isn't binary or generic: I trust machines to do numbers and follow heuristics more than I trust humans to do them. And I trust humans to go beyond numbers and heuristics more than I trust machines.

          Regardless, whether it's machines, humans, or myself, I don't trust; I verify. Does this software work? Let's run some tests. Is that doctor right? Let's get a second opinion. Do I remember this fact correctly? Let me google it...

          On top of that, to trust the machine is to trust its creator by extension. So, when you put it all together, you realize the question itself is worded so poorly that it would take a 600-word essay just to flesh it out.

          --
          compiling...
    • (Score: 2) by helel on Thursday October 06 2022, @03:59PM (1 child)

      by helel (2949) on Thursday October 06 2022, @03:59PM (#1275253)

      I feel like your take is an intentionally bad one. "Certain groups of people who tend to have too much faith in AI technology should be alerted to its limitations" does not suggest to me that the authors think we should have blind faith in AI. Quite the opposite: they think we should place an appropriate amount of faith in its abilities, the same as we would in any other tool. If someone thinks an airplane can take them to the moon, they need their expectations redefined, just like someone who thinks it cannot take them to another city.

      • (Score: 2) by coolgopher on Friday October 07 2022, @12:39AM

        by coolgopher (1157) on Friday October 07 2022, @12:39AM (#1275335)

        Ah, I see. There's another way of reading what I quoted. I initially read it as an unconditional "calibrate people towards trusting AI more", rather than "calibrate the level of trust in AI". Guess I'm getting old and (more) cynical.

    • (Score: 2) by Reziac on Friday October 07 2022, @02:10AM (1 child)

      by Reziac (2489) on Friday October 07 2022, @02:10AM (#1275349) Homepage

      And as I recall from various similar studies, people who distrust other humans are also more likely to trust the government.

      Basically, trusting something that is Not People.

      --
      And there is no Alkibiades to come back and save us from ourselves.
      • (Score: 0) by Anonymous Coward on Friday October 07 2022, @03:49AM

        by Anonymous Coward on Friday October 07 2022, @03:49AM (#1275368)

        I think you've got that bass ackwards.

  • (Score: 5, Insightful) by Rosco P. Coltrane on Thursday October 06 2022, @10:14AM (1 child)

    by Rosco P. Coltrane (4757) on Thursday October 06 2022, @10:14AM (#1275202)

    Until I encountered an automated support agent for the first time. Since then, human beings don't look so bad after all...

    • (Score: 3, Insightful) by RedGreen on Thursday October 06 2022, @05:46PM

      by RedGreen (888) on Thursday October 06 2022, @05:46PM (#1275284)

      "Until I encountered an automated support agent for the first time. Since then human beings don't look so bad after all..."

      Indeed, the first thing I do on encountering those things is to see if I can find the talk-to-a-human option, because I know my issue will never be resolved by some fucking useless machine. At least with a person you have a slim hope in hell of getting it done.

      --
      "I modded down, down, down, and the flames went higher." -- Sven Olsen
  • (Score: 5, Insightful) by looorg on Thursday October 06 2022, @10:17AM (3 children)

    by looorg (578) on Thursday October 06 2022, @10:17AM (#1275203)

    That doesn't make sense. So you don't trust humans, but you trust something some of those humans programmed? Or do they believe that the AI wrote itself in an objective and perfect manner?

    Sort of like how the "power users" don't trust them, or trust them somewhat less, because they believe the AI somehow lacks nuance when it comes to certain aspects of its programming. So they know, or suspect, that something is wrong. Therefore, less trust.

    • (Score: 3, Informative) by Gaaark on Thursday October 06 2022, @10:29AM (1 child)

      by Gaaark (41) on Thursday October 06 2022, @10:29AM (#1275207) Journal

      Yeah, I don't trust humans and I don't trust AI.

      Methinks the article is making generalizations: like when we ASSUME, making generalizations.....

      --
      --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
      • (Score: 3, Informative) by mhajicek on Thursday October 06 2022, @06:35PM

        by mhajicek (51) on Thursday October 06 2022, @06:35PM (#1275292)

        And I don't trust the humans who control the AI.

        --
        The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
    • (Score: 4, Touché) by SomeGuy on Thursday October 06 2022, @12:16PM

      by SomeGuy (5632) on Thursday October 06 2022, @12:16PM (#1275223)

      Exactly. "AI"s are software tools written by HUMANS. They do exactly and precisely what humans tell them to do. Which, more often than not these days, is to disguise human evildoing.

      "Oh, no, upper management didn't ask for their AI robot puppet to do this evil thing, it just did it all on it's oooooowwwwwnnn." Sure.

      The article was probably written by an AI salesman or salesbot.

  • (Score: 2) by RedGreen on Thursday October 06 2022, @05:43PM

    by RedGreen (888) on Thursday October 06 2022, @05:43PM (#1275282)

    I do not trust either, though my trust in people would most certainly be higher than in some goddamn machine.

    --
    "I modded down, down, down, and the flames went higher." -- Sven Olsen
  • (Score: 2, Touché) by Sjolfr on Thursday October 06 2022, @11:40PM

    by Sjolfr (17977) on Thursday October 06 2022, @11:40PM (#1275328)

    Someone should compare people's gullibility as it relates to trusting AI.

  • (Score: 3, Interesting) by lentilla on Friday October 07 2022, @12:39AM

    by lentilla (1770) on Friday October 07 2022, @12:39AM (#1275336)

    It's all about how your mind works, how you classify "information", how you filter incoming data and how it gets incorporated. Humans spout a great deal of crap. They are also full of wisdom and lived experience, only some of which is relevant to "me".

    So the most advanced humans take everything with a proverbial grain of salt, but learning which is the chaff and which is the grain takes loads of experience and a willingness to accept that we were in error and adapt accordingly.

    People tend to fall on a spectrum between rigid and chaotic. Who hasn't wished things behaved in a predictable manner, where it rained only on Tuesday mornings after 10am?

    My favourite example: Cooking For Engineers [cookingforengineers.com] where ingredients are measured in grams, cooking time in seconds and the exact volumetric size of required bowls is specified in advance!

    Of course, that also leads to people following their GPS navigation off the end of a non-existent bridge.
