
SoylentNews is people

posted by chromas on Wednesday July 25 2018, @12:40PM   Printer-friendly
from the Oh-yeah?-Yeah! dept.

Averting Toxic Chats: Computer Model Predicts When Online Conversations Turn Sour

The internet offers the potential for constructive dialogue and cooperation, but online conversations too often degenerate into personal attacks. In hopes that those attacks can be averted, researchers have created a model to predict which civil conversations might take a turn and derail.

After analyzing hundreds of exchanges between Wikipedia editors, the researchers developed a computer program that scans for warning signs in the language used by participants at the start of a conversation -- such as repeated, direct questioning or use of the word "you" -- to predict which initially civil conversations would go awry.

Early exchanges that included greetings, expressions of gratitude, hedges such as "it seems," and the words "I" and "we" were more likely to remain civil, the study found.
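As a rough illustration of the kind of signal the researchers describe, here is a minimal sketch that scans a conversation's opening comment for the cues mentioned above. The cue lists and the scoring are hypothetical simplifications for this example; the actual model uses trained classifiers over politeness-strategy and rhetorical features, not simple substring counts.

```python
# Hypothetical cue lists based on the markers described in the article;
# the paper's real features come from a much richer politeness lexicon.
CIVIL_CUES = ["thanks", "thank you", "it seems", "i think", "we ", "hello"]
AWRY_CUES = ["you ", "your ", "why did you", "why do you"]

def cue_score(opening_comment: str) -> int:
    """Crude score: positive leans civil, negative leans toward derailing.

    This is an illustrative sketch, not the model from the paper."""
    text = opening_comment.lower()
    civil = sum(text.count(cue) for cue in CIVIL_CUES)
    awry = sum(text.count(cue) for cue in AWRY_CUES)
    return civil - awry

print(cue_score("Thanks for the edit! I think it seems fine."))  # leans civil
print(cue_score("Why did you revert my change? You broke it."))  # leans awry
```

Even this toy version shows why "you"-heavy, direct questioning scores differently from hedged, first-person phrasing.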

"We, as humans, have an intuition of whether a conversation is about to go awry, but it's often just a suspicion. We can't do it 100 percent of the time. We wonder if we can build systems to replicate or even go beyond this intuition," Danescu-Niculescu-Mizil[*] said.

The computer model, which also considered Google's Perspective, a machine-learning tool for evaluating "toxicity," was correct around 65 percent of the time. Humans guessed correctly 72 percent of the time.

[...] The study analyzed 1,270 conversations that began civilly but degenerated into personal attacks, culled from 50 million conversations across 16 million Wikipedia "talk" pages, where editors discuss articles or other issues. They examined exchanges in pairs, comparing each conversation that ended badly with one that succeeded on the same topic, so the results weren't skewed by sensitive subject matter such as politics.
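The paired setup above implies a simple evaluation: given two same-topic conversations, one of which later derailed, does the scorer rank the derailed one higher? Chance level is 50 percent, which is the baseline the 65 and 72 percent figures should be read against. A minimal sketch of that evaluation, with a made-up scorer and toy data (function and variable names are assumptions, not from the paper):

```python
import random

def paired_accuracy(pairs, score_fn):
    """Fraction of same-topic pairs where score_fn ranks the conversation
    that later derailed above the one that stayed civil. Chance is 0.5."""
    correct = 0
    for derailed_opening, civil_opening in pairs:
        if score_fn(derailed_opening) > score_fn(civil_opening):
            correct += 1
        elif score_fn(derailed_opening) == score_fn(civil_opening):
            correct += random.random() < 0.5  # break ties with a coin flip
    return correct / len(pairs)

# Toy illustration with a made-up scorer that counts "you":
toy_pairs = [("you you you", "thanks, it seems fine"),
             ("why did you do that", "i think we agree")]
print(paired_accuracy(toy_pairs, lambda t: t.count("you")))  # → 1.0
```

Pairing each derailed conversation with a civil one on the same topic is what keeps topic sensitivity (e.g. politics) from inflating the accuracy numbers.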

[*] Cristian Danescu-Niculescu-Mizil: assistant professor of information science and co-author of the paper Conversations Gone Awry: Detecting Early Signs of Conversational Failure. (pdf)

The technique sounds useful for non-internet conversations, too... is there an app for that?


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by looorg on Wednesday July 25 2018, @01:01PM (6 children)

    by looorg (578) on Wednesday July 25 2018, @01:01PM (#712299)

    The computer model, which also considered Google's Perspective, a machine-learning tool for evaluating "toxicity," was correct around 65 percent of the time. Humans guessed correctly 72 percent of the time.

    Google has a perspective? Really? What exactly is Google's perspective?

    There apparently was a quiz in the article one could try, to see if one was like the other humans, as good as Google, or just guessing wildly.

    In this task, you will be shown 15 pairs of conversations. For each conversation, you will only get to see the first two comments in the conversation. Your job is to guess, based on these conversation starters, which conversation is more likely to eventually lead to a personal attack from one of the two initial users.

    http://awry.infosci.cornell.edu/ [cornell.edu]

    That all said and done, I took the test. Apparently I'm worse than Google at detecting potentially toxic conversations, not by much, but still just below.
    OK so I took the test ... I'm worse the Google at detecting when things about to go toxic apparently ...

  • (Score: 3, Informative) by looorg on Wednesday July 25 2018, @01:04PM

    by looorg (578) on Wednesday July 25 2018, @01:04PM (#712300)

    I'm also apparently shit at editing my posts before hitting submit ...

  • (Score: 2) by takyon on Wednesday July 25 2018, @01:17PM (3 children)

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday July 25 2018, @01:17PM (#712306) Journal

    I got a whopping 7 of 12. Who would have thought that begging for a "barnstar" on Wikipedia would lead to a personal attack?

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Wednesday July 25 2018, @02:49PM (1 child)

      by Anonymous Coward on Wednesday July 25 2018, @02:49PM (#712382)

      It's wikipedia. Toxicity is expected.

    • (Score: 2) by looorg on Wednesday July 25 2018, @03:48PM

      by looorg (578) on Wednesday July 25 2018, @03:48PM (#712428)

      That was the same score I got. At first I was somewhat curious that it said out of 12 when the initial information said 15 questions, but I think the first 3 didn't count; it said warm-up or something like that next to the question number. I see it as at least being one better than pure coin-flipping. On the other hand, the difference between 7/12 and 8/12 is only about 8%, which means that Google was probably just two better than the coin flip and the humans just one better than Google. Thought of like that, there is really only a minor difference between the two.

  • (Score: 2, Interesting) by Anonymous Coward on Wednesday July 25 2018, @01:28PM

    by Anonymous Coward on Wednesday July 25 2018, @01:28PM (#712313)

    Google has a perspective? Really? What exactly is Google's perspective?

    Remember Tay?

    Perspective is an AI whose idea of "toxicity" was defined by Muslim Brotherhood / Democratic Party operatives Wajahat Ali and Shahed Amanullah [archive.is] with the assistance of Randi Harper, Anita Sarkeesian, Zoe Quinn, etc. It's a long thread, so search for Google and you will see them. This is being done at the request of the state of Qatar and its American operatives. [brookings.edu] All of the big corporate media have signed up to use it to clean out their comments sections.