
SoylentNews is people

posted by chromas on Wednesday July 25 2018, @12:40PM   Printer-friendly
from the Oh-yeah?-Yeah! dept.

Averting Toxic Chats: Computer Model Predicts When Online Conversations Turn Sour

The internet offers the potential for constructive dialogue and cooperation, but online conversations too often degenerate into personal attacks. In hopes that those attacks can be averted, researchers have created a model to predict which civil conversations might take a turn and derail.

After analyzing hundreds of exchanges between Wikipedia editors, the researchers developed a computer program that scans for warning signs in the language used by participants at the start of a conversation -- such as repeated, direct questioning or use of the word "you" -- to predict which initially civil conversations would go awry.

Early exchanges that included greetings, expressions of gratitude, hedges such as "it seems," and the words "I" and "we" were more likely to remain civil, the study found.
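As a rough illustration of the idea, a toy scorer can count the surface cues the study mentions: politeness markers (greetings, gratitude, hedges, "I"/"we") versus warning signs (direct second-person address, repeated direct questioning). This is only a sketch loosely inspired by the description above; the researchers' actual model was a trained classifier over politeness strategies and rhetorical prompt types, not a simple cue counter, and the cue lists here are invented for demonstration.

```python
# Toy derailment scorer, illustrative only. Cue lists below are
# hypothetical examples, not the feature set from the actual paper.
CIVIL_CUES = ["hello", "hi ", "thanks", "thank you", "it seems",
              "i think", "perhaps", "maybe", "we "]
WARNING_CUES = ["you ", "your ", "why did you", "why do you"]

def derailment_score(opening_comment: str) -> float:
    """Return a rough risk score in [0, 1]; higher = more likely to derail.

    Counts warning cues as a fraction of all matched cues in the
    conversation's opening comment. Cues may overlap and are counted
    independently; this is deliberately simplistic.
    """
    text = " " + opening_comment.lower() + " "
    civil = sum(text.count(cue) for cue in CIVIL_CUES)
    warning = sum(text.count(cue) for cue in WARNING_CUES)
    total = civil + warning
    if total == 0:
        return 0.5  # no signal either way
    return warning / total

print(derailment_score("Hello, thanks for your edits. It seems we agree."))
print(derailment_score("Why did you delete my section? You keep reverting."))
```

The first example is dominated by politeness markers and scores low; the second, built from direct "you" questioning, scores high. A real system would learn these weights from labeled conversations rather than hard-coding cue lists.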

"We, as humans, have an intuition of whether a conversation is about to go awry, but it's often just a suspicion. We can't do it 100 percent of the time. We wonder if we can build systems to replicate or even go beyond this intuition," Danescu-Niculescu-Mizil[*] said.

The computer model, which also considered Google's Perspective, a machine-learning tool for evaluating "toxicity," was correct around 65 percent of the time. Humans guessed correctly 72 percent of the time.

[...] The study analyzed 1,270 conversations that began civilly but degenerated into personal attacks, culled from 50 million conversations across 16 million Wikipedia "talk" pages, where editors discuss articles or other issues. The researchers examined exchanges in pairs, comparing each conversation that ended badly with one that succeeded on the same topic, so the results weren't skewed by sensitive subject matter such as politics.

[*] Cristian Danescu-Niculescu-Mizil: assistant professor of information science and co-author of the paper Conversations Gone Awry: Detecting Early Signs of Conversational Failure. (pdf)

The technique sounds useful for non-internet conversations, too... is there an app for that?


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by Thexalon (636) on Wednesday July 25 2018, @05:58PM (#712527)

    One "side" is not very civil; they have no moral qualms with silencing someone who disagrees.

    That's total nonsense. I'm about as left-wing as they come: the sort who thinks Bernie Sanders doesn't go far enough, and I'm descended from a guy who was blacklisted in the 1950s for his communist connections. My karma is 50 most of the time. There are a bunch of right-wing folks whose karma is also 50 most of the time.

    Since you're an AC, I don't know what your actual posting history looks like, but when I've read at the -1 threshold, I've found that the comments modded into oblivion completely deserved it, for non-political reasons. If you want to be modded up, I suggest posting things that aren't stupid and are more interesting than repeating the slogans of whatever affiliation you've decided to give yourself.

    Here's an example of the kind of post that I would disagree with and probably reply to, but would never consider modding down:
    "While this ballot initiative sounds like a great idea, it's going to cause a budget problem because there's a $10 billion price tag and no obvious way to make it up."

    Here's an example of the kind of post that I would agree with, and would mod into oblivion:
    "Trump is T3H SuX00RR5 L0L!"

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    Starting Score:    1  point
    Karma-Bonus Modifier   +1  

    Total Score:   2