From a recent Scientific Reports paper:
Online debates are often characterised by extreme polarisation and heated discussions among users. The presence of hate speech online is becoming increasingly problematic, making it necessary to develop appropriate countermeasures. In this work, we perform hate speech detection on a corpus of more than one million comments on YouTube videos through a machine learning model, trained and fine-tuned on a large set of hand-annotated data.
Our analysis shows that there is no evidence of the presence of "pure haters", meant as active users posting exclusively hateful comments. Moreover, coherently with the echo chamber hypothesis, we find that users skewed towards one of the two categories of video channels (questionable, reliable) are more prone to use inappropriate, violent, or hateful language within their opponents' community.
Interestingly, users loyal to reliable sources on average use more toxic language than their counterparts. Finally, we find that the overall toxicity of the discussion increases with its length, measured both in terms of the number of comments and time. Our results show that, coherently with Godwin's law, online debates tend to degenerate towards increasingly toxic exchanges of views.
Journal Reference:
M. Cinelli, A. Pelicon, I. Mozetič, et al. Dynamics of online hate and misinformation. Sci Rep 11, 22083 (2021).
DOI: 10.1038/s41598-021-01487-w
(Score: 2) by darkfeline on Thursday November 18 2021, @09:13PM (2 children)
Dismissiveness != toxicity. Your argument falls flat. If it is simply a case of dealing with imbeciles, you would just ignore them.
(Score: 2) by sjames on Thursday November 18 2021, @09:45PM
Wow! That didn't sound at all desperate!
(Score: 0) by Anonymous Coward on Friday November 19 2021, @04:59PM
I mean, it would be great to ignore imbeciles, but the problem is they keep getting elected, making them much more dangerous to ignore.