From a recent Scientific Reports paper:
Online debates are often characterised by extreme polarisation and heated discussions among users. The presence of hate speech online is becoming increasingly problematic, making necessary the development of appropriate countermeasures. In this work, we perform hate speech detection on a corpus of more than one million comments on YouTube videos through a machine learning model, trained and fine-tuned on a large set of hand-annotated data.
Our analysis shows that there is no evidence of the presence of "pure haters", meant as active users posting exclusively hateful comments. Moreover, coherently with the echo chamber hypothesis, we find that users skewed towards one of the two categories of video channels (questionable, reliable) are more prone to use inappropriate, violent, or hateful language within their opponents' community.
Interestingly, users loyal to reliable sources use on average a more toxic language than their counterpart. Finally, we find that the overall toxicity of the discussion increases with its length, measured both in terms of the number of comments and time. Our results show that, coherently with Godwin's law, online debates tend to degenerate towards increasingly toxic exchanges of views.
Journal Reference:
M. Cinelli, A. Pelicon, I. Mozetič, et al. Dynamics of online hate and misinformation. Sci Rep 11, 22083 (2021).
DOI: 10.1038/s41598-021-01487-w
(Score: 2) by Reziac on Friday November 19 2021, @02:48AM
"You should not examine legislation in the light of the benefits it will convey if properly administered, but in the light of the wrongs it would do and the harm it would cause if improperly administered."
-- Lyndon B. Johnson, 36th President of the United States
And there is no Alkibiades to come back and save us from ourselves.