Researchers propose a new approach for dismantling online hate networks
How do you get rid of hate speech on social platforms? Until now, companies have generally tried two approaches. One is to ban individual users who are caught posting abuse; the other is to ban the large pages and groups where people who practice hate speech organize and promote their noxious views.
But what if this approach is counterproductive? That's the argument in an intriguing new paper out today in Nature from Neil Johnson, a professor of physics at George Washington University, and researchers at GW and the University of Miami. The paper, "Hidden resilience and adaptive dynamics of the global online hate ecology," explores how hate groups organize on Facebook and Russian social network VKontakte — and how they resurrect themselves after platforms ban them.
As Noemi Derzsy writes in her summary in Nature:
Johnson et al. show that online hate groups are organized in highly resilient clusters. The users in these clusters are not geographically localized, but are globally interconnected by 'highways' that facilitate the spread of online hate across different countries, continents and languages. When these clusters are attacked — for example, when hate groups are removed by social-media platform administrators (Fig. 1) — the clusters rapidly rewire and repair themselves, and strong bonds are made between clusters, formed by users shared between them, analogous to covalent chemical bonds. In some cases, two or more small clusters can even merge to form a large cluster, in a process the authors liken to the fusion of two atomic nuclei. Using their mathematical model, the authors demonstrated that banning hate content on a single platform aggravates online hate ecosystems and promotes the creation of clusters that are not detectable by platform policing (which the authors call 'dark pools'), where hate content can thrive unchecked.
[...] The researchers advocate a four-step approach to reduce the influence of hate networks.
- Identify smaller, more isolated clusters of hate speech and ban those users instead.
- Instead of wiping out entire small clusters, ban small samples from each cluster at random. This would theoretically weaken the cluster over time without inflaming the entire hive.
- Recruit users opposed to hate speech to engage with members of the larger hate clusters directly. (The authors explain: "In our data, some white supremacists call for a unified Europe under a Hitler-like regime, and others oppose a united Europe. Similar in-fighting exists between hate-clusters of the KKK movement. Adding a third population in a pre-engineered format then allows the hate-cluster extinction time to be manipulated globally.")
- Identify hate groups with competing views and pit them against one another, in an effort to sow doubt in the minds of participants.
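The intuition behind the second recommendation can be sketched in a toy simulation. This is my own illustration, not the authors' mathematical model: cluster sizes, rejoin rates, and sampling fractions below are invented for the example. It assumes that banning a whole cluster provokes most members to re-register and rewire into a new cluster (the "hidden resilience" the paper describes), while quietly banning a small random sample from every cluster removes users without triggering that regrouping.

```python
def ban_largest(clusters, rejoin=0.9):
    """Remove the biggest cluster entirely; assume `rejoin` of its
    members re-register and coalesce into a replacement cluster.
    The 0.9 rejoin rate is an illustrative guess, not a fitted value."""
    c = sorted(clusters)
    survivors = c.pop() * rejoin   # banned cluster mostly reforms
    return c + [survivors]

def sample_each(clusters, sample=0.10, rejoin=0.2):
    """Ban a random `sample` fraction of every cluster; assume only
    `rejoin` of those banned return, since no single community is
    wiped out and provoked. Again, rates are hypothetical."""
    net_loss = sample * (1 - rejoin)
    return [c * (1 - net_loss) for c in clusters]

# Five hypothetical hate clusters, measured in members.
clusters_a = [500.0, 200.0, 120.0, 80.0, 40.0]
clusters_b = list(clusters_a)

for _ in range(10):  # ten rounds of moderation
    clusters_a = ban_largest(clusters_a)
    clusters_b = sample_each(clusters_b)

print(round(sum(clusters_a)))  # mass bans: membership persists via rewiring
print(round(sum(clusters_b)))  # random sampling: steadier global decline
```

Under these (made-up) assumptions, total membership after mass bans stays well above the sampling policy's, because each banned cluster largely reconstitutes itself. The real result in the paper comes from a fitted dynamical model of cluster merging and fission, not this arithmetic.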
Hidden resilience and adaptive dynamics of the global online hate ecology[$], Nature (DOI: 10.1038/s41586-019-1494-7)
(Score: 0) by Anonymous Coward on Tuesday August 27 2019, @05:39PM (2 children)
I don't think we need centralized moderation. I think we need decentralized clients with user accounts that carry reputation. Then people can down-mod or block accounts that post CP or other stuff people don't want to see. All content needs to be stored encrypted, and if you haven't chosen to open it, it stays encrypted. Similar to Storj.
(Score: 3, Interesting) by janrinok on Tuesday August 27 2019, @05:55PM (1 child)
Except you can't get a reliable reputation if everyone posts as AC, and if you use an ID number or something similar, you are no longer truly anonymous. People might not be able to associate you with an ID until they have access to your computer. Frost is the nearest thing I've seen in this regard: if you just want to transfer data anonymously you can, and if you want to go looking for something illegal you can do that too, but you are unlikely to stumble upon it unless you are looking for it.
The other problem is that, like Tor and other 'secure' systems, the traffic is identifiable unless you set it up in a specific way: nobody can see what the data means, but they can see that you are using a system designed to hide such information. To be popular it has to be easy to use straight out of the box. I suspect some governments would simply make the use of secure communications illegal, but that is a different discussion.
(Score: 1) by fustakrakich on Tuesday August 27 2019, @06:34PM
In this gossipy world, "reputation" is the worst possible way to judge people, as if that should even be allowed, much less encouraged.
I suspect some governments would simply make the use of secure communications illegal
The ISP is the weak link there. But as far as breaking down the walls goes, the "hate groups'" example of staying connected (ideology notwithstanding) is the one to follow.
Politics and criminals are the same thing.