
Hate O'Meter for Online Speech

Accepted submission by RandomFactor at 2019-12-31 19:59:20 from the COMMUNITY IDENTITY STABILITY dept.
Digital Liberty

University of Cambridge researchers are hoping to launch technology that blocks online "hate speech" [campusreform.org] similar to how an antivirus program stops malicious code.

Thanks to researchers at the University of Cambridge, the largest social media companies in the world may soon have the ability to preemptively quarantine content classified by an algorithm as “hate speech.” On October 14, 2019, researcher Stefanie Ullmann and professor Marcus Tomalin published a proposal in the journal Ethics and Information Technology [springer.com] promoting an invention that they claim could accomplish this goal without infringing on individuals' free-speech rights. Their proposal describes software that uses an algorithm to identify "hate speech" in much the same way an antivirus program detects malware. It would then be up to the viewer of such content to either leave it in quarantine or view it.

The basic premise is that online "hate speech" causes harm in its own way, much like physical, emotional, or financial harm, and that social media companies should therefore intercept it before it does that harm, rather than reviewing it after the fact.

Tomalin's proposal would use a sophisticated algorithm that evaluates not just the content itself, but all content posted by the user, to determine whether a post might qualify as "hate speech". If not classified as potential "hate speech", the post appears in the social media feed like any other. If the algorithm flags it as possible "hate speech", the post is quarantined, and readers must opt in to view it. A graph [springer.com] from the proposal illustrates this process.

The alert shown to the reader will identify the type of "hate speech" the content was classified as, along with a “Hate O’Meter” score indicating how offensive the post is likely to be.
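The quarantine flow described above can be sketched in a few lines. This is a purely illustrative toy, not code from the paper: the scoring function, the term lexicon, the threshold, and all names here are assumptions standing in for the researchers' actual classifier.

```python
# Hypothetical sketch of the proposed quarantine flow: a classifier
# assigns a "Hate O'Meter" score, and posts above a threshold are
# withheld until the reader opts in. All names and the scoring logic
# are illustrative assumptions, not taken from the paper.

from dataclasses import dataclass

QUARANTINE_THRESHOLD = 0.5        # assumed cutoff, not specified in the proposal
FLAGGED_TERMS = {"term1", "term2"}  # placeholder lexicon for illustration


@dataclass
class Post:
    author: str
    text: str


def hate_o_meter(post: Post) -> float:
    """Toy stand-in for the classifier: fraction of flagged words.

    The real proposal envisions a model that also weighs the user's
    posting history; this sketch scores the single post only.
    """
    words = post.text.lower().split()
    if not words:
        return 0.0
    return sum(w in FLAGGED_TERMS for w in words) / len(words)


def render(post: Post, opt_in: bool = False) -> str:
    """Show the post, or a quarantine notice the reader can opt past."""
    score = hate_o_meter(post)
    if score < QUARANTINE_THRESHOLD or opt_in:
        return post.text
    return f"[quarantined: possible hate speech, Hate O'Meter {score:.0%}]"
```

A benign post renders normally; a flagged one shows only the notice until the reader passes `opt_in=True`, mirroring the opt-in viewing step in the proposal.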

The researchers aim to have a working prototype available in early 2020 and, assuming success and adoption by social media companies, to have it intercepting traffic in time for the 2020 elections.


Original Submission