University of Cambridge researchers are hoping to launch technology that blocks online "hate speech" similar to how an antivirus program stops malicious code.
Thanks to researchers at the University of Cambridge, the largest social media companies in the world may soon have the ability to preemptively quarantine content classified by an algorithm as "hate speech." On October 14, 2019, researcher Stephanie Ullmann and professor Marcus Tomalin published a proposal in the journal Ethics and Information Technology promoting an invention that they claim could accomplish this goal without infringing on individual rights of free speech. Their proposal involves software that uses an algorithm to identify "hate speech" in much the same way an antivirus program detects malware. It would then be up to the viewer of such content to either leave it in quarantine or view it.
The basic premise is that online "hate speech" is as harmful in its own way as physical, emotional, or financial harm, and that social media companies should intercept it before it can do that harm, rather than reviewing it after the fact.
Tomalin's proposal would use a sophisticated algorithm that evaluates not just the content itself, but also all content posted by the user, to determine whether a post might be classifiable as "hate speech". If not, the post appears in the social media feed like any regular post. If the algorithm flags a post as potential "hate speech", readers must opt in to view it. A graph from the proposal illustrates this process.
The alert to the reader will identify the type of "hate speech" the content is suspected of containing, along with a "Hate O'Meter" showing how offensive the post is likely to be.
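The flag-then-opt-in flow described above can be sketched in a few lines. This is purely an illustrative mock-up, not the researchers' implementation: the class, function names, score range, and threshold are all assumptions for the sake of the example.

```python
# Hypothetical sketch of the quarantine flow: a classifier score gates a post
# behind an opt-in warning. Names and the 0.5 threshold are illustrative only.
from dataclasses import dataclass

HATE_THRESHOLD = 0.5  # assumed cutoff above which a post is quarantined

@dataclass
class Post:
    text: str
    hate_score: float       # 0.0-1.0, as a classifier might output
    quarantined: bool = False

def triage(post: Post) -> Post:
    """Mark the post as quarantined if its score exceeds the threshold."""
    post.quarantined = post.hate_score > HATE_THRESHOLD
    return post

def render(post: Post, opt_in: bool = False) -> str:
    """Show the post, or a 'Hate O'Meter' warning the reader must click through."""
    if post.quarantined and not opt_in:
        return f"[Quarantined: Hate O'Meter {post.hate_score:.0%} - click to view]"
    return post.text

post = triage(Post("example content", hate_score=0.8))
print(render(post))                # the opt-in warning
print(render(post, opt_in=True))   # the reader chose to view it
```

The key design point of the proposal survives even in this toy version: the algorithm never deletes anything; it only moves the viewing decision to the reader.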
The researchers' goal is to have a working prototype available in early 2020 and, assuming success and adoption by social media companies, to have it intercepting traffic in time for the 2020 elections.
(Score: 0, Troll) by Anonymous Coward on Thursday January 02 2020, @08:05AM
From my reading of it, GDPR won't technically apply, the scheme looks at the content, not the poster of the content, and decides if it offendeth whatever sensibilities these crypto-fascists have biased..sorry, trained their system with, and if the e-RMVP says 'ja' then your content passes, 'nein' it gets blackholed....
Nothing along the lines of 'personal data' is stored, per se, as they wouldn't want to subject themselves to any inconvenient legal scrutiny. But I suppose if your missives flag as 'hate speech' on part of their content, you could argue that this is processing of personal data if the rest of it is then mined for new data to further train their AI; legally, though, you'd be very hard put to prove which chunk of their neural net/other code fuckwittery personally identifies you alone.
Ok, so the crowd who're spouting this shit are based in Cambridge, which is probably why you've thrown in a snide remark about Brexit. I would point out that the Foundation behind this is the brainchild of a German..and the muppets employed by said foundation to come up with this scheiße are probably (by their profiles on the foundation's webshite) all conformist europhiles, no doubt all sorely vexed by the fact that the normally sheepish population of England didn't swallow all the EU funded/orchestrated pro-EU propaganda but went for the anti-EU 'hate speech', hence the need for this jolly little project..