University of Cambridge researchers hope to launch technology that blocks online "hate speech" much as an antivirus program stops malicious code.
Thanks to researchers at the University of Cambridge, the largest social media companies in the world may soon have the ability to preemptively quarantine content classified by an algorithm as "hate speech". On October 14, 2019, researcher Stephanie Ullmann and professor Marcus Tomalin published a proposal in the journal Ethics and Information Technology promoting an invention that they claim could accomplish this goal without infringing on individual free-speech rights. Their proposal involves software that uses an algorithm to identify "hate speech" in much the same way an antivirus program detects malware. It would then be up to the viewer of such content to either leave it in quarantine or view it.
The basic premise is that online "hate speech" can be as harmful in its way as physical, emotional, or financial harm, and that social media companies should intercept it before it does that harm rather than reviewing it after the fact.
Tomalin's proposal would use a sophisticated algorithm that evaluates not just the content itself but also all content previously posted by the user to determine whether a post might be classifiable as "hate speech". A post that is not flagged occupies the social media feed like any regular post. A post the algorithm flags as potential "hate speech" is quarantined, so that readers must opt in to view it. A diagram from the proposal illustrates this process.
The alert shown to the reader would identify the type of "hate speech" detected in the content and include a "Hate O'Meter" score indicating how offensive the post is likely to be.
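The quarantine flow described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual system: the classifier here is a trivial keyword check standing in for their trained model, and the threshold, category names, and message format are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    category: str  # e.g. "none", "threat" -- illustrative labels only
    score: float   # stand-in "Hate O'Meter" value in [0.0, 1.0]

def classify(post: str) -> Verdict:
    # Placeholder for the proposal's trained model: a simple keyword lookup.
    blocklist = {"slur_example": "slur", "threat_example": "threat"}
    for word, category in blocklist.items():
        if word in post.lower():
            return Verdict(category=category, score=0.9)
    return Verdict(category="none", score=0.0)

def render(post: str, threshold: float = 0.5) -> str:
    verdict = classify(post)
    if verdict.score < threshold:
        return post  # shown in the feed like any regular post
    # Quarantined: the reader sees the alert and must opt in to view.
    return (f"[quarantined: possible {verdict.category}, "
            f"Hate O'Meter {verdict.score:.0%}; click to view]")

print(render("hello world"))
print(render("this contains threat_example"))
```

The key design point from the proposal survives even in this toy version: the software never deletes the post, it only changes the default presentation, leaving the final viewing decision with the reader.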
The researchers aim to have a working prototype available in early 2020 and, assuming success and subsequent adoption by social media companies, to have it intercepting content in time for the 2020 elections.
(Score: 1, Troll) by Ethanol-fueled on Wednesday January 01 2020, @09:41PM (9 children)
So whose monies are they taking, then? George Soros? The Epstein Posthumous Fund for Social Justice?
(Score: 4, Touché) by Anonymous Coward on Wednesday January 01 2020, @11:22PM
Nah, from Trump's beauty pageant pedo fund.
(Score: 5, Touché) by ilPapa on Wednesday January 01 2020, @11:47PM (7 children)
Same place most foundations get their money: from people and institutions who give it willingly.
Do you have a problem with people using their money the way they want to?
You are still welcome on my lawn.
(Score: 3, Interesting) by khallow on Thursday January 02 2020, @02:32AM (3 children)
With stuff like this, follow the money tends to be interesting. It's possible that this is just a vanity project by the founder, Erck Rickmers [wikipedia.org]. But it could be the face of a German government project or a dumping ground for bribe money, Clinton-style.
It depends. Is it their money? And are they buying anything illegal with that? A hate filter/meter does sound pretty shifty to me, but it would be legal.
(Score: 0, Troll) by driverless on Thursday January 02 2020, @03:10AM (1 child)
You missed a few there. You got Erck Rickmers (whoever that is, I assume some random conspiracy-theory target), a foreign government, and the Clintons, but you missed the obligatory anti-semitism (Soros), and there's no mention of the Deep State anywhere I can see.
(Score: 1) by khallow on Thursday January 02 2020, @04:06AM
Founder of the non profit funding the research. Random in the sense that someone would have his role. Deep state is Germany and EF already got Soros. We got this.
(Score: 2) by FatPhil on Thursday January 02 2020, @11:23PM
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves.
(Score: 3, Insightful) by shortscreen on Thursday January 02 2020, @06:40AM (2 children)
Funny you should say that.
There was an infamous court ruling which asserted that "money is speech" and led to certain spending restrictions being lifted, on the premise that it wouldn't be acceptable for speech to be restricted in that manner. In the context of this story we see money being spent to promote the idea of "hate speech" and develop a censorship tool. Isn't it deliciously meta to then ask whether one should pass judgment on this use of money? The whole thing is starting to sound like a chapter from Gödel, Escher, Bach.
(Score: 1) by khallow on Thursday January 02 2020, @01:55PM
If you're speaking of the Citizens United ruling, a key aspect was the ruling that corporate speech could not be treated differently than individual speech. Individuals were allowed to spend in such a manner.
(Score: 2) by barbara hudson on Thursday January 02 2020, @05:28PM
Everyone has an agenda - this one appears to be a combo of money (charge social media providers for the filters) and deciding who can say what in which context.
SoylentNews is social media. Says so right in the slogan. SoylentNews is people, not tech.