
posted by Fnord666 on Wednesday January 01 2020, @07:58PM
from the false-positives dept.

University of Cambridge researchers are hoping to launch technology that blocks online "hate speech" in much the same way an antivirus program stops malicious code.

Thanks to researchers at the University of Cambridge, the largest social media companies in the world may soon have the ability to preemptively quarantine content classified by an algorithm as "hate speech." On October 14, 2019, researcher Stefanie Ullmann and professor Marcus Tomalin published a proposal in the journal Ethics and Information Technology promoting an invention that they claim could accomplish this goal without infringing on individual rights of free speech. Their proposal involves software that uses an algorithm to identify "hate speech" in much the same way an antivirus program detects malware. It would then be up to the viewer of such content to either leave it in quarantine or view it.

The basic premise is that online "hate speech" is, in its own way, as damaging as other forms of harm (physical, emotional, financial...), and that social media companies should intercept it before it can do that harm, rather than reviewing it after the fact.

Tomalin's proposal would use a sophisticated algorithm that evaluates not just the content itself, but also all content previously posted by the user, to determine whether a post might be classifiable as "hate speech". If it is not, the post appears in the social media feed like any regular post. If the algorithm flags it as possible "hate speech", the post is quarantined and readers must opt in to view it. A diagram in the proposal illustrates this process.

The alert to the reader will identify the type of "hate speech" the content has potentially been classified as, along with a "Hate O'Meter" showing how offensive the post is likely to be.
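As a rough illustration of the quarantine flow described above - a classifier weighs the post together with the author's history, and a flagged post sits behind an alert showing the category and a Hate O'Meter score until the reader opts in - here is a minimal Python sketch. The classifier logic, the quarantine threshold, and the category labels are assumptions made for this example, not the researchers' actual implementation.

    # Illustrative sketch only: the classifier, threshold, and labels below are assumed.
    from dataclasses import dataclass

    @dataclass
    class Verdict:
        category: str      # hypothetical label, e.g. "none" or "abusive"
        hate_o_meter: int  # 0-100 likelihood that the post will offend

    def classify(post: str, author_history: list[str]) -> Verdict:
        # Stand-in for the proposal's model, which weighs the post together with
        # the author's earlier posts. Here: a toy keyword count over both.
        blocklist = {"slur_example"}          # placeholder vocabulary
        texts = [post] + author_history
        hits = sum(any(word in text.lower() for word in blocklist) for text in texts)
        score = min(100, 60 * hits)
        return Verdict("abusive" if score else "none", score)

    def render(post: str, author_history: list[str], opted_in: bool) -> str:
        # Quarantine flow: unflagged posts display normally; flagged posts stay
        # behind an alert (category + Hate O'Meter) until the reader opts in.
        verdict = classify(post, author_history)
        if verdict.hate_o_meter < 50 or opted_in:   # assumed quarantine threshold
            return post
        return (f"[Quarantined as possible {verdict.category} content; "
                f"Hate O'Meter {verdict.hate_o_meter}/100 - click to view]")

    print(render("hello world", [], opted_in=False))             # displayed normally
    print(render("post with slur_example", [], opted_in=False))  # held behind alert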

The researchers' goal is to have a working prototype available in early 2020 and, assuming success and subsequent adoption by social media companies, to have it intercepting traffic in time for the 2020 elections.


Original Submission

 
  • (Score: 4, Insightful) by RandomFactor on Wednesday January 01 2020, @08:40PM (2 children)


    Tomalin's proposal would use a sophisticated algorithm which would evaluate not just the content itself, but also all content posted by the user to determine if a post might be classifiable as "hate speech".

    *cough* Social Credit System [wikipedia.org] *cough*

    --
    В «Правде» нет известий, в «Известиях» нет правды ("There is no news in Pravda, and no truth in Izvestia")
  • (Score: 0, Flamebait) by Ethanol-fueled on Wednesday January 01 2020, @09:18PM (1 child)


    " Their proposal involves software that uses an algorithm to identify 'hate speech' in much the same way an antivirus program detects malware. "

    So what, virus "definitions" pulled straight from the ADL and SPLC databases, that consider a cartoon frog and hand symbol which traditionally meant "OK" to be hate symbols? It doesn't end there, you look at all their definitions and "hate symbols" and you'll be wanting those schizos to start taking their meds, and fast!

    How about virus "heuristics?" Such as disagreeing with one of the algorithm's approved Jew curators during discussion? Or perhaps having a writing style similar to, I dunno, whatever writing styles have been curated via a proprietary process they won't tell you about from "racists" you had no idea even existed?

    if comment.contains("Fellow" || "Jew") comment.flag(HATE_SPEECH);
    else if comment.contains("Jew") { if username.contains("Berg" || "Stein" || "Wicz" ) comment.flag(NOT_HATE_SPEECH);}
    else comment.flag(HATE_SPEECH);

    • (Score: 1) by RandomFactor on Thursday January 02 2020, @04:56AM


      It is a given that input from various anti-hate activist organizations would be leveraged.
       
      What you are referring to is known as the 'quiet part' and is not intended to be said out loud - that only sources which align with the social media companies' socio-political world view will be used.

      --
      В «Правде» нет известий, в «Известиях» нет правды ("There is no news in Pravda, and no truth in Izvestia")