posted by mrpg on Saturday September 01 2018, @07:01AM
from the blame-humans-of-course dept.

New research has shown just how bad AI is at dealing with online trolls.

Such systems struggle to automatically flag nudity and violence, don’t understand text well enough to shoot down fake news and aren’t effective at detecting abusive comments from trolls hiding behind their keyboards.

A group of researchers from Aalto University and the University of Padua found this out when they tested seven state-of-the-art models used to detect hate speech. All of them failed to recognize foul language when subtle changes were made, according to a paper [PDF] on arXiv.

Adversarial examples can be created automatically by algorithms that misspell certain words, swap characters for numbers, add random spaces inside words, or attach innocuous words such as ‘love’ to sentences.
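A minimal sketch of how such perturbations could be generated automatically (the substitution table, probabilities, and function name below are illustrative assumptions, not the researchers' actual code):

    import random

    # Illustrative look-alike substitutions (letter -> digit); the exact
    # mapping used in the paper is an assumption here.
    LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

    def perturb(sentence, seed=0):
        """Apply the tricks described above: swap characters for digits,
        insert random spaces inside words, and append an innocuous word."""
        rng = random.Random(seed)
        out = []
        for word in sentence.split():
            # Randomly misspell by swapping some letters for look-alike digits.
            word = "".join(LEET.get(c, c) if rng.random() < 0.3 else c
                           for c in word.lower())
            # Occasionally break the word apart with a stray space.
            if len(word) > 4 and rng.random() < 0.3:
                cut = rng.randint(1, len(word) - 1)
                word = word[:cut] + " " + word[cut:]
            out.append(word)
        out.append("love")  # innocuous word appended to dilute the toxicity score
        return " ".join(out)

    print(perturb("you are a complete idiot"))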

The models failed to pick up on the adversarial examples, which successfully evaded detection. These tricks wouldn’t fool humans, but machine learning models are easily blindsided. They can’t readily adapt to new information beyond what’s been spoon-fed to them during the training process.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by martyb (76) Subscriber Badge on Monday September 03 2018, @04:28AM (#729749) Journal

    Now, the real trick is going to be how do we keep the professionals over here, and keep the kids and jokers over there?

    1. Flag nicks that you perceive to be "Professional" as "friends".
    2. Flag nicks that you perceive to be "kids and jokers" as "foes".
    3. Adjust your preferences and assign:
      • a "+2" adjustment to friend's moderations.
      • a "-6" adjustment to foe's moderations.

    What it does: The actual moderation is unchanged. The resulting apparent moderation can be filtered by adjusting your Threshold and Breakthrough preferences. So, if you set both of those to "0", then whenever a foe posts a comment, the most you should see is just the comment title. OTOH, when a friend posts a comment, even if moderated into oblivion (actual moderation -1) it will still rise above those limits and you will always see their comments.
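    A rough Python sketch of the arithmetic described above (the function names and the simplified full-comment/title-only rule are illustrative assumptions, not the actual Rehash code):

        def apparent_score(actual, relationship, friend_adjust=2, foe_adjust=-6):
            """The stored moderation never changes; the reader just sees it
            shifted by their own friend/foe adjustment."""
            shift = {"friend": friend_adjust, "foe": foe_adjust}.get(relationship, 0)
            return actual + shift

        def visibility(score, threshold=0, breakthrough=0):
            """Simplified reading of the behaviour described above: at or above
            both limits the comment shows in full, otherwise only its title."""
            if score >= threshold and score >= breakthrough:
                return "full comment"
            return "title only"

        # A friend's comment moderated into oblivion (-1 actual) still shows in
        # full (-1 + 2 = +1), while a foe's comment at +3 collapses to a bare
        # title (3 - 6 = -3).
        print(visibility(apparent_score(-1, "friend")))  # full comment
        print(visibility(apparent_score(3, "foe")))      # title only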

    NB: That is how it is supposed to work. I only recently remembered this capability in the system and have not tested it. I do not anticipate any problems, but if you DO find a problem please let us know! File a bug, send an email to admin@soylentnews.org, or raise it with someone on staff on IRC.

    Hope that helps!

    --
    Wit is intellect, dancing.