
posted by mrpg on Saturday September 01 2018, @07:01AM
from the blame-humans-of-course dept.

New research has shown just how bad AI is at dealing with online trolls.

Such systems struggle to automatically flag nudity and violence, don't understand text well enough to shoot down fake news, and aren't effective at detecting abusive comments from trolls hiding behind their keyboards.

A group of researchers from Aalto University and the University of Padua found this out when they tested seven state-of-the-art models used to detect hate speech. All of them failed to recognize foul language when subtle changes were made, according to a paper [PDF] on arXiv.

Adversarial examples can be created automatically, using algorithms that misspell certain words, swap characters for numbers, insert random spaces between words, or attach innocuous words such as 'love' to sentences.
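To make those tricks concrete, here is a minimal Python sketch of the four perturbation families described above; the function names and the sample word are our own illustration, not the researchers' code:

    import random

    # Look-alike digit substitutions ("leetspeak")
    LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

    def swap_characters(word):
        """Swap letters for look-alike digits, e.g. 'idiots' -> '1d10t5'."""
        return "".join(LEET_MAP.get(c, c) for c in word.lower())

    def misspell(word):
        """Transpose two adjacent characters, a typo humans read past."""
        if len(word) < 3:
            return word
        i = random.randrange(len(word) - 1)
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]

    def insert_space(word):
        """Split a word with a space so exact-match filters miss it."""
        if len(word) < 2:
            return word
        i = random.randrange(1, len(word))
        return word[:i] + " " + word[i:]

    def append_innocuous(sentence, token="love"):
        """Append a benign word, which can drag a classifier's score down."""
        return sentence + " " + token

    random.seed(0)  # fixed seed so the demo output is repeatable
    print(swap_characters("idiots"))               # 1d10t5
    print(misspell("idiots"))                      # adjacent characters transposed
    print(insert_space("idiots"))                  # e.g. "idi ots"
    print(append_innocuous("you are all idiots"))  # "... idiots love"

None of these changes the meaning for a human reader, but each one moves the input away from the token patterns the classifiers learned during training.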

These adversarial examples evaded detection; the models failed to pick up on them. The tricks wouldn't fool humans, but machine learning models are easily blindsided: they can't readily adapt to input beyond what was spoon-fed to them during training.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by Thexalon (636) on Saturday September 01 2018, @04:42PM (#729260)

    you think it's ok to spread lies, propaganda and hate?

    Who gets to decide what's a lie, what's propaganda, and what's hate? It's not the computer: the AI only does what it's told to do, and is easily confused. As an example of how easy it is to confuse an AI: should the sequence of characters " ZOG " be censored? You probably think yes if it's referring to an anti-Semitic acronym. But it could also refer to a sports organization [zogsports.com], a scene from Babylon 5 [youtube.com], or probably a few other things, and an AI is going to have a tough time figuring out which one applies.
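    A toy keyword filter (purely hypothetical, not any real platform's code) makes the problem concrete: it matches tokens, not meaning, so all three senses get flagged identically:

        import re

        BLOCKLIST = {"zog"}  # hypothetical one-entry filter for the example

        def flag_comment(text):
            """Flag a comment if any word matches the blocklist; no context is used."""
            words = re.findall(r"[a-z0-9]+", text.lower())
            return any(word in BLOCKLIST for word in words)

        # All three get flagged, though only the first is the hateful usage.
        print(flag_comment("ZOG controls the media"))          # True
        print(flag_comment("my ZOG Sports kickball league"))   # True
        print(flag_comment("'Zog? Zog what?' -- Babylon 5"))   # True

    Telling those apart requires context the filter never sees.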

    Enough that it damages society and the culture we live in?

    What exactly do you mean by this? Do you mean "Somebody I didn't like got elected to public office?"

    Enough that some people start believing it, and believing when you tell them to only trust you?

    Suckers have always been around.

    There are ways of remedying this, but they don't involve censorship. Instead, what you have to do is teach critical thinking skills so that people are better at spotting lies and propaganda. You teach them all about logical fallacies, the techniques of propaganda, how to go about fact-checking for real, and of course give them a basic skepticism about the information they get.

    The powers-that-be generally don't like this solution, because they know that if they do this they will now be faced with a population that no longer believes *their* lies and propaganda: "Everything goes better with Coca-Cola." "The war effort is to protect you." "$POLITICIAN is your friend." "The police are there to help you." "Your car needs to be bigger, faster, louder, stronger, more manly." "This pill will fix everything." "You need more stuff." "Stand when they sing this song because freedom." "If your nearest professional sports team wins, that matters to you." "This superfood will cure cancer." You get the idea.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.