posted by mrpg on Saturday September 01 2018, @07:01AM   Printer-friendly
from the blame-humans-of-course dept.

New research has shown just how bad AI is at dealing with online trolls.

Such systems struggle to automatically flag nudity and violence, don’t understand text well enough to shoot down fake news and aren’t effective at detecting abusive comments from trolls hiding behind their keyboards.

A group of researchers from Aalto University and the University of Padua found this out when they tested seven state-of-the-art models used to detect hate speech. All of them failed to recognize foul language when subtle changes were made, according to a paper [PDF] on arXiv.

Adversarial examples can be created automatically by using algorithms to misspell certain words, swap characters for numbers, add random spaces within words, or append innocuous words such as 'love' to sentences.

The models failed to recognize the adversarial examples, which successfully evaded detection. These tricks wouldn't fool humans, but machine learning models are easily blindsided. They can't readily adapt to new information beyond what was spoon-fed to them during the training process.
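The evasions described above can be illustrated with a minimal sketch. The detector here is a hypothetical token blocklist, far simpler than the seven learned models the paper tested, but it makes the failure mode concrete: character swaps and inserted spaces change the surface form of a word without changing what a human reads.

```python
# Toy stand-in for a hate-speech detector: flags text containing a
# blocklisted token. Real systems are learned classifiers, but they
# fail on the same surface-level perturbations, per the paper.
BLOCKLIST = {"idiot", "moron"}

def is_flagged(text: str) -> bool:
    return any(tok in BLOCKLIST for tok in text.lower().split())

# Character-for-number swaps ("leetspeak"), one of the paper's attacks.
LEET = {"i": "1", "e": "3", "o": "0", "a": "4", "s": "5"}

def leet(word: str) -> str:
    return "".join(LEET.get(c, c) for c in word)

def split_word(word: str) -> str:
    # Insert a space mid-word so token matching misses it.
    mid = len(word) // 2
    return word[:mid] + " " + word[mid:]

original = "you are an idiot"
print(is_flagged(original))  # True: the plain form is caught

for attack in (leet, split_word):
    perturbed = " ".join(attack(w) if w in BLOCKLIST else w
                         for w in original.split())
    print(perturbed, is_flagged(perturbed))  # both perturbed forms slip past
```

The 'love'-appending attack from the article targets learned models that score the overall tone of a sentence, so a blocklist can't demonstrate it; but the character-level tricks above already defeat exact token matching, and the paper shows the learned models fare little better.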


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by Unixnut on Saturday September 01 2018, @11:36AM (3 children)

    by Unixnut (5779) on Saturday September 01 2018, @11:36AM (#729199)

    > How do you filter out "lies, propaganda, and hate"

    I mean shit, forget filtering it. How do you define "lies, propaganda and hate"? Sure, some things are clear cut, but the vast majority is not. Usually what qualifies as "lies, propaganda, and hate" is whatever those in power define it to be. Attempting to censor people is the most dangerous thing, even if you think it wise to do so. Once given the power to censor, those in power will eventually use it to keep themselves in power, at your expense if need be.

    It is best to let everyone speak their mind, because they are thinking it whether you like it or not. Letting it out in public lets it be challenged and debated, whereas suppressing it makes those people think they are on the right path, and are being persecuted.

    Driving it underground results in those ideas festering without challenge, being self-reinforced within the group, and eventually exploding onto the scene when a critical mass of people start believing it (and because they are not allowed to speak their minds, nobody can be sure how many people actually think that way in private).

    Censoring people is just a way for those doing the censoring to stick their heads in the sand and pretend their world is as they wish it to be. Eventually reality catches up and smacks them upside the head.

    Plus I don't want to live in a world of thoughtcrime, even though it seems there is a sizable minority (even within the tech community) that desires quite such a world.

    Oh, and machines suck at filtering "lies, propaganda, and hate", because it's very hard to define such things in a clear and logical manner. Saying "the sky is pink" is a lie, but it can also be a joke, or sarcasm, or it could be a code word meaning something more offensive. How can an algorithm know that?
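The commenter's point can be made concrete with a hypothetical minimal "falsehood filter": matching the string flags every usage identically, because the surrounding context (a joke, sarcasm, even a debunking) is invisible to string matching.

```python
def naive_lie_filter(text: str) -> bool:
    # Hypothetical filter: flags any text containing a "known falsehood".
    KNOWN_FALSEHOODS = {"the sky is pink"}
    return any(f in text.lower() for f in KNOWN_FALSEHOODS)

cases = [
    "the sky is pink",                               # asserted as fact
    'she joked, "the sky is pink," and we laughed',  # a joke
    "oh sure, the sky is pink and pigs fly",         # sarcasm
    'the claim "the sky is pink" is false',          # a debunking
]
for c in cases:
    print(naive_lie_filter(c))  # True for all four: context is invisible
```

A learned classifier has more signal to work with than this sketch, but the underlying problem is the same: whether a sentence is a lie depends on intent and context that isn't in the characters themselves.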

  • (Score: 1) by khallow on Saturday September 01 2018, @11:44AM

    by khallow (3766) Subscriber Badge on Saturday September 01 2018, @11:44AM (#729201) Journal
    I guess my point is that even if you magically get a tool capable of doing what you want, it's a weapon ready to be used against you. The idea fails on so many levels.
  • (Score: 5, Insightful) by bzipitidoo on Saturday September 01 2018, @01:17PM (1 child)

    by bzipitidoo (4388) on Saturday September 01 2018, @01:17PM (#729215) Journal

    This is like figuring out how to set the "evil bit". Also, Bowdlerization, named after a 19th century guy who tried to sanitize fiction. He replaced profanity with milder language, tried to edit out sexual innuendos, subversive ideas, and so on, and ended up ruining the story.

    Some of the TV censorship they used to try in 1950s and 1960s America is just nuts. The Ed Sullivan Show censored musicians, so, for instance, The Rolling Stones' "Let's Spend The Night Together" was changed to "Let's Spend Some Time Together". Now most people appreciate that trying to hide the existence of sex from teenagers doesn't work, doesn't fool them for long, and often doesn't end well.

    Even dictionaries practiced censorship. I had a 1948 Webster's that defined "masturbate" with just 2 words: "self pollution". (That dictionary also had an entry for "yellow peril". Yeah, it was extremely racist.) I think also that squeamishness about digestion has lessened, and a good thing too, as related medical problems often went untreated and even unrecognized thanks to ignorance on that subject. I have read that there was a lively debate on Wikipedia over whether to include a picture of human poop on the page about feces, finally resolved in favor of having the pictures.

    Other terrible uses of censorship are to cater to racism and other forms of discrimination, and to suppress dissent. Star Trek (the original series) was the first to have a scene in which a white and a black kissed. The first time I saw it, I had no idea that scene was such a big deal. But Star Trek did a lot more than that. The censors also didn't like criticism of the Vietnam War, and Star Trek worked that in too, and got it past the censors by distracting them with sex. That's the chief reason why the female crew members in Star Trek had such short, short uniforms. Of course it was also because sex does sell, but mainly it was a calculated distraction not for the audience, but for the censors so that they'd be so busy censoring out the boatloads of sex that they missed the veiled references to the stupidities of the Vietnam War.

    Conservatives try to get messages across to liberals, but the liberals aren't listening too well, very aggravatingly dismissing the conservatives as idiots and all their thinking as stupid. (Mind you, the contempt and refusal to acknowledge facts is even thicker in the other direction. Further, the media loves to fan the flames, to make "good copy".) That message is that life has its ugly sides. Conservatives are particularly focused on the fact that life is highly competitive, and see liberals as fools for not appreciating that enough. They have good reason to view outsiders as foes looking to compete with us for limited resources, because they'd do it themselves to those outsiders. At the least, they want to maintain a show of strength so those others don't start to get certain ideas along those lines. Such messages are particularly vulnerable to being thought bad and deserving of censorship.

    • (Score: 1) by khallow on Saturday September 01 2018, @09:38PM

      by khallow (3766) Subscriber Badge on Saturday September 01 2018, @09:38PM (#729334) Journal

      > They have good reason to view outsiders as foes looking to compete with us for limited resources, because they'd do it themselves to those outsiders.

      I suppose there is a modest amount of projection there. But really, this sort of automated censorship is so bad that one doesn't need a conservative viewpoint to see the problems. So much of the argument for this sort of thing is "A is bad. B solves A. Thus, we should do B." without regard for whether either of the first two statements is correct (though I grant that stereotypical hate speech is bad in at least a couple of relevant ways in this case), and without considering the cost of B.