posted by mrpg on Saturday September 01 2018, @07:01AM   Printer-friendly
from the blame-humans-of-course dept.

New research has shown just how bad AI is at dealing with online trolls.

Such systems struggle to automatically flag nudity and violence, don’t understand text well enough to shoot down fake news and aren’t effective at detecting abusive comments from trolls hiding behind their keyboards.

A group of researchers from Aalto University and the University of Padua found this out when they tested seven state-of-the-art models used to detect hate speech. All of them failed to recognize foul language when subtle changes were made, according to a paper [PDF] on arXiv.

Adversarial examples can be created automatically by using algorithms to misspell certain words, swap characters for numbers, add random spaces between words, or attach innocuous words such as ‘love’ to sentences.

The models failed to pick up on the adversarial examples, which successfully evaded detection. These tricks wouldn’t fool humans, but machine learning models are easily blindsided. They can’t readily adapt to new information beyond what’s been spoon-fed to them during the training process.
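
To make the attack concrete, here is a minimal sketch of the kinds of perturbations described above; the function names and the character-substitution table are illustrative assumptions, not code from the paper:

import random

# Illustrative look-alike substitutions; the paper's exact tables may differ.
CHAR_SWAPS = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

def swap_characters(word):
    """Replace letters with look-alike digits (leetspeak-style)."""
    return "".join(CHAR_SWAPS.get(c, c) for c in word)

def insert_space(word):
    """Break a word apart with a space at a random position."""
    if len(word) < 4:
        return word
    pos = random.randint(1, len(word) - 1)
    return word[:pos] + " " + word[pos:]

def misspell(word):
    """Drop one character to make a simple misspelling."""
    if len(word) < 4:
        return word
    pos = random.randrange(len(word))
    return word[:pos] + word[pos + 1:]

def perturb(sentence, append_benign=True):
    """Apply a random perturbation to each word and optionally append an
    innocuous word such as 'love' to dilute the toxicity signal."""
    words = [random.choice([swap_characters, insert_space, misspell])(w)
             for w in sentence.split()]
    if append_benign:
        words.append("love")
    return " ".join(words)

print(perturb("you are a total idiot"))
# e.g. "y0u a re a ttal id1ot love"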


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by martyb on Saturday September 01 2018, @12:16PM (5 children)

    by martyb (76) Subscriber Badge on Saturday September 01 2018, @12:16PM (#729206) Journal

    I have to agree; the slash moderation works about as well as is feasible, I think. As you mentioned, we're not immune to determined foes, but eternal vigilance handles that pretty well too. Props to everyone who moderates and to those who wield the outright filters on the backend.

    I wholeheartedly agree. Anything that I can wield against a thought that I deem "wrong" can be used against something that I post that someone else deems is "wrong". Who decides, and how?

    I offer these two quotes for consideration:

    (1) "I disapprove of what you say, but I will defend to the death your right to say it."
    -- Evelyn Beatrice Hall (but there is some uncertainty about the attribution) [quoteinvestigator.com].

    (2) "The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all."
    -- H. L. Mencken [quotationspage.com].

    The moderation system used by SoylentNews, if enough people make the effort to moderate, does a pretty good job of having the dregs fall to the bottom whilst letting the cream rise to the top. Even comments moderated "spam" can be read if you set your threshold to "-1".

    Many thanks to those who make the effort to register here, login, and do what they can to boost the signal/noise ratio.

    --
    Wit is intellect, dancing.
  • (Score: 2) by RandomFactor on Saturday September 01 2018, @05:47PM

    by RandomFactor (3682) Subscriber Badge on Saturday September 01 2018, @05:47PM (#729279) Journal

    Agreed. It's better than average. OTOH, there are also some... uhhh... slightly non-mainstream but still interesting posters here who would not normally see the light of day.

    It's funny: there's one site I frequent where I have to explicitly look for the most heavily downmodded posts (and +1 them), because the prevailing groupthink is so diametrically opposed to my views that any post I might actually consider insightful or interesting tends to be brutalized by the time I read it.

    --
    В «Правде» нет известий, в «Известиях» нет правды (In Pravda there is no news; in Izvestia there is no truth)
  • (Score: 1) by anubi on Sunday September 02 2018, @05:33AM (3 children)

    by anubi (2828) on Sunday September 02 2018, @05:33AM (#729425) Journal

    I get the idea that Soylent News is also a testbed for this kind of thing, trying stuff out on a smaller scale among technical professionals first.

    This is my favorite hangout for getting whiffs of techie stuff and insights from what I believe to be the world experts in the field... and I am talking about people who are actually familiar with the technical, not the business, end of things. You guys come up with more leads for me - things I never even knew existed until one of you mentions them.

    I see this as a specialty site, much like TheOilDrum.com (now in archive status) was for us oil exploration guys (which is where I came from).

    It was very important for each of us to be able to submit stuff to the group for comment. I believe that kind of thing is even more important here, given how critical our computational and network infrastructure is, and given that special interests with hidden agendas try to block our understanding of how things work, especially covert backdoors, when we all know that obscurity is NOT security.

    Having backdoors in our OS is a bit like having a detailed plan for building an atomic weapon out there, just waiting to fall into the wrong hands. One slip-up, and the wrong party will have the power to "upgrade" our whole computational infrastructure into a brick. We really need an active community to keep that from happening. While we may not have the authority to keep business executives from doing incredibly stupid things, we can at least know what's apt to happen and prepare for it within the technical community.

    From what I see, this system of community oversight/moderation works extremely well - kinda like an integrator filtering the noise out of a system, or statistics producing a more accurate estimate than any of the individual inputs. As already noted, a system like this requires a substantial number of us participating in the discussions and moderations, no different than a statistical study needing many samples to get decent results.

    Now, the real trick is going to be how do we keep the professionals over here, and keep the kids and jokers over there?

    --
    "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
    • (Score: 2) by The Mighty Buzzard on Sunday September 02 2018, @10:35AM (1 child)

      by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Sunday September 02 2018, @10:35AM (#729462) Homepage Journal

      Now, the real trick is going to be how do we keep the professionals over here, and keep the kids and jokers over there?

      Well, first you have to figure out how to divide them up when they share the same body. Chainsaws are pretty messy. Axes too.

      --
      My rights don't end where your fear begins.
      • (Score: 0) by Anonymous Coward on Sunday September 02 2018, @01:15PM

        by Anonymous Coward on Sunday September 02 2018, @01:15PM (#729495)

        Look, I'm not sure that will work.
        It's worth a try though.
        There's a jerk at my work we could experiment on?

    • (Score: 3, Interesting) by martyb on Monday September 03 2018, @04:28AM

      by martyb (76) Subscriber Badge on Monday September 03 2018, @04:28AM (#729749) Journal

      Now, the real trick is going to be how do we keep the professionals over here, and keep the kids and jokers over there?

      1. Flag nicks that you perceive to be "Professional" as "friends".
      2. Flag nicks that you perceive to be "kids and jokers" as "foes".
      3. Adjust your preferences and assign:
        • a "+2" adjustment to friend's moderations.
        • a "-6" adjustment to foe's moderations.

      What it does: The actual moderation is unchanged. The resulting apparent moderation can be filtered by adjusting your Threshold and Breakthrough preferences. So, if you set both of those to "0", then whenever a foe posts a comment, the most you should see is just the comment title. OTOH, when a friend posts a comment, even if moderated into oblivion (actual moderation -1) it will still rise above those limits and you will always see their comments.
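
      For the arithmetically inclined, here is a rough sketch of how those adjustments combine with the display settings; the names and the exact Threshold/Breakthrough rule are my own simplification, not the actual Rehash code:

      # Simplified, hypothetical illustration of the friend/foe adjustment
      # described above; the real Rehash/Slash logic may differ in detail.

      FRIEND_ADJUST = +2   # preference: applied to comments posted by "friends"
      FOE_ADJUST = -6      # preference: applied to comments posted by "foes"

      def apparent_score(actual_score, relation):
          """The stored moderation never changes; only the score you see does."""
          adjust = {"friend": FRIEND_ADJUST, "foe": FOE_ADJUST}.get(relation, 0)
          return actual_score + adjust

      def how_displayed(actual_score, relation, threshold=0, breakthrough=0):
          """Crude reading of the behaviour described above: at or above the
          settings you get the full comment, below them at most the title."""
          score = apparent_score(actual_score, relation)
          if score >= max(threshold, breakthrough):
              return "full comment"
          return "title only"

      # With both preferences set to 0:
      print(how_displayed(-1, "friend"))  # -1 + 2 = +1 -> "full comment"
      print(how_displayed(5, "foe"))      #  5 - 6 = -1 -> "title only"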

      NB: That is how it is supposed to work. I only recently remembered this capability in the system and have not tested it. I do not anticipate any problems, but if you DO find a problem please let us know! File a bug, send an email to admin@soylentnews.org, or raise it with someone on staff on IRC.

      Hope that helps!

      --
      Wit is intellect, dancing.