posted by mrpg on Saturday September 01 2018, @07:01AM
from the blame-humans-of-course dept.

New research has shown just how bad AI is at dealing with online trolls.

Such systems struggle to automatically flag nudity and violence, don't understand text well enough to shoot down fake news, and aren't effective at detecting abusive comments from trolls hiding behind their keyboards.

A group of researchers from Aalto University and the University of Padua found this out when they tested seven state-of-the-art models used to detect hate speech. All of them failed to recognize foul language when subtle changes were made, according to a paper [PDF] on arXiv.

Adversarial examples can be created automatically, using algorithms to misspell certain words, swap characters for numbers, add random spaces between words, or attach innocuous words such as 'love' to sentences.
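A minimal sketch of the perturbation types listed above (my own illustration, not code from the paper; the substitution table and example phrase are made up):

```python
import random

# Look-alike digit substitutions ("leet speak"); the table is illustrative.
LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

def swap_characters(text):
    """Swap letters for look-alike digits, e.g. 'idiot' -> '1d10t'."""
    return "".join(LEET.get(c, c) for c in text.lower())

def insert_space(word):
    """Break a word apart at a random point so exact-match filters miss it."""
    if len(word) < 2:
        return word
    cut = random.randrange(1, len(word))
    return word[:cut] + " " + word[cut:]

def append_innocuous(sentence, padding="love"):
    """Pad the text with a benign word to dilute a toxicity score."""
    return sentence + " " + padding

print(swap_characters("you idiot"))    # y0u 1d10t
print(append_innocuous("you idiot"))   # you idiot love
```

Real attacks search for the cheapest such perturbation that flips a classifier's label while leaving the text readable to humans.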

The adversarial examples successfully evaded detection; the models failed to pick up on them. These tricks wouldn't fool humans, but machine learning models are easily blindsided. They can't readily adapt to new information beyond what's been spoon-fed to them during the training process.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: -1, Spam) by Anonymous Coward on Saturday September 01 2018, @07:17AM

    by Anonymous Coward on Saturday September 01 2018, @07:17AM (#729135)

    It was a sound that could bring a smile to almost anyone's face. It was a sound that was truly precious to Wallmit, who reminisced about the past whenever he heard it. The laughing of children.

    Children laughed whenever they saw Wallmit. They giggled, chortled, and chuckled. Wallmit did not know why, but he enjoyed every minute of it. He enjoyed it so much, in fact, that he strived to be around children as much as possible. Whether it was at restaurants, at grocery stores, or now, at a park, he was always with children. Always.

    They laughed. The children Wallmit was playing with laughed and laughed and laughed some more. Wallmit laughed with them. Together, they laughed until the sun went down, which marked the end of their great journey together. Wallmit departed and waved goodbye to his little friends. However, they did not wave back.

    No, they couldn't wave back; they were too busy laughing. Yes, the value of men's rights had become all too obvious after meeting Wallmit. They realized that with every fiber of their brutalized, naked, and motionless bodies.

  • (Score: 2, Insightful) by Anonymous Coward on Saturday September 01 2018, @07:42AM (3 children)

    by Anonymous Coward on Saturday September 01 2018, @07:42AM (#729144)

    They don't want to block fake news; they want to support fake news. (Classic example: CNN edited video to suggest that Trump rudely dumped food while feeding koi with the Japanese prime minister.) Blocking fake news is easy: CNN, etc.

    They want to block crimethink. If you dare to question the globalist left, you are to be blacklisted and shadowbanned and worse. The UK will even send cops to get you; they seem to have taken "1984" as a plan to implement.

  • (Score: 5, Insightful) by anubi on Saturday September 01 2018, @07:47AM (21 children)

    by anubi (2828) on Saturday September 01 2018, @07:47AM (#729146) Journal

    Some stuff is easy for people, but difficult for machines. Like those "captcha" thingies asking you to flag all photos containing a sign.

    It wasn't too long ago that we had an interesting show of wits between DN and our TMB over how to sneak in trash posts. My own take on it was that DN made it pretty clear that, no matter what one could do, a determined DN would get it in.

    Looks to me that we have about the most efficient means of censorship (if you can call it that) on this very website. Crowdsourced. If any of us are so convinced that a post is bad, and we are willing to take the hit for abusing it, we can self-censor, but not delete. It's still there. Kinda like removing dog poo from the sidewalk and putting it in the can is not considered theft. If someone really wanted to see the dog poo, it's still there. In the can. But someone did everyone else a favor by getting it off the sidewalk. It's a personal judgement call over what constitutes dog poo and what constitutes something that should be left alone.

    As many foreign countries have found out, it's easy to pass laws about censorship... but trying to actually implement them is a horse of a different color.

    The English language has gotten to be almost what Unicode is to plain ASCII: if these posts were in plain ASCII, filtering would be much easier, but by the time one throws in Unicode and all the glyphs that look alike, the array of permutations usable for bypassing censorship criteria grows to unmanageable levels.
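    The Unicode problem is easy to demonstrate with Python's standard unicodedata module. The blocked word here is hypothetical; the point is that NFKC normalization recovers some substitutions (fullwidth forms, ligatures) but not true homoglyphs like Cyrillic look-alikes, so it is only a partial fix:

```python
import unicodedata

BLOCKED = {"spam"}

def naive_filter(word):
    """An ASCII-keyed blacklist check."""
    return word.lower() in BLOCKED

def normalized_filter(word):
    """The same check after NFKC compatibility normalization."""
    return unicodedata.normalize("NFKC", word).lower() in BLOCKED

fullwidth = "\uff53\uff50\uff41\uff4d"   # 'spam' in fullwidth Latin letters
cyrillic = "\u0455\u0440\u0430m"         # first three letters are Cyrillic look-alikes

print(naive_filter(fullwidth))       # False: bypasses the ASCII blacklist
print(normalized_filter(fullwidth))  # True: NFKC folds fullwidth to ASCII
print(normalized_filter(cyrillic))   # False: NFKC does not map homoglyphs
```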

    --
    "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
    • (Score: 2) by coolgopher on Saturday September 01 2018, @07:55AM (12 children)

      by coolgopher (1157) on Saturday September 01 2018, @07:55AM (#729147)

      I have to agree, the slash moderation works about as well as is feasible, I think. As you mentioned, we're not immune to determined foes, but eternal vigilance handles that pretty well too. Props to everyone who moderates and to those who wield the outright filters on the backend.

      • (Score: 4, Informative) by janrinok on Saturday September 01 2018, @10:05AM (3 children)

        by janrinok (52) Subscriber Badge on Saturday September 01 2018, @10:05AM (#729186) Journal

        Props to everyone who moderates

        Amen to that, brother. Unfortunately, far too many simply don't bother to moderate, which works against the interests of the majority.

        We also suffer from other forms of abuse. Some will post something extreme as an AC and then log on as a member and moderate it up. They might actually do this using several different accounts and so their shit-post suddenly appears at a score of 3 or 4. That then requires 3 or 4 honest moderators to each down mod it to get it where it belongs. Another technique is to go back to stories that were published a week or more ago and start moderating all the interesting, informative or insightful posts down, again using 1 or more accounts. I'm not sure what the benefit is of this latter action but it is apparent in several stories that I have watched recently.

        Moderation is what makes this site work. Without sensible moderation the trolls can do as they wish.

        • (Score: 5, Informative) by Thexalon on Saturday September 01 2018, @04:12PM (2 children)

          by Thexalon (636) on Saturday September 01 2018, @04:12PM (#729254)

          I do some modding, but rarely use all my points, so using some of them to counteract an obvious bad act isn't something I find to be a difficulty.

          Another technique is to go back to stories that were published a week or more ago and start moderating all the interesting, informative or insightful posts down, again using 1 or more accounts. I'm not sure what the benefit is of this latter action but it is apparent in several stories that I have watched recently.

          The purpose of this is to cut the target's karma enough that they go from posts starting at +2 to starting at +1, and it takes them a while to get back to +2, acting as a sort of attempted censorship. There are flaws in this move, though:
          1. People who post positive-mod-worthy stuff continue to post positive-mod-worthy stuff, so their karma tends to bounce back pretty easily.
          2. Someone might notice that a regular suddenly dropped to +1 posts, go back and correct the problem.
          3. The positive contributors to this site don't care enough about karma to fear this sort of thing, so it doesn't work as a "punishment" for doing whatever prompted the attack.
          I know something about this because I've been targeted a few times over the years.
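          The karma mechanics described above can be sketched roughly like this (the karma threshold and clamping range are my assumptions about slash-style behavior, not Rehash's actual values):

```python
# Sketch of slash-style comment scoring: logged-in users start at 1,
# high karma grants a +2 start, and moderations clamp to [-1, 5].
KARMA_BONUS_THRESHOLD = 25  # assumed value, not the site's real number

def starting_score(karma, logged_in=True):
    """Starting score of a new comment for a given poster."""
    if not logged_in:
        return 0
    return 2 if karma >= KARMA_BONUS_THRESHOLD else 1

def apply_mods(score, mods):
    """Apply a sequence of +/-1 moderations, clamped to the slash range."""
    for delta in mods:
        score = max(-1, min(5, score + delta))
    return score

# A sockpuppet wave of down-mods drags an old +2 post to the bottom:
print(apply_mods(starting_score(karma=40), [-1, -1, -1]))  # -1
```

As the parent notes, the attack mostly wastes effort: fresh positive moderation pulls karma (and the +2 start) back up.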

          --
          The only thing that stops a bad guy with a compiler is a good guy with a compiler.
          • (Score: 2) by janrinok on Saturday September 01 2018, @04:18PM

            by janrinok (52) Subscriber Badge on Saturday September 01 2018, @04:18PM (#729255) Journal

            Thanks for the explanation.

          • (Score: -1, Troll) by Anonymous Coward on Sunday September 02 2018, @01:39PM

            by Anonymous Coward on Sunday September 02 2018, @01:39PM (#729502)

            Words to live by.
            Around here we have lost. For now, anyway. It is not a crime, but you can be sued for offending someone [abc.net.au] which goes against the local culture. They tried to remove the words offending, insulting and humiliating from section 18C of the act but failed. This caused a lot of flurry in the local chapters of the child thighing child marrying wife beating iownyouwithmyrighthand rapist community who were offended that their medieval way of life could be compromised specifically with the worry that they may be outed in public by members of the public for their horrible behaviour. Any one in this age and day who does not force children to `marry` old men or divert funds from food scams to overseas militants or rape their daughters or cut the genitals of children or inflict mental torture on children or enslave females or end lives of those who leave the society or inflict punishments on people as though they are above the law of the land or engage in actions designed to incite terror or hurt people in the community they moved in to or inappropriately touch children in public or rape females in public swimming pools or inflict their insanity upon others or attack the community they are claiming respite from after travelling across the globe to get away from war or use vehicles to kill people in public or throw acid in the faces of people in public in specific reference to those who are female and who have not covered up skin head to toe who may be claimed by insane people that by doing so are proving themselves to be immoral and tempting males to rape them or breaking into peoples homes to steal from them and hurt them really anything like this should not be concerned about the removal of these words from this part of the law.

            Though, if there is someone who is like the above, and knowing that no specific person or persons have been named then perhaps they should be concerned about someone walking up to them in public to loudly ask WHY DID YOU CUT YOUR DAUGHTERS CLITORIS

      • (Score: 5, Insightful) by martyb on Saturday September 01 2018, @12:16PM (5 children)

        by martyb (76) Subscriber Badge on Saturday September 01 2018, @12:16PM (#729206) Journal

        I have to agree, the slash moderation works as well as is feasible I think. As you mentioned, we're not immune to determined foes, but eternal vigilance handles that pretty well too. Props to everyone who moderates and those who wield the out right filters on the backend.

        I wholeheartedly agree. Anything that I can wield against a thought that I deem "wrong" can be used against something that I post that someone else deems is "wrong". Who decides, and how?

        I offer these two quotes for consideration:

        (1) "I disapprove of what you say, but I will defend to the death your right to say it."
        -- Evelyn Beatrice Hall (but there is some uncertainty about the attribution) [quoteinvestigator.com].

        (2) "The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all."
        -- H. L. Mencken [quotationspage.com].

        The moderation system used by SoylentNews, if enough people make the effort to moderate, does a pretty good job of having the dregs fall to the bottom whilst letting the cream rise to the top. Even comments moderated "spam" can be read if you set your threshold to "-1".

        Many thanks to those who make the effort to register here, login, and do what they can to boost the signal/noise ratio.

        --
        Wit is intellect, dancing.
        • (Score: 2) by RandomFactor on Saturday September 01 2018, @05:47PM

          by RandomFactor (3682) Subscriber Badge on Saturday September 01 2018, @05:47PM (#729279) Journal

          Agreed. It's better than average. OTOH, there are also some ...uhhh... slightly non-mainstream but still interesting posters here who would not normally see the light of day.

          It's funny, there's one site I frequent where I have to explicitly look for the most down-modded posts (and +1 them) because the prevailing groupthink is so diametrically opposed to my views that any post I might actually consider insightful or interesting tends to be brutalized by the time I read it.

          --
          В «Правде» нет известий, в «Известиях» нет правды
        • (Score: 1) by anubi on Sunday September 02 2018, @05:33AM (3 children)

          by anubi (2828) on Sunday September 02 2018, @05:33AM (#729425) Journal

          I get the idea that Soylent News is also a testbed for this kind of thing, trying stuff out on a smaller scale among technical professionals first.

          This is my favorite hangout to get whiffs of techie stuff and insights from what I believe to be the world experts in the field... and I am talking about people who are actually familiar with the technical, not the business, end of things. You guys come up with leads for me - things I never even knew existed until one of you mentions them.

          I see this as a specialty site, very similar to what TheOilDrum.com (now on archive status) was for us oil exploration guys (which is where I came from).

          It was very important for each of us to be able to submit stuff to the group for comment. And I believe that kind of thing is even more important here, given how critical our computational and network infrastructure is, yet special interests with hidden agendas try to block our understanding of how things work, especially covert backdoors, when we all know that obscurity is NOT security.

          Having backdoors in our OS is about like having a detailed plan for building an atomic weapon out there, just waiting to fall into the wrong hands. One slipup, and the wrong party will have the power to "upgrade" our whole computational infrastructure to a brick. And we really need to have an active community to keep that from happening. While we may not have the authority needed to keep business executives from doing incredibly stupid things, we can at least know what's apt to happen and prepare for it within the technical community.

          From what I see, this system of community oversight/moderation works extremely well. Kinda like an integrator getting the noise out of a system, or statistics arriving at a more accurate estimate than any of its individual inputs. As already noted, having a system like this requires a substantial number of us participating in the discussions and moderations, no different than a statistical study requiring many samples to get decent results.
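          The integrator/statistics analogy can be sketched in a few lines: any single moderation is a noisy reading of a comment's "true" quality, but the mean of many moderations converges on it. All numbers here are made up for illustration:

```python
import random
import statistics

random.seed(42)
TRUE_QUALITY = 3.0  # the hypothetical "right" score for a comment

def mean_of_mods(n_mods, noise=2.0):
    """Average n_mods noisy moderations of the same comment."""
    mods = [TRUE_QUALITY + random.uniform(-noise, noise) for _ in range(n_mods)]
    return statistics.mean(mods)

# The estimate tightens as more moderators weigh in:
for n in (3, 30, 3000):
    print(f"{n:5d} moderations -> estimate {mean_of_mods(n):.2f}")
```

This is the same reason a statistical study needs many samples: the standard error of the mean shrinks as the square root of the sample count.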

          Now, the real trick is going to be how do we keep the professionals over here, and keep the kids and jokers over there?

          --
          "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
          • (Score: 2) by The Mighty Buzzard on Sunday September 02 2018, @10:35AM (1 child)

            by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Sunday September 02 2018, @10:35AM (#729462) Homepage Journal

            Now, the real trick is going to be how do we keep the professionals over here, and keep the kids and jokers over there?

            Well, first you have to figure out how to divide them up when they share the same body. Chainsaws are pretty messy. Axes too.

            --
            My rights don't end where your fear begins.
            • (Score: 0) by Anonymous Coward on Sunday September 02 2018, @01:15PM

              by Anonymous Coward on Sunday September 02 2018, @01:15PM (#729495)

              Look, I'm not sure that will work.
              It's worth a try though.
              There's a jerk at my work we could experiment on?

          • (Score: 3, Interesting) by martyb on Monday September 03 2018, @04:28AM

            by martyb (76) Subscriber Badge on Monday September 03 2018, @04:28AM (#729749) Journal

            Now, the real trick is going to be how do we keep the professionals over here, and keep the kids and jokers over there?

            1. Flag nicks that you perceive to be "Professional" as "friends".
            2. Flag nicks that you perceive to be "kids and jokers" as "foes".
            3. Adjust your preferences and assign:
              • a "+2" adjustment to friend's moderations.
              • a "-6" adjustment to foe's moderations.

            What it does: The actual moderation is unchanged. The resulting apparent moderation can be filtered by adjusting your Threshold and Breakthrough preferences. So, if you set both of those to "0", then whenever a foe posts a comment, the most you should see is just the comment title. OTOH, when a friend posts a comment, even if moderated into oblivion (actual moderation -1) it will still rise above those limits and you will always see their comments.

            NB: That is how it is supposed to work. I only recently remembered this capability in the system and have not tested it. I do not anticipate any problems, but if you DO find a problem please let us know! File a bug, send an email to admin@soylentnews.org, or raise it with someone on staff on IRC.
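            The arithmetic above can be sketched as follows (assuming Threshold and Breakthrough both set to 0, as in the example; this is my reading of the described behavior, not tested site code):

```python
FRIEND_BONUS = 2   # preference adjustment for friends' comments
FOE_PENALTY = -6   # preference adjustment for foes' comments
THRESHOLD = 0      # Threshold/Breakthrough both set to 0

def apparent_score(actual, relation):
    """Score as seen by a user with friend/foe adjustments configured."""
    if relation == "friend":
        return actual + FRIEND_BONUS
    if relation == "foe":
        return actual + FOE_PENALTY
    return actual

def visibility(actual, relation):
    """What the reader sees for a comment at the given actual score."""
    if apparent_score(actual, relation) >= THRESHOLD:
        return "full comment"
    return "title only"

# A foe's comment at the maximum actual score of 5: 5 - 6 = -1, collapsed.
print(visibility(5, "foe"))      # title only
# A friend's comment moderated to -1: -1 + 2 = 1, still shown.
print(visibility(-1, "friend"))  # full comment
```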

            Hope that helps!

            --
            Wit is intellect, dancing.
      • (Score: 0) by Anonymous Coward on Sunday September 02 2018, @01:10PM (1 child)

        by Anonymous Coward on Sunday September 02 2018, @01:10PM (#729493)

        I love a bit of slash
        that said the draco/harry stuff is just horrible

    • (Score: 2) by requerdanos on Saturday September 01 2018, @02:04PM (1 child)

      by requerdanos (5997) Subscriber Badge on Saturday September 01 2018, @02:04PM (#729221) Journal

      It wasn't too long ago we had an interesting

      Well, not entirely without its instructional features, but I am not sure too many people actually showed "interest"... I find most of our spammers just mildly annoying in the wasting-my-valuable-time sense.

      show of wits with DN and our TMB over how to sneak in trash posts. My own take on it was DN made it pretty clear that no matter what one could do, a determined DN would get it in.

      My take seems to have been that while the spammer used several methods such as werd and ©hárâ¢tér substitution (both addressed in the PDF paper in TFA) to evade detection, regular expressions won in the end. (My reason for this view: I haven't seen any of those types of spam posts lately.)

      Interesting that our views should have been opposite from the same event. Food for thought. I suppose it's entirely possible that you're completely correct, and the spammer just lost interest in spamming this site.
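      A sketch of the kind of substitution-tolerant regular expression that can win that game (the look-alike table and target word are illustrative, not the site's actual filters):

```python
import re

# For each letter, the characters commonly substituted for it.
LOOKALIKES = {"a": "a4@", "e": "e3", "i": "i1!", "o": "o0", "s": "s5$"}

def fuzzy_pattern(word):
    """Build a regex matching a word despite digit swaps and inserted junk."""
    parts = []
    for ch in word.lower():
        chars = LOOKALIKES.get(ch, ch)
        parts.append("[%s]" % re.escape(chars))
    # Allow optional spaces or punctuation wedged between the letters.
    return re.compile(r"[\s\.\-_]*".join(parts), re.IGNORECASE)

pattern = fuzzy_pattern("spam")
for text in ("spam", "5p4m", "s p a m", "S.P.A.M"):
    assert pattern.search(text)
print("all variants matched")
```

A handful of patterns like this in the db is cheap for the defender, while each new evasion costs the spammer another round of trial and error.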

    • (Score: 2) by The Mighty Buzzard on Saturday September 01 2018, @02:07PM (4 children)

      by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Saturday September 01 2018, @02:07PM (#729223) Homepage Journal

      I had code sitting in my head that would have dealt with any repetitive spam we decided to flag, regardless of Unicode or pretty much any other tricks tried, but it would have taken a lot of typing and a site update rather than just adding a filter to the db, so I decided it wasn't worth the bother and we both got bored of playing. Plus, like you said, moderation takes care of it well enough.

      --
      My rights don't end where your fear begins.
      • (Score: 0) by Anonymous Coward on Sunday September 02 2018, @01:20AM (3 children)

        by Anonymous Coward on Sunday September 02 2018, @01:20AM (#729375)

        Dick niggers FTW - kept TMB true to his "free speech" promise.

        • (Score: 2) by The Mighty Buzzard on Sunday September 02 2018, @01:36AM (2 children)

          by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Sunday September 02 2018, @01:36AM (#729379) Homepage Journal

          Free speech as defined on this site has never included spam, just for clarity's sake.

          --
          My rights don't end where your fear begins.
          • (Score: 0) by Anonymous Coward on Sunday September 02 2018, @02:46AM (1 child)

            by Anonymous Coward on Sunday September 02 2018, @02:46AM (#729401)

            Maybe, but the countermeasures you took hurt free speech - nobody could use "dick niggers" even in non-spammy expression of speech; in effect you banned some words.

            • (Score: 2) by The Mighty Buzzard on Sunday September 02 2018, @10:42AM

              by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Sunday September 02 2018, @10:42AM (#729464) Homepage Journal

              We also couldn't talk about viagra for a long time but I didn't hear anyone complaining about that. Simple word filters around here are going to happen because they're quick and stop most spammy jackassery. They're also meant to be temporary, lasting only as long as necessary to get the spammer to fuck off to greener sites.

              Any time you're curious about what words are being filtered, ask any admin. It takes like three seconds to look up. Right now the list includes:

              --
              My rights don't end where your fear begins.
    • (Score: 2) by jcross on Saturday September 01 2018, @02:31PM

      by jcross (4009) on Saturday September 01 2018, @02:31PM (#729233)

      Some stuff is easy for people, but difficult for machines.

      In the case of hate speech and offensive content, it's not even easy for people. We can't seem to agree on solid definitions of what constitutes either one. I mean, we don't even have agreement among members of the same culture, let alone across cultural boundaries. Even in a targeted community like SN, we can't always agree on mods, although I think you're right that the system does about as well as can be expected. It boggles my mind that people expect AI to do better somehow, when we can't even define what "better" would look like, and the current state of the art in training machines is akin to digital Skinnerism.

  • (Score: 4, Insightful) by Runaway1956 on Saturday September 01 2018, @08:11AM (5 children)

    by Runaway1956 (2926) Subscriber Badge on Saturday September 01 2018, @08:11AM (#729151) Journal

    All software is pretty static. Software can't be updated and upgraded on an hourly basis - it's written, compiled, tested, released, and put into use. The average "hacker", for want of a better term, has the initiative. He can examine your code, poke it, prod it, kick it around, and watch what it does. When he's gained a little confidence, he can try to break your code. And you can do nothing other than react to the break - days, weeks, months, or even years later.

    All this AI is just software, after all. And the "hackers" are browsing Facefook, Twitter, and all the rest of the "social media" with nothing better to do, than test the software.

    You, the defender - the software writer - can improve your defensive fortress forever. That won't change the fact that the attackers have the initiative, and they are destined to beat you.

    How many protection schemes on software, music, games, or proprietary hardware remain undefeated? Anyone know where I can get a keygen for $software?

    • (Score: 1, Touché) by Anonymous Coward on Saturday September 01 2018, @08:39AM (2 children)

      by Anonymous Coward on Saturday September 01 2018, @08:39AM (#729158)

      All software is pretty static. ... He can examine your code, poke it, prod it, kick it around, and watch what it does.

      Bullshit. All the crypto algos are public, NSA still needs to buy that $5 wrench to get to the encryption key.

      All this AI is just software, after all.

      Da fuck - most of this AI is in the model that one trains - i.e. data.

      You, the defender - the software writer - can improve your defensive fortress forever. That won't change the fact that the attackers have the initiative, and they are destined to beat you.

      1. Oh, wow! The attacker has the initiative. How insightful!
        Do you attend the local tautology club often?
      2. Perfect security does not exist. There is only a balance between the cost to the defender and the cost to the attacker - if one is a few orders of magnitude lower than the other, that side will "win" most of the time. This is how cryptography works.

      Yes, you are right, today's AI is dumb and adversarial attacks are easy to craft. But you are right for the wrong (or intellectually bland) reasons.
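      The cost balance in point 2 can be illustrated with back-of-envelope arithmetic: each extra key bit is nearly free for the defender but doubles the attacker's expected brute-force work (the guess rate below is an assumption for illustration):

```python
def attacker_seconds(key_bits, guesses_per_second=1e12):
    """Expected time to brute-force a key: half the keyspace on average."""
    return (2 ** (key_bits - 1)) / guesses_per_second

# Work grows exponentially with key length while the defender's cost
# (storing and using a slightly longer key) stays essentially flat.
for bits in (40, 64, 128):
    print(f"{bits:3d}-bit key: {attacker_seconds(bits):.3e} s expected")
```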

      • (Score: 0) by Anonymous Coward on Saturday September 01 2018, @09:50AM (1 child)

        by Anonymous Coward on Saturday September 01 2018, @09:50AM (#729182)

        nice warping you attempted there before conceding his point!

        "they are destined to beat you."

        the archetypal war between (ordered) day and (chaotic) night requires that we walk along the razor's edge of culture without falling into the deep on either side.

        • (Score: 0) by Anonymous Coward on Saturday September 01 2018, @10:10AM

          by Anonymous Coward on Saturday September 01 2018, @10:10AM (#729189)

          His point is: "AI Sucks At Stopping Online Trolls Spewing Toxic Comments", with wrong explanations of why that is. And the wrong explanations matter: take a trained AI and go trial-and-error hacking to find the cracks, then craft adversarial attacks based on knowledge of NNs, and compare the costs of the two approaches.

    • (Score: 2) by coolgopher on Saturday September 01 2018, @08:41AM

      by coolgopher (1157) on Saturday September 01 2018, @08:41AM (#729159)

      Anyone know where I can get a keygen for $software?

      Sure, it's right over there, it comes prepackaged with $malware for your naivite^Wconvenience. Get the .exe file directly, it's quicker and safer than the .zip. ;)

    • (Score: 2) by fritsd on Saturday September 01 2018, @04:29PM

      by fritsd (4586) on Saturday September 01 2018, @04:29PM (#729257) Journal

      All software is pretty static.
      (...)
      All this AI is just software, after all. (...)

      No.

      The functionality of an AI program is much more defined by what kind of data it has been trained on.

      Exhibit # 1: Microsoft "Tay" [soylentnews.org]

      I loved the BBC article, Microsoft chatbot is taught to swear on Twitter [bbc.com] because it has a snapshot
      of someone's funny tweet:

      Gerry

      "Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI

  • (Score: 4, Insightful) by Anonymous Coward on Saturday September 01 2018, @09:07AM (28 children)

    by Anonymous Coward on Saturday September 01 2018, @09:07AM (#729166)

    This very notion of using AI to censor human communications is deeply troubling. The downside far outweighs the possible upside.

    THINK, for heaven's sake : do you REALLY want a world in which you never see anything you deem objectionable ?

    If your answer is yes, you need to think some more, and you need to grow up and understand that you cannot possibly force everyone to behave as you wish. At least not in the real world.

    And no, a website is NOT the real world.

    • (Score: 0) by Anonymous Coward on Saturday September 01 2018, @09:41AM

      by Anonymous Coward on Saturday September 01 2018, @09:41AM (#729177)

      +THIS+

      The seething hate (to caricature just a bit) is a projection and reflection of inner worlds in trouble, as is the fragmented, reactionary, unreflective kind of offensive reply.

      In other words, people's psyches are not doing well being subjugated, aka feeling or experiencing themselves devalued. The screaming is symptomatic of the shadow of a system that has overreached -- and mechanical enforcement of the will of a few loudmouth theocrats by technocrats isn't going to work any better than the early communist attempts at using conditioning and psychological means to create the perfect socialists.

      In case that's not clear enough: the presence of 'adversaries/devils' is built into our archetypal blueprints. They are supposed to enlighten us, free us from mindless existence in and of walled gardens of any sort. Just keep watching how the most capable individuals (in potential/actuality) succumb to opiates and ever more legalized drugs and mechanized sex, and trade the capacity to read and think for 'scanning', etc., to see how

      mechanization perverts liberty.

    • (Score: 1, Informative) by Anonymous Coward on Saturday September 01 2018, @10:03AM (20 children)

      by Anonymous Coward on Saturday September 01 2018, @10:03AM (#729185)

      This very notion of using AI to censor human communications is deeply troubling. The downside far outweighs the possible upside.

      THINK, for heaven's sake : do you REALLY want a world in which you never see anything you deem objectionable ?

      So, you think it's ok to spread lies, propaganda and hate? Enough that it damages society and the culture we live in? Enough that some people start believing it, and believing when you tell them to only trust you?

      There are, and should be, consequences when people lie in an effort to benefit themselves at the cost of others. This is already in place for those who lie regarding financial transactions. Should it not be in place for those who try to turn people against each other solely for the purpose of financial or political gain?

      • (Score: 5, Insightful) by khallow on Saturday September 01 2018, @10:20AM (7 children)

        by khallow (3766) Subscriber Badge on Saturday September 01 2018, @10:20AM (#729190) Journal

        So, you think it's ok to spread lies, propaganda and hate? Enough that it damages society and the culture we live in? Enough that some people start believing it, and believing when you tell them to only trust you?

        How do you filter out "lies, propaganda, and hate" without the risk that those filtering tools will eventually fall into the hands of the people who rely on "lies, propaganda, and hate"? I think there's an obvious lesson from the past 20 years, namely, that your ideological foes will sooner or later get a chance at power. The more power one creates now, the more power they'll have the next time.

        That's why the original poster's concerns are so relevant. It's not just never seeing anything objectionable, it's never seeing anything objectionable to whoever is in power.

        • (Score: 3, Insightful) by Unixnut on Saturday September 01 2018, @11:36AM (3 children)

          by Unixnut (5779) on Saturday September 01 2018, @11:36AM (#729199)

          > How do you filter out "lies, propaganda, and hate"

          I mean, shit, forget filtering it. How do you define "lies, propaganda and hate"? Sure, some things are clear cut, but the vast majority are not. Usually what qualifies as "lies, propaganda, and hate" is whatever those in power define it to be. Attempting to censor people is the most dangerous thing, even if you think it wise to do so. Eventually, when given the power to censor, those in power will use it to keep themselves in power, at your expense if need be.

          It is best to let everyone speak their mind, because they are thinking it whether you like it or not. Letting it out in public lets it be challenged and debated, whereas suppressing it makes those people think they are on the right path, and are being persecuted.

          Driving it underground results in those ideas festering without challenge, being self-reinforced within the group, and eventually exploding onto the scene when a critical mass of people starts believing it (and because they are not allowed to speak their mind, nobody can be sure how many people actually think that way in private).

          Censoring people is just a way for those doing the censoring to stick their heads in the sand and pretend their world is as they wish it to be. Eventually reality catches up and smacks them upside the head.

          Plus I don't want to live in a world of thoughtcrime, even though it seems there is a sizable minority (even within the tech community) that desires quite such a world.

          Oh, and machines suck at filtering "lies, propaganda, and hate", because it's very hard to define such things in a clear and logical manner. Saying "the sky is pink" is a lie, but it can also be a joke, or sarcasm, or it could be a code word meaning something more offensive. How can an algorithm know that?

          • (Score: 1) by khallow on Saturday September 01 2018, @11:44AM

            by khallow (3766) Subscriber Badge on Saturday September 01 2018, @11:44AM (#729201) Journal
            I guess my point is that even if you magically get a tool capable of doing what you want, it's a weapon ready to be used against you. The idea fails on so many levels.
          • (Score: 5, Insightful) by bzipitidoo on Saturday September 01 2018, @01:17PM (1 child)

            by bzipitidoo (4388) on Saturday September 01 2018, @01:17PM (#729215) Journal

            This is like figuring out how to set the "evil bit". Also, Bowdlerization, named after a 19th century guy who tried to sanitize fiction. He replaced profanity with milder language, tried to edit out sexual innuendos, subversive ideas, and so on, and ended up ruining the story. Some of the TV censorship they used to try in 1950s and 1960s America is just nuts. The Ed Sullivan Show censored musicians, so, for instance The Rolling Stones "Let's Spend The Night Together" was changed to "Let's Spend Some Time Together". Now most people appreciate that trying to hide the existence of sex from teenagers doesn't work, doesn't fool them for long, and often doesn't end well. Even dictionaries practiced censorship. I had a 1948 Websters that defined "masturbate" with just 2 words: "self pollution". (That dictionary also had an entry for "yellow peril". Yeah, it was extremely racist.) I think also that squeamishness about digestion has lessened, and a good thing too, as related medical problems often went untreated and even unrecognized thanks to ignorance on that subject. I have read that there was a lively debate on Wikipedia over whether to include a picture of human poop on the page about feces, finally resolved in favor of having the pictures.

            Other terrible uses of censorship are to cater to racism and other forms of discrimination, and to suppress dissent. Star Trek (the original series) was the first to have a scene in which a white and a black kissed. The first time I saw it, I had no idea that scene was such a big deal. But Star Trek did a lot more than that. The censors also didn't like criticism of the Vietnam War, and Star Trek worked that in too, and got it past the censors by distracting them with sex. That's the chief reason why the female crew members in Star Trek had such short, short uniforms. Of course it was also because sex does sell, but mainly it was a calculated distraction not for the audience, but for the censors so that they'd be so busy censoring out the boatloads of sex that they missed the veiled references to the stupidities of the Vietnam War.

            Conservatives try to get messages across to liberals, but the liberals aren't listening too well, very aggravatingly dismissing the conservatives as idiots and all their thinking as stupid. (Mind you, the contempt and refusal to acknowledge facts is even thicker in the other direction. Further, the media loves to fan the flames, to make "good copy".) That message is that life has its ugly sides. Conservatives are particularly focused on the fact that life is highly competitive, and see liberals as fools for not appreciating that enough. They have good reason to view outsiders as foes looking to compete with us for limited resources, because they'd do it themselves to those outsiders. At the least, they want to maintain a show of strength so those others don't start to get certain ideas along those lines. Such messages are particularly vulnerable to being thought bad and deserving of censorship.

            • (Score: 1) by khallow on Saturday September 01 2018, @09:38PM

              by khallow (3766) Subscriber Badge on Saturday September 01 2018, @09:38PM (#729334) Journal

              They have good reason to view outsiders as foes looking to compete with us for limited resources, because they'd do it themselves to those outsiders.

              I suppose there is a modest amount of projection there. But really this sort of automated censorship is so bad that one doesn't need to have a conservative viewpoint to see the problems. So much of the argument for this sort of thing is "A is bad. B solves A. Thus, we should do B." without regard for whether either of the first two statements is correct (though I grant the stereotypical hate speech is bad in at least a couple of relevant ways in this case) nor considering the cost of B.

        • (Score: 0) by Anonymous Coward on Saturday September 01 2018, @06:28PM (2 children)

          by Anonymous Coward on Saturday September 01 2018, @06:28PM (#729299)

          How do you filter out "lies, propaganda, and hate" without risk ?

          The obvious rebuttal is to start with the egregious cases. #Bankhallow!!!

          • (Score: 1) by khallow on Sunday September 02 2018, @02:23AM (1 child)

            by khallow (3766) Subscriber Badge on Sunday September 02 2018, @02:23AM (#729395) Journal
            So what are you going to do when it's your turn to become the egregious case?
            • (Score: 1, Funny) by Anonymous Coward on Sunday September 02 2018, @10:18AM

              by Anonymous Coward on Sunday September 02 2018, @10:18AM (#729459)

              #banmorekhallow!

      • (Score: 3, Insightful) by The Mighty Buzzard on Saturday September 01 2018, @02:20PM (7 children)

        by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Saturday September 01 2018, @02:20PM (#729230) Homepage Journal

        So, you think it's ok to spread lies, propaganda and hate? Enough that it damages society and the culture we live in? Enough that some people start believing it, and believing when you tell them to only trust you?

        Abso-fucking-lutely. Why? Because someone has to be in charge of deciding what constitutes "lies, propaganda and hate" and that's power just fucking begging to be abused.

        --
        My rights don't end where your fear begins.
        • (Score: 0) by Anonymous Coward on Saturday September 01 2018, @07:01PM (6 children)

          by Anonymous Coward on Saturday September 01 2018, @07:01PM (#729310)

          You have to treat 'incitement' and advocacy the same way. Speech is merely speech. Only one person is responsible for the decisions he makes.

          • (Score: 1) by khallow on Sunday September 02 2018, @02:29AM (5 children)

            by khallow (3766) Subscriber Badge on Sunday September 02 2018, @02:29AM (#729397) Journal

            You have to treat 'incitement' and advocacy the same way.

            Not at all. If someone is organizing attacks or other violence via public communication (for example, the genocides in Rwanda were often directed via radio stations), that's not legitimate discourse.

            • (Score: 0) by Anonymous Coward on Sunday September 02 2018, @03:50PM (1 child)

              by Anonymous Coward on Sunday September 02 2018, @03:50PM (#729555)

              Letting the government decide what is and is not legitimate discourse is dangerous. People are responsible for their own actions. If someone chooses to listen to someone preaching violence, then that is on the person who chose to listen.

              • (Score: 1) by khallow on Monday September 03 2018, @12:41AM

                by khallow (3766) Subscriber Badge on Monday September 03 2018, @12:41AM (#729705) Journal

                Letting the government decide what is and is not legitimate discourse is dangerous.

                How about a jury of your peers?

            • (Score: 0) by Anonymous Coward on Monday September 03 2018, @01:11AM (2 children)

              by Anonymous Coward on Monday September 03 2018, @01:11AM (#729711)

              that's not legitimate discourse.

              It's not for you to decide what is "legitimate discourse"... A person's decision to act violently is entirely personal, and only he is responsible. The participants are responsible for the attacks, not the "organizers"

              via public communication

              Oh, I see. It should all be done in secret, like the order to drop the bomb on Hiroshima

              • (Score: 1) by khallow on Monday September 03 2018, @06:27PM (1 child)

                by khallow (3766) Subscriber Badge on Monday September 03 2018, @06:27PM (#729925) Journal

                It's not for you to decide what is "legitimate discourse"...

                It's estimated that 900k people died as a result of the genocide and its coordination via public media like radio stations. I think I can make that judgment just fine.

                A person's decision to act violently is entirely personal, and only he is responsible.

                But there's a big difference between having a bunch of people who are willing to act violently, and having those people act violently in a way that causes a lot of damage. Coordinated violence can be a lot more harmful than uncoordinated violence.

                Oh, I see. It should all be done in secret, like the order to drop the bomb on Hiroshima

                That raises the threshold on coordinated violence.

                • (Score: 0) by Anonymous Coward on Wednesday September 05 2018, @11:44PM

                  by Anonymous Coward on Wednesday September 05 2018, @11:44PM (#731022)

                  There is no logic in that whatsoever. You are merely playing a numbers game. The people who decide to raise the sword are the only ones to blame for the bloodshed.

      • (Score: 2, Informative) by Anonymous Coward on Saturday September 01 2018, @02:38PM (1 child)

        by Anonymous Coward on Saturday September 01 2018, @02:38PM (#729238)

        "So, you think it's ok to spread lies, propaganda and hate? Enough that it damages society and the culture we live in? Enough that some people start believing it, and believing when you tell them to only trust you?"

        .

        You have absolutely not thought through all the ramifications of what you apparently believe is a desirable "solution".

        Whether you realize it or not, in your scheme, whoever is in power gets to decide what is a lie, what is hateful, and what is propaganda. And that gives the entity in power a lot more power than any entity which governs people should ever have, if the well-being of those who are governed is considered.

        The importance of the dangerous downside of filtering is illustrated in the book "1984", in which there was an authority called "The Ministry of Truth". I doubt you have read this book. You NEED to read it !!! It is a book that is very important because of the lessons it contains which pertain to a totalitarian state and the behavior of that state toward the general population.

        You are ( obviously ) motivated by fear, which is a dangerous motivation when problems which require calm logical analysis are at hand. You obviously want a world which is nice and friendly and safe and not offensive. That is a nice dream, but realistically it is simply not possible, PERIOD. Those people who would like to gain power over you and the rest of us want you to believe it is possible so you accept their scheme of manipulating information. Their scheme of manipulating information will have a lot of appeal to some people who think on a childish level, but the very same scheme will correctly seem monstrous to people who have thought through the full implications of such a scheme.

        I sincerely do not mean any disrespect, but if you are willing to embrace some authority filtering all news and speech so it is "sanitized", you are a simple-minded, shortsighted person. And there are a lot of you out there. You are willing to walk to the slaughterhouse ( metaphorically speaking ) without being forced to do so, and you are compliant when you face those who would be your oppressor. It is impossible for me not to have serious concern for your mindset, because you are a gullible naive person who will accept awful things which are disguised by those in power as good things, and if there are enough of you and you all vote, society is FUCKED.

        It is far better to allow ALL speech and use your own brain to filter it. If you are incapable of doing this, or unwilling to do this, it is not reasonable of you to expect the rest of us to fall into line with your acceptance of your desire to be "protected" by an agency which filters information. In a very real sense, if you accept such filtering you are as bad as any other enemy of freedom. Wars have been fought and many people have died to protect the freedom you want to willingly surrender. I hope you spend some more time thinking through this stuff so you realize that your notion of filtering comes with a price that is far too high and will always be too high for those who love freedom.

        I am ready to fight and die to make sure stuff like you want doesn't happen. I know you are not ready to die for anything, because you are a coward. You just want a Walt Disney world which is sanitized so your sheep brain is more comfortable. Do not expect the rest of us to fall into line with your vision of how the world should be, because it is NOT GOING TO HAPPEN. I sincerely hope you come to realize the errors in your thinking and as a result you decide to join the adult world, where we accept freedom of speech because the alternative is tantamount to being enslaved.

        • (Score: 0) by Anonymous Coward on Sunday September 02 2018, @01:27AM

          by Anonymous Coward on Sunday September 02 2018, @01:27AM (#729376)

          "I am ready to fight and die to make sure stuff like you want doesn't happen."
          .
          Oh, by all means, go for it. Not a moment too soon, you are under attack and they are winning. Don't forget to take your medication though.

      • (Score: 2) by Thexalon on Saturday September 01 2018, @04:42PM

        by Thexalon (636) on Saturday September 01 2018, @04:42PM (#729260)

        you think it's ok to spread lies, propaganda and hate?

        Who gets to decide what's a lie, what's propaganda, and what's hate? It's not the computer: The AI only does what it's told to do, and is easily confused. As an example of how easy it is to confuse an AI: Should the sequence of characters " ZOG " be censored? Probably you think yes if they're referring to an anti-semitic acronym. But that could also refer to a sports organization [zogsports.com], or a scene from Babylon 5 [youtube.com], or probably a few other things, and an AI is going to have a tough time figuring out which one applies.
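        To make the confusion concrete, here's a toy filter (a hypothetical sketch, not any real moderation system) that flags every occurrence of the string regardless of context:

        ```python
        # Hypothetical sketch: a bare keyword filter can't tell an offensive
        # usage of "ZOG" from a sports league or a Babylon 5 character.
        BLOCKLIST = {"zog"}

        def is_flagged(comment: str) -> bool:
            # Flags a comment if any blocklisted token appears, context be damned.
            tokens = comment.lower().split()
            return any(tok.strip(".,!?\"'") in BLOCKLIST for tok in tokens)

        print(is_flagged("Join my ZOG dodgeball team this fall"))  # flagged, but harmless
        print(is_flagged("Emperor Zog appears in Babylon 5"))      # flagged, but harmless
        ```

        Both harmless sentences get flagged, and anything smarter needs the AI to actually understand context, which is exactly what the paper says it can't do.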

        Enough that it damages society and the culture we live in?

        What exactly do you mean by this? Do you mean "Somebody I didn't like got elected to public office?"

        Enough that some people start believing it, and believing when you tell them to only trust you?

        Suckers have always been around.

        There are ways of remedying this, but they don't involve censorship. Instead, what you have to do is teach critical thinking skills so that people are better at spotting lies and propaganda. You teach them all about logical fallacies, the techniques of propaganda, how to go about fact-checking for real, and of course give them a basic skepticism about the information they get.

        The powers-that-be generally don't like this solution, because they know that if they do this they will now be faced with a population that no longer believes *their* lies and propaganda: "Everything goes better with Coca-Cola." "The war effort is to protect you." "$POLITICIAN is your friend." "The police are there to help you." "Your car needs to be bigger, faster, louder, stronger, more manly." "This pill will fix everything." "You need more stuff." "Stand when they sing this song because freedom." "If your nearest professional sports team wins, that matters to you." "This superfood will cure cancer." You get the idea.

        --
        The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 0) by Anonymous Coward on Saturday September 01 2018, @06:52PM

        by Anonymous Coward on Saturday September 01 2018, @06:52PM (#729309)

        So, you think it's ok to spread lies, propaganda and hate?

        Yes, of course it is! Filtering is the audience's obligation, not the speaker's

        Enough that some people start believing it...?

        Ah, there's your problem right there. The believers. Go after them, not the speaker. Rush Limbaugh isn't dangerous, his listeners are. Republicans and democrats aren't the bad guys, their voters are. 95% reelection rates speak much louder than all the complaining.

    • (Score: 2) by requerdanos on Saturday September 01 2018, @02:18PM (5 children)

      by requerdanos (5997) Subscriber Badge on Saturday September 01 2018, @02:18PM (#729228) Journal

      False premise:

      Hate speech only exists in the eye of the beholder

      Supported but poorly by clumsy strawman:

      This very notion of using AI to censor human communications is deeply troubling. The downside far outweighs the possible upside. THINK, for heaven's sake : do you REALLY want a world in which you never see anything you deem objectionable ?

      I want the people whose speech I don't want to see in a certain place (which you inaccurately call "anything deemed objectionable") to put the speech somewhere else. There's room for everybody; just because you are all holy-roller on wanting people to see objectionable things doesn't give them the right to clutter up any random (or specific third-party) forum or news feed with them.

      There is a big difference between finding something objectionable because I disagree with it (such as your nonsense here) and finding something objectionable because it's inappropriate for the context (such as the "déck herders we dón't type no young words"* and "murder snuff spam" we've seen on discussions in this site).

      Your opinions here, though vapid, advance discussion, and so contribute something positive, however slight.

      The spams I mention, here, and false or invented stories on a supposedly trustworthy news site, or hateful attacks in a community discussion forum, don't.

      -------
      * Mildly disappointed that this didn't get flagged as a spam pattern. Poor AI!

      • (Score: -1, Flamebait) by Anonymous Coward on Saturday September 01 2018, @02:41PM (2 children)

        by Anonymous Coward on Saturday September 01 2018, @02:41PM (#729241)

        1) You are wrong.

        1) a) You are an arrogant prick.

        2) when the time comes it will be my pleasure to fight against people like you and wipe you off the face of the earth in order to preserve freedom.

        • (Score: 2) by requerdanos on Saturday September 01 2018, @02:50PM

          by requerdanos (5997) Subscriber Badge on Saturday September 01 2018, @02:50PM (#729245) Journal

          You are wrong... You are an arrogant prick...it will be my pleasure to [wipe] people like you...off the face of the earth in order to preserve freedom.

          The irony-troll force is strong with this one. :)

        • (Score: 0) by Anonymous Coward on Sunday September 02 2018, @01:30AM

          by Anonymous Coward on Sunday September 02 2018, @01:30AM (#729377)

          "2) when the time comes it will be my pleasure to fight against people like you and wipe you off the face of the earth in order to preserve freedom."
          .
          That time has passed and you did nothing. Now live with the shame for your entire life!

      • (Score: 1) by realDonaldTrump on Saturday September 01 2018, @09:47PM (1 child)

        by realDonaldTrump (6614) on Saturday September 01 2018, @09:47PM (#729338) Homepage Journal

        When you disagree you say, "mod parent to oblivion!"

        • (Score: 2) by requerdanos on Saturday September 01 2018, @11:23PM

          by requerdanos (5997) Subscriber Badge on Saturday September 01 2018, @11:23PM (#729356) Journal

          There is a big difference between finding something objectionable because I disagree with it...and finding something objectionable because it's inappropriate for the context (such as the [spam] we've seen on discussions in this site).

          When you disagree you say, "mod parent to oblivion!"

          You've picked the wrong side: When I disagree I reply, or let it go. When it's junk that clutters up the thread, I mod down; when just spam, I mod spam. For repeat offender trolls that others are overly tempted to feed, you'll get a call to mod parent to oblivion.

          Thanks for noticing.

  • (Score: 0) by Anonymous Coward on Saturday September 01 2018, @10:07AM (2 children)

    by Anonymous Coward on Saturday September 01 2018, @10:07AM (#729188)

    Of course artificial intelligence won't work against trolls. They are not posting from a position of intelligence or thought; they are posting with what most would see as the illogical motivation of provoking others for their own entertainment or profit.

    It's got to be hard to train AI to combat those who are so outside the norm.

    • (Score: 3, Insightful) by The Mighty Buzzard on Saturday September 01 2018, @02:37PM (1 child)

      by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Saturday September 01 2018, @02:37PM (#729237) Homepage Journal

      ...outside the norm.

      Um... You haven't been on the Internet long, have you?

      --
      My rights don't end where your fear begins.
      • (Score: 1, Funny) by Anonymous Coward on Saturday September 01 2018, @05:46PM

        by Anonymous Coward on Saturday September 01 2018, @05:46PM (#729278)

        Almost three weeks, why? Are there levels based on seniority?

  • (Score: 2) by requerdanos on Saturday September 01 2018, @12:18PM

    by requerdanos (5997) Subscriber Badge on Saturday September 01 2018, @12:18PM (#729207) Journal

    Such systems... failed to recognize foul language when subtle changes were made... Adversarial examples can be created automatically

    Makes sense. I read once [lesswrong.com] (a probably apocryphal tale) about a system trained to look at photos and recognize threatening vehicles such as military tanks, and on their training data set it achieved 90+% accuracy. Then they tried their shiny new system on arbitrary photos with and without military vehicles and it was clueless. Further examination showed that sunny vs. cloudy differed in the training images more than presence vs. absence of the target vehicles, but the overall illustration is "the system sucks if the training data sucks".
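    The failure mode is easy to reproduce with made-up numbers (a hypothetical sketch of the tank story, not the actual system): if the label correlates with brightness in the training set, a trivial brightness threshold looks like a great tank detector right up until the correlation breaks.

    ```python
    # Hypothetical sketch of the "tank detector" failure mode: every tank
    # photo in training happens to be dark (cloudy day), so a brightness
    # threshold aces the training set and bombs on new photos.
    train = [  # (mean_brightness, has_tank) -- tanks photographed under clouds
        (0.2, True), (0.25, True), (0.3, True),
        (0.7, False), (0.8, False), (0.9, False),
    ]
    test = [  # new photos: a tank on a sunny day, an empty field under clouds
        (0.8, True), (0.2, False),
    ]

    def predict(brightness: float) -> bool:
        # The "model" the training data actually teaches: dark photo => tank.
        return brightness < 0.5

    train_acc = sum(predict(b) == y for b, y in train) / len(train)
    test_acc = sum(predict(b) == y for b, y in test) / len(test)
    print(train_acc, test_acc)  # 1.0 on training data, 0.0 on the new photos
    ```

    Same story with hate-speech classifiers: they learn whatever surface pattern separates the training examples, not the concept.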

    The models failed to pick up on adversarial examples and successfully evaded detection. These tricks wouldn’t fool humans, but machine learning models are easily blindsighted.

    For example, a machine just looking for hate speech might accept "blindsighted" as a word, but many humans would know better [oxforddictionaries.com] based on simple everyday knowledge, perhaps aided by a dictionary lookup. This suggests "attacks" on the algorithms based on other made up words containing semi-soundalike alternate words (dome-mass deskhead?). Kind of ironic that the reporting contains what the paper bemoans, though.
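    The character-swap trick from the summary can be sketched in a few lines (toy blocklist and swap table of my own invention, not the paper's actual attack code): one lookalike digit is enough to slip past an exact-match filter while staying perfectly readable to a human.

    ```python
    # Hypothetical sketch of the adversarial attack described in the article:
    # swap one character for a lookalike digit and an exact-match word
    # blocklist no longer sees the word, while any human reader still does.
    BLOCKLIST = {"idiot"}
    SWAPS = {"i": "1", "o": "0", "e": "3"}

    def perturb(word: str) -> str:
        # Replace the first swappable character with its lookalike digit.
        for i, ch in enumerate(word):
            if ch in SWAPS:
                return word[:i] + SWAPS[ch] + word[i + 1:]
        return word

    def is_flagged(text: str) -> bool:
        return any(w in BLOCKLIST for w in text.lower().split())

    print(is_flagged("what an idiot"))                 # caught by the filter
    print(is_flagged("what an " + perturb("idiot")))   # "1diot" sails right past
    ```

    The paper's models are fancier than a blocklist, of course, but the reported result is the same: tiny automatic perturbations that no human would be fooled by break the detection.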

  • (Score: 0) by Anonymous Coward on Saturday September 01 2018, @02:35PM (1 child)

    by Anonymous Coward on Saturday September 01 2018, @02:35PM (#729235)

    Perhaps we can find a lesson in the story from early days of ITS at the MIT AI Lab? Not sure where I read the story first, but this page https://en.wikipedia.org/wiki/Incompatible_Timesharing_System [wikipedia.org] has a short summary:

    To deal with a rash of incidents where users sought out flaws in the system in order to crash it, a novel approach was taken. A command that caused the system to crash was implemented and could be run by anyone, which took away all the fun and challenge of doing so. It did, however, broadcast a message to say who was doing it.

    Not suggesting that this is a reasonable solution for trolling. What I am suggesting is that maybe there is an analogous approach that removes the troll's motivation or challenge.

    • (Score: 1, Insightful) by Anonymous Coward on Saturday September 01 2018, @07:10PM

      by Anonymous Coward on Saturday September 01 2018, @07:10PM (#729313)

      What I am suggesting is that maybe there is an analogous approach that removes the troll's motivation or challenge.

      By far the very best approach is to simply not respond, in any way. They are attention seekers. That is the prize.

  • (Score: 2) by Entropy on Saturday September 01 2018, @09:12PM (1 child)

    by Entropy (4228) on Saturday September 01 2018, @09:12PM (#729327)

    Their special special very fragile feelings must be protected by AI. We'll call anything that offends their delicate psyche "hate speech". We can't allow them to see anything they might disagree with or it'll hurt their delicate feelings.

    • (Score: 0) by Anonymous Coward on Sunday September 02 2018, @04:57PM

      by Anonymous Coward on Sunday September 02 2018, @04:57PM (#729573)

      Chelsea Manning!

(1)