

posted by takyon on Friday March 25 2016, @05:04AM   Printer-friendly
from the truth-is-online dept.

Microsoft's new AI Twitter bot @tayandyou was shut down less than 24 hours after launch, after it began making "offensive" tweets.

The bot was built "by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians," and was designed to target 18- to 24-year-olds.

Shortly after the bot went live, it began making offensive tweets endorsing Nazism and genocide, among other things.

As of this submission, the bot has been shut down and all but three of its tweets deleted.

The important question is whether or not it succeeded in passing the Turing test.

takyon: This bot sure woke fast, and produced much more logical sentence structures than @DeepDrumpf.


This discussion has been archived. No new comments can be posted.
  • (Score: 4, Interesting) by mcgrew on Friday March 25 2016, @01:50PM

    by mcgrew (701) <publish@mcgrewbooks.com> on Friday March 25 2016, @01:50PM (#322896) Homepage Journal

    So... it's Microsoft's fault for not installing a troll-filter.

    Of course it is! I doubt there's a single person anywhere who doesn't know the internet is full of evil: trolls, scammers, vandals. They were brain-dead stupid for not taking measures against them.

    Hell, I wrote a chatbot program way back in 1983 that ran in 16K of memory with no disk, and it responded to racial slurs and profanity with put-downs of the person at the keyboard.

    --
    Carbon, The only element in the known universe to ever gain sentience
  • (Score: 2) by Tork on Friday March 25 2016, @03:51PM

    by Tork (3914) on Friday March 25 2016, @03:51PM (#322934)
    Right, you pre-programmed morality into your AI. Nothing wrong with that, but I'd expect MS would be criticized for doing that, too.
    --
    🏳️‍🌈 Proud Ally 🏳️‍🌈
    • (Score: 2) by frojack on Friday March 25 2016, @05:31PM

      by frojack (1554) Subscriber Badge on Friday March 25 2016, @05:31PM (#322987) Journal

      Maybe they would be criticized, maybe not. I suspect not, especially if it were done right, or at least subtly, with a modicum of thought.

      They could have simply had a set of words that would be ignored, not triggering a put-down, a comeback, or any retaliation. Maybe just label some of these words "bait" and let a bait-response routine do whatever it likes.

      Sort of like we expect our kids to respond. We caution our Offspring AI (Authentic Intelligence) not to assume the attitudes or speech of every person they happen to bump into. It seems silly to unleash a program onto the wide woolly internet without even a modicum of caution.
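      A minimal sketch of that bait-list idea in Python (the word list, function names, and canned responses are illustrative, not taken from any real bot):

      ```python
      # Sketch of the "bait word" filter described above: messages containing
      # any labeled bait word are routed to a do-nothing response instead of
      # the bot's normal reply-and-learn path. The word list is illustrative.
      BAIT_WORDS = {"hitler", "nazi", "genocide"}

      def is_bait(message):
          """True if the message contains any bait word (crude tokenization)."""
          tokens = {word.strip(".,!?\"'").lower() for word in message.split()}
          return not tokens.isdisjoint(BAIT_WORDS)

      def respond(message):
          if is_bait(message):
              return None  # the bait-response routine: ignore it, learn nothing
          return "normal reply to: " + message  # stand-in for the real chat logic
      ```

      The point is that bait messages never reach the part of the bot that learns from or echoes its interlocutors, which is exactly the failure Tay exhibited.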

      --
      No, you are mistaken. I've always had this sig.