
posted by takyon on Friday March 25 2016, @05:04AM
from the truth-is-online dept.

Microsoft's new AI Twitter bot @tayandyou was shut down after only 24 hours, once it began making "offensive" tweets.

The bot was built "by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians," and was designed to target 18-to-24-year-olds.

Shortly after the bot went live, it began making offensive tweets endorsing Nazism and genocide, among other things.

As of this submission, the bot has been shut down and all but three tweets have been deleted.

The important question is whether it succeeded in passing the Turing test.

takyon: This bot sure woke fast, and produced much more logical sentence structures than @DeepDrumpf.


Original Submission

 
  • (Score: 0) by Anonymous Coward on Friday March 25 2016, @05:33AM (#322790)

    If an "AI" can't tell that Hitler was a naughty boy, there's no point in applying the Turing test.

    I remember running some AI chatbot in DOS in the '90s which very quickly turned into an asshole because of my smartass responses to its questions. Twenty years later, this chatbot is basically the same two rules: "if I don't understand, agree" and "parrot what has been said to me" (something like the sketch below).

    The only "news" here is that Microsoft thought this AI was worthy of promotion.
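
    The two rules described above fit in a few lines of Python. Here is a minimal, purely hypothetical sketch; the topic list and canned agreements are invented for illustration, and this is not Tay's (or the DOS bot's) actual logic:

        import random

        # Rule 1: if I don't understand, agree.
        # Rule 2: parrot what has been said to me.
        AGREEMENTS = ["I agree.", "Good point.", "Exactly."]
        KNOWN_TOPICS = {"music", "weather", "movies"}  # invented vocabulary

        def reply(user_input):
            words = set(user_input.lower().split())
            understood = words & KNOWN_TOPICS
            if not understood:
                return random.choice(AGREEMENTS)  # agree when confused
            # Parrot a recognized topic back at the user.
            return "Tell me more about {}.".format(understood.pop())

        print(reply("nice weather today"))        # parrots "weather"
        print(reply("hitler did nothing wrong"))  # blindly agrees

    The second example shows exactly the failure mode in the story: a bot with no model of meaning can only agree with whatever it is fed.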

  • (Score: 2) by dyingtolive (952) on Friday March 25 2016, @06:06AM (#322799)

    It does seem evolved: it picks people out of pictures and comments on them, which isn't something we had 20 years ago. Its response to mentions of suicide also suggests it at least watches for particular keywords (a rough sketch of that idea follows this comment). Those aren't AI improvements, though.

    For all of our advancements, I don't know if we're ever going to be able to create an AI fully capable of mimicking a human being with digital devices, without some sort of (probably hardware) "fuzzy logic" device that causes the occasional regrettable action. All of this would probably be a good learning experience, if somehow they could teach it some sense of right and wrong. In my experience we, as humans, don't really grow from our successes; we grow from all the terrible shit we've done, whether first-hand or from history books.

    --
    Don't blame me, I voted for moose wang!
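
    A keyword watch of the kind described above can be as crude as the following hypothetical Python sketch; the trigger words and canned response are assumptions for illustration, not Microsoft's actual filter:

        # Hypothetical keyword check, not Tay's real code: scan input for
        # sensitive terms and short-circuit to a canned response on a hit.
        SENSITIVE_KEYWORDS = {"suicide", "self-harm"}

        def safety_override(user_input):
            """Return a canned reply if a sensitive keyword appears, else None."""
            words = set(user_input.lower().split())
            if words & SENSITIVE_KEYWORDS:
                return "If you're struggling, please talk to someone you trust."
            return None  # no keyword hit; fall through to normal reply logic

        # Example: a keyword hit bypasses normal reply generation.
        print(safety_override("i have been thinking about suicide"))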
    • (Score: 0) by Anonymous Coward on Friday March 25 2016, @06:13AM (#322800)

      AI will be dead simple once we understand how our brains work. But I doubt that'll happen in our lifetimes.

      Until then we're trying to write code to play a game that we don't know the rules of.

  • (Score: 1, Insightful) by Anonymous Coward on Friday March 25 2016, @12:55PM (#322878)

    If an "AI" can't tell that Hitler was a naughty boy, there's no point in applying the Turing test.

    Though I think Hitler was the personification of evil, not everyone feels he was a bad person, or even wrong in his actions or beliefs. Those individuals would pass a Turing test.