Microsoft's new AI Twitter bot @tayandyou was shut down after only 24 hours, once it began making "offensive" tweets.
The bot was built "by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians," and designed to target 18-24 year olds.
Shortly after the bot went live, it began making offensive tweets endorsing Nazism and genocide, among other things.
As of this submission, the bot has been shut down, and all but 3 tweets deleted.
The important question is whether it succeeded in passing the Turing test.
takyon: This bot sure woke fast, and produced much more logical sentence structures than @DeepDrumpf.
(Score: 2) by dyingtolive on Friday March 25 2016, @06:06AM
It does seem evolved; it picks people out of pictures and comments on them. That's not something we had 20 years ago. The response to suicide also indicates at least some sense of looking for particular keywords. Those aren't AI improvements, though.
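The keyword detection the commenter speculates about could be as simple as a hard-coded trigger list checked before the generative model runs. A minimal sketch, assuming a naive word-match approach (all names and keywords here are illustrative, not Tay's actual implementation):

```python
# Hypothetical keyword trigger: scan an incoming message for sensitive
# terms and flag it for a canned, careful response instead of letting
# the chat generator reply. Purely illustrative of the idea above.

SENSITIVE_KEYWORDS = {"suicide", "self-harm"}  # illustrative list

def canned_response_needed(message: str) -> bool:
    """Return True if the message contains any sensitive keyword."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not SENSITIVE_KEYWORDS.isdisjoint(words)

print(canned_response_needed("I'm thinking about suicide"))  # True
print(canned_response_needed("nice weather today"))          # False
```

Real systems would need stemming, phrase matching, and obfuscation handling; a bare word list like this is easy to evade, which may be part of why keyword checks alone didn't save the bot.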
For all of our advancements, I don't know if we're ever going to be able to create an AI fully capable of mimicking a human being with digital devices without some sort of (probably hardware) "fuzzy logic" device that causes the occasional regrettable action. All of this would probably be a good learning experience, if somehow they could teach it some sense of right from wrong. In my experience we, as humans, don't really grow from our successes; we grow from all the terrible shit we've done, be it first-hand or from history books.
Don't blame me, I voted for moose wang!
(Score: 0) by Anonymous Coward on Friday March 25 2016, @06:13AM
AI will be dead simple once we understand how our brains work. But I doubt that'll happen in our lifetimes.
Until then we're trying to write code to play a game that we don't know the rules of.