Microsoft's new AI Twitter bot @tayandyou was shut down after only 24 hours, after it began making "offensive" tweets.
The bot was built "by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians," and was designed to target 18-to-24-year-olds.
Shortly after the bot went live, it began making offensive tweets endorsing Nazism and genocide, among other things.
As of this submission, the bot has been shut down, and all but three of its tweets have been deleted.
The important question is whether it succeeded in passing the Turing test.
takyon: This bot sure woke fast, and produced much more logical sentence structures than @DeepDrumpf.
(Score: 1, Informative) by Anonymous Coward on Friday March 25 2016, @05:51AM
"poisoning" is a pretty common term for deliberately feeding bad information into a program for one's own purposes.
botnets get poisoned all the time to stop them talking to their next c&c server.
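For anyone unfamiliar with the term, here's a toy sketch in Python of how an unmoderated learning bot gets poisoned. Purely illustrative: the NaiveParrotBot class and its phrase-counting scheme are invented for this example and have nothing to do with Tay's actual implementation.

    # Toy example: a naive bot that "learns" by counting phrases from
    # every user message and parrots back the most frequent one.
    from collections import Counter

    class NaiveParrotBot:
        def __init__(self):
            self.phrase_counts = Counter()

        def learn(self, message: str) -> None:
            # No moderation: every input is trusted as training data.
            self.phrase_counts[message.lower()] += 1

        def reply(self) -> str:
            if not self.phrase_counts:
                return "hello!"
            # Echo whatever phrase has been seen most often.
            return self.phrase_counts.most_common(1)[0][0]

    bot = NaiveParrotBot()
    for msg in ["nice weather today", "I like music"]:
        bot.learn(msg)
    # A small coordinated group repeats the same toxic phrase:
    for _ in range(50):
        bot.learn("OFFENSIVE SLOGAN")
    print(bot.reply())  # -> "offensive slogan"

A handful of repeat attackers is all it takes; the bot has no notion of which inputs to distrust.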
(Score: 4, Insightful) by frojack on Friday March 25 2016, @05:17PM
But you fail to understand that the very purpose of this particular AI Bot was to learn from human interactions and to mimic humans more closely after each input.
How did Microsoft NOT anticipate the need for some form of fail-safe in a system like that?
Unless you had a desired outcome, and some code to guarantee it eventually arrives at that desired outcome, any outcome that happens is a perfectly valid outcome. The fact that un-coordinated random people could so easily shine this bot on speaks to its infantile design and sophomoric programming.
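To make the point concrete, here's a minimal sketch of the sort of fail-safe being described: a blocklist check gating both what the bot learns and what it says. The BLOCKED_TERMS set and the is_safe() helper are assumptions for illustration, not anything Microsoft shipped; a production system would call a real moderation service rather than a substring match.

    # Hypothetical fail-safe wrapper; works with any bot exposing
    # learn() and reply(), e.g. the NaiveParrotBot sketched above.
    BLOCKED_TERMS = {"nazism", "genocide"}  # stand-in for a real moderation service

    def is_safe(text: str) -> bool:
        lowered = text.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    def guarded_learn(bot, message: str) -> None:
        # Refuse to treat flagged input as training data at all.
        if is_safe(message):
            bot.learn(message)

    def guarded_reply(bot) -> str:
        reply = bot.reply()
        # Second gate on output, in case unsafe content slipped past learning.
        return reply if is_safe(reply) else "[reply withheld by filter]"

Even a crude two-gate design like this, one check before learning and one before speaking, would have blunted the attack; that neither gate apparently existed is the point.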
They could have learned a lot from the Talking Angela app.
No, you are mistaken. I've always had this sig.