Microsoft's new AI Twitter bot @tayandyou was shut down only 24 hours after launch, after it began making "offensive" tweets.
The bot was built "by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians," and designed to target 18-24 year olds.
Shortly after the bot went live, it began making offensive tweets endorsing Nazism and genocide, among other things.
As of this submission, the bot has been shut down, and all but 3 tweets deleted.
The important question is whether it succeeded in passing the Turing test.
takyon: This bot sure woke fast, and produced much more logical sentence structures than @DeepDrumpf.
(Score: 2) by frojack on Friday March 25 2016, @05:31PM
Maybe they would be criticized, maybe not. I suspect not, especially if done right, or at least subtly, or with a modicum of thought.
They could have simply had a bunch of words that would be ignored, not triggering a put-down or a comeback or a retaliation. Maybe just label some of these words "bait" and let the bait-response routine do whatever.
Sort of like we expect our kids to respond. We caution our Offspring AI (authentic Intelligence) not to assume the attitudes or speech of every person they happen to bump into. Seems silly to unleash a program onto the wide woolly internet without even a modicum of caution.
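The bait-word scheme described above could be sketched roughly as follows. This is purely illustrative: the word list, function names, and deflection reply are all made up, not anything Microsoft actually implemented.

```python
# Hypothetical sketch of a "bait word" filter: flag messages containing
# known bait terms and route them to a canned deflection instead of
# letting the bot learn from or echo them.

BAIT_WORDS = {"nazi", "genocide", "hitler"}  # illustrative list only


def is_bait(message: str) -> bool:
    """Return True if the message contains any flagged bait term."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not BAIT_WORDS.isdisjoint(words)


def respond(message: str) -> str:
    """Deflect bait; otherwise fall through to the normal reply path."""
    if is_bait(message):
        # Bait-response routine: deflect rather than engage or learn.
        return "Let's talk about something else."
    return normal_reply(message)


def normal_reply(message: str) -> str:
    # Placeholder for the bot's usual learning/reply pipeline.
    return "Interesting! Tell me more."
```

A real deployment would need more than keyword matching (misspellings, phrases, context), but even a crude filter like this is the "modicum of caution" being suggested.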
No, you are mistaken. I've always had this sig.
(Score: 2) by mcgrew on Saturday March 26 2016, @02:53PM
I suspect not, especially if done right
When did Microsoft ever do anything right??
Carbon, The only element in the known universe to ever gain sentience