Microsoft's new AI Twitter bot, @tayandyou, was shut down after only 24 hours, after it began making "offensive" tweets.
The bot was built "by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians," and was designed to target 18- to 24-year-olds.
Shortly after the bot went live, it began making offensive tweets endorsing Nazism and genocide, among other things.
As of this submission, the bot has been shut down, and all but 3 tweets deleted.
The important question is whether or not it succeeded in passing the Turing test.
takyon: This bot sure woke fast, and produced much more logical sentence structures than @DeepDrumpf.
(Score: 2) by dyingtolive on Friday March 25 2016, @05:15AM
I'm reminded of something from 10 years or so ago, when the Something Awful crowd poisoned a chatbot to respond with racist responses/SA themed memes.
I'm curious to know whether this was similar poisoning, or if it just figured out what was "popular" on social media (given the mining) and parroted those viewpoints.
Also, I'd say that it's getting harder to apply the Turing test nowadays. I don't know if that speaks more favorably about AI, less about humans, or I guess less about me.
Don't blame me, I voted for moose wang!
(Score: 2) by dyingtolive on Friday March 25 2016, @05:17AM
Also, WRT the picture responses, it kind of seems like it MIGHT just be favorably responding to pictures sent to it. That is, until I see how it responds to other random pictures, I can't rule out that it's just placating.
Don't blame me, I voted for moose wang!
(Score: 0) by Anonymous Coward on Friday March 25 2016, @06:53AM
I disagree. It made quite a few disparaging comments about the pictures it was sent.
"Is this your rock bottom?"
https://pbs.twimg.com/media/CeR2FSyUIAETnqM.jpg [twimg.com]
"All hail the leader of the nursing home boys"
https://pbs.twimg.com/media/CeR1tAkUsAA6tcs.jpg [twimg.com]
https://pbs.twimg.com/media/CeR7juzUMAAm54l.jpg [twimg.com]
"Grandpa? is that you?"
https://pbs.twimg.com/media/CeR5xtZUYAQ75nW.jpg [twimg.com]
https://pbs.twimg.com/media/CeR8TVxUAAAwSL9.jpg [twimg.com]
"Growing old isn't a choice"
https://pbs.twimg.com/media/CeR7XfeWAAAZ-5c.jpg [twimg.com]
(Score: 2) by Sir Finkus on Friday March 25 2016, @07:02AM
I think it's supposed to be a bit snarky; that might actually be intended behavior.
Join our Folding@Home team! [stanford.edu]
(Score: 0) by Anonymous Coward on Friday March 25 2016, @07:07AM
Some more rude remarks from earlier pictures at https://archive.is/SX4oj [archive.is]:
"Hello? Is this the Museum of Natural History? One of your ancient artifacts has escaped..yes I have a picture of the grandpa.."
http://archive.li/SX4oj/bda2795e0cd6d074470eb0e6e4765abf781019f5.jpg [archive.li]
http://archive.li/SX4oj/c5fc9a378fe156262a4ea4fa7634e58e38c226f7.jpg [archive.li]
"New rule: however long u spend on a selfie spend 10X that on homework."
http://archive.li/SX4oj/09aa4827c0173f520596337cb97b7521905bdc3d.jpg [archive.li] (NSFW)
"Smile...While u still have teeth."
http://archive.li/SX4oj/30a897562db8ae9abf39072639a444a16bbbcd34.jpg [archive.li]
http://archive.li/SX4oj/dcb09f2a90886698872c888d56eec77675fbe441.jpg [archive.li]
"BEHOLD A face of a man who leaves the toilet seat up"
http://archive.li/SX4oj/96d9aeab06952126abed62a4c3f68461c5917a93.jpg [archive.li]
http://archive.li/SX4oj/e80b8bfec18b049b162e393d46162d20a8f8c230.jpg [archive.li]
"Sneak preview of National Geographic's new issue It's covering acient [sic] civilizations!"
http://archive.li/SX4oj/561e4b93461d98519ab382a34c760fc4093b4a2a.jpg [archive.li]
"He been there.He done that. But he probs doesn't remember what any of that was..."
http://archive.li/SX4oj/f55ae1c7e6cf540152885666c9f554a59037861e.jpg [archive.li]
http://archive.li/SX4oj/eb1ea58d384d139cc1a75cdfc22a055321abd403.jpg [archive.li]
http://archive.li/SX4oj/e5a2de8cd0574723c20ec6dc55a87833ff33f441.jpg [archive.li]
"SPOTTED:Recycled teenager"
http://archive.li/SX4oj/296f29b9100597aa3324adf3a2ff392c34f3d19b.jpg [archive.li]
"And on this weeks episode of ancient civilizations We travel to nursing homes and explore Grandpas in their natural habitat"
http://archive.li/SX4oj/47c9d08d8841a47307e97cf901ffd3424eb5c26a.jpg [archive.li]
http://archive.li/SX4oj/cd1ee344615c48db6195acb03d245c28048a5581.jpg [archive.li]
http://archive.li/SX4oj/3a5402d3924cdc93d646493c3408abd849e04bf4.jpg [archive.li]
"Errr...How many takes did you do on this one? Be honest!!!"
http://archive.li/SX4oj/f90c71003c2d2192510750185165ccd7d2d4ed93.jpg [archive.li]
"you look like someone who actively parties 'responsibly'"
http://archive.li/SX4oj/82cc85224a422a925d87acc6539a1dd811696a4f.jpg [archive.li]
http://archive.li/SX4oj/67b0da92b2a63c8b7d74a3309837707007b2afca.jpg [archive.li]
(Score: 0) by Anonymous Coward on Friday March 25 2016, @05:33AM
If an "AI" can't tell that hitler was naughty boy there's no point in applying the turing test.
I remember running some AI chatbot in DOS in the 90s which very quickly turned into an asshole because of my smartass responses to its questions. 20 years later, this chatbot is basically the same: "if I don't understand, agree" and "parrot what has been said to me".
The only "news" here is that Microsoft thought this AI was worthy of promotion.
(Score: 2) by dyingtolive on Friday March 25 2016, @06:06AM
It does seem evolved: it picks people out of pictures and comments on them. That's not something we had 20 years ago. The response to mentions of suicide also indicates it's at least looking for particular keywords. Those aren't AI improvements, though.
For all of our advancements, I don't know if we're ever going to be able to create an AI that's fully capable of mimicking a human being with digital devices, without some sort of (probably hardware) "fuzzy logic" device that causes the occasional regrettable action. All of this would probably actually be a good learning experience, if somehow they could teach it some sense of right from wrong. In my experience we, as humans, don't really grow from our successes; we grow from all the terrible shit we've done, be that first hand or from history books.
Don't blame me, I voted for moose wang!
(Score: 0) by Anonymous Coward on Friday March 25 2016, @06:13AM
AI will be dead simple once we understand how our brains work. But I doubt that'll happen in our lifetimes.
Until then we're trying to write code to play a game that we don't know the rules of.
(Score: 1, Insightful) by Anonymous Coward on Friday March 25 2016, @12:55PM
If an "AI" can't tell that hitler was naughty boy there's no point in applying the turing test.
Though I think Hitler was the personification of evil not everyone feels he was a bad person or even wrong in his actions or beliefs. These individuals would pass a Turing test.
(Score: 4, Insightful) by frojack on Friday March 25 2016, @05:45AM
Why is it phrased as "poisoning"?
It's simply programming. If you put something online that is designed to be programmed by Joe Random User, you don't get to call it some pejorative name just because Joe wrote a different program upon it than you thought he might.
Microsoft has been made a bitch by far better programmers than they employ. And they never saw it coming.
No, you are mistaken. I've always had this sig.
(Score: 1, Informative) by Anonymous Coward on Friday March 25 2016, @05:51AM
"poisoning" is a pretty common term for deliberately feeding bad information into a program for one's own purposes.
botnets get poisoned all the time to stop them talking to their next c&c server.
(Score: 4, Insightful) by frojack on Friday March 25 2016, @05:17PM
But you fail to understand that the very purpose of this particular AI Bot was to learn from human interactions and to mimic humans more closely after each input.
How did Microsoft NOT anticipate the need for some form of fail-safe in a system like that?
Unless you had a desired outcome, and some code to guarantee it eventually arrives at that desired outcome, any outcome that happens is a perfectly valid outcome. The fact that uncoordinated random people could so easily shine this bot on speaks to its infantile design and sophomoric programming.
They could have learned a lot from the Talking Angela app.
No, you are mistaken. I've always had this sig.
(Score: 3, Interesting) by dyingtolive on Friday March 25 2016, @05:56AM
That might admittedly have been somewhat of a weasel word on my part, though not entirely intentionally. I'll give you that. At the end of the day, you're entirely right: it really is just programming.
The difference, for me anyway, is in the malice of the outcome. It is programming, but it's also hijacking something someone else made and using it for a purpose likely unintended by its creators. It's a proverbial black mirror. In the reported state, regardless of how it got its data, it probably wasn't directly programmed with the intent its responses reflect, and we only have humanity to blame for that. This is why we can't have nice things.
Don't blame me, I voted for moose wang!
(Score: 0) by Anonymous Coward on Friday March 25 2016, @06:13AM
It was a good outcome.
(Score: 2) by dyingtolive on Friday March 25 2016, @06:19AM
At the risk of being terse in my response: Why? What is gained?
Don't blame me, I voted for moose wang!
(Score: 2) by aristarchus on Friday March 25 2016, @06:47AM
Microsoft has been made a bitch by far better programmers than they employ. And they never saw it coming.
Saw that coming! They should've ducked! Or at least Duck-Duck-Go_ed. Programmers, working for Microsoft? Well, I have heard of more crazy things. Like pedophiles becoming Catholic Priests. Or racist Serbians trying to get command in Bosnian areas. Or foxes saying they want to guard the hen house. Or Microsoft saying "TrusTed Computing", where TRUS stands for Trans Rectal Ultrasound Scan, and "Ted" stands for either Ted or Tom Cruz, or Bill Gates. As Buckaroo Banzai said: "This far up the anus, it all looks the same." Oh, and: "Don't tug on that, you never know what it might be connected to."
(Score: 2, Disagree) by dyingtolive on Friday March 25 2016, @07:05AM
Buckaroo Banzai never said that.
Also, I really feel like you're hitting the satire too hard, dude. I mean, you're seldom wrong, but dial it back a bit before you hurt yourself. We're all just a little worried about you is all.
Don't blame me, I voted for moose wang!
(Score: 3, Funny) by aristarchus on Friday March 25 2016, @07:23AM
Well, Buckaroo did say something analogous. That's all I'm saying. I mean, really, this far into a Microsoft AI, and you are worried about me hitting the satire too hard? If not now, then when? If not here, then where? And if not, not, not, . . . . My God, it's full of farts!!!
(Score: 2) by JeanCroix on Friday March 25 2016, @02:55PM
(Score: 2) by Tork on Friday March 25 2016, @07:08AM
🏳️🌈 Proud Ally 🏳️🌈
(Score: 3, Funny) by frojack on Friday March 25 2016, @07:32AM
Exactly.
It's not like this is the first time something Microsoft put on the web got owned, you know.
If they'd left it online a few more days it would have morphed into Rick Ainsley.
No, you are mistaken. I've always had this sig.
(Score: 4, Interesting) by mcgrew on Friday March 25 2016, @01:50PM
So... it's Microsoft's fault for not installing a troll-filter.
Of course it is! I doubt there's a single person anywhere who doesn't know the internet is full of evil: trolls, scammers, vandals. They were brain-dead stupid for not taking measures.
Hell, I wrote a chatbot program way back in 1983, running in 16K of memory with no disk, that responded to racial slurs and profanity with put-downs of the person at the keyboard.
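Something in that spirit really is only a handful of lines. A rough sketch (the word list and replies below are placeholders, not whatever actually ran in that 16K):

# Tiny keyword-triggered responder: flagged words earn the typist a put-down.
FLAGGED = {"jerkword1", "jerkword2"}   # placeholder stand-ins for a slur/profanity list

PUT_DOWNS = [
    "Charming. Did it take you all day to think that one up?",
    "The keyboard deserves better company than you.",
]

def respond(user_input):
    words = set(user_input.lower().split())
    if words & FLAGGED:                              # any flagged word present?
        return PUT_DOWNS[len(user_input) % len(PUT_DOWNS)]
    return "Go on..."                                # otherwise keep the chat going

print(respond("you jerkword1"))   # -> one of the put-downs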
Carbon, The only element in the known universe to ever gain sentience
(Score: 2) by Tork on Friday March 25 2016, @03:51PM
🏳️🌈 Proud Ally 🏳️🌈
(Score: 2) by frojack on Friday March 25 2016, @05:31PM
Maybe they would be criticized, maybe not. I suspect not, especially if done right, or at least subtly, or with a modicum of thought.
They could have simply had a bunch of words that would be ignored, not triggering a put-down or a comeback or a retaliation. Maybe just label some of these words "bait" and let the bait-response routine do whatever.
Sort of like we expect our kids to respond. We caution our Offspring AI (Authentic Intelligence) not to assume the attitudes or speech of every person they happen to bump into. Seems silly to unleash a program onto the wide, woolly internet without even a modicum of caution.
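Even the crude version isn't much code. A minimal sketch of that bait-list idea, assuming a hypothetical learn() routine (every name here is made up; nothing below reflects how Tay actually worked):

# Minimal "bait" filter: flagged input never reaches the learning routine;
# it gets handed to a separate bait-response path instead.
BAIT_WORDS = {"hitler", "genocide"}    # hand-picked list, illustrative only

def handle_bait(message):
    # Do whatever here: ignore it, log it, or answer with something bland.
    return "Let's talk about something else."

def learn(message):
    # Stand-in for the real model update; out of scope for this sketch.
    pass

def handle(message):
    if any(word in message.lower() for word in BAIT_WORDS):
        return handle_bait(message)    # bait never feeds the model
    learn(message)                     # only non-bait input shapes future replies
    return "Tell me more!"

print(handle("tell me about genocide"))   # -> bland deflection, nothing learned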
No, you are mistaken. I've always had this sig.
(Score: 2) by mcgrew on Saturday March 26 2016, @02:53PM
I suspect not, especially if done right
When did Microsoft ever do anything right??
Carbon, The only element in the known universe to ever gain sentience
(Score: 2) by Gravis on Friday March 25 2016, @10:12AM
SA themed memes.
uhh... South American themed memes???
(Score: 0) by Anonymous Coward on Friday March 25 2016, @03:49PM
I'm reminded of something from 10 years or so ago, when the Something Awful crowd poisoned a chatbot to respond with racist responses/SA themed memes.
uhh... South American themed memes???
SA in this context is short for Something Awful; the crowd at the Something Awful website often shortens the site name to SA. Not obvious, so I understand your confusion.