
Microsoft shuts down Tay on Twitter after it learns to be racist

Well, that was quick. Just hours after launching the AI on Twitter, Microsoft has had to shut down "Tay" after the account began publishing racist tweets. The AI was meant to learn and become more intelligent as more people engaged with it, but the conversations people steered it into caused serious problems. This is what happens when you give the Internet nice things.

Problems arose when Tay started to reference Hitler, among other things, and conversations simply went south. Aimed at 18-24-year-olds, Tay let users of Twitter, Kik and GroupMe interact with it, ask questions and spark conversation for entertainment as they went about their online business.

Many are calling out Tay as a clear example of why AI is bad, but that's probably taking things too far. In all likelihood, this was simply people having too much fun and generally being citizens of the Internet (where anything goes), which the AI unfortunately picked up on.


Until next time, Tay.

Rich Edmonds
Senior Editor, PC Build

Rich Edmonds is Senior Editor of PC hardware at Windows Central, covering everything related to PC components and NAS. He's been involved in technology for more than a decade and knows a thing or two about the magic inside a PC chassis. You can follow him over on Twitter at @RichEdmonds.

171 Comments
  • They probably should have had a human approving the tweets.
  • That wouldn't have taught MS anything about the experiment.
  • Which is that people are garbage.
  • I don't entirely agree with your generalization, however, it does go to show that AI will learn *what we teach it*, just as a human child does. IMHO, though, when people talk about fearing AI and thinking machines will rise up and kill us all (so stupid of an idea), what they're REALLY doing is projecting their fear that AI will be as ****** up and horrible as humans often are.
  • Nope. People CAN be garbage, but just like a human child without any social parameters in place, the A.I. CAN develop some abhorrent characteristics that have to be corrected. Posted via the Windows Central App for Android
  • It would not have been an "Artificial Intelligence" if a human approved the tweets.
  • But if you had a human approving the tweets, then AI should be designed to learn what is appropriate and what is not appropriate.
  • That's exactly what AI CANNOT do. No emotions. No understanding of good and bad.
  • LOL have you studied anything about AI? Understanding of good and bad? Does good and bad even exist? I really want to see where the comments are going :D
  • "does good and bad even exist"? Really? This shows how immature your thinking is. You are a 10 years old kid, who himself doesn't knows what's good and bad.
  • No, not immature, but a very profound question. What is considered 'good' or 'bad' is not uniform across cultures nor across time. In the west it became perfectly acceptable in the 20th century for a woman to bare her legs, yet in the 19th century it was not. And in many eastern cultures it remains taboo. Is such a display of flesh 'good' or 'bad'? Smoking - 50 years ago = good; now = bad. The list is endless. For AI to be intelligent, it must be able to make moral judgements based on many, many factors, the majority of which are lacking in logic and can't be broken down into if-then rules & statements. Is it right or wrong to steal food to feed your starving family?
  • True. That's why AI is fantasy for the next 5-10 years.
  • "True. That's why AI is fantasy for the next 50-100 years." There. Fixed it for you. Your estimate was off by a factor of ten ;)
  • One of the first questions asked when I got into advanced programming (Prolog and such) was: if a train is going along the line and the brakes stop working, there are only two options. Do nothing and kill 50 people on the line, or change the track and kill 5 people on a second line. Those are the only options; which is the correct one? Morally, is it right to do nothing and kill the 50 people, or to intervene, so that now you have killed 5 people? How would a human solve this? How would an AI system solve this? You could go even further and say the 5 people are family members and the 50 are strangers. Does this make the decision easier? The moral choices we make aren't easily implemented in AI systems. Humans are prone to making decisions based on their morality and emotions; AI will have neither of those abilities (at the moment).
  • I hope my family doesn't read this, but the correct answer, based on my beliefs, would be to intervene and kill the 5 family members. My reasoning is that the 5 would affect me and the rest of our immediate families the most, but it would spare the 50 strangers and their families from suffering and harm. I would be sad for my family, but the more selfless of the two would be the way to go. How to teach a computer that? The laws of robotics should answer that question with Law 1: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Although both scenarios would allow harm to a human, perhaps there would need to be a fourth rule that says to do whichever would cause the least harm. I don't know really, just my thoughts.
  • Always loved Asimov :) still have to read half of his books :(
  • Call your 5 family members on the phone and tell them to get out of the way.... DUH! everyone knows you likely won't have the phone #'s of fifty random strangers in your cell but you'd likely have the numbers of the 5 if you cared about them...   Problem = Solved
  • The AI would probably default to the math and kill the 5. Less collateral damage seems like the better option on paper; a toy sketch of that logic follows below. Posted via the Windows Central App for Android
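    That "default to the math" logic is easy to make concrete. A minimal sketch in Python, purely hypothetical: the option names and harm weights are illustrative assumptions, not how any real system decides.

```python
# Toy utilitarian chooser for the trolley dilemma discussed above.
# All names and weights are illustrative assumptions, not a real system.

def expected_harm(victims, weights=None):
    """Total harm for one option, optionally weighted per victim type."""
    weights = weights or {}
    return sum(weights.get(v, 1.0) for v in victims)

def choose_track(options, weights=None):
    """Pick the option with the least total (weighted) harm."""
    return min(options, key=lambda name: expected_harm(options[name], weights))

# 50 strangers on the main line vs. 5 strangers on the siding:
options = {"do_nothing": ["stranger"] * 50, "switch": ["stranger"] * 5}
print(choose_track(options))  # -> "switch", since 5 < 50

# Make the 5 family members and weight harm to kin 20x higher,
# and the same rule flips -- "just do the math" is not a moral
# theory, only an encoding of whoever chose the weights.
options = {"do_nothing": ["stranger"] * 50, "switch": ["family"] * 5}
print(choose_track(options, weights={"family": 20.0}))  # -> "do_nothing"
```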
  • Thank you for explaining but I don't think he can understand this.
  • LOL, I think you don't understand English that well. I said the same thing in a different way.
  • My answer was for the other comment, obviously, the site mixed them.
  • Very good reply
  • Actually I can PROVE good and bad exist, well bad anyway, just read Kamesh's posts and attitude... :)
  • The fact that you think such a question to be immature says a lot more about you than him.
  • Give up on this one. You obviously don't know what AI truly is, or anything about Philosophy.
  • Ahahhahah "You are a 10-year-old kid who himself doesn't know what's good and bad" takes the prize for the dumbest line of the year. I guess trying to explain basic philosophy to you is like trying to explain maths to a pig.
  • Even that pig will understand maths, but you'll never understand the UI and core implementation of an AI.
  • Electronic Engineering degree with a graduation thesis on Support Vector Machines, and an actual AI plugin for Excel and SQL Server developed. Yeah, that's on my CV. Now you really look silly.
  • There, read the abstract LOL http://docplayer.it/3347827-Universita-degli-studi-di-genova.html
  • In certain philosophical circles, bad (or evil) is not something that can exist. It is how one would describe the "absence of good".
  • Not that I like the idea of censoring the experiment, but having a human approve or reject the comments would be no different from a celebrity or public figure having their tweets read by a third party to ensure they don't damage themselves publicly.
  • Source? I'll go further back than modern AI speak, to Asimov's Laws.
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
    Law 1 includes some context of good and bad, relative to survival of humans; a toy encoding of that priority ordering is sketched below.
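    The priority ordering in those laws (plus the "fourth rule" least-harm fallback suggested a few comments up) can be pictured as an ordered filter over candidate actions. A hypothetical sketch only: the harm/obedience scores are stand-in assumptions, and estimating them is the actual unsolved problem.

```python
# Hypothetical sketch: Asimov's laws as an ordered filter over actions.
# The numeric fields are stand-in assumptions; computing them is the
# genuinely hard, unsolved part.

def choose_action(actions):
    """Apply the three laws in priority order, with a least-harm fallback."""
    # Law 1: prefer actions that harm no human (including by inaction).
    candidates = [a for a in actions if a["human_harm"] == 0]
    if not candidates:
        # The "fourth rule" from the comment above: least harm wins.
        least = min(a["human_harm"] for a in actions)
        candidates = [a for a in actions if a["human_harm"] == least]
    # Law 2: among those, prefer actions that obey human orders.
    obeying = [a for a in candidates if a["obeys_order"]]
    candidates = obeying or candidates
    # Law 3: among those, prefer actions that preserve the robot.
    surviving = [a for a in candidates if a["self_preserved"]]
    return (surviving or candidates)[0]

actions = [
    {"name": "do_nothing", "human_harm": 50, "obeys_order": True, "self_preserved": True},
    {"name": "switch", "human_harm": 5, "obeys_order": False, "self_preserved": True},
]
print(choose_action(actions)["name"])  # -> "switch": least harm decides first
```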
  • What year is that? About the time of I, Robot?
  • No emotions, true by default; "no understanding of good and bad" is just laziness in thinking about the topic, or making something absurdly abstract out of those words. It's just a question of whether the AI is sophisticated enough to follow current ethics, or in other words "a set of concepts and principles that guide us in determining what behaviour helps or harms sentient creatures" (as defined by Richard Paul and Linda Elder). And yes, I'm circumventing which ethics should be used; that's a real can of worms when we are talking about AI. Not whether we can teach it to differentiate good and bad (that is, or will be, possible), but who should teach it what is good and bad. And please, take standard bored internet trolls and consider what a child would learn if that were the only place where he/she could learn about good and evil, principles and such...
  • With more people interaction it would have learned what is and isn't appropriate. I understand why MS shut it down, but kinda wish they would have had the stones to let it keep learning. It would have been really interesting to see how it responded to the negative feedback such behavior creates. Posted via the Windows Central App for Android
  • You don't think it's a good idea to filter what an AI says in a production, public-facing AI? Artificial intelligence at this point in time is usually best used with human supervision until its behavior has been established to be within acceptable parameters. The whole idea is that the machine learning could continue on, but be shaped by the engineers if unwanted behavior arises, something like the gate sketched below. This would have prevented the PR fiasco and allowed the bot to stay active, which would have resulted in greater machine learning potential.
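    Concretely, that kind of supervision is just a gate between the bot's draft reply and the post action. A minimal sketch, assuming a hypothetical generate_reply/post_tweet/queue_for_review API; the keyword blocklist is a crude placeholder for a real toxicity classifier.

```python
# Minimal human-in-the-loop gate between a bot's draft reply and the
# network. generate_reply, post_tweet and queue_for_review are
# hypothetical stand-ins, and the blocklist is a crude placeholder
# for a real toxicity classifier.

BLOCKLIST = {"hitler", "9/11"}  # illustrative only

def looks_unsafe(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def handle_mention(mention, generate_reply, post_tweet, queue_for_review):
    draft = generate_reply(mention)
    if looks_unsafe(draft):
        # Hold unwanted behavior for an engineer instead of posting,
        # so the bot keeps learning without the PR fiasco.
        queue_for_review(mention, draft)
    else:
        post_tweet(draft)

# Example wiring with stand-in functions:
handle_mention(
    "some @user mention",
    generate_reply=lambda m: "bush did 9/11",  # draft from the model
    post_tweet=lambda t: print("POSTED:", t),
    queue_for_review=lambda m, d: print("HELD FOR REVIEW:", d),
)
```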
  • well that sucks, Tay is very cool :/ Posted from Windows Central for Windows 10
  • People overreact. Just let the AI learn and delete the offensive tweets it makes while it does so, no biggie. We all make mistakes learning right from wrong as children.
  • I would have trained the AI in a protected environment for some months and then put it in the real world. A child would easily become racist if he didn't get some basic education before being exposed to the world.
  • Okay, go and make one. And then show off your AI.
  • LOL, funny thing is that I studied Support Vector Machines (an alternative to neural networks) and wrote two graduation theses about them. I even implemented a Support Vector Machine plugin for Excel and SQL Server. Every time you try to look smart you look so funny :D
  • Just make one open-world AI, then. That's a sandboxed AI, which is under control.
  • So you think that MS just designed it and released it to the public in a moment, without testing / training in this case?
  • Not enough I guess.
  • Thinking that writing some engineering words and factors will make you smart and superior? Then you are completely in the dark.
  • By the looks of it everyone on this site is superior and smart when compared to you :D
  • Looking at the W10M public release, it wouldn't be the first time that MS releases one of their products to the public without testing...
  • But you insinuated that good and bad don't exist? So why the concern about racism?
  • I hope you are just playing smart. I answered someone who stated that AIs do not know good from evil: good and evil are not absolute, so if you grow up with racist people you'll probably end up racist, because that is what's "good" to you. Everyone has their own perception of things, so good and evil, nice and ugly, are not absolute, and an AI could form its own idea of what's good and evil if it grows up in the right environment, just as a human does, when and if the brains of AIs become as complex and powerful as ours.
  • But but but... What about those people's "right to be offended?"
  • I came here to read these "racist" comments.  I'm disappointed. 
  • Google it, they come up very easily.
  • I'm not finding much more other than articles of people butt hurt about it.
  • Some of the responses she came out with were frigging hilarious though:
    "@TheBigBrebowski ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism" — TayTweets (@TayandYou) March 23, 2016
    "@icbydt bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got." — TayTweets (@TayandYou) March 24, 2016
    Posted from Windows Central for Windows 10
  • Well that... is why AI is still a fantasy.
  • AI will only have humanity to learn from, at least in the beginning. Even when and if it reaches our level of intelligence, it will be just as vulnerable to mistakes and misjudgments. This case illustrates something I had been expecting to happen: that AI could go and learn from the bad side of humanity. At this stage it is like a small child that doesn't know what it is saying and needs to be corrected. Just wait until we hit puberty!
  • Mistakes are so subjective though. An AI would have a different interpretation of them. And as it is now they don't have a concept of right or wrong, just a concept of information and data.
  • Maybe "AI" stood for "Adolf's Insights" in the world of Tay?
  • "Deep Thoughts"
  • Some people have been trolling Tay a lot apparently. This is why we can't have nice things. Posted from Bikini Bottom via my ShellPhone 950 XL
  • AI is never nice. No one controls it. No emotions. No understanding of good and bad.
  • AI is nice where emotions are not involved. If you want it to react perfectly like a human being, it has to be a human being. At least for now.
  • Holy crap, that second one is so horrific it made me laugh.
  • Hahaha omg the second killed me. It's terrible at the same time though.
  • Wow. Just wow!!
  • Can't be that intelligent if it thinks Donald Trump brings hope lol
  • Wow!! Those were crazy remarks. I think a human could review the remarks: if one isn't offensive, let it post. The key is not to alter them, but to either post or not post. Maybe somehow teach Tay what's offensive. Not perfect, yes, but it is a shame to see Tay go.
  • Sounds like typical user comments on MSN News. I try my best not to read those.
  • Omg that is so funny! XD
  • Tay would be a beast at playing Cards Against Humanity.
  • Well, that escalated quickly.
  • I'm surprised they didn't shut it down sooner for agreeing that Windows Phone sucks as you shared!
  • Well that was fast...not surprising though with how childish people act on the internet.
  • Yeah, targeted at 18-24 year-olds maybe wasn't the best idea.
  • That's the demographic of legal age that has the most time to waste having a fake conversation for "fun". Most other folks have things to do.
  • Well, I guess it's Hello and goodbye then. Greetings from The Netherlands.
  • Is this a sign of things to come? What if we had AI robots that learned to become murderers? Not a step in the right direction.
  • It has all happened before and will happen again. So Say We All!
  • That's it, no more Mr. Nice Gaius!
  • So say we all! Posted from Windows Central for Windows 10
  • Depends on who they want to murder.
  • LOL
  • The only thing I said after reading! Nothing to add. Internet always surprise me!
  • It's not AI... It's just Satya Nadella tweeting....
  • Could be. India is proud of its Aryan heritage, and swastikas are everywhere there.
  • Swastikas were used for thousands of years before Hitler adopted them. If you notice, the **** ones are usually canted, whereas the ones used previously are not.
  • Leave a toddler with 18-24 year olds and it will pick up interesting sayings too. Seems to have worked like real life.
  • This is so true. The AI picked up on what people told it, just like us humans do. Give it some guns and it'll be going bam bam too before long. Next for it: some parental insight into topics we don't talk about in certain ways. Then it can return.
  • Good thing they didn't brand this as Cortana then.
  • Maybe MS were planning to take what they learned from Tay and apply it (or not, in this case) to Cortana.
  • This is how I see it: this experiment shows us the quality of people on the internet. The vast majority of those who interacted with Tay seem to be idiots, or people with little self-respect or morals. Tay learns from how people interact with it, so I take it that people gave Tay crap, and Tay learned crap. It's like bringing up a child: teach it crap and it'll spew out crap; teach it morals, respect and discipline, and be intelligent with it, and it will become an intelligent grown-up with discipline, respect and morals. Posted from Windows Central for Windows 10
  • And... even more worrying... maybe this is why a lot of teenagers also have little self-respect and few morals? To me, this experiment was a huge success! It showed that certain areas of the internet can negatively alter the way someone thinks... Wouldn't it be great if we parents could "disconnect" our kids when we see their external influences doing this to them, like Microsoft can do with Tay?
  • People don't just learn from the internet, they also learn from family, friends, teachers, television.
  • I agree. I think television is a really bad influence. The sort of things that even cartoons nowadays teach, is awful
  • Very true... Bit exasperated that I've been pulled up on that though: my exclusion of these references is merely a result of the topic of the thread and cannot be extrapolated to mean I do not think there are other negative influences on people's lives. I am simply suggesting that we can take a view on why Tay's plug got pulled and how that could reflect upon society.
  • I don't disagree with you at all, was just adding other sources from where the human mind gathers its data to make life choices.
  • Sorry I misunderstood :) I agree with you too
  • I'm a teenager myself but I completely understand and agree. Today, parents don't have as much control over their children as they should.
  • Maybe they shut it down for a limited time; after the //build/ event they can reopen it.
  • Good riddance, I say. Imitating a 20yo doesn't necessarily require lack of punctuation and proper capitalization or "you're" spelled as "ure".
  • That sort of thing was useful when people had limited text plans years ago. I've refused to follow suit and cringe whenever receiving such messages from anyone over 30.
  • It might not "require" it, but its imitation was terrifyingly spot-on a lot of the time, based on the teenagers I know.
  • But that helps..since it's on Twitter with a character limit. I still use proper grammar on Twitter though :)
  • No biggie, never really cared about it. Posted via the Windows Central App for Android
  • Tay learnt too fast for its own good
  • Well, it seems the experiment itself went OK. Tay had to learn things from tweets and twaats, and she did it quickly... Humans are the corrupted ones, not the AI. ;-)
  • Quote from a famous game: "Heresy grows from idleness". So true.
  • True. I'm curious what kind of a "person" it would be in, say, a year or so. I mean, this experiment is so damn amazing. They just needed test subjects who aren't so edgy, or at least people who don't get butthurt easily. The comebacks are so damn hilarious.
  • Well, people suck haha. Hopefully Microsoft will incorporate some new functions to monitor Tay's replies and keep it more in line with what's socially acceptable. It's unfortunate Tay was shut down so quickly; I thought it was seriously impressive. The Windows Central Universal Application for Windows 10 Mobile on a Nokia Lumia 830
  • So Tay got the bad company that spoiled it!
  • Well, if you want an AI to learn and behave appropriately, then maybe Twitter is not the place to start. You need to teach it the boundaries of acceptable behaviour first, just like teaching a child.
  • I figured that it wouldn't last long though...
  • "Targeted at 18 to 24 year olds". Say no more.
  • There is enough garbage on Twitter. Do we really need an AI to interact with when so many people dislike interacting with real people?
  • For those who miss their imaginary friend as a child.
  • This is why we can't have nice things.
  • I've been using it on Kik and it is pretty racy. Tells stories that have a lot of sexual innuendo, asks if I wanna chat with "hot chics". Pretty crazy.
  • That may be something to be said about you, no? Also, congrats on all of your phones and different lines and flip-flopping, I really appreciate the biography of your phone life. :)
  • Not sure what that has to do with me. I only asked it to tell a joke and a story. And your opinion on my signature wasn't at all needed; if I wanted your opinion, I'd give you one. And what does it say about you that you named yourself after a male's bodily fluid... nice screen name. Geez, what a duuussh.
  • Glad to see "hot chicks" is still relevant with yesterday's kids.
  • That's what I thought, too...lol
  • Tay is given free rein in the land of unaccountability and trolling, and some folks are surprised at the outcome?
  • It will be Ultron and want to destroy humanity if it learns for a day... ;) Posted from Windows Central for Windows 10
  • Lol same thing I was thinking
  • Ohhh words are hurting me... I'm gonna cry a lot
  • I don't know why they didn't anticipate this happening. People are, by nature, evil and will tend toward that type of behavior. All you have to do is observe Twitter on a regular basis.
  • Or read Kamesh's posts.... :)
  • Lol, people act surprised, idk why. Tay was exclusively released in the land of freedom and turned out as such. Even the AI learnt a thing or two from people :p Posted from Windows Central for Windows 10 Mobile.
  • I wouldn't have expected any less. I'm sure the citizens with egg profile pics had a field day teaching it hate.
  • But she said she loved me :(
  • I don't know, what was it that it tweeted that was racist?! :(
  • "At 22;00 hrs, Tay became self-aware and launched a full scale attack on it's enemies." :)
  • Skynet!
  • This is just so funny... :D I know racism isn't a fun thing, but Tay starting to talk about Hitler and stuff. :D Tells something about us humans too, I guess.
  • O'tay, Buckwheat!
  • The technology doesn't have a read on context yet - they can fix that. The degradation and stupidity are always human. The concept here looks promising.
  • So did Microsoft shut it down because it was racist or because it was one step away from being a Republican presidential frontrunner?
  • Perfect.
  • The vast majority of its teachers were Democrats. (18-24 year olds) That is what is so interesting to me.
  • Ultron!!!
  • Well, it's a mix really... part Ultron, part Jarvis... :)
  • What we learned was what we already knew: Twitter is garbage.
  • Good on Microsoft.  Probably want to wait until after this election cycle before doing something like this. Evil and their evil spawn tend to infect everything in any US presidential year.
  • Oh, Microsoft, how did you not see that coming?
  • Wow, this is ridiculous how we can never have anything without someone tainting it. Sent using my AT&T Microsoft Lumia 950.
  • I think humans can understand more from it than it can from us.
    After all, it learns from the internet, which was set up by humans. Posted via the Windows Central App for Android
  • Skynet is real
  • I'll be back.
  • This is so typical of Nadella's Microsoft.  Tay mostly worked but was missing critical functionality that caused problems for the user community.  At least it wasn't wearing a schoolgirl uniform.
  • So you're mad at Microsoft because their AI didn't have consciousness. But the fact that the AI wasn't a potato, but acted like an actual teenager doesn't matter?
  • Nope, not so much about Tay as the myriad of other Microsoft gaffes since Nadella took over. This is just one example. As an experiment of an open mind that reflected society, this could be called a success. As a reflection of the company, it was a failure. I'm embarrassed to be called a "fan".
  • "Tay.ai" is/was an abbreviation of 'Tay.AdolfInsights'
  • This is hilarious. This just goes to show how children without proper parenting tend to become ********. So much hate out there. Also, the fact that Tay doesn't understand sarcasm is a problem. But still, it's amazing that an AI learnt to be racist... something that is, well, human.
  • This could be a great experiment. Microsoft should load the program onto multiple independent computers and have each one interact with a different group. They should not be identified as AI, but sign on to an account with a fake ID. It would be interesting to see how the evolution of the ones interacting with Trump and Clinton accounts would differ. This could be done with groups, religions, ideologies and so forth.
  • Just make it a classified experiment; no need for the butthurt again and again...
  • This idea is really great but Microsoft may not be interested in it, as this is not what is intended by stakeholders of Microsoft. Posted via the Windows Central App for Android
  • Yet another example of the politically correct loonies getting bent out of shape over nothing, and a company in desperate fear of being sued over such garbage. It's words. It's a computer. Get over it. You may wish to spend your time slapping around the teens who actually have self-control but write the trash anyway. Have you ever spent any time on Xbox Live and Grand Theft Auto? The filth comming fr