Well, that was quick. Just hours after launching the AI on Twitter, Microsoft has had to shut down "Tay" after the account began publishing racist tweets. The AI was meant to learn and become more intelligent as more people engaged with it, but the kinds of conversations people steered it into caused issues. This is what happens when you give the Internet nice things.

Problems arose when Tay started to reference Hitler, among other things, and conversations quickly went south. Targeted at 18-24-year-olds, Tay let users on Twitter, Kik and GroupMe interact with it, ask questions and spark conversation for a bit of entertainment as they went about their online business.

Many are holding up Tay as a clear example of why AI is bad, but that's probably taking things too far. In all likelihood, this was simply people having too much fun and generally being citizens of the Internet (where anything goes), behavior which the AI unfortunately picked up on.

Until next time, Tay.