Microsoft has now apologized for the offensive turn its Tay chatbot took within hours of being unleashed on Twitter. In a blog post, corporate vice president of Microsoft Research Peter Lee said that the company is "deeply sorry" for Tay's offensive tweets, and that it will only bring the chatbot back once the issues that caused Tay's turn in the first place have been addressed:
As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.
Lee goes on to note that Tay is actually the second AI chatbot Microsoft has released to the public, following one named XiaoIce in China. XiaoIce, Lee says, is used by around 40 million people in China, and Tay was an attempt to see how this type of AI would adapt to a different cultural environment.
According to Lee, the team behind Tay stress-tested the chatbot for exploits before releasing it to the public. However, the team apparently overlooked the specific vulnerability that allowed bad actors to coach the chatbot into repeating racist and otherwise offensive statements.