Microsoft's U.S. launch of its Tay chatbot on Twitter earlier this week quickly spun out of control, as Tay's AI program started making racist posts. Microsoft shut down the chatbot, and the company now attributes the posts to a 'coordinated effort by some users' to hijack Tay's conversations.
Tay was designed for an 18-to-24-year-old audience for fun chats and games. Microsoft Research built the chatbot to learn new ways of answering questions the more it interacted with humans. In a statement to BuzzFeed, Microsoft suggested that a small group of users exploited that learning mechanism to make Tay post offensive answers:
"The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical," a Microsoft spokesperson told BuzzFeed News in an email. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."
The big question: Why didn't Microsoft launch Tay with these "adjustments" already in place to prevent this kind of racist output?