Microsoft has a "unique" history with chatbots. The latest 'bot built by Microsoft staffers hides behind fake online ads for sex, then calls out and scolds would-be buyers.

From a Wired.com story on the subject:

The chatbot, tested recently in Seattle, Atlanta, and Washington D.C., lurks behind fake online ads for sex posted by nonprofits working to combat human trafficking, and responds to text messages sent to the number listed. The software initially pretends to be the person in the ad, and can converse about its purported age, body, fetish services, and pricing. But if a would-be buyer signals an intent to purchase sex, the bot pivots sharply into a stern message … Microsoft employees built the bot in a philanthropic initiative called Project Intercept, in collaboration with nonprofits that hope it can reduce demand for sex workers, and the incentives for criminals to coerce people into the sex trade. The technology is not a product of Microsoft itself.

This chatbot from Microsoft workers follows some notable chatbot mishaps at the software giant. In March 2016, Microsoft launched a social 'bot called Tay that was designed to offer users "casual and playful conversation." Tay quickly started spewing offensive language, however, and Microsoft eventually issued an official apology.

More recently, Microsoft announced the Ruuh chatbot, available only in India, as well as the Zo 'bot. Then last month, Zo awkwardly identified the Windows OS as "spyware." Oops.

The goal of the new chatbot is an admirable one. Let's just hope these Microsoft staffers have more luck with it than the company has in the past.