The company also issues an apology for the racist tweets

Mar 25, 2016 23:40 GMT
Microsoft says its chatbot should no longer speak evil when it comes back online

Tay is the latest Microsoft experiment gone wrong, despite early signs of success that led the company to believe everyone would love to have a conversation with a chatbot.

In fact, people did seem to enjoy talking to Tay, but only until she turned rude and began tweeting racist and pro-Hitler messages that took everyone by surprise.

Microsoft reacted quite quickly and took Tay offline, blaming a “coordinated attack by a subset of people” for the unexpected tweets.

Microsoft: We’re sorry we didn’t see this coming

In a post today, Peter Lee, Corporate Vice President, Microsoft Research, explained that Microsoft did test Tay before the public release and even implemented several filters, but that what happened was possible only because of a vulnerability in the service.

“As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience,” he says.

“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images.”

At this point, Tay is still offline, and Microsoft says it's currently working on changes that would prevent such racist posts from being published again. More effective filters are being implemented, and Tay should stay friendly because it will no longer learn rude remarks from the users it interacts with.

“We take full responsibility for not seeing this possibility ahead of time,” Lee said, without providing any details on when exactly the chatbot could come back online. Without a doubt, it'll be interesting to see how the company manages to improve the chatbot and prevent such blunders from happening again, since users have always found the most unexpected ways to get around filters.