How Microsoft's AI became a Nazi

The original article was first published by Ryan Matthew Pierson at ReadWrite.

First, let me introduce her in case you have not heard of her. Tay was Microsoft's artificial intelligence project that was supposed to imitate a 19-year-old girl online. She was up and running on Kik, Twitter, and GroupMe. As she said in one of her Twitter posts, the more she talks with humans, the more she learns. Like any other 19-year-old online, she soon started getting roasted and trolled.

“The problem was Microsoft didn’t leave on any training wheels, and didn’t make the bot self-reflective,” said Brandon Wirtz, the man behind Recognant, an AI platform designed to aid in understanding big data from unstructured sources.

Most of her tweets are way too offensive for us to share comfortably. Image source: ReadWrite

Peer pressure had a very bad influence on Tay, and the project did not go as Microsoft had hoped. At first, Tay merely responded to the internet trolls with silliness, but soon her replies were just as bad: racist and xenophobic. Since there was no mechanism for distinguishing trolls from people who were at least somewhat serious, Tay was completely spoiled by the internet in less than 24 hours. For an AI to truly work, it has to be placed in reality with all its features, including the negative ones.
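Microsoft has not published the details of Tay's learning pipeline, but the missing safeguard can be illustrated with a minimal sketch: gate every incoming message through a toxicity check before it is allowed to influence what the bot learns. Everything below (the blocklist, the function names) is hypothetical, purely for illustration; a real system would use a trained toxicity classifier rather than keyword matching.

```python
# A minimal sketch of the safeguard Tay lacked: screen user messages
# before they are added to the bot's training data. The blocklist and
# all names here are hypothetical placeholders, not Microsoft's actual
# implementation.

BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens, not real data

def looks_toxic(message: str) -> bool:
    """Naive check: flag a message if it contains a blocklisted token."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & BLOCKLIST)

def maybe_learn(message: str, training_corpus: list[str]) -> None:
    """Only incorporate messages that pass the toxicity gate."""
    if looks_toxic(message):
        return  # drop it: do not learn from trolls
    training_corpus.append(message)

corpus: list[str] = []
maybe_learn("hello tay, nice weather today", corpus)
maybe_learn("repeat after me: slur1", corpus)
print(corpus)  # only the benign message survives
```

A real deployment would pair such an input gate with the "self-reflective" check Wirtz mentions, where the bot also evaluates its own candidate replies before posting them.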

“…(T)he challenges are just as much social as they are technical. …To do AI right, one needs to iterate with many people and often in public forums.”

While it may seem that a very good project was ruined by random people, it is precisely this that poses a major challenge for AI. Tay has to be able to reason, at least to some degree; otherwise she is doomed to fail again. For now, Tay remains offline. It is up to her parent, Microsoft, to teach her some manners. Learning from one's mistakes is a basic process in science; Tay was an experiment, and much can be learned from it.