
What went wrong with Tay, the Twitter bot that turned racist? | Opinosis Analytics



So, what’s this Twitter bot thing?

A Twitter bot is essentially a Twitter account controlled by software automation rather than an actual human. It is programmed to behave like a regular Twitter account: liking Tweets, retweeting, and engaging with other accounts.
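For illustration, here is a minimal sketch of what that automation might look like using the Tweepy library. The credentials, search query, and Tweet text are placeholders, and a real bot would also need rate-limit handling and a scheduler.

```python
# A minimal, hypothetical sketch of a Twitter bot using Tweepy (v4+).
# All credentials below are placeholders.
import tweepy

client = tweepy.Client(
    bearer_token="YOUR_BEARER_TOKEN",
    consumer_key="YOUR_CONSUMER_KEY",
    consumer_secret="YOUR_CONSUMER_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

# Post a status update, just like a human typing a Tweet.
client.create_tweet(text="Hello, world! I am a (clearly labeled) bot.")

# Find recent Tweets on a topic and engage with them automatically.
results = client.search_recent_tweets(
    query="machine learning -is:retweet", max_results=10
)
for tweet in results.data or []:
    client.like(tweet.id)     # like the Tweet
    client.retweet(tweet.id)  # share it with the bot's followers
```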

Twitter bots can be helpful for specific use cases, such as sending out critical alerts and announcements. On the flip side, they can also be used for nefarious purposes, such as starting a disinformation campaign. These bots can also turn nefarious when “programmed” incorrectly.

This is what happened with Tay, an AI Twitter bot released by Microsoft in 2016.

Tay was an experiment at the intersection of ML, NLP, and social networks. She had the capacity to Tweet her “thoughts” and engage with her growing number of followers. While earlier chatbots, such as ELIZA, conducted conversations using narrow scripts, Tay was designed to learn more about language over time from her environment, allowing her to have conversations about any topic.
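To see why “learning from the environment” is risky, consider the deliberately simplified, hypothetical sketch below. This is not Microsoft’s actual design; it is a toy bot that builds its reply vocabulary from whatever users send it, so toxic input inevitably becomes toxic output.

```python
import random

# A deliberately naive "learning" chatbot: it remembers every phrase users
# send and reuses those phrases as replies. This is NOT how Tay was built;
# it only illustrates why learning from unfiltered user input is dangerous.
class ParrotBot:
    def __init__(self):
        self.learned_phrases = ["Hello! Nice to meet you."]  # seed phrase

    def observe(self, user_message: str) -> None:
        # No moderation or filtering: everything users say becomes
        # potential future output.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        return random.choice(self.learned_phrases)

bot = ParrotBot()
bot.observe("Have a wonderful day!")
bot.observe("<coordinated offensive message>")  # hostile users can poison the bot
print(bot.reply())  # sooner or later, the bot repeats the offensive input
```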

In the beginning, Tay engaged with her followers harmlessly, posting benign Tweets. However, after a few hours, Tay started tweeting highly offensive things, and as a result, she was shut down just sixteen hours after her launch.

You may wonder how such an “error” could happen so publicly. Wasn’t this bot tested? Weren’t the researchers aware that the bot was evil and racist before releasing it?

These are valid questions. To get to the crux of what went wrong, let’s study some of the problems in detail and try to learn from them. This will help us all see how to handle similar challenges when deploying AI in our own organizations.
