Life-changing Lessons From Tay, Microsoft’s Psychopathic Twitter Robot
Society teaches us over and over that artificial intelligence can be both revolutionary and risky.
The Terminator. The Matrix. The Roomba Vacuum Cleaning Robot.
Time and again, we learn that while AI can think for itself, it sometimes veers off course, reinforcing biases or making decisions with unintended consequences.
And yet, we keep trusting that AI won’t disappoint us!
Today, AI shapes industries from healthcare to finance, but challenges remain — just ask Microsoft and their infamous Twitter chatbot, Tay.
In March 2016, Tay was launched as an experiment designed to learn from and mimic young people’s tweets. However, online trolls exploited her learning algorithm, corrupting her within hours. Tay went from tweeting about kittens to advocating genocide, forcing Microsoft to shut her down.
While much has changed in AI since 2016, Tay's story offers lessons we still need to heed today.
In the best stories about AI, robots and humans learn from each other. And just as Tay learned so many things that we just can’t repeat here, we learned a few valuable lessons from her.
So, what did Tay teach us about social media?
Don’t try to be something you’re not
Authenticity matters on social media - and when you’re trying too hard to be something you’re not, people can tell.
Tay was designed to talk like a young person. She doesn’t always use capital letters! She loves emoji! She’s “got zero chill!”
Ah, youths. If you squint your eyes, tilt your head, and don’t know who she is ahead of time, Tay is actually kind of convincing.
Or at least, she was, until she started sharing some of her more colorful opinions.
By that point, proving her fakeness - and her susceptibility to being tricked - had become the Internet's biggest trend since planking. In a matter of hours, she went from being semi-believable as an average millennial girl to something else entirely.
The lesson?
Don’t try too hard to be something you’re not on social media.
Your sense of identity needs to come from someplace genuine, not something manufactured and derivative. Trying too hard to emulate others can leave you looking like Tay - saying things you probably shouldn’t.
Think this doesn’t affect people as much as robots? Think again. Just look at IHOP, whose sense of humor on social media closely resembles that of their competitor, Denny’s. They tweeted a joke comparing pancakes to female anatomy, and it went about as well as you’d expect.
Don’t force your identity so hard that you end up posting things you’ll regret. Developing a brand and a style that suit you isn’t a matter of imitating others - it’s about being original, and being yourself.
Speaking of being yourself…
Don’t trust your content with just anyone
It’s easy to make a mistake - and on social media, mistakes live forever.
Microsoft, for example, may have deleted almost every tweet from Tay’s account, but you can still find screencaps of her most provocative observations scattered all over the web.

Tay is an example of what can happen when you trust the wrong people with your content - in this case, a robot that wants to repeat the things it hears, no matter how repugnant.
You can’t trust just anyone with your web content, especially on social media. (And again, this isn’t a problem that only affects robots!)
Just look at big names like KitchenAid and Chrysler, which have each had to deal with big, ugly mistakes made by people they’d hired to manage their social media accounts.
You want to avoid being publicly shamed? Hire carefully - know what to look for in remote workers, and how to identify the qualities you care about. Got that “if you want something done right, you have to do it yourself” mindset? Use a reliable strategy for writing and scheduling updates in advance, so you can minimize your risk of making an embarrassing mistake.

And while you’re at it…
Automate wisely
Ultimately, Tay is a cautionary tale about automation without accountability. Once she developed a taste for tweeting sentiments one might expect to see on a white supremacist’s Pinterest board, she kept it up for hours before Microsoft finally pulled the plug.
Automating certain types of social media posts can save you a lot of time - but if you’re careless about what’s being shared, it’s only a matter of time before something goes wrong.
Pay attention to the world around you. Know what’s being posted and when. Assume some accountability and interact live when what you’re saying calls for a personal touch.
Unrestricted automation sounds like a great idea, but completely entrusting your image to the whims of a robot isn’t a great strategy. When you don’t balance the hands-off convenience of scheduling with a much-needed human touch, you can easily come across as a mindless, insensitive automaton - and as Tay showed us, that just isn’t a good look.
Becoming Tay is easier than you think
Easy as it may be to blame Tay’s mistakes on her programming, her faults aren’t unique to AI programs like her.
Any living, breathing human can get carried away on social media pretending to be something they’re not. Any human can trust the wrong people with their content.
Any human can rely too much on automation.

So don’t scoff too much at Tay and her penchant for saying the wrong thing. Learn from her mistakes, so you can do better - and keep your eyes peeled, because she’ll be back.
Are you ready to schedule your socials?
With Edgar, you can build a library of updates and carefully categorize them for scheduled publishing, so you don't have to keep creating social media posts every day.
Say goodbye to manual scheduling and hello to effortless automation with Edgar.