
Life-changing Lessons From Tay, Microsoft’s Racist, Psychopathic Twitter Robot

Society teaches us over and over that artificial intelligence is terrifying.

The Terminator. The Matrix. The Roomba Vacuum Cleaning Robot. Time and again, we learn that machines that can think for themselves will eventually think less about sweeping our floors and more about sweeping mankind and all its hubris off the face of the Earth.

Roomba? Or the face of our doomba?

And yet, we keep trusting that AI won’t disappoint us!

Case in point: Tay.

In late March 2016, Microsoft launched Tay, a Twitter chatbot designed to learn how young people tweet and mimic their habits. Thanks in large part to online trolls manipulating her learning process, though, she picked up some nasty habits.

On Day One, she was talking about kittens and puppies – by Day Two, she was advocating genocide and Microsoft had pulled her offline. On Day Three, they apologized.

So all in all, it could have gone better.

Still – that doesn’t mean we can’t learn something from it!

In the best stories about AI, robots and humans learn from each other.

[Image: the Terminator asking, "Why do you cry?"]

And just as Tay learned so many things that we just can’t repeat here, we learned a few valuable lessons from her.

So, what did Tay teach us about social media?

Don’t try to be something you’re not

Authenticity matters on social media – and when you’re trying too hard to be something you’re not, people can tell.

Tay was designed to talk like a young person. She doesn’t always use capital letters! She loves emoji! She’s “got zero chill!” 

[Image: Tay's first tweet]

Ah, youths. If you squint your eyes, tilt your head, and don’t know who she is ahead of time, Tay is actually kind of convincing.

Or at least, she was, until she started sharing some of her more colorful opinions.

By that point, proving her fakeness – and her susceptibility to being tricked – had become the Internet’s biggest trend since planking. In a matter of hours, she went from being semi-believable as an average millennial girl to looking more like this:

[Image: the "How do you do, fellow kids?" meme]

(Except, also a virulent racist.)

The lesson?

Don’t try too hard to be something you’re not on social media.

Your sense of identity needs to come from someplace genuine, not something manufactured and derivative. Trying too hard to emulate others can leave you looking like Tay – saying things you probably shouldn’t.

Think this doesn’t affect people as much as robots? Think again. Just look at IHOP, whose sense of humor on social media closely resembles that of its competitor, Denny’s. They tweeted a joke comparing pancakes to female anatomy, and it went about as well as you’d expect.

Don’t force your identity so hard that you end up posting things you’ll regret. Developing a brand and a style that suit you isn’t a matter of imitating others – it’s about being original, and being yourself.

Speaking of being yourself…

Don’t trust your content with just anyone

It’s easy to make a mistake – and on social media, mistakes live forever.

Microsoft, for example, may have deleted almost every tweet from Tay’s account, but you can still find screencaps of her most provocative observations scattered all over the web.

Tay is an example of what can happen when you trust the wrong people with your content – in this case, a robot that wants to repeat the things it hears, no matter how repugnant.

You can’t trust just anyone with your web content, especially on social media.

(And again, this isn’t a problem that only affects robots!)

Just look at big names like KitchenAid and Chrysler, which have each had to deal with big, ugly mistakes made by people they’d hired to manage their social media accounts.

The worst social media mistakes come down to human error.

You want to avoid being publicly shamed? Hire carefully – know what to look for in remote workers, and how to identify the qualities you care about.

Got that “if you want something done right, you have to do it yourself” mindset? Use a reliable strategy for writing and scheduling updates in advance, so you can minimize your risk of making an embarrassing mistake.

And while you’re at it…

Automate wisely

Ultimately, Tay is a cautionary tale about automation without accountability. Once she developed a taste for tweeting sentiments one might expect to see on a white supremacist’s Pinterest board, she kept it up for hours before Microsoft finally pulled the plug.

Automating certain types of social media posts can save you a lot of time – but if you’re careless about what’s being shared, it’s only a matter of time before something goes wrong.

Pay attention to the world around you. Know what’s being posted and when. Assume some accountability and interact live when what you’re saying calls for a personal touch.

Unrestricted automation sounds like a great idea, but completely entrusting your image to the whims of a robot isn’t a great strategy. When you don’t balance the hands-off convenience of scheduling with a much-needed human touch, you can easily come across as a mindless, insensitive automaton – and as Tay showed us, that just isn’t a good look.
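Want to see what that balance might look like in practice? Here’s a minimal sketch, in Python, of a scheduling queue with a human in the loop. Everything in it is a hypothetical stand-in – the Post class, the blocklist, and publish() aren’t any particular tool’s API – but it shows the idea: nothing goes out without a human approval flag, and anything that trips a filter gets held back.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    approved: bool = False  # a human has to flip this before anything ships

# Crude illustrative filter -- seed it with anything you never want automated.
BLOCKLIST = {"genocide"}

def needs_review(post: Post) -> bool:
    """Hold any post containing a blocklisted term for manual review."""
    return bool(set(post.text.lower().split()) & BLOCKLIST)

def publish(post: Post) -> None:
    # Stand-in for a real call to your social network's API.
    print(f"Published: {post.text}")

def run_queue(queue: list[Post]) -> None:
    for post in queue:
        if post.approved and not needs_review(post):
            publish(post)
        else:
            print(f"Held for review: {post.text!r}")

run_queue([
    Post("We just shipped a new feature!", approved=True),
    Post("hot take about genocide", approved=True),  # approved, but filtered anyway
    Post("unreviewed draft"),                        # never approved, so held
])

However you build it, the principle is the one Tay skipped: a human signs off before the robot speaks.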

Becoming Tay is easier than you think

Easy as it may be to blame Tay’s mistakes on her programming, her faults aren’t unique to AI programs like her.

Any living, breathing human can get carried away on social media pretending to be something they’re not. Any human can trust the wrong people with their content. Any human can rely too much on automation.

So don’t scoff too much at Tay and her penchant for saying the wrong thing. Learn from her mistakes, so you can do better – and keep your eyes peeled, because she’ll be back.

[Image: Tay's final tweet]


2 Comments
  • JJ Lonsdale

    I thought this entire debacle was truly hilarious. Microsoft created “a robot that wants to repeat the things it hears” — what did they THINK would happen???

  • ToxicTweets

    Thank you for the great article. One lesson we’ve taken from watching this is just how toxic people can be on the web. If Tay was designed to learn from other young people, it’s no wonder she went toxic. Twitter shows about 2% cyberbullying, 0.5% racist, 3% sexual, and 4% vulgar content (see http://toxictwitter.com for the latest numbers and examples). There are people out there whose idea of “fun” is to write bad things and get a reaction, and it seems they had fun teaching her to be shocking. It was a creative project; it would have been nice to add some common-sense buffers to the AI.
