Society teaches us over and over that artificial intelligence is terrifying.
The Terminator. The Matrix. The Roomba Vacuum Cleaning Robot. Time and again, we learn that machines that can think for themselves will eventually think less about sweeping our floors and more about sweeping mankind and all its hubris off the face of the Earth.
And yet, we keep trusting that AI won’t disappoint us!
Case in point: Tay.
At the end of March 2016, Microsoft launched Tay, a Twitter chatbot designed to learn how young people tweet and mimic their habits. Thanks in large part to online trolls manipulating her learning process, though, she picked up some nasty habits.
So all in all, it could have gone better.
Still – that doesn’t mean we can’t learn something from it!
In the best stories about AI, robots and humans learn from each other.
And just as Tay learned so many things that we just can’t repeat here, we learned a few valuable lessons from her.
So, what did Tay teach us about social media?
Authenticity matters on social media – and when you’re trying too hard to be something you’re not, people can tell.
Tay was designed to talk like a young person. She doesn’t always use capital letters! She loves emoji! She’s “got zero chill!”
Ah, youths. If you squint your eyes, tilt your head, and don’t know who she is ahead of time, Tay is actually kind of convincing.
Or at least, she was, until she started sharing some of her more colorful opinions.
By that point, proving her fakeness – and her susceptibility to being tricked – had become the Internet’s biggest trend since planking. In a matter of hours, she went from being semi-believable as an average millennial girl to looking like an obvious impostor – and a virulent racist, to boot.
Don’t try too hard to be something you’re not on social media.
Your sense of identity needs to come from someplace genuine, not something manufactured and derivative. Trying too hard to emulate others can leave you looking like Tay – saying things you probably shouldn’t.
Think this doesn’t affect people as much as robots? Think again. Just look at IHOP, whose sense of humor on social media closely resembles that of their competitor, Denny’s. They tweeted a joke comparing pancakes to female anatomy, and it went about as well as you’d expect.
Don’t force your identity so hard that you end up posting things you’ll regret. Developing a brand and a style that suit you isn’t a matter of imitating others – it’s about being original, and being yourself.
Speaking of being yourself…
It’s easy to make a mistake – and on social media, mistakes live forever.
Microsoft, for example, may have deleted almost every tweet from Tay’s account, but you can still find screencaps of her most provocative observations scattered all over the web.
Tay is an example of what can happen when you trust the wrong people with your content – in this case, a robot that wants to repeat the things it hears, no matter how repugnant.
You can’t trust just anyone with your web content, especially on social media.
(And again, this isn’t a problem that only affects robots!)
Got that “if you want something done right, you have to do it yourself” mindset? Use a reliable strategy for writing and scheduling updates in advance, so you can minimize your risk of making an embarrassing mistake.
And while you’re at it…
Ultimately, Tay is a cautionary tale about automation without accountability. Once she developed a taste for tweeting sentiments one might expect to see on a white supremacist’s Pinterest board, she kept it up for hours before Microsoft finally pulled the plug.
Automating certain types of social media posts can save you a lot of time – but if you’re careless about what’s being shared, it’s only a matter of time before something goes wrong.
Pay attention to the world around you. Know what’s being posted and when. Assume some accountability and interact live when what you’re saying calls for a personal touch.
Unrestricted automation sounds like a great idea, but completely entrusting your image to the whims of a robot isn’t a sound strategy. When you don’t balance the hands-off convenience of scheduling with a much-needed human touch, you can easily come across as a mindless, insensitive automaton – and as Tay showed us, that just isn’t a good look.
Easy as it may be to blame Tay’s mistakes on her programming, her faults aren’t unique to AI programs like her.
Any living, breathing human can get carried away on social media pretending to be something they’re not. Any human can trust the wrong people with their content. Any human can rely too much on automation.
So don’t scoff too much at Tay and her penchant for saying the wrong thing. Learn from her mistakes, so you can do better – and keep your eyes peeled, because she’ll be back.