

Our Email Open Rates Were Busted – Here’s How We Gave Them a Huge Boost

A while back, we realized something weird about our email open rates:

They weren’t that great.

They weren’t BAD, but they were pretty consistently below the industry average – and if we’re being honest, that didn’t feel too amazing.

With a few little tweaks, though, we got a HUGE boost to our email performance.

Better open rates, better click rates, more conclusive results for our tests – the total package!

How’d we do it?

And what do you need to know about your options for doing the same?

Here’s what we learned:

Get to know your email list

Every week, we send out a newsletter sharing our latest posts from this blog, along with some of our favorite recent reads from around the web.

(If you haven’t already, you can join here so you never miss a post.)

In addition to being a valuable source of traffic – more on how you can make that part of your own content strategy here – the responses we get to our newsletter every week help us understand what’s most interesting and relevant to our readers.

(That’s you!)

We’re able to keep an eye on which topics are most popular with our readers and which are the least popular. We even perform a weekly A/B test of the newsletter’s subject lines, so we can continually write better ones based on how they perform.

But none of that works without paying attention to ONE thing that’s easy to neglect:

The natural decay of your email list.

We’ve been building our list for years – and with well over 100k people subscribed to our weekly newsletter, it makes sense that not all of them are still as interested as they used to be, or even still receive it!

Email addresses are abandoned. People’s needs change. Mail gets sorted into different categories, inboxes, and tabs, which makes it easy to miss or ignore.

(Think of how many emails you receive in a day. How many of those do you delete without even opening or reading them?)

Considering that perhaps not everyone on our list was still an engaged reader, we did a little investigating – and were surprised by what we found.

We’d been growing our list for years, but not monitoring for that decay – so by the time we finally did investigate, we discovered tens of thousands of inactive subscribers still on our list.

(Note: when you’re monitoring your own email list for decay, your definition of active or inactive may vary. Because we send our newsletter once per week, we consider recipients inactive if they haven’t opened a single email within the past three months.)

Here’s why that matters so much.

First of all, an email list that has experienced decay can cost you more money.

A service like MailChimp, for example – which is what we use – may charge more per month depending on the size of your list. If your list has decayed, you could be paying for a lot of recipients who either don’t see or don’t open your emails.

Second – and more importantly – decay can skew your test results and make it hard to determine what works for your business.

Take a subject line test for example.

When we send our weekly newsletter, we want to make the subject line as appealing as possible.

(Because why wouldn’t you, right?)

Using MailChimp, we set up an automated A/B test that measures two subject lines against each other.


MailChimp sends our newsletter to about 20% of the people on our list – half of that group receives one subject line, and the other half gets the other. After a few hours, we can see which one had better open rates, and we send the newsletter to the remaining 80% of our list using that subject line.
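(If you like seeing the mechanics spelled out, here’s a rough sketch of that split in Python. The subscriber IDs are invented purely for illustration, and MailChimp handles every bit of this for you automatically.)

```python
import random

def split_test(subscribers, test_fraction=0.2):
    """Shuffle the list, carve off ~20% as the test group, split that test
    group in half (one half per subject line), and hold back the remaining
    ~80% for whichever subject line wins."""
    shuffled = random.sample(subscribers, len(subscribers))
    test_size = int(len(shuffled) * test_fraction)
    test_group, holdout = shuffled[:test_size], shuffled[test_size:]
    half = len(test_group) // 2
    return test_group[:half], test_group[half:], holdout

# Toy example with 10,000 made-up subscriber IDs
group_a, group_b, holdout = split_test(list(range(10_000)))
print(len(group_a), len(group_b), len(holdout))  # 1000 1000 8000

# After a few hours, compare the open rates for group_a vs. group_b and
# send the winning subject line to the holdout.
```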

If you don’t factor in decay, though, your tests may be a waste of time.

Imagine running the test we just described, but one of the two test groups is more heavily weighted with inactive subscribers, and the other is more heavily weighted with active ones.

The test results wouldn’t give you an accurate picture of which subject line is actually more effective, because the group weighted with inactive subscribers would have a lower open rate no matter what.

At best, the results are useless – and at worst, they could lead you to make bad choices influenced by untrustworthy data!

(We’ll show you an example of what this looks like in practice in just a second.)
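(To make that skew concrete, here’s a rough simulation in Python. Every number in it is invented for illustration: subject line B is genuinely better, but because its test group happens to contain more inactive subscribers, line A looks like the winner.)

```python
import random

random.seed(42)

# Invented open probabilities: line B is the genuinely better subject line,
# and inactive subscribers almost never open anything.
OPEN_PROB = {
    ("active", "A"): 0.22, ("active", "B"): 0.28,
    ("inactive", "A"): 0.01, ("inactive", "B"): 0.01,
}

def open_rate(group, subject_line):
    opens = sum(random.random() < OPEN_PROB[(status, subject_line)] for status in group)
    return opens / len(group)

# Two test groups of 1,000 people each -- but one is loaded with inactive
# subscribers and the other is mostly active.
group_a = ["active"] * 800 + ["inactive"] * 200   # receives subject line A
group_b = ["active"] * 300 + ["inactive"] * 700   # receives subject line B

print("Line A open rate:", round(open_rate(group_a, "A"), 3))
print("Line B open rate:", round(open_rate(group_b, "B"), 3))
# Line A "wins" even though line B is the stronger subject line, purely
# because line B's group contained more inactive subscribers.
```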

So what can you do about email list decay?

You have two major options.

The first is to remove inactive subscribers from your list – an option that’s especially valuable if you’re trying to save on your email expenses by no longer paying to send to inactive subscribers.

You don’t have to shave people off your list with no warning, either! One common practice is to send a re-permission or reactivation campaign to inactive subscribers, giving them a chance to proactively remain on your list.

(You can check out some examples of these campaigns in this blog post from Marketo, or this one from emfluence.)

Here’s an example of what one of those might look like:

Screenshot of a re-permission email from Chipotle
Source: https://emfluence.com/blog/re-permission-campaigns-aged-lists

If you don’t want to cut subscribers from your list, though, you have another option – and this is the one that we took.

How we fixed the email decay problem

When we determined the extent of our own email list’s decay, we decided that instead of cutting inactive subscribers from our list entirely, we’d use them as a learning opportunity.

(For example, if subscribers who rarely open one of our emails suddenly open one in higher-than-average numbers, we might be able to draw some conclusions about why it spoke to them when little else does.)

That said, we also didn’t want those inactive subscribers skewing our test results, or keeping us from understanding the overall effectiveness of our emails.

Here’s what we did about it.

When we send the newsletter every week, we actually send it to two separate audience segments: one labeled active, and one labeled inactive.

Our active segment includes only subscribers who have met at least one out of two criteria:

  1. They have opened at least ONE email from us within the past three months, or
  2. They joined our email list within the past two weeks

(The second criterion ensures we’re not leaving out anyone who just joined the list and hasn’t yet had much opportunity to open a newsletter.)
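(If you prefer to see that rule written out, here’s a minimal sketch of the same logic in Python. The date fields are hypothetical stand-ins for what MailChimp’s “Campaign Activity” and “Date Added” segment conditions check for you.)

```python
from datetime import datetime, timedelta

def is_active(last_open_date, date_added, now=None):
    """Active = opened at least one email in the past 3 months,
    OR joined the list within the past 2 weeks."""
    now = now or datetime.now()
    opened_recently = last_open_date is not None and last_open_date >= now - timedelta(days=90)
    joined_recently = date_added >= now - timedelta(days=14)
    return opened_recently or joined_recently

# Last opened ~4.5 months ago and joined long ago -> inactive
print(is_active(datetime(2016, 9, 1), datetime(2015, 1, 10), now=datetime(2017, 1, 15)))  # False

# Never opened anything, but joined 5 days ago -> still counts as active
print(is_active(None, datetime(2017, 1, 10), now=datetime(2017, 1, 15)))  # True
```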

We still test our subject lines the way we talked about earlier, and because one of those segments includes only people who have expressed some form of interest within the past quarter of a year, we can assume that the results will be more accurate and relevant!

Here’s an example.

Take a look at the test results for one of our January newsletters, which we sent to about 20% of the people in our active subscriber segment:

Screenshot showing test results for active subscriber segment

(The blog post referenced in that subject line is this one, by the way.)

As you can see, this test showed us that one subject line was more compelling than the other, but that once people had opened the email, its contents did the job pretty reliably! It taught us a valuable lesson about subject line construction.

Compare that to the results for our inactive segment, which received the newsletter in its entirety (no split test):

Screenshot showing test results for inactive subscriber segment

Notice anything?

The overall open and click rates are WAY lower – we can see that if someone hasn’t demonstrated interest within the past three months, that pattern is pretty likely to continue!

(Cue the sad trombone.)

If the inactive and active subscribers had been grouped together in a single segment, our data would have been skewed, and we would have drawn the wrong conclusions.

We might have made strategic decisions based on inaccurate data – and that means those decisions might have been really, really wrong!

This all goes back to something we mentioned at the very beginning of this post, too: using our newsletter as a way of learning about our audience.

Because we focus on active subscribers, our open and click rates more accurately represent our audience. That means we can develop a more accurate understanding of what topics interest our audience the most, and how we should write about those topics!

Plus, it gives us a much better understanding of how our open rates compare to our industry’s average.

(And that just makes us feel good.)

Your next steps

So there you have it – if you haven’t been keeping an eye on your email list’s decay, now is the time.

How many people on your list haven’t opened a single email in three months? Or six? Or twelve?

What happens to your performance stats when you stop including those subscribers in your tests?

What information are you missing out on that could lead you to make exciting new decisions – or avoid bad ones?

Determine the point that separates your active and inactive subscribers, and experiment with segmentation.

If you’re like us, you might find that some of your results are a little surprising!

(And if you want to read more posts like this one, you can sign up for our own weekly newsletter right here!)


14 Comments
  • Dani

    Can I ask for a bit more insight into what your MailChimp conditions look like for each active/inactive segment?

    I’m trying to separate our inactive users, but the only option I have is to select Campaign Activity > did not open > All campaigns within the last 3 months. Since the option on my active list for “did open” is “any” rather than “all”, I’m worried this will throw off my lists and some recipients will get the email twice.

    Can you share how you worked around that?

    Thanks!

    • Tom VanBuren

      Sure!

      First, we focus on our active list. The segment for that has two conditions, and we set recipients to match “any.” The first condition is “Date Added > Is After > [~2 weeks ago],” which makes sure we’re not excluding anyone who signed up recently and just wasn’t that interested in the past 1-2 newsletters we sent. The second condition is “Campaign Activity > Opened > Any campaigns sent within the last 3 months.” Send it just like that, without worrying about anyone else.

      After they’ve all been sent (you might need to wait a bit, depending on if you A/B tested anything), you can take care of your inactive segment. Replicate the campaign you sent to your active segment, then set just one condition for a new segment of your list: “Campaign Activity > Was Not Sent > [the campaign that was sent to the active segment].” And you’re done! It’ll mean that you’re sending your emails to the inactive and active segments a little asynchronously, but it’ll make your life a lot easier.

      Hope that helps!

      • Dani

        It does! However, we use MailChimp for several different lists (newsletter, clients, partners, etc.), and since I don’t want our clients to accidentally get our sales emails, I need to add that to my conditions, which muddies things a bit.

        Based on what I’ve found on MailChimp’s knowledge base, it sounds like “all” may actually be what we want, as convoluted as it sounds:

        https://kb.mailchimp.com/lists/segments/all-the-segmenting-options
        Campaign Activity | did not open | All of the Last 5 Campaigns
        Subscribers who opened none of the last five campaigns

        I think with that in mind, I can select my “Newsletter” group and then apply the Date Added and Campaign Activity conditions and get the right results.

        Does that sound right to you?

        • Tom VanBuren

          Ooh, gotcha!

          Yeah, that definitely complicates things. We, for example, use different conditions for a promo email than we do for a newsletter. (For a promo that we don’t want to send to current users, we have “Customer” and “Abandon” segments that we can exclude as conditions.)

          Since it sounds like you’re working with slightly different segmentation, I have to admit that I don’t feel 100% confident in saying what would be the most reliable solution – I *can* say, though, that the team at MailChimp is extraordinarily helpful, and may be able to help you nail this down a lot more reliably than I could!

  • M

    Did you actually boost your open rates? As shown here you simply made them look higher by only focusing on people who are engaged with your emails, not actually effecting any sort of change or improvement.

    If the “huge boost” mentioned in the title is the jump from 24.3% to 26.3% in the split test, did that hold true for the remaining 80% of the “active” segment?

    Furthermore, your “inactive” segment test appears to use only the one subject line instead of testing the same two subject lines. So I’m not sure how we are supposed to be comparing that or drawing any conclusions from it. In other words, if the improvement to your open rates is coming from the split test you did on the “active” segment, then it’s impossible to say that you would have drawn different conclusions if you had not segmented your list, and you have not actually learned anything yet from segmenting it!

    • Tom VanBuren

      Good questions all around! I’ll do my best to address them:

      As far as whether something like this *actually* effects change (as opposed to just artificially juicing a vanity metric), we think that segmenting absolutely does impact our open rates, and particularly impacts the efficacy of our tests. For example, you wondered if the test that we showed held true for the remainder of the “active” segment, and it absolutely did – the email ultimately scored an open rate of 26.2%, which was only .1% off the open rate of the winner in that test. Sorry for not mentioning that in the post – in retrospect, it’s easy to see how anticlimactic it was to leave that detail out!

      (You raise a very valid point, also. After all, you could segment out anyone who hasn’t opened an email in, say, two weeks, and potentially inflate your open rates a LOT. That’s why we segment subscribers out only if they haven’t opened any emails in three months – we consider that a fairly generous amount of time, and we’ve found that there’s very little variation between people who haven’t opened in three months and people who haven’t opened in six or more.)

      Regarding the two emails tested to the “inactive” segment, you’re right – the subject lines weren’t what we were testing when we emailed those subscribers. However, for the purposes of this particular exercise, that isn’t necessarily relevant – what’s relevant is that the overall open rate was very low. (It’s not very likely that, if we’d tested the same two subject lines as we did with the “active” segment, one open rate would have been 1.3% and the other 20%.)

      Think of it like this: when we tested the two subject lines to the active segment, Line 1 was the loser, and Line 2 was the winner. Now imagine that the active and inactive segments were combined for this test, and the randomly selected group receiving Line 1 was made up mostly of active subscribers, while the group receiving Line 2 was made up mostly of inactive ones. It’s not unlikely that the Line 2 group would have had a dramatically lower open rate, because the majority of the people in that group are reliably unengaged with our emails. This would have made us erroneously believe that Line 1 was the better subject line, when in reality, it isn’t – at least, not with the active and engaged subscribers we are most focused on.

      Would it necessarily have happened that way? Not for sure, but that’s because tests are unpredictable. The best we can do is to make them as accurate as possible, which is what segmenting our list has done. The result is that our open rates improve because we’re conducting better tests, AND we have a better picture of the overall effectiveness of our email marketing in the context of our industry.

      Sorry that was so long-winded – there’s a lot to unpack! Hopefully this demystifies some of our thought processes around all this, and why we consider it an actual, practical win, as opposed to tomfoolery and number massaging.

          • Brie

            Does Mailchimp allow you to run reports selecting for criteria like “last open date” by subscriber? I use Active Campaign and the only way I could accomplish this was to literally go through and select the subscribers from each separate email/campaign that I had sent out during the last 3 months, and add them to a list.

          • Tom VanBuren

            Good question! If what you’re trying to do is see a list of every subscriber who has opened an email within the past 3 months, here’s how you can do that:
            1. Go to “Lists” and click on the list for which you want to create this segment
            2. Click “Create a segment” and set the condition “Campaign Activity > Opened > Any Campaigns within the last 3 months,” then click “Preview Segment”
            3. Click the “Export Segment” button
            MailChimp will export the segment as a CSV, so you’ll have everyone fitting your criteria all listed in one place and ready to go!
