As Internet marketers, we love our testing, and one of the greatest benefits of email marketing over direct marketing is the immediacy of testing results. By ‘pre-testing’, we can use that immediacy to improve the performance of our email campaigns.
A typical A/B test involves developing two different versions of an email (e.g. different subject lines, including personalization, etc.), splitting the list of subscribers into two randomly selected groups, and sending a different version to each group. The test is often run multiple times, the results are analyzed, and the findings are used to inform future campaigns.
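As a concrete illustration of that random split, here is a minimal sketch in Python. The `split_for_ab_test` helper and the example addresses are hypothetical; in practice, your email service provider typically handles this step for you.

```python
import random

def split_for_ab_test(subscribers, seed=42):
    """Randomly split a subscriber list into two equal-sized test groups."""
    shuffled = list(subscribers)            # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)   # seeded so the split is repeatable
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical subscriber list for illustration
subscribers = [f"user{i}@example.com" for i in range(200_000)]
group_a, group_b = split_for_ab_test(subscribers)  # version A -> group_a, version B -> group_b
```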
Most marketers start with what I’ll call ‘macro tests’, which involve larger issues such as testing different layouts, the best time of day and day of week to send, etc. All of these macro tests are very important and establish best practices and guidelines for an email program.
However, there are situations in which elements specific to a campaign need to be tested – I’ll refer to those as ‘micro tests’. For example, maybe the creative director and product manager disagree on which photo should be used as the hero shot in the email, or there are questions about the arrangement of words in the subject line (i.e. which are most important to place toward the front). You could just A/B test the two approaches, sending each version to half of the list. However, if one version significantly outperforms the other, then you have lost opportunity by sending the worse-performing version to 50% of your list.
Let’s look at the results (similar to a recent campaign for one of our clients) of an email that was A/B tested with 200,000 subscribers and in which version A outperformed version B:
The good news is that we did 20% better than if we had sent version B to the entire list. However, the bad news is that we performed 20% worse than if we had sent version A to the entire list. Of course, we didn’t know which would be the better version prior to the send. Pre-testing allows us to reduce the risk associated with sending a worse-performing email to a large percentage of our list.
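To make that arithmetic concrete, here is a minimal sketch. The client’s actual response rates aren’t shown above, so the 3% (version A) and 2% (version B) rates below are back-solved assumptions: any pair of rates in a 3:2 ratio reproduces the 20% figures.

```python
# Illustrative rates only; the client's actual rates are not published.
SUBSCRIBERS = 200_000
RATE_A = 0.03   # assumed response rate for version A (the winner)
RATE_B = 0.02   # assumed response rate for version B

# Straight A/B test: each version goes to half the list.
responses_split = (SUBSCRIBERS // 2) * RATE_A + (SUBSCRIBERS // 2) * RATE_B  # 5,000
responses_all_a = SUBSCRIBERS * RATE_A   # 6,000 if A had gone to everyone
responses_all_b = SUBSCRIBERS * RATE_B   # 4,000 if B had gone to everyone

print((responses_split - responses_all_b) / responses_split)  # ~0.2 -> 20% better than all-B
print((responses_all_a - responses_split) / responses_split)  # ~0.2 -> 20% worse than all-A
```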
A pre-test involves deploying the initial A/B test to a smaller, but statistically significant, percentage of subscribers first and then sending the ‘winning’ version to the remainder of the list. For example, using the same number of subscribers and response rates as in the example above, a pre-test sent to 20% of the list would generate the following results:
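Continuing the sketch with the same illustrative 3%/2% rates (the 20%/80% pre-test split comes from the example; the rates remain assumptions):

```python
SUBSCRIBERS, RATE_A, RATE_B = 200_000, 0.03, 0.02   # same illustrative assumptions as above

# 20% pre-test: 10% of the list receives each version, then the winner (A)
# goes to the remaining 80%.
pretest_cell = SUBSCRIBERS // 10                    # 20,000 subscribers per version
remainder = SUBSCRIBERS - 2 * pretest_cell          # 160,000 subscribers

responses_pretest = (
    pretest_cell * RATE_A       #   600 responses from the version A cell
    + pretest_cell * RATE_B     #   400 responses from the version B cell
    + remainder * RATE_A        # 4,800 responses from sending the winner to the rest
)                               # 5,800 total

responses_split = (SUBSCRIBERS // 2) * (RATE_A + RATE_B)   # 5,000 from a straight A/B test
print(responses_pretest / responses_split - 1)             # ~0.16 -> the 16% improvement below
```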
In this example, pre-testing improved results by 16% over straight A/B testing. The greater the performance gap between the two versions, the more benefit (and risk reduction) pre-testing provides.
A few caveats about pre-testing:
- Pre-tests are not suitable for all situations. For example, there are some tests (like testing a new e-newsletter layout) that you are going to want to run multiple times, involving as many subscribers in the sample as possible. Also, you need to allow at least 24 hours between the pre-test and the send to the remainder of the list so that you have enough data to reach a conclusion; if the email is time-sensitive, you may not have time for the pre-test.
- Even though you want the pre-test groups to be small, the groups need to be large enough to be statistically significant (for more on sample sizes and statistical relevance, read Wayde Nelson’s response in a MarketingProfs knowledge exchange answer; see also the sample-size sketch after this list).
- To help validate your approach to pre-testing, run a few tests where you conduct a pre-test with your two versions and then deploy an A/B test to the remaining subscribers. If you don’t see the same results between your pre-test and the full A/B test, then you need to pre-test with a larger sample size or check whether something else is impacting results (e.g. day of send).
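For the ‘large enough to be statistically significant’ caveat above, here is a rough sample-size sketch using the standard two-proportion z-test approximation. This is my choice of formula, not one taken from the MarketingProfs answer, and the 3%/2% rates are the same illustrative assumptions used earlier:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_version(p1, p2, alpha=0.05, power=0.80):
    """Approximate subscribers needed per version to detect the difference
    between two response rates (two-sided z-test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a 3% vs 2% response-rate gap needs roughly 3,800 subscribers per
# version, comfortably below the 20,000-per-version pre-test cells used above.
print(sample_size_per_version(0.03, 0.02))  # 3823
```

Note that the required sample grows with the square of the inverse of the rate gap, so smaller differences between versions demand much larger pre-test cells; this is another reason pre-tests pay off most when the two versions differ meaningfully.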