When you set up a new campaign, you can choose to run a standard campaign, where every recipient receives exactly the same email, or an A/B campaign. This page explains what A/B testing is, why you might use it, and how it works.
What is A/B testing?
A/B testing (also called split testing) is a way of working out which of two options will be more effective. Applied to your email campaigns, that means A/B testing can help you decide which of two different emails, labeled A and B, is the better one to send to your subscriber list.
The basic idea is that you send Version A to a small number of your subscribers, and Version B to another small group. The results (opens or clicks) are measured, and a winner is chosen. Then that winning version is sent out to all the remaining subscribers.
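The splitting step described above can be sketched in a few lines of Python. This is just an illustration of the idea, not how the product implements it; the function name, the 20% test fraction, and the even A/B split are illustrative assumptions.

```python
import random

def split_for_ab_test(subscribers, test_fraction=0.2, seed=42):
    """Split a subscriber list into group A, group B, and a remainder.

    `test_fraction` is the share of the list used for the test, divided
    evenly between A and B; everyone else receives the winning version
    once the test finishes. (Illustrative sketch only.)
    """
    shuffled = subscribers[:]
    random.Random(seed).shuffle(shuffled)  # randomize so groups are comparable
    test_size = int(len(shuffled) * test_fraction)
    half = test_size // 2
    group_a = shuffled[:half]
    group_b = shuffled[half:test_size]
    remainder = shuffled[test_size:]
    return group_a, group_b, remainder
```

With 1,000 subscribers and the default 20% test fraction, groups A and B each get 100 recipients and the remaining 800 later receive the winner.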
Note: Once you start the A/B test, you won't be able to change anything about the campaign, so be sure to test thoroughly before kicking off your campaign! In addition, any email addresses you add to your list after you start the A/B test will not be included in either version A or B, so those recently added subscribers will need to be targeted separately.
Why should I use A/B testing?
A/B testing offers a simple way for you to get the most opens and clicks from your campaigns. With A/B testing you could:
- Test two subject lines, to see which one results in the most opens or clicks
- Test two completely different designs to see which one gets the best results
- Test a "Free Shipping" promotion versus a "15% Off" promotion
- Test two different "from names" to see which generates the most opens
You can compare as many other variables as you can think of, too. Since A/B testing is built right into your account, it's really easy to do. For any campaign, there are three areas you can test: the subject line, the 'from' name and email address, and the email content itself.
1) Subject line test
Taking a small subset of your recipients, say 20%, you'll send one subject line to half of them and a different subject line to the other half. Then you can watch the results come in for as long or as short a time as you feel comfortable with. Once you know which one performs best, you send the email, with the winning subject line in place, to the remaining 80% of your subscribers.
2) Email content test
You'll create two different emails to test and send them both to a subset of your recipients. Then (automatically or manually) the winning version is sent to the rest of your recipients. You could have only slight design differences, or something more major, depending on what you want to test.
3) From name and from email test
In this test, the content and the subject are the same for each email, and only the 'from' name and the 'from' address differ. Whichever gains the most opens or clicks (your choice) wins, and is sent to the rest of the list.
How do I actually run an A/B test of my campaign?
When you create a new campaign, you have two options: a regular campaign or an A/B split campaign.
Click the A/B tab to show your A/B testing options.
Depending on which option you chose, you'll see some extra fields appear (to enter your alternate subject or 'from' details). If you chose to A/B test email content, you will get an extra step when defining the content for your campaign.
Instead of just importing once, you go through the import process twice, once for version A and once for version B.
Select your subscriber lists as normal, and you'll move on to finalize the settings for your test. At this last step, you get to decide how many people get Version A and Version B, and how many receive the winning version.
Use the slider to set the sizes of each group. You also define how the winner is selected, whether it is from most opens, most unique clicks, or the most clicks on a specific link in your email.
Finally, you'll need to select how long to wait before deciding the winner (from one hour up to several days). Keep in mind that you can always manually declare a winner at any time by visiting the reports.
Once that is all done, you can send yourself tests of both versions, and then fire off your campaign as normal.
What happens if there is no winner of an A/B test?
If, at the end of the A/B test, the two versions are tied, version A will be sent to the rest of the list.
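The winner-selection and tie-breaking rules described above can be sketched as a small Python function. This is a minimal illustration under assumed names, not the product's actual logic; the `stats_a`/`stats_b` dictionaries and metric keys are hypothetical.

```python
def pick_winner(stats_a, stats_b, metric="opens"):
    """Pick the winning version by the chosen metric.

    `stats_a` and `stats_b` are dicts of result counts for each version,
    e.g. {"opens": 120, "clicks": 40}. On a tie, version A wins, matching
    the behaviour described above. (Illustrative sketch only.)
    """
    if stats_b[metric] > stats_a[metric]:
        return "B"
    return "A"  # A wins outright or on a tie
```

For example, comparing `{"opens": 10}` against `{"opens": 10}` returns "A", reflecting the tie rule.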
What do A/B test reports look like?
While your A/B test is running (before the time period you set expires, or before you manually declare a winner), you'll see a special A/B test report. It looks like this:
You can easily see which version is performing better according to the measurement you chose, and you can also see the time remaining for the test. Notice the link in the top right to manually choose a winner.
You might choose to do that if one version is clearly better, so you don't have to wait for time to run out. That's all there is to it!
Once your winner has been decided, the remaining recipients will receive the winning version and you'll be able to see your normal campaign reports. You can still access the A/B results too, in case you want to revisit them when designing your next campaign.