A/B Testing
What is A/B Testing?
Short Definition: Simultaneous testing of two or more variations to see which performs better.
A/B testing, also known as split testing, is the practice of taking two or more variants of something, such as a landing page or email, and comparing their performance. This is done by splitting viewers (e.g., web traffic) into groups at random and showing different variants to different groups. Marketers can then identify which version did a better job of meeting conversion goals and use the “winning” version or test it against new “challengers” to continue optimizing.
Let’s look at an example: a company that sells industrial blenders creates a landing page for their newest product. They’re unsure whether the landing page will perform better if it features 360-degree photos of the blender or a video showing the blender in action, so they create one version of the landing page with the photos and one with the video. They use their landing page creation platform to split their traffic 50-50 between the two pages. After a month, they see that the conversion rate for the page with the video is 20% higher, so they decide to focus on producing more pages with high-quality videos.
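In this example, “20% higher” is a relative lift between two measured conversion rates. Here is a minimal Python sketch of that arithmetic, using made-up visitor and conversion counts (in practice, the raw numbers would come from the testing platform):

```python
def conversion_rate(conversions, visitors):
    """Fraction of visitors who completed the goal (e.g., a purchase)."""
    return conversions / visitors

# Hypothetical month of traffic, split 50-50 between the two pages.
photos = {"visitors": 10_000, "conversions": 500}   # 360-degree photos
video = {"visitors": 10_000, "conversions": 600}    # product video

rate_photos = conversion_rate(photos["conversions"], photos["visitors"])  # 0.05
rate_video = conversion_rate(video["conversions"], video["visitors"])     # 0.06

# Relative lift of the video page over the photo page.
lift = (rate_video - rate_photos) / rate_photos
print(f"Photos: {rate_photos:.1%}, Video: {rate_video:.1%}, lift: {lift:.0%}")
# -> Photos: 5.0%, Video: 6.0%, lift: 20%
```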
A/B testing can be used to test just about any element of a web page, app screen, online ad, or email. Common variables that marketers might choose to test include the text in a call-to-action (CTA) button, the subject line of an email, or the position of a CTA button on a landing page.
Why Is A/B Testing Important?
A/B testing helps you determine which elements work well and which need to be retooled. Rather than guessing what your audience will like, you can run a real-world experiment to see what gets a positive response from web users. Not only can this save you from wasting time on unsuccessful campaigns, but it can also help you:
- Gain a better understanding of your target audience
- Reduce your bounce rate
- Increase user engagement
- Increase your conversion rate
- Reduce the risk of implementing major changes
A Few A/B Testing Best Practices
Test one variable at a time. As with any good experiment, you should start with a hypothesis and test one variable at a time. If you try to test lots of variables at once, you won’t be able to pinpoint why one page performed better than another.
Identify a primary metric to concentrate on. Before you set up your test, decide what metric is going to be the most important for success. If you’re testing a landing page that asks users to download an eBook, for example, the number of downloads may be your most important metric.
Split your audience randomly. For the results of an A/B test to truly be representative, you need to randomly divide your audience into sample groups of the same size, with each sample group seeing a different variation. Most tools that have A/B testing built in, like Unbounce and Mailchimp, will do this for you.
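For illustration, here is a minimal Python sketch of one common assignment technique: hashing a stable user identifier so the split is effectively random across users, yet any one visitor always sees the same variation on repeat visits. This is a generic approach, not the actual implementation used by Unbounce or Mailchimp:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a test variant.

    Hashing the user ID together with the experiment name spreads users
    evenly across variants, but keeps each user's assignment stable.
    Cryptographic strength isn't needed here; md5 is just a fast,
    well-distributed hash.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: roughly half of all users land in each bucket, and "user-42"
# gets the same answer every time this is called.
print(assign_variant("user-42", "blender-landing-page"))
```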
Pay attention to audience segments. Rather than just looking at the overall average results for your A/B tests, dig into the results by audience segments like geographic region, device type, or new vs. existing customer.
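As a sketch of what that breakdown might look like, the following Python snippet groups hypothetical per-visitor records by variant and device type and reports a conversion rate for each combination:

```python
from collections import defaultdict

# Hypothetical per-visitor records: variant seen, whether the visitor
# converted, and a segment attribute (device type here).
records = [
    {"variant": "A", "converted": True,  "device": "mobile"},
    {"variant": "A", "converted": False, "device": "desktop"},
    {"variant": "B", "converted": True,  "device": "desktop"},
    {"variant": "B", "converted": True,  "device": "mobile"},
    # ... one record per visitor in the real data set
]

# (variant, device) -> [conversions, visitors]
totals = defaultdict(lambda: [0, 0])
for r in records:
    key = (r["variant"], r["device"])
    totals[key][0] += r["converted"]
    totals[key][1] += 1

for (variant, device), (conv, n) in sorted(totals.items()):
    print(f"{variant} / {device}: {conv / n:.0%} conversion ({conv}/{n})")
```

A variant that wins on average can still lose within an important segment, which is why this per-segment view is worth checking before rolling out a change.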
Get customer insights. Not sure what to test? Run a survey or a focus group to find out what customers think would make your website, app, or emails more appealing, and set up A/B tests based on their feedback.