A/B testing is a fantastic method for figuring out the best online promotional and marketing strategies for your business. It can be used to test everything from website copy to sales emails to search ads. And the advantages A/B testing provides are more than enough to offset the additional time it takes.


What is A/B testing?

A/B testing (sometimes called split testing) is a method of comparing two versions of a web page to see which one performs better. You compare the two variants (let’s call them A and B) by showing them to similar visitors at the same time. The one that gives the better conversion rate wins!


Every website has a goal – a reason for it to exist:

  • Ecommerce websites want visitors to buy products
  • SaaS web apps want visitors to sign up for a trial and convert to paying customers
  • News and media websites want readers to click on ads or sign up for paid subscriptions

Every business website wants to convert visitors into something more than just visitors. The rate at which a website manages this is its “conversion rate”. Measuring the performance of a variation (A or B) means measuring the rate at which it converts visitors into goal achievers.


Conversion rates and what to measure

To perform an A/B test you will need to measure a conversion rate; the objective of the test is to increase that conversion rate. The most obvious form of conversion rate is sales, worked out as the number of sales per 100 visits: if you average 2 sales per hundred visits, your conversion rate is 2%. Raising that conversion rate from 2% to just 2.5% would mean a 25% increase in sales. Viewed this way, conversion rates are well worth paying close attention to.

Conversion rates can also be measured in terms of revenue: instead of counting the number of sales, you can measure the impact of a change on sales revenue.
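To make the arithmetic concrete, here is a minimal sketch in Python; the sales and visit figures are the hypothetical ones from the example above.

    def conversion_rate(conversions, visits):
        """Conversion rate as a percentage of visits."""
        return 100.0 * conversions / visits

    # Hypothetical figures from the example above.
    before = conversion_rate(2, 100)    # 2.0%
    after = conversion_rate(25, 1000)   # 2.5%

    # A move from 2% to 2.5% is a 25% relative increase in sales.
    lift = (after - before) / before * 100
    print(f"Conversion rate: {before}% -> {after}% ({lift:.0f}% lift)")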

However, a conversion can be any measurable action; it is not restricted to ecommerce sites and sales. Conversions can include:

  • Sales
  • Leads (e.g. booking a test drive or requesting an information pack)
  • Newsletter sign-ups
  • Clicking on revenue-generating banners or affiliate links
  • Spending a minimum amount of time on the site (this is great for detecting low-quality pages where visitors are not engaged)

 

What to test?

Once you have decided which conversion rate you want to improve, the next step is to work out what to change on the page to try to increase conversions. Look at the various elements on the page in question that could be changed; these may include:


  • Headings – size, color, wording
  • Images – placement, different images
  • Content – amount, wording, font, size and placement of content on the page
  • Call-to-action buttons – buy now, sign-up and subscribe buttons can differ in size, color, placement on the page and wording
  • Social media buttons – placement, size and wording are all worth testing
  • Logo and strapline
  • Use of trade association and online trust seals such as VeriSign


The A/B Testing Process

The correct way to run an A/B testing experiment (or any other experiment, for that matter) is to follow the Scientific Method. The steps of the Scientific Method are:

  • Ask a question: “Why is the bounce rate of my website higher than industry standard?”
  • Do background research: Understand your visitors’ behavior using Google Analytics and any other analytics tools running on your website.
  • Construct a hypothesis: “Adding more links in the footer will reduce the bounce rate”.
  • Calculate the number of visitors/days you need to run the test for: Always calculate the number of visitors required for a test before starting it. You can use our A/B Test Duration Calculator (a sketch of the underlying arithmetic follows this list).
  • Test your hypothesis: You create a site-wide A/B test in which the variation (version B) has a footer with more links. You test it against the original and measure bounce rate.
  • Analyse data and draw conclusions: If the footer with more links reduces bounce rate, then you can conclude that increased number of links in the footer is one of the factors that reduces bounce. If there is no difference in bounce, then go back to step 3 and construct a new hypothesis.
  • Report results to all concerned: Let others in Marketing, IT and UI/UX know of the test results and insights generated.
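As a rough illustration of the calculation in the fourth step, here is a sketch in Python of the standard two-proportion sample-size formula. The baseline rate and the lift to detect are assumptions for the example; a dedicated tool such as the A/B Test Duration Calculator performs the same kind of arithmetic.

    from statistics import NormalDist

    def sample_size_per_variation(p1, p2, alpha=0.05, power=0.8):
        """Visitors needed per variation to detect a move from baseline
        rate p1 to rate p2 (two-sided test, normal approximation)."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
        z_beta = NormalDist().inv_cdf(power)
        p_bar = (p1 + p2) / 2
        numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                     + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
        return numerator / (p1 - p2) ** 2

    # Assumed example: detect a lift from a 2% to a 2.5% conversion rate.
    n = sample_size_per_variation(0.02, 0.025)
    print(f"About {n:,.0f} visitors per variation")
    # Divide by your daily traffic per variation to estimate duration in days.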

Do’s And Don’ts

Don’ts

  • When doing A/B testing, never ever wait to test the variation until after you’ve tested the control. Always test both versions simultaneously. If you test one version one week and the second the next, you’re doing it wrong. It’s possible that version B was actually worse but you just happened to have better sales while testing it. Always split traffic between two versions.
  • Don’t conclude too early. There is a concept called “statistical confidence” that determines whether your test results are significant (that is, whether you should take the results seriously). It prevents you from reading too much into the results when you have only a few conversions or visitors per variation. Most A/B testing tools report statistical confidence, but if you are testing manually, account for it with an online calculator (a sketch of the underlying test follows this list).
  • Don’t surprise regular visitors. If you are testing a core part of your website, include only new visitors in the test. You want to avoid shocking regular visitors, especially because the variations may not ultimately be implemented.
  • Don’t let your gut feeling overrule test results. The winners in A/B tests are often surprising or unintuitive. On a green-themed website, a stark red button could emerge as the winner. Even if the red button isn’t easy on the eye, don’t reject it outright. Your goal with the test is a better conversion rate, not aesthetics, so don’t reject the results on a subjective judgment.

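To show what “statistical confidence” in the second point above boils down to, here is a minimal sketch of a two-proportion z-test in Python. The conversion counts are made up for illustration, and real A/B testing tools may use more sophisticated methods.

    from statistics import NormalDist

    def p_value(conv_a, visits_a, conv_b, visits_b):
        """Two-sided p-value for the difference between two conversion
        rates (two-proportion z-test, normal approximation)."""
        rate_a, rate_b = conv_a / visits_a, conv_b / visits_b
        pooled = (conv_a + conv_b) / (visits_a + visits_b)
        se = (pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b)) ** 0.5
        z = (rate_b - rate_a) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # Made-up counts: 200 of 10,000 visitors convert on A, 250 of 10,000 on B.
    p = p_value(200, 10_000, 250, 10_000)
    print(f"p-value: {p:.3f} (confidence: {100 * (1 - p):.1f}%)")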
Do’s

  • Know how long to run a test before giving up. Giving up too early can cost you, because you might have seen meaningful results had you waited a little longer. Giving up too late isn’t good either, because poorly performing variations could be costing you conversions and sales. Use a calculator (such as the A/B Test Duration Calculator mentioned above) to determine how long to run a test before giving up.
  • Show repeat visitors the same variations. Your tool should have a mechanism for remembering which variation a visitor has seen. This prevents blunders, such as showing a user a different price or a different promotional offer (a sketch of one common approach follows this list).
  • Make your A/B test consistent across the whole website. If you are testing a sign-up button that appears in multiple locations, then a visitor should see the same variation everywhere. Showing one variation on page 1 and another variation on page 2 will skew the results.
  • Do many A/B tests. Let’s face it: chances are, your first A/B test will turn out a lemon. But don’t despair. An A/B test can have only three outcomes: no result, a negative result or a positive result. The key to optimizing conversion rates is to do a ton of A/B tests, so that all positive results add up to a huge boost to your sales and achieved goals.
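One common way to satisfy the second and third points above (the same variation for repeat visitors, consistently across the whole site) is to derive the variation deterministically from a stable visitor ID, such as a first-party cookie value. A minimal sketch, assuming such an ID is available:

    import hashlib

    def assign_variation(visitor_id, experiment="signup_button", variations=("A", "B")):
        """Deterministically map a stable visitor ID to a variation, so the
        same visitor sees the same variation on every page and every visit."""
        digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
        return variations[int(digest, 16) % len(variations)]

    # The visitor ID would typically come from a first-party cookie (assumed here).
    print(assign_variation("visitor-12345"))  # same output every time for this ID

Because the assignment depends only on the visitor ID and the experiment name, no per-visitor state needs to be stored, and traffic still splits roughly evenly between A and B.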
