
A/B Testing Best Practices Can Save You Time, Money and Effort – Here's How

By Dan Shewan, February 26, 2014

Everybody* knows that A/B testing best practices can help you run faster, jump higher and increase conversions. When it comes to providing users with an engaging and rewarding online experience, good A/B tests are a more effective treatment for embarrassing landing pages than topical ointment. However, approaches to designing multivariate tests that provide accurate and representative results can be uncertain at best and outright divisive at worst.

A/B testing is an invaluable tool for landing page optimization when implemented correctly. To minimize wasting time, money and effort on changes that will yield little to no benefit – or even make things worse – take the following points into consideration during your next project.

* Not everybody knows this

 


Prove It

Before you casually ask your designers and copywriters to create dozens of different buttons or calls to action by the end of the day, it’s crucial that you have a hypothesis you wish to test. After all, without at least some idea of the possible outcomes, A/B testing becomes A/B guessing. Similarly, without a hypothesis, discerning the true impact of design changes can be difficult and may lead to additional (and potentially unnecessary) testing, or missed opportunities that could have been identified had the test been performed with a specific objective in mind.

Just as scientists approach an experiment with a hypothesis, you should enter the multivariate testing phase with a clear idea of what you expect to see – or at the very least, some notion of what you think will happen.

Formulating a hypothesis doesn’t have to be complicated. You could A/B test whether subtle changes to the phrasing of a call to action result in more conversions, or whether a slightly different color palette reduces your bounce rate or increases your dwell time.

Whatever aspect of your site you decide to test, be sure that everyone involved in the project is aware of the core hypothesis long before any code, copy or assets are changed.

Key Takeaway: Before you begin your A/B test, know what you’re testing and why. Are you evaluating the impact of subtle changes to the copy of a call to action? Form length? Keyword placement? Make sure you have an idea of what effect changes to the variation will have before you start A/B split testing.

 

[Image: Landing page A/B test comparison]

Take a Granular Approach to A/B Testing

One of the most common mistakes people make when performing A/B tests is comparing the results of landing page layouts that are too radically different from one another. Although it might be tempting to test the effectiveness of two completely different pages, doing so may not yield any actionable data. This is because the greater the differences between two versions of a page, the harder it is to determine which factors caused an improvement – or decline – in conversions.

Don’t be seduced by the idea that all variations in an A/B test have to be spectacular, show-stopping transformations. Even subtle changes can have a demonstrable effect, such as slightly reformatting a list of product features to persuade users to request more information, or phrasing a call to action differently to drive user engagement.

Even something as “harmless” as a minor difference in punctuation can have a measurable impact on user behavior. Perry Marshall, marketing expert and author of “The Ultimate Guide to Google AdWords,” recalled an A/B test in which the CTRs of two ads were compared. The only difference between the two? The inclusion of a single comma. Despite this seemingly trivial detail, the variant that featured the comma had a CTR of 4.40% – an improvement of 0.28 percentage points over the control.

However, that’s not to say that comparing user behavior on two very different versions of a page is completely without merit. In fact, doing so earlier in the testing phase can inform design decisions further down the pipeline. A/B testing best practices dictate that the greater the difference between two versions of a page, the earlier in the testing process these variations should be evaluated.

Key Takeaway: Test one element at a time so you’ll know for sure what change was responsible for the uptick in conversions. Once you’ve determined a winner, test another single change. Keep iterating until your conversion rate is maxed out.
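
If you’re wondering what “one element at a time” looks like in practice, here’s a rough sketch of how a single-variable split might be assigned. Everything in it – the function names, the experiment key, the CTA copy – is hypothetical and invented for illustration; it isn’t taken from any of the tests described in this post.

import hashlib

# Two versions of the page that differ in exactly one element: the CTA wording.
# The copy below is made up for illustration.
VARIANTS = {
    "control":   {"cta_text": "Request a Demo"},
    "variation": {"cta_text": "See It in Action"},
}

def assign_variant(user_id, experiment="cta-copy-test"):
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100                      # a number from 0 to 99
    return "control" if bucket < 50 else "variation"    # 50/50 split

# At render time, look up the variant and log it alongside any conversion event,
# so every conversion can be attributed to exactly one version of the element.
variant = assign_variant("user-12345")
print(variant, VARIANTS[variant]["cta_text"])

Because the bucketing is deterministic, a returning visitor always sees the same version, and because only the CTA copy differs between the two variants, any change in conversions can be pinned on that one element.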


Test Early, Test Often

Scientists rarely use the results of a single experiment to prove or disprove their hypotheses, and neither should you. To adhere to A/B testing best practices, you should evaluate the impact of one variable per test, but that doesn’t mean you’re restricted to performing just one test overall. That would be silly.

A/B testing should be a granular process. Although the results of the first test may not provide you with any real insight into how your users behave, they might allow you to design additional tests to gain greater understanding about what design choices have a measurable impact on conversions.

The sooner you begin A/B testing, the sooner you can eliminate ineffective design choices and business decisions based on assumptions. And the more often you test, the more data you accumulate and the more reliable your conclusions become, enabling you to focus on what really matters – the user.

Key Takeaway: Don’t put off A/B testing until the last minute. The sooner you get your hands on actual data, the sooner you can begin to incorporate changes based on what your users actually do, not what you think they’ll do. Test frequently to make sure that adjustments to your landing pages are improving conversions. When you're building a landing page from scratch, keep the results of early tests in mind.

[Image: A/B testing best practices submission form]

Be Patient With Multivariate Tests

A/B testing is an important tool in the marketing professional’s arsenal, but meaningful results probably won’t materialize overnight. When designing and performing A/B tests, be patient – ending a test prematurely might feel like saving time, but it could end up costing you money.

Economists and data scientists rely on a principle known as statistical significance to identify and interpret the patterns behind the numbers. Statistical significance lies at the very heart of A/B testing best practice, as without it, you run the risk of making business decisions based on bad data.

Statistical significance is the probability that an effect observed during an experiment or test is caused by changes made to a specific variable, as opposed to mere chance. To arrive at statistically significant results, marketers must have a sufficiently large data set to draw upon. Not only do larger volumes of data provide more accurate results, they also make it easier to tell a genuine effect apart from ordinary fluctuation around the average – the kind of variation the standard deviation measures – which on its own is not evidence that your change worked. Unfortunately, it takes time to gather this data, even for sites with millions of unique monthly visitors.
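
To make that a little more concrete, here’s a minimal sketch of the kind of check most testing tools run behind the scenes – a two-proportion z-test on conversion counts. The visitor and conversion numbers below are invented purely for illustration; the post doesn’t prescribe a particular method or threshold.

from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided, normal approximation
    return z, p_value

# Hypothetical numbers: the control converts 200 of 5,000 visitors (4.0%),
# the variation converts 240 of 5,000 visitors (4.8%).
z, p = two_proportion_z_test(200, 5000, 240, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")   # roughly z = 1.95, p = 0.051

A p-value just above the usual 0.05 cut-off – as in this made-up example – is exactly the kind of borderline result that tempts people to call a winner early. The honest answer is usually to keep collecting data.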

If you’re tempted to cut a test short, step back for a moment. Take a deep breath. Grab a coffee. Do some yoga. Remember – patience is a virtue.

Key Takeaway: Resist the temptation to end a test early, even if you’re getting strong initial results. Let the test run its course, and give your users a chance to show you how they’re interacting with your landing pages – even on high-traffic pages with large user bases, where early numbers can look deceptively conclusive.
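
If you want to see why ending early is so dangerous, here’s a small simulation sketch (again, every number in it is arbitrary). It runs an A/A test – two identical variants with no real difference – and “peeks” at the results after every batch of visitors, stopping the moment the same two-proportion test used above dips below p = 0.05.

import random
from math import sqrt, erf

def p_value(conv_a, n_a, conv_b, n_b):
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) or 1e-9
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

def run_aa_test(rate=0.05, batch=500, peeks=20):
    """Both arms convert at the same true rate; peek after every batch of visitors."""
    conv_a = conv_b = n = 0
    for _ in range(peeks):
        conv_a += sum(random.random() < rate for _ in range(batch))
        conv_b += sum(random.random() < rate for _ in range(batch))
        n += batch
        if p_value(conv_a, n, conv_b, n) < 0.05:
            return True        # a "winner" declared even though no real difference exists
    return False

runs = 500
early_stops = sum(run_aa_test() for _ in range(runs))
print(f"Declared a winner on a non-existent effect in {100 * early_stops / runs:.0f}% of runs")

With twenty peeks, a test that is supposed to throw a false positive only 5% of the time will typically “find” a winner in roughly a fifth or more of these runs. Picking a sample size (or an end date) in advance and sticking to it is the simplest way around this.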

Keep an Open Mind When A/B Testing

Remember how we emphasized the importance of forming a hypothesis before starting the testing phase? Well, just because you have an idea of the outcome of an A/B test doesn’t mean it’s going to happen – or that your original idea was even accurate. That’s OK, though, we won’t make fun of you.

Many a savvy marketer has fallen prey to the idea that, regardless of what her results tell her, the original hypothesis was the only possible outcome. This insidious thought often surfaces when user data paints a very different picture than the one that project stakeholders were expecting. When presented with data that differs significantly from the original hypothesis, it can be tempting to dismiss the results or the methodologies of the test in favor of conventional knowledge or even previous experience. This mindset can spell certain doom for a project. After all, if you’re so confident in your assumptions, then why A/B test in the first place?

Chris Kostecki, a seasoned marketing and PPC professional, can certainly attest to the importance of keeping an open mind when A/B testing. While evaluating two versions of a landing page, Chris discovered that the variant – which featured more positional copy and was further away from the product ordering page – outperformed the control by a substantial margin.  Chris noted that although he was confident that the more streamlined page would result in more conversions, his A/B test results proved otherwise.

Remaining open to new ideas based on actual data and proven user behavior is essential to the success of a project. In addition, the longer the testing phase, and the more granular your approach, the more likely you are to discover new things about your customers and how they interact with your landing pages. This can lead to valuable insight into which changes will have the greatest impact on conversions. Let your results do the talking, and listen closely to what they tell you.

Key Takeaway: Users can be fickle, and trying to predict their behavior is risky. You’re not psychic, even if you do secretly have a deck of tarot cards at home. Use hard A/B test data to inform business decisions – no matter how much it surprises you. If you’re not convinced by the results of a test, run it again and compare the data.


Maintain Momentum

So, you’ve formulated your hypothesis, designed a series of rigorous tests, waited patiently for the precious data to trickle in, and carefully analyzed your results to arrive at a statistically significant, demonstrable conclusion – you’re done now, right? Wrong.

Successful A/B tests can not only help you increase conversions or improve user engagement, they can also form the basis of future tests. There’s no such thing as the perfect landing page, and things can always be improved. Even if everybody is satisfied with the results of an A/B test and the subsequent changes, the chances are pretty good that other landing pages can yield similarly actionable results. Depending on the nature of your site, you can either base future tests on the results of the first project, or apply A/B testing best practices to an entirely new set of business objectives.

Key Takeaway: Even highly optimized landing pages can be improved. Don’t rest on your laurels, even after an exhaustive series of tests. If everyone is happy with the results of the test for a specific page, choose another page to begin testing. Learn from your experiences during your initial tests to create more specific hypotheses, design more effective tests and zero in on areas of your other landing pages that could yield greater conversions.

Choose Your Own Adventure

No two scientific experiments are exactly alike, and this principle most definitely applies to A/B testing. Even if you’re only evaluating the impact of a single variable, there are dozens – if not hundreds – of external factors that will shape the process, influence your results and possibly cause you to start sobbing uncontrollably.

Take Brad Geddes, for example. Founder of PPC training platform Certified Knowledge, Brad recalled working with a client that had some seriously embarrassing landing pages. After much pleading and gnashing of teeth, Brad finally managed to convince his client to make some adjustments. The redesign was almost as bad as the original, but after being A/B tested, the new landing page resulted in an overall sitewide increase in profit of 76% – not too shabby for a terrible landing page.

Don’t approach the testing phase too rigidly. Be specific when designing your tests, remain flexible when interpreting your data, and remember that tests don’t have to be perfect to provide valuable insights. Keep these points in mind, and soon, you’ll be a seasoned A/B testing pro – and no, you don’t have to wear a lab coat (but you can if you want to, it’s cool).

Key Takeaway: Every multivariate test is different, and you should remember this when approaching each and every landing page. Strategies that worked well in a previous test might not perform as effectively in another, even when adjusting similar elements. Even if two landing pages are similar, don’t make the mistake of assuming that the results of a previous test will apply to another page. Always rely on hard data, and don’t lose sleep over imperfect tests.

P.S. If you want to go beyond best practices and seriously boost your conversion rates, sign up for our upcoming live webinar, “Unbelievable Discoveries to 3X Your Conversion Rate!”





If you enjoyed this post, please consider leaving a comment.

Comments

Wednesday February 26, 2014

Sean DeSilva (not verified) Said:

Multivariate testing is great – if you have the traffic for it. Personally, I stick with A/B tests because I don't want to wait several months or longer to get my results back. But if you have a high-traffic site, I envy you and you should use it to test multiple variables at once!

Wednesday February 26, 2014

Dan Shewan Said:

Great point, Sean. There's definitely no one-size-fits-all approach for A/B testing, and what works for some might not be as effective for others.

Friday February 28, 2014

Jessica Collier (not verified) Said:

Hi Dan,

Loved your post! Thanks for including ion's A/B test from MarketingProfs. If you'd like, you can read the full case study here: 

Which landing page headline & subhead version increased leads by 27.76%?

http://ioninteractive.com/post-click-marketing-blog/2012/3/13/week-4-bes...

Thanks!

Jessica

Friday February 28, 2014

Veerender (not verified) Said:

Great Post.

Friday February 28, 2014

Dan Shewan Said:

Thanks, Veerender!

Friday June 06, 2014

venkat (not verified) Said:

What is the role of a software tester? Can anybody without a testing background do A/B testing? Please correct me if I am wrong.

Wednesday March 26, 2014

walter (not verified) Said:

Thanks Dan, great ideas. I'm actually getting ready to start doing A/B testing on a client's website and this has helped me a lot.

Friday February 28, 2014

Sunday (not verified) Said:

Well, it's really helpful to do A/B testing. There are lots of techniques associated with its success. I agree that if done right, time, money, and energy would be saved.

However, what readily matters is for the website owner to understand whether multivariate testing could be adopted to complement A/B testing for effectiveness!



I have shared this comment in kingged.com where this post is socially bookmarked for Internet marketers.

Sunday - kingged.com contributor
 

http://kingged.com/ab-testing-best-practices-can-save-you-time-money-and...

Friday February 28, 2014

Dan Shewan Said:

Hi Jessica, thanks! I'm glad you enjoyed it.

Friday February 28, 2014

Dan Shewan Said:

That's a good point, Sunday - it definitely doesn't have to be an either/or situation.

Wednesday February 26, 2014

Brady (not verified) Said:

Sean is absolutely right, I work with a lot of SMBs who don't have the traffic to do big tests. That said, the data that can come out of a properly run big test can really help with conversions. Great post!

Wednesday February 26, 2014

Dan Shewan Said:

Thanks, Brady, I'm glad you enjoyed the post.

Thursday February 27, 2014

Azizul Yusof (not verified) Said:


Hi Dan,
Great article.
I am running MVT on my website here: http://www.azizulyusof.com, but I am experimenting with it as a landing page for SEO. Since the page is newly designed and the SEO work has just started, it could take a while to finally come to a conclusion.

Anyway, will 30 days of running the test be enough?

Thursday February 27, 2014

Dan Shewan Said:

Hey Azizul,

Thanks, I'm glad you found the article useful.

As for your question, that depends. SEO should be seen as an ongoing approach to content - you might not see noticeable results right away, but that doesn't necessarily mean your strategy isn't solid. Depending on what elements of your page you're testing (and your site's traffic, and several other factors), 30 days might be enough - but it might not.

Regardless of what you're testing, be sure to know why you're testing, and what you expect to happen. Have a specific, measurable objective in mind. If you find that 30 days wasn't enough time, at least you'll have a better idea of what to change during your next test, whether it's a different element, the length of the test etc.

Best of luck with your testing!

Thursday February 27, 2014

Oona Houlihan (not verified) Said:

May I suggest that anyone who wants to do a series of split tests actually look up ("Google") the term "design of experiments". Especially with user interfaces there are so many variables that one can never in a million years test all combinations. This is why quality engineering has long since found ways to find the "steepest ascent" (another search term) to determine which tests bring the greatest improvements and which combinations can be safely ignored.

Thursday February 27, 2014

Dan Shewan Said:

Thanks for the recommendations, Oona. Definitely some potentially interesting reading/research.

Thursday February 27, 2014

Robert Kramers (not verified) Said:

Great post,

I have a quick question, 

In your (or anyone elses) experience, what are the best tools for split and multivariance testing? 

Thanks!

Thursday February 27, 2014

Dan Shewan Said:

Thanks, Robert, I'm glad you found the post useful.

Although there are dozens of A/B testing tools on the market, both Optimizely and Visual Website Optimizer are quite robust without assuming a great deal of technical expertise or prior testing experience. In addition, both of these tools offer free trials, which makes either tool an ideal first stop for those who are new to A/B testing or are unfamiliar with the software.

Hope this helps - let us know how you get on with your tests!

Friday February 28, 2014

Jessica Collier (not verified) Said:

Robert - there are so many great tools available for A/B & MVT testing. I always recommend checking out: http://www.conversion-rate-experts.com/split-testing-software
CRE gives a breakdown of tools, with costs and features — and is an unbiased resource.

For website testing, I'd recommend VWO, Optimizely or ion

For basic landing pages on a budget - Unbounce

For landing pages, microsites, responsive, mobile & marketing apps - ion

Jessica (disclaimer, I work for ion...but I've used other tools as well)

Friday February 28, 2014

Dan Shewan Said:

Thanks for weighing in with some more suggestions, Jessica - some great info here!

 