    What is A/B testing?

    A/B testing (also known as split testing or bucket testing) is a methodology for comparing two versions of a webpage or app against each other to determine which one performs better. A/B testing is essentially an experiment where two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal.

    Running an A/B test that directly compares a variation against a current experience lets you ask focused questions about changes to your website or app and then collect data about the impact of that change.

    Testing takes the guesswork out of website optimization and enables data-informed decisions that shift business conversations from "we think" to "we know." By measuring the impact that changes have on your metrics, you can make sure that every change you roll out actually moves those metrics in the right direction.

    How A/B testing works

    In an A/B test, you take a webpage or app screen and modify it to create a second version of the same page. The change can be as simple as a new headline or button, or as extensive as a complete redesign of the page. Half of your traffic is then shown the original version of the page (known as the control, or A) and half is shown the modified version (the variation, or B).

    As visitors are served either the control or variation, their engagement with each experience is measured and collected in a dashboard and analyzed through a statistical engine. You can then determine whether changing the experience (variation or B) had a positive, negative or neutral effect against the baseline (control or A).
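
    To make the mechanics concrete, here is a minimal sketch in Python of how a testing tool might split traffic and record outcomes. The hash-based bucketing, the function names and the event log are illustrative assumptions, not any particular vendor's API.

        import hashlib

        def assign_variant(visitor_id: str, experiment: str = "homepage-cta") -> str:
            """Deterministically bucket a visitor into control (A) or variation (B).

            Hashing the visitor ID (rather than flipping a coin on every page view)
            ensures the same visitor always sees the same experience.
            """
            digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
            bucket = int(digest, 16) % 100          # a number from 0 to 99
            return "A" if bucket < 50 else "B"      # 50/50 split

        # Hypothetical event log: which variant each visitor saw and whether they
        # completed the conversion goal (for example, clicking the CTA).
        events = []

        def record_event(visitor_id: str, converted: bool) -> None:
            events.append({"visitor": visitor_id,
                           "variant": assign_variant(visitor_id),
                           "converted": converted})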

    Why you should A/B test

    A/B testing allows individuals, teams and companies to make careful changes to their user experiences while collecting data on the impact those changes make. This allows them to construct hypotheses and learn which elements and optimizations of their experiences influence user behavior the most. It also keeps them honest: their opinion about the best experience for a given goal can be proven wrong by an A/B test.

    More than just answering a one-off question or settling a disagreement, A/B testing can be used to continually improve a given experience or a single metric, such as conversion rate, over time (a practice known as conversion rate optimization, or CRO).

    A B2B technology company may want to improve their sales lead quality and volume from campaign landing pages. In order to achieve that goal, the team would try A/B testing changes to the headline, subject line, form fields, call-to-action and overall layout of the page to optimize for reduced bounce rate, increased conversions and leads and improved click-through rate.

    Testing one change at a time helps them pinpoint which changes had an effect on visitor behavior, and which ones did not. Over time, they can combine the effect of multiple winning changes from experiments to demonstrate the measurable improvement of a new experience over the old one.

    This method of introducing changes to a user experience also allows for optimization toward desired outcomes, making crucial steps—such as those in a product marketing campaign—more effective.

    By testing ad copy, marketers can learn which versions attract more clicks. By testing the subsequent landing page, they can learn which layout converts visitors to customers best. The overall spend on a marketing campaign can actually be decreased if the elements of each step work as efficiently as possible to acquire new customers.

    A/B testing can also be used by product developers and designers to demonstrate the impact of new features or changes to a user experience. Product onboarding, user engagement, modals and in-product experiences can all be optimized with A/B testing, as long as goals are clearly defined and you have a clear hypothesis.

    A/B testing process

    The following is an A/B testing framework you can use to start running tests:

    • Collect data: Your analytics tool (for example Google Analytics) will often provide insight into where you can begin optimizing. It helps to start with high-traffic areas of your site or app so you can gather data faster. For conversion rate optimization, look for pages with high bounce or drop-off rates that can be improved. Also consult other sources like heatmaps, social media and surveys to find new areas for improvement.

    • Identify goals: Your conversion goals are the metrics that you are using to determine whether or not the variation is more successful than the original version. Goals can be anything from clicking a button or link to product purchases.

    • Generate test hypotheses: Once you've identified a goal, you can begin generating A/B testing ideas and hypotheses for why you think they will perform better than the current version. Once you have a list of ideas, prioritize them in terms of expected impact and difficulty of implementation.

    • Create different variations: Using your A/B testing software (like Optimizely Experiment), make the desired changes to an element of your website or mobile app. This might be changing the color of a button, swapping the order of elements on the page template, hiding navigation elements, or something entirely custom. Many leading A/B testing tools have a visual editor that makes these changes easy. Make sure to QA your experiment so the different versions work as expected.

    • Run experiment: Kick off your experiment and wait for visitors to participate! At this point, visitors to your site or app will be randomly assigned to either the control or variation of your experience. Their interaction with each experience is measured, counted and compared against the baseline to determine how each performs.

    • Wait for the test results: Depending on the size of your sample (the target audience), it can take a while to reach a reliable result. A good testing tool will tell you when the results are statistically significant and can be trusted; until then, it is hard to tell whether your change truly made an impact.

    • Analyze results: Once your experiment is complete, it's time to analyze the results. Your A/B testing software will present the data from the experiment, showing how the two versions of your page performed and whether the difference between them is statistically significant. Reaching statistical significance matters because it lets you be confident in the outcome of the test; a minimal worked example follows this list.
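
    To make the statistical part of that last step concrete, the sketch below runs a standard two-proportion z-test by hand on made-up visitor and conversion counts. The numbers are hypothetical, and a real testing tool's statistics engine may use a different approach (for example a sequential or Bayesian method).

        from math import sqrt, erf

        def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
            """Two-sided z-test for the difference between two conversion rates."""
            p_a, p_b = conv_a / n_a, conv_b / n_b
            p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under the null hypothesis
            se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
            z = (p_b - p_a) / se
            p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value from the normal CDF
            return p_a, p_b, z, p_value

        # Hypothetical results: control converted 200 of 10,000 visitors, variation 245 of 10,000.
        p_a, p_b, z, p = two_proportion_z_test(200, 10_000, 245, 10_000)
        print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.4f}")
        # A p-value below 0.05 is a common (though not universal) threshold for calling
        # the result statistically significant.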

    If your variation is a winner, congratulations 🎉! See if you can apply learnings from the experiment on other pages of your site and continue iterating on the experiment to improve your results. If your experiment generates a negative result or no result, don't worry. Use the experiment as a learning experience and generate new hypotheses that you can test.

    Whatever your experiment's outcome, use your experience to inform future tests and continually iterate on optimizing your app or site's experience.

    Creating a culture of A/B testing

    Great digital marketing teams make sure to involve multiple departments in their experimentation program. By testing across different departments and touchpoints, you build greater confidence that the changes you're making to your marketing are having a statistically significant, positive impact on your bottom line.

    For example, besides running A/B tests on websites and apps, marketing teams make A/B testing part of their marketing strategy by testing on:

    • Email campaigns - By measuring open rates and click rates, you can find the best-performing subject lines and drive higher conversions from email marketing.

    • Product pages - By testing clicks and views on products, ecommerce companies, B2B companies and any other type of company that sells goods and services can optimize for higher conversions.

    • LinkedIn and other advertising platforms - Most networks have native A/B testing capabilities, so digital marketing teams can increase their click-through rate, sales and even return on advertising spend (ROAS).

    • Pricing - In ecommerce and B2B, changing pricing for subscriptions, products and services can lead to a higher return on investment (ROI) and a lower customer acquisition cost (CAC).

    • CTA buttons - Changing text, button colors, design and layout can help optimize click-through rate (CTR).

    A/B test results

    Depending on the type of website or app you're testing on, goals will differ. For example, a retail website would run more tests to optimize for purchases, whereas a B2B website might run more experiments to optimize for leads.

    This also means your results will look different depending on the type of site or app you have. Typically, the goals are set before starting the A/B test and evaluated at the end. Some A/B testing tools allow you to peek at results in real time as they come in, or to change the goals of your tests after the experiment has completed.

    A test results dashboard shows two (or more) variants, their respective audiences and their goal completions. If you are optimizing for clicks on a call-to-action (CTA) on a website, a typical view would show visitors and clicks for each variant, as well as a conversion rate: the percentage of visitors that resulted in a conversion.
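
    As a purely illustrative example of the arithmetic behind such a view, with hypothetical visitor and click counts:

        # Hypothetical dashboard numbers for a CTA-click goal.
        visitors = {"A": 5_000, "B": 5_000}
        clicks   = {"A": 400,   "B": 460}

        # Conversion rate = goal completions / visitors, per variant.
        conversion_rate = {v: clicks[v] / visitors[v] for v in visitors}   # A: 8.0%, B: 9.2%

        # Relative lift of the variation over the control.
        lift = (conversion_rate["B"] - conversion_rate["A"]) / conversion_rate["A"]
        print(conversion_rate, f"lift: {lift:.1%}")                        # lift: 15.0%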

    Statistical significance in A/B test results

    When running an A/B test, you're measuring results against the baseline (your A version). After a test has concluded, you want to say with certainty that your change (version B) has resulted in an uplift for your marketing metrics. You do this by confirming your results are statistically significant: a way of saying that the variation you're testing has outperformed your control on your chosen metrics, and that the difference is unlikely to be due to chance.

    In digital marketing this is known as statistical significance, and A/B test result pages will usually report a measure of variance as well: how widely the measured metrics range for each variant.
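
    One way a results page can express that spread is a confidence interval around the difference in conversion rates. Below is a minimal sketch using the standard normal approximation and hypothetical counts; it is not any specific tool's exact method.

        from math import sqrt

        def diff_confidence_interval(conv_a, n_a, conv_b, n_b, z=1.96):
            """Approximate 95% confidence interval for (rate_B - rate_A)."""
            p_a, p_b = conv_a / n_a, conv_b / n_b
            se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)   # unpooled standard error
            diff = p_b - p_a
            return diff - z * se, diff + z * se

        low, high = diff_confidence_interval(200, 10_000, 245, 10_000)
        print(f"difference in conversion rate: [{low:+.2%}, {high:+.2%}]")
        # If the whole interval sits above zero, the variation is credibly beating the control.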

    Data-driven marketers use these results to say with certainty that the changes they’re making to their website or app are sound, before implementing them for all users.

    Segmenting A/B tests

    Larger sites and apps often employ segmentation for their A/B tests. If your number of visitors is high enough, this is a valuable way to test changes for specific sets of visitors. A common segmentation used for A/B testing is splitting out new visitors versus returning visitors. This allows you to test changes to elements that only apply to new visitors, like signup forms.

    On the other hand, a common A/B testing mistake is to create test audiences that are too small. With a small segment it can take a long time to achieve statistically significant results and to tell what impact your change had on that group of visitors, so check how large your segments are before starting an experiment; otherwise you risk an underpowered test, or a false positive from reading the results too early.
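
    As a rough sketch of how new-versus-returning segmentation and a size check might be wired up, the cookie name and the minimum-size threshold below are assumptions for illustration only.

        def visitor_segment(cookies: dict) -> str:
            """Classify a visitor as 'new' or 'returning' from a hypothetical first-visit cookie."""
            return "returning" if "first_seen_at" in cookies else "new"

        def eligible_for_signup_form_test(cookies: dict, weekly_new_visitors: int) -> bool:
            # Only new visitors enter the signup-form experiment, and only if the segment
            # is large enough to reach statistical significance in a reasonable time.
            MIN_WEEKLY_SEGMENT_SIZE = 10_000   # illustrative threshold, not a universal rule
            return visitor_segment(cookies) == "new" and weekly_new_visitors >= MIN_WEEKLY_SEGMENT_SIZE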

    A/B testing & SEO

    Google permits and encourages A/B testing and has stated that performing an A/B or multivariate test poses no inherent risk to your website’s search rank. However, it is possible to jeopardize your search rank by abusing an A/B testing tool for purposes such as cloaking. Google has articulated some best practices to ensure that this doesn’t happen:

    • No cloaking: Cloaking is the practice of showing search engines different content than a typical visitor would see. Cloaking can result in your site being demoted or even removed from the search results. To prevent cloaking, do not abuse visitor segmentation to display different content to Googlebot based on user-agent or IP address.

    • Use rel="canonical": If you run a split test with multiple URLs, you should use the rel="canonical" attribute to point the variations back to the original version of the page. Doing so will help prevent Googlebot from getting confused by multiple versions of the same page.

    • Use 302 redirects instead of 301s: If you run a test that redirects the original URL to a variation URL, use a 302 (temporary) redirect rather than a 301 (permanent) redirect. This tells search engines such as Google that the redirect is temporary and that they should keep the original URL indexed rather than the test URL. A minimal sketch of these last two practices follows this list.
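
    The sketch below shows how those last two recommendations might look in a split-URL test, using Flask purely as an example web framework; the routes, URLs and template are hypothetical. A production setup would also make the assignment sticky per visitor rather than random on every request.

        import random
        from flask import Flask, redirect, render_template_string

        app = Flask(__name__)

        VARIATION_PAGE = """
        <html>
          <head>
            <!-- Point the variation back at the original URL so search engines
                 treat both URLs as one page. -->
            <link rel="canonical" href="https://www.example.com/landing">
          </head>
          <body>Variation B of the landing page</body>
        </html>
        """

        @app.route("/landing")
        def landing():
            # Send half of the traffic to the variation with a temporary (302) redirect,
            # so search engines keep the original URL indexed.
            if random.random() < 0.5:
                return redirect("/landing-b", code=302)
            return "Original landing page (control)"

        @app.route("/landing-b")
        def landing_b():
            return render_template_string(VARIATION_PAGE)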

    The goals you test for will also vary by industry and business model. A media company might want to increase readership, increase the amount of time readers spend on their site, and amplify their articles with social sharing. To achieve these goals, they might test variations on:

    • Email sign-up modals
    • Recommended content
    • Social sharing buttons

    A travel company may want to increase the number of successful bookings completed on their website or mobile app, or may want to increase revenue from ancillary purchases. To improve these metrics, they may test variations of:

    • Homepage search modals
    • Search results page
    • Ancillary product presentation

    An e-commerce company might want to improve their customer experience, resulting in more completed checkouts, a higher average order value, or increased holiday sales. To accomplish this, they may A/B test:

    • Homepage promotions
    • Navigation elements
    • Checkout funnel components

    A technology company might want to increase the number of high-quality leads for their sales team, increase the number of free trial users, or attract a specific type of buyer. They might test:

    • Lead form fields
    • Free trial signup flow
    • Homepage messaging and call-to-action

    A/B testing examples

    These A/B testing case studies show the types of results the world's most innovative companies have seen through A/B testing with Optimizely's Experimentation Solution.