How to Run Effective A/B Tests According to 28 Marketing Experts

Marketing · Updated Dec 15, 2022 · Published Feb 21, 2019 · 13 minute read


    Peter Caputa

    To see what Databox can do for you, including how it helps you track and visualize your performance data in real-time, check out our home page. Click here.

    With A/B testing, sometimes small changes can yield big returns.

    Want proof? Split tests helped one company boost conversion rates by 550%.

    You don’t need to be a data genius, nor have a mathematics degree, to run A/B tests. Swap a few things and monitor the results–that’s the gist, right?

    Not necessarily; you need to be smarter about the tests you’re running.

    It’s not as simple as swapping a call-to-action button, noticing you get 3 more clicks on the first day, and rolling out the change on every page before bringing out the party banners.


    There are several factors you need to keep in mind when running A/B tests to make sure they’re worth your while, including:

    • Sample size
    • Test parameters
    • Results or statistical significance

    …to name a few.

    It can be confusing to navigate the road to a high-converting page alone, so we asked 28 marketers to share their best tip for running A/B tests.

    From going against the grain to only testing one element at a time, here’s what they said–and the results they’ve seen with their own testing.

    Spot the metrics that need improving

    You’ve got a steady landing page that converts at about 3%.

Great! That’s above the 2.35% average, but don’t fall into the trap of thinking your conversion rate is the be-all and end-all of your A/B testing.

    Accelerate Growth‘s Rebecca Drake recommends diving deeper into your metrics, and split-testing to improve the poor results you find: “When I’m trying to decide whether it is worth A/B testing email campaigns and what should be tested, I like to record the numbers and conversion at each stage of the journey on the same spreadsheet: emails sent > opened > clicked > website visitors > transactions > total value of transactions and an overall campaign conversion rate.”

    Rebecca adds: “This is really helpful in being able to spot which conversion rates are low, therefore pointing to a part of the journey where there is an issue which could be the target for an A/B test.”
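To make Rebecca’s approach concrete, here’s a minimal sketch of how you could compute those stage-to-stage conversion rates yourself. The stage names follow her journey; the numbers are made up purely for illustration:

```python
# Hypothetical funnel numbers for one email campaign (example data only).
funnel = [
    ("emails sent", 10_000),
    ("opened", 3_200),
    ("clicked", 640),
    ("website visitors", 600),
    ("transactions", 48),
]
total_value = 4_800  # total value of transactions, in your currency

# Conversion rate at each stage = count at this stage / count at the previous stage.
for (prev_name, prev_count), (name, count) in zip(funnel, funnel[1:]):
    rate = count / prev_count * 100
    print(f"{prev_name} -> {name}: {rate:.1f}%")

# Overall campaign conversion rate = transactions / emails sent.
overall = funnel[-1][1] / funnel[0][1] * 100
print(f"overall campaign conversion rate: {overall:.2f}%")
print(f"average transaction value: {total_value / funnel[-1][1]:.2f}")
```

Any stage where the rate drops off sharply compared to your own benchmarks is a candidate for your next A/B test.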

Editor’s note: Get an overview of the metrics that matter most to your business in a single marketing reporting tool, such as Databox.

Check every marketing channel, not just your website

    Granted, your website is a fantastic place to run split tests. You’re in complete control, and it’s your business’ virtual home.

    Why wouldn’t you want to focus on your website, first?

    It’s wise, but I’ve got one word of warning: Don’t fall into the trap of only running A/B tests on your website.

“One of our most effective methods for split testing is running paid advertising through Facebook and having different color tests in place,” explains Kristel Staci of BloggingTips.com. “The reason we like to use FB for split testing is that it allows us to push a lot of traffic through the site in a short period of time.”

    She’s not the only one recommending this strategy, either.

    Blogging.org‘s Zac Johnson thinks that “ad copy and headlines are always one of the most crucial split tests to make”–yet both of these elements aren’t always prominent features on a website. You need to look further afield, to a platform like AdWords or Facebook Ads.

The average company uses eight marketing channels, meaning there’s likely tons of room for more traffic, engagement, and conversions if you’re running A/B tests to maximize each.

It’s no surprise that marketers run them on email campaigns, paid social, and forms:

[Chart: the most common places marketers run A/B tests]

    Define what you’re expecting to see

Now that you know which metrics you want to improve, it’s time to move on to your hypothesis.

    This is a statement, similar to an educated guess, which summarizes the results you’re expecting to see from your A/B tests.

    Brian Serocke of Beacons Point thinks “it is very important to define a clear hypothesis that identifies the problem you are trying to solve with the test”, because “forming your hypothesis will also guide you in understanding the results you should expect to see from a positive test.”

    He adds: “And don’t forget to document that hypothesis along with the solution, test variable, expected results, and actual results. If you don’t have a mechanism for this already, fire up a Google Sheets spreadsheet and jot down your info. This gives you a place to store historical test data to help you plan your next test.”

    Note: Tracking your test results in spreadsheets? Learn more about Databox’s Google Sheets integration here.

    So, how do you create a hypothesis for your A/B tests?

    Sid Bharath, of Sid Bharath Consulting Ltd, puts it into practice: “If you’re testing out a headline, for example, your hypothesis should be something like – I hypothesize that changing the headline to this will result in a 10% increase in conversions because the variant does a better job of highlighting the pain point.”
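If you’d rather keep that test log in code than in a spreadsheet, here’s one possible way to structure it. This is a sketch only, using the fields Brian lists; the example hypothesis borrows Sid’s headline scenario:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ABTestRecord:
    """One row of the test log Brian describes: hypothesis, solution,
    test variable, expected results, and (once finished) actual results."""
    hypothesis: str
    solution: str
    test_variable: str
    expected_result: str
    actual_result: Optional[str] = None  # filled in after the test ends

test_log = [
    ABTestRecord(
        hypothesis="Changing the headline to highlight the pain point "
                   "will lift conversions by 10%.",
        solution="Rewrite the hero headline around the pain point.",
        test_variable="headline",
        expected_result="+10% conversion rate",
    ),
]
print(test_log[0].test_variable, "->", test_log[0].expected_result)
```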

    Check your sample size

You’ve found the metrics you’re testing, and you’ve made an educated guess at the improvement you expect to see.

    Before you set your new changes live, it’s important to check you’ve got a sample size for your test that’s going to show real, meaningful results.

    Sure, applying the changes to a page that receives five daily visitors will show you results–but here’s the thing: They won’t be accurate results if you’ve got a tiny sample size.

    “Always make sure that your sample size is large enough”, explains PACIFIC Digital Group‘s Shawn Massie. “A/B tests can fail when there’s no statistical significance between the datasets. With enough data, your A/B test should be a clear-cut winner.”

Laura Gonzalez of AutoNation adds: “Having a sample size lower than 100 is not enough to determine which test performed better. Having more than 100 will help you have a better understanding of which test was more successful.”
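How large is “large enough”? As a rough guide, here’s a sketch of the standard two-proportion sample size formula. The baseline conversion rate, the lift you hope to detect, and the 95% confidence / 80% power settings are all assumptions you’d adjust for your own test:

```python
from statistics import NormalDist

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per variant to detect a change
    from conversion rate p1 to p2 with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Example: baseline 3% conversion rate, hoping to detect a lift to 3.6%.
print(sample_size_per_variant(0.03, 0.036))  # about 14,000 visitors per variant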

    Trim dead weight with A/B tests

    “Before adding any new elements to a page, email, etc., try removing existing elements to determine which ones are essential and which ones are unnecessary”, explains Chachi Flores of Peacock Alley.

It’s a unique concept: removing elements in a test, rather than adding them.

    But Chachi recommends this because it trims dead weight, and gives you a clean slate to start with: “You may be losing out on conversions because there are already too many confusing things happening at once and turning away customers. Once you have it down to only the most essential parts, you can test out new elements.”

    Don’t always play it safe

    Unsure which elements you should be testing?

    These marketers recommend going against the grain and testing larger elements, rather than playing it safe.

    John Holloway of NoExam.com says: “Testing small things like button color is fun, but we are after the big wins. One of our most successful tests was testing the form on the left or right side of the page. The left side beat the right by 30%.”

Team Building Hero’s Alex Robinson also seconds this–and has seen similar results: “We learned to test the big things instead. Now, we prioritize testing areas like page structure, including (or excluding) entire page elements and key text like headlines and call-to-action statements. These efforts have led to sustained 300%+ increases in lead conversion rates on our city-specific landing pages, as one example of our success with A/B tests.”

    “A tweak in the subject line or text size is safe because you know it won’t give you dramatically negative results, but you won’t get those significantly positive results you’re looking for, either”, agrees Prime Publishing LLC‘s Kristi Kittelson. “It’s important to take major risks to reap major rewards.”

Summarizing, Fundera’s Nicolas Straut says: “Of course, you don’t want to be too zany or crazy with your test, but addressing the reader in a new or interesting way, asking a very unique question, or using eye-popping but relevant images in your test could drastically increase your click, open, or conversion rates.”

    Test conventional wisdom

    We’ve all heard the same advice:

    • “Fewer form fields lead to more submissions”
    • “Adding the call-to-action to the bottom of a page increases conversions”
    • “Tweets using images get more engagement”

    Granted, some of those tips come from individual A/B testing.

However, Beth Carter of Clariant Creative Agency recommends testing this “conventional wisdom” itself.

    She says: “We tested that with a client recently, and we were surprised to find that asking more questions actually improved form conversions! The lesson here is that there are no sacred cows. Just because an expert somewhere said something is true, you don’t know if that will be true for YOUR company until you test it.”

    Stick with testing one element at a time

    If you followed the first snippet of advice we shared here, you might be left with a list of underperforming metrics you want to boost.

It can be tempting to go full throttle and change everything at once. If it’s not working, you should get rid of it, right?

    Nope.

    “A common pitfall is that people try to test too many elements at once”, explains Alicia Ward of Flauk. “If you change too many items between each version of the test, it can be difficult to determine which element(s) played the biggest role in achieving your overall goal.”

She’s not the only one honing in on this advice.

    Adrian Crisostomo of SEO-Hacker agrees: “When doing A/B testing, you should never rush and test multiple changes at a time because you could never jump to a conclusion that this specific change was positive whilst it could just be affected by the other changes you made. Testing one small change at a time makes it easier to compare and analyze results.”

    Along with Ollie Roddy of Catalyst Marketing Agency: “If you go with two entirely different styles right from the start, you’ll know which one outperforms the other, but you won’t know exactly why. Just changing one or two small things is a great way to find out specifically what works.”

    As does Lola.com‘s Matt Desilet: “If you complicate your tests with extra variables, you’ll never understand the impact of each change you’ve made.”

    But where do you start with nailing which element you should be split-testing?

Here’s Growth Hackers’ Jonathan Aufray sharing the process: “For instance, if you want to test your landing pages, just test the headline first. Create 2 landing pages that are identical where only the headline is tested. This is the only way to gather relevant data and see what works.”
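If you’re wiring up that kind of single-variable test yourself, one common pattern (a sketch with invented headlines, not Growth Hackers’ actual setup) is to bucket each visitor deterministically so they always see the same version:

```python
import hashlib

HEADLINES = {
    "A": "Grow your pipeline with data-driven marketing",        # control (made up)
    "B": "Stop guessing: see which campaigns actually convert",  # variant (made up)
}

def assign_variant(visitor_id: str, experiment: str = "headline-test") -> str:
    """Hash the visitor ID so the same person always lands in the same bucket."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

variant = assign_variant("visitor-42")
print(variant, "->", HEADLINES[variant])
```

Hashing on a stable visitor ID keeps the split close to 50/50 while making sure repeat visits don’t flip someone between headlines mid-test.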

    …Including your timing

    “If you stop your A/B test before it reaches your allotted sample size, then you may not have a full-picture idea of what performed better”, explains Jackie Tihanyi of Fisher Unitech.

    Isn’t that defeating the entire point of your A/B test? To learn which elements get the best results?

    Brian Carter of BCG explains why you need to think about the timing of your tests: “You need to eliminate any time-related changes in the behavior of your target audience. Things like news, the economy or other seasonal issues that you don’t know about yet can result in different performance if you don’t run it simultaneously.”

But if you’re impatient, or don’t have the time or budget to wait for the results to come in, “limit your tests to your highest-trafficked pages so you can reach decision thresholds sooner,” recommends LyntonWeb’s Jennifer Lux.

    Spread the word to your entire team

    Unfortunately, some things are out of your control.

    But the aim of A/B testing is to “focus on a test and minimize extraneous factors that can throw off process and results”, explains Meenal Upadhyay from Fit Small Business.

That’s why Upadhyay says “everyone at the company should at least be aware of the A/B test you’re running. This is an ideal way to avoid having different tests overlap and being unable to read direct results from a specific test.”

…But if you are testing multiple elements, run separate A/B tests

Bridges Strategies’ Jake Fisher is another marketer who agrees you should stick to one element per test.

    However, he adds that “it is better to run multiple A/B tests if you have multiple variables to test.”

For example: if you’re looking to test the placement, copy, and color of your calls to action, don’t do it all in one go. Instead, run three separate A/B tests to check each, and decide which specific change is making the biggest impact.

Michal Strahilevitz, of the University of Wollongong, puts this into practice using varying images and soundtracks in a video, for example. She says: “If you have four versions, that is what I call AB, AC, BA, BC testing, where you might have the same music with two different visual images and then each of the two visual images with the two different versions of the soundtrack. This would help you determine what combination of sight and sound works best for your video.”
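In code terms, Michal’s four versions are simply the cross product of the two variables. A quick sketch, with placeholder names for the assets:

```python
from itertools import product

images = ["image_1", "image_2"]                  # two visual treatments (placeholders)
soundtracks = ["soundtrack_1", "soundtrack_2"]   # two audio treatments (placeholders)

# Every image/soundtrack pairing becomes its own video variant to test.
for i, (image, soundtrack) in enumerate(product(images, soundtracks), start=1):
    print(f"variant {i}: {image} + {soundtrack}")
```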

    Get feedback, along with data

    Remember how earlier, we mentioned you should be looking across each marketing channel to find opportunities for A/B testing?

    “When you split test ad creative, such as Google Ads or Facebook Ads, you still pay for every click”, explains Kim Kohatsu of PickFu.

    …You can see how that gets expensive pretty quickly.

However, Kim recommends PickFu as “an alternative way to test ads”. It works by polling an audience on your ad variations instead of paying for clicks–a testing method which Kim says “is faster, often cheaper, and includes written feedback on *why* one ad won over another.”

    Note: Want an easy way to track costs and conversions from your Facebook Ad campaigns? Download this free template to get started.

    Use statistical significance to define a “win”

Did you know that more than 70% of marketers say their A/B tests are successful less than half the time? Put another way, fewer than 30% of respondents say their tests succeed more than half the time.

[Chart: how often marketers say their A/B tests are successful]

Mike Donnelly of Seventh Sense says: “It’s absolutely critical that you have statistically significant results before you can claim a win. Otherwise, you’re relying on gut rather than actual fact, which can steer you in the wrong direction.”

    Optimizely defines statistical significance as “the likelihood that the difference in conversion rates between a given variation and the baseline is not due to random chance”.

    So, what’s classed as a good result?

“I’ve seen so many marketers stop their A/B tests at 70% or 80% confidence, which isn’t statistically significant,” explains James Pollard of The Advisor Coach. “I always aim for 95% confidence. On rare occasions (when I would need an exorbitant amount of traffic to get to 95%), I cut the test off at 90%.”
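If you’re curious how a testing tool arrives at that confidence figure, here’s a minimal sketch of a two-sided, two-proportion z-test using only Python’s standard library. The conversion counts are made up, and real tools may use slightly different methods:

```python
from statistics import NormalDist

def ab_confidence(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the confidence level (as a percentage) that variants A and B
    really convert at different rates, via a two-sided two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return (1 - p_value) * 100

# Example: 300/10,000 conversions on A vs. 360/10,000 on B (made-up numbers).
print(f"{ab_confidence(300, 10_000, 360, 10_000):.1f}% confidence")
```

James’s 95% bar is equivalent to requiring a p-value below 0.05; this made-up example clears it at roughly 98% confidence.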

    Dig deeper using new vs. returning visitor reports

    If you’re taking the results from your A/B tests at face value, you might see an overwhelming win.

    …That’s awesome!

    However, Ascend Inbound Marketing‘s Gretchen Elliott recommends splitting your reports into new vs. returning visitors because “a returning visitor will be familiar with your site”.

    She says: “Testing with new visitors should give you a more accurate picture of what’s working versus what isn’t.”
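As an illustration of that split (made-up event data, not Ascend’s reporting setup), you can compute a separate conversion rate for each variant and visitor type before calling a winner:

```python
from collections import defaultdict

# Each event: (variant, visitor_type, converted) -- made-up sample data.
events = [
    ("A", "new", True), ("A", "new", False), ("A", "returning", True),
    ("B", "new", True), ("B", "new", True), ("B", "returning", False),
]

totals = defaultdict(lambda: [0, 0])  # (variant, visitor_type) -> [conversions, visits]
for variant, visitor_type, converted in events:
    totals[(variant, visitor_type)][1] += 1
    totals[(variant, visitor_type)][0] += int(converted)

for (variant, visitor_type), (conversions, visits) in sorted(totals.items()):
    print(f"{variant} / {visitor_type}: {conversions / visits:.0%} ({conversions}/{visits})")
```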

Article by Elise Dopson

    Elise Dopson is a freelance B2B writer for SaaS and marketing companies. With a focus on data-driven ideas that truly provide value, she helps brands to get noticed online--and drive targeted website visitors that transform into raving fans.
