Do I have to wait until this test ends to start that one?
Testing Velocity
Increasing the number of tests you run each year is key to a successful A/B testing and conversion rate optimization program. It just makes sense. If you run more tests, you’ll typically get more wins.
So, how do you run more tests without waiting for one test to end before starting the next? Run more than one at the same time. You might ask, “But won’t that call the results of both tests into question?” It can. But it won’t if you plan your tests strategically using the following guidelines.
Three concepts determine which tests you can safely run at the same time: Participation, Interaction, and Proximity, or “PIP.”
Participation
What is the chance that a visitor who participates in one test will also participate in another? If a visitor can’t participate in multiple tests, there’s no chance of interaction between those tests, and therefore no risk that one will call the other’s results into question.
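To make this concrete, here is a minimal sketch of the expected overlap between two concurrent tests, assuming visitors are assigned to each test independently at random. The traffic and participation figures are purely hypothetical examples.

```python
# Minimal sketch: estimating how many visitors might land in two
# concurrent tests, assuming independent random assignment.
# All figures below are hypothetical.

monthly_visitors = 100_000   # example site traffic
p_test_a = 0.40              # share of visitors entering test A
p_test_b = 0.25              # share of visitors entering test B

# Under independent assignment, the expected overlap is simply the
# product of the two participation rates.
overlap_rate = p_test_a * p_test_b
overlap_visitors = monthly_visitors * overlap_rate

print(f"Expected overlap: {overlap_rate:.0%} "
      f"(~{overlap_visitors:,.0f} visitors/month)")
# If the two tests target disjoint pages or audiences, the overlap is
# zero and there is no chance of interaction between them.
```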
Interaction
Is the nature of the tests such that participating in two (or more) of them would create an interaction within a user’s experience? An example of negative interaction: on an eCommerce site, you typically wouldn’t want to run one test offering a 10% discount on one page while another test offers a 20% discount on another page. The confusion could really hurt your conversion rate, and confused visitors tend to do a terrible thing… nothing.
Positive interaction is also possible. If the messaging of an element being tested in one test is complementary to that of a second test, visitors participating in both tests may be influenced even more strongly than if they participated in only one of the tests.
To avoid either a negative or a positive impact of multiple tests interacting with each other, the elements and messaging tested in one test should typically be unrelated to those in other tests running at the same time. If two tests both involve selling points of your product, they’re more likely to interact than if one tests a selling point and the other tests whether a carousel/slider works better than static images.
A/B tests generally fall into the following test types:
- Navigation – How do visitors find their way around your site (search features, menus, etc.)
- Layout – How things are arranged or presented
- Messaging – The words and images that tell visitors why they should act
- Functionality – How visitors interact with elements on a page
- Inclusion/Exclusion – Which elements exist on the page
To reduce the chances of interaction between tests, one strategy is to avoid running multiple A/B tests of the same type at the same time.
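As a rough illustration of how a team might enforce that rule in their own scheduling tooling, here is a minimal pre-launch conflict check. The Test structure, the test names, and the type labels are all hypothetical, not part of any particular testing platform.

```python
# Minimal sketch: flag a candidate test that shares a test type with
# any currently running test. Names and structure are hypothetical.

from dataclasses import dataclass

@dataclass
class Test:
    name: str
    test_type: str   # e.g. "navigation", "layout", "messaging"
    section: str     # e.g. "homepage", "product detail"

def type_conflicts(candidate: Test, running: list) -> list:
    """Return names of running tests with the same test type."""
    return [t.name for t in running if t.test_type == candidate.test_type]

running_tests = [Test("hero-message-v2", "messaging", "homepage")]
candidate = Test("pdp-benefit-copy", "messaging", "product detail")

clashes = type_conflicts(candidate, running_tests)
if clashes:
    print(f"Hold '{candidate.name}': same test type as {clashes}")
```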
Proximity
How close together in the visitor journey are the elements being tested in each test? If two parts of the homepage are being tested in two separate tests, there is a greater likelihood of interaction than if one element being tested is on the homepage and the second is on a page several pages deep in the user experience, or in an unrelated site section.
Testing Tracks
Proximity has the greatest effect on potential interaction between tests. For this reason, many successful testing programs schedule their tests by considering which pages or site sections are affected by each test. A typical eCommerce site’s pages fall into categories like the following:
- Homepage
- Campaign-specific landing pages
- Product category pages
- Product detail pages
- Shopping cart
- Order confirmation page
- Internal search results page
With a little strategic planning, each of these could have a test running at the same time with minimal chance of interference. Each could have its own “testing track”: a set of tests run, one after another, within that site section.
A good Testing Roadmap document includes a schedule of current and upcoming tests. Categorize your A/B tests into testing tracks – one for the homepage, one for product detail pages, and so on. Maintain a queue of tests that are ready to go for each track, so as soon as the current test in one testing track concludes, you have another one ready to launch.
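One lightweight way to represent such a roadmap is as a queue of upcoming tests per track, so a concluded test can be replaced immediately. The sketch below assumes a simple in-memory structure; every track name and test name in it is hypothetical.

```python
# Minimal sketch: a testing roadmap as one queue of upcoming tests per
# track. All track and test names are hypothetical examples.

from collections import deque
from typing import Optional

roadmap = {
    "homepage":       deque(["hero-image-test", "value-prop-headline"]),
    "product detail": deque(["review-placement", "size-guide-link"]),
    "shopping cart":  deque(["trust-badges", "promo-code-field"]),
}

def launch_next(track: str) -> Optional[str]:
    """When a track's current test concludes, pull the next queued test."""
    queue = roadmap.get(track)
    if queue:
        return queue.popleft()
    return None  # empty queue: time to add ideas to the roadmap

print(launch_next("homepage"))  # -> hero-image-test
```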
Multiple Concurrent Tests = More Wins
Breaking your A/B tests into testing tracks and always keeping a test running in each track is key to improving your testing velocity and the overall effectiveness of your conversion rate optimization program.
This is particularly important for sites with relatively low traffic. If you only have enough visitors to reach statistical confidence on a test every six weeks or so, you may only be able to run about 8 tests per year. On the other hand, if you can keep four testing tracks going at all times, you might be able to run 32 tests per year.
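The back-of-the-envelope math behind those numbers, using the example figures from the paragraph above:

```python
# Tests per year as a function of test duration and concurrent tracks,
# using the example figures from the text.

weeks_per_test = 6    # example time to reach statistical confidence
tracks = 4            # concurrent testing tracks

tests_per_track = 52 // weeks_per_test   # ~8 tests per year per track
total_tests = tests_per_track * tracks   # ~32 tests per year

print(tests_per_track, total_tests)      # 8 32
```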
Of course, we’re oversimplifying somewhat, but the point is that running more tests by using testing tracks will greatly increase your chance for significant improvements to your conversion rate… and to your bottom line.
Testing tracks are just one way that Perficient Digital helps our clients to fuel successful conversion rate optimization (CRO) programs. If you feel like you are already squeezing every sale or lead out of your website, we hope this article was useful, but maybe you’re open to considering some new ideas and insights. If so, contact us for a complimentary evaluation of your conversion rate optimization program.