A/B or A+B testing?

August 27th, 2018

A/B testing is the most common testing method used by marketers, but the standard winner-take-all read-out can hide an opportunity.

By definition, A/B testing is when you alter one aspect of an appeal, measure the result, and roll out the winner to a wider audience. Bias is removed by ensuring the audiences are randomly split and the messages are sent at the same time, with results tested for statistical significance before conclusions are drawn.
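
To make that last step concrete, here is a minimal sketch of the kind of significance check involved: a two-sided, two-proportion z-test on response counts. The function name and the mailing numbers are purely illustrative, not figures from any campaign discussed here.

```python
import math

def two_proportion_ztest(resp_a, n_a, resp_b, n_b):
    """Two-sided z-test for a difference in two response rates."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    p_pool = (resp_a + resp_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative numbers only: 10,000 pieces mailed per panel.
z, p = two_proportion_ztest(resp_a=230, n_a=10_000, resp_b=200, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # declare a winner only if p clears your threshold
```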

So what could possibly be wrong with this test design?

MarkeTeam recently tested two acquisition approaches for one of our clients, with completely different thematic platforms. The clear winner had a 15% lift in response and a 10% lift in the average gift, so the obvious decision would be to roll it out to the full audience. That’s where it got interesting …

We took a closer look at the results by list source and saw significant swings in the response for the test package. By pulling a profile on the respondents, we discovered significant differences in age, geographic location, political affiliation, wealth, and lifestyle between the two packages—enough difference to justify building a response model.
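
A quick way to surface those swings is to break the results out by segment before rolling anything out. Here is a sketch with pandas; the file and column names (list_source, package, responded) are hypothetical placeholders for whatever your results file contains.

```python
import pandas as pd

# Hypothetical results file: one row per mail piece, with the package received,
# the originating list source, and a 0/1 response flag.
df = pd.read_csv("campaign_results.csv")

rates = (
    df.groupby(["list_source", "package"])["responded"]
      .mean()
      .unstack("package")  # one column of response rates per package
)
rates["lift_vs_control"] = rates["test"] / rates["control"] - 1
print(rates.sort_values("lift_vs_control"))  # big swings flag a non-homogeneous audience
```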

The model predicted the likelihood to respond to one approach versus the other, using a variety of data sources. It scaled beautifully, with over a 100% lift in response for the top 2 deciles and more than a 50% decrease for the bottom 2 deciles.
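
The post does not describe the modeling technique, but one common way to predict "which package will this person respond to" is a two-model (T-learner) setup: fit one response model per package, then score everyone on the difference. Below is a sketch with scikit-learn on synthetic data; every feature, variable, and coefficient here is an assumption made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
X = rng.normal(size=(n, 5))       # hypothetical stand-ins for age, wealth, geo, etc.
pkg = rng.integers(0, 2, size=n)  # 0 = control package, 1 = test package (random split)

# Synthetic ground truth: the first feature flips which package a person prefers.
logit = -3.5 + np.where(pkg == 1, 0.8, -0.8) * X[:, 0]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Two-model ("T-learner") setup: one response model per package.
m_ctl = LogisticRegression().fit(X[pkg == 0], y[pkg == 0])
m_test = LogisticRegression().fit(X[pkg == 1], y[pkg == 1])

# Predicted gain in response probability from sending the test package.
uplift = m_test.predict_proba(X)[:, 1] - m_ctl.predict_proba(X)[:, 1]

# Rank everyone into deciles: decile 1 = strongest predicted preference for test.
order = np.argsort(-uplift)
decile = np.empty(n, dtype=int)
decile[order] = np.arange(n) * 10 // n + 1

for d in (1, 10):  # compare observed response rates in the extreme deciles
    mask = decile == d
    print(d, y[mask & (pkg == 1)].mean(), y[mask & (pkg == 0)].mean())
```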

About 60% of the volume responded better to the test package, while 40% preferred the previous control. We have since used the model to send both packages across the entire acquisition audience, splitting the volume by the modeled scores, and produced an additional double-digit lift over the original winning test response.
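
Deployment then reduces to a simple assignment rule over the scored file. Continuing the hypothetical sketch above:

```python
# Mail each prospect whichever package the model predicts they will prefer.
assignment = np.where(uplift > 0, "test", "control")
print(dict(zip(*np.unique(assignment, return_counts=True))))  # volume split per package
```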

The lesson from this example? Significant opportunities may be hiding in A/B test results, especially when response rates swing widely across segments. If you don't assume the audience is homogeneous, you can develop alternative communications that significantly lift response.

By Andy Johnson | Vice President
