Ever wondered how to tell when your A/B test has seen enough traffic to be statistically "finished"? Some analytics experts believe A/B/A tests, also called simultaneous null testing, offer a solution.
A/B/A testing lets you pit one element against another just as you would with an A/B test, but you show the control content to two separate groups (diverting traffic 33%/34%/33%). When the two control groups show the same average order value, conversion rate, or other key metric, you declare the result valid.
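As a rough sketch of the mechanics (assuming B takes the middle 34% share; the group names and conversion rates are made up for illustration), the split and the A1-versus-A2 comparison might look like this in Python:

```python
import random

# Hypothetical 33%/34%/33% split across A1 / B / A2.
# A1 and A2 both see the control content; B sees the variant.
GROUPS = ["A1", "B", "A2"]
WEIGHTS = [0.33, 0.34, 0.33]

def assign_group(rng: random.Random) -> str:
    """Randomly assign an incoming visitor to A1, B, or A2."""
    return rng.choices(GROUPS, weights=WEIGHTS, k=1)[0]

rng = random.Random(42)
visits = {g: 0 for g in GROUPS}
conversions = {g: 0 for g in GROUPS}

# Simulated traffic; in a real test these counts come from your analytics tool.
TRUE_RATE = {"A1": 0.050, "B": 0.058, "A2": 0.050}  # assumed rates
for _ in range(30_000):
    g = assign_group(rng)
    visits[g] += 1
    if rng.random() < TRUE_RATE[g]:
        conversions[g] += 1

for g in GROUPS:
    print(f"{g}: {conversions[g] / visits[g]:.4f} conversion over {visits[g]} visits")
# If A1 and A2 land close together while B stands apart, the B result is
# more believable; a wide A1/A2 gap signals noise or a broken split.
```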
"When the two 'A' groups converge and stay converged, you can be relatively certain that the differences in the test are valid differences," says Eric Peterson, author of Web Analytics Demystified.
A/B/A has two shortcomings. First, it can give false positives with insufficient traffic. At a given moment, two bicycles may be moving at the same speed, but if one is riding uphill and the other downhill, that snapshot is fooling you.
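A quick simulation makes the bicycle point concrete. Suppose the two A groups are genuinely different, say 4% versus 6% true conversion because of a skewed split or a time-of-day effect (assumed numbers); at low traffic they will still frequently look identical:

```python
import random

rng = random.Random(7)

def observed_rate(true_rate: float, n: int) -> float:
    """Observed conversion rate from n simulated visitors."""
    return sum(rng.random() < true_rate for _ in range(n)) / n

TRIALS, TOL = 500, 0.005  # call two rates "equal" if within half a point
for n in (200, 2_000, 20_000):
    matches = sum(
        abs(observed_rate(0.04, n) - observed_rate(0.06, n)) <= TOL
        for _ in range(TRIALS)
    )
    print(f"n={n:>6} visitors/group: falsely 'converged' in {matches / TRIALS:.0%} of trials")
# With little traffic, the two genuinely different groups often match by
# chance; as traffic grows, the real gap becomes impossible to miss.
```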
Second, it is not sufficient for generalizing the results. It is interesting that the B content converted 15% higher during the test. However, it is really exciting when that lift shows up in your sales reports in the weeks after the test is complete! Real statistical certainty helps separate out other influences, like time of day, day of week, and other factors outside the experiment.
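For that "real statistical certainty," the conventional tool is a two-proportion z-test on the B group versus the combined A groups. The visit and conversion counts below are hypothetical, picked so that B shows roughly the 15% lift mentioned above:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical counts: combined A groups vs. the B group.
z, p = two_proportion_z_test(conv_a=495, n_a=9_900, conv_b=590, n_b=10_200)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests the lift is real
```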
As a shortcut, however, proponents of the A/B/A testing technique still believe it's a sure-fire way to confirm that your test results are legitimate.