You have paid thousands for a new landing page treatment, and you set up an A/B test against the existing version to confirm that it actually resonates with customers. After the first day of testing, the new version is making waves. “Our new design drove 10 percent more sales on its first day!” you hear. “Let’s stop wasting traffic on the old design and end the test!”
Don’t fall for this. No traffic is wasted as long as the test has not yet run long enough to give you confidence in the results.
When you test, you are trying to find out which marketing elements actually affect the behavior of a segment of customers. To feel confident, you need to run the test until you are certain that the improved conversion is caused by the promotion, copy, or navigation you are testing, and not by random chance.
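To make “certain” concrete, here is a minimal sketch of the kind of check a testing tool runs under the hood: a two-proportion z-test, assuming independent visitors and enough traffic for a normal approximation. All visitor and conversion counts below are hypothetical, chosen to mirror the day-one anecdote above.

```python
# Minimal two-proportion z-test: is version B's lift real, or just noise?
# Assumes independent visitors and a normal approximation to the binomial.
from scipy.stats import norm

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference in two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - norm.cdf(abs(z)))               # two-sided test

# Day one: 1,000 hypothetical visitors per version, a 10% relative lift for B.
print(ab_test_p_value(conv_a=50, n_a=1000, conv_b=55, n_b=1000))  # ~0.62
```

A p-value around 0.62 means a day-one lift of this size would show up by chance most of the time; a common bar for confidence is a p-value below 0.05, which this result is nowhere near.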
And this is very, very tricky to do with a rule of thumb.
If you are lucky and the element you are testing has a massive impact, your tool should indicate statistical significance very quickly. Even then, you should run the test for a full two weeks, just to make certain that weekend-versus-weekday differences in customer behavior don’t fundamentally change those results.
Many very useful marketing changes, however, have a less obvious “signal” -- the change may provide real value, but it takes more traffic to validate. This is where the deliberate application of statistics to determine confidence is absolutely necessary. It may take 4-6 weeks, but the waiting matters.
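To see why subtler signals take weeks, here is a rough sample-size sketch: the traffic you need per variant grows with the inverse square of the lift you are trying to detect, so halving the lift roughly quadruples the required traffic. The baseline conversion rate and lift sizes below are hypothetical.

```python
# Rough visitors-per-variant estimate for a two-proportion test at
# 95% significance and 80% power. Traffic needed grows with the
# inverse square of the lift; baseline and lifts are hypothetical.
from scipy.stats import norm

def visitors_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Approximate visitors per variant to detect a shift from p1 to p2."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_b = norm.ppf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

baseline = 0.05  # hypothetical 5% conversion rate
for lift in (0.50, 0.10, 0.02):  # 50%, 10%, and 2% relative lift
    n = visitors_per_variant(baseline, baseline * (1 + lift))
    print(f"{lift:>4.0%} lift: ~{n:>9,.0f} visitors per variant")
```

A 50 percent lift needs on the order of 1,500 visitors per variant, a 10 percent lift around 31,000, and a 2 percent lift around 750,000 -- at modest traffic levels, the last two are weeks or months of visitors, which is exactly why a low-signal change cannot be judged in a day.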
If you can’t reach confidence within 4-6 weeks, an element that shows improved conversion might be merely “directional” -- it suggests you are on the right track and need to keep refining.
It's a bit like fishing: you might fish for a while without any luck, but with each cast, you learn something. You try this fly, you try that spot on the river. When the fish goes after your fly, that is directional. When the fish is on your plate, that is confidence.