Articles on test duration

How to Run Shorter A/B Tests?


Running shorter tests is key to efficient experimentation: it means smaller direct losses from exposing users to inferior experiences and less unrealized revenue from the delayed rollout of superior ones. Despite this, many practitioners have yet to conduct tests at the frontier of efficiency. This article presents ways to shorten […] Read more…
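
As a rough illustration of the trade-off the article starts from (with entirely made-up numbers, not figures from the post), here is a back-of-the-envelope sketch of what two unnecessary extra weeks of testing can cost:

```python
# Back-of-the-envelope cost of an unnecessarily long A/B test.
# Every number here is hypothetical, chosen only for illustration.

daily_revenue = 10_000   # site revenue per day, in dollars
test_share = 0.5         # share of traffic exposed to the variant
lift = 0.05              # true effect size, 5% in either direction
extra_days = 14          # how much longer the test runs than needed

# Direct loss: if the variant is 5% worse, the traffic routed to it
# keeps converting worse for every extra day the test runs.
direct_loss = daily_revenue * test_share * lift * extra_days

# Unrealized revenue: if the variant is 5% better, every extra day of
# testing delays rolling the improvement out to the remaining traffic.
unrealized = daily_revenue * (1 - test_share) * lift * extra_days

print(f"Inferior variant: ~${direct_loss:,.0f} lost outright")
print(f"Superior variant: ~${unrealized:,.0f} left unrealized")
```

Either way the test errs, extra duration has a price, which is why test length is an economic decision and not just a statistical one.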

Posted in A/B testing, Statistics

What Can Be Learned From 1,001 A/B Tests?


How long does a typical A/B test run for? What percentage of A/B tests result in a ‘winner’? What is the average lift achieved in online controlled experiments? How good are top conversion rate optimization specialists at coming up with impactful interventions for websites and mobile apps? This meta-analysis of 1,001 A/B tests analyzed using […] Read more…

Posted in A/B testing, AGILE A/B testing, Conversion optimization

20-80% Faster A/B Tests? Is It Real?

[Chart: percent of test runs stopping at stage 1, by delta]

I got a question today about our AGILE A/B testing calculator and the statistics behind it, and realized that I have yet to write a dedicated post explaining the efficiency gains from using the method in more detail. This is despite the fact that these speed gains are clearly communicated and verified through simulation results presented in our AGILE […] Read more…
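
For a feel of where such speed gains come from before clicking through: the toy simulation below is not the AGILE method itself, just a generic group-sequential z-test for two proportions with a constant (Pocock-style) stopping boundary and invented parameters, showing how interim analyses cut the average sample size when a real effect exists.

```python
# Toy simulation of a group-sequential A/B test vs. a fixed-horizon test.
# This is NOT the AGILE method -- only a generic illustration of why
# interim looks shorten tests on average. All parameters are made up.
import numpy as np

rng = np.random.default_rng(42)

p_control, p_variant = 0.10, 0.12   # hypothetical conversion rates
n_max = 20_000                      # per-arm sample size at the final look
looks = 5                           # number of equally spaced analyses
z_bound = 2.41                      # approx. Pocock boundary, 5 looks, alpha=0.05
runs = 2_000

stopped_at = []
for _ in range(runs):
    a = rng.binomial(1, p_control, n_max)   # control conversions
    b = rng.binomial(1, p_variant, n_max)   # variant conversions
    for k in range(1, looks + 1):
        n = n_max * k // looks              # sample size at this look
        pa, pb = a[:n].mean(), b[:n].mean()
        pooled = (a[:n].sum() + b[:n].sum()) / (2 * n)
        se = np.sqrt(2 * pooled * (1 - pooled) / n)
        z = (pb - pa) / se
        if abs(z) >= z_bound or k == looks:  # stop early or at the horizon
            stopped_at.append(n)
            break

avg_n = np.mean(stopped_at)
print(f"Average per-arm sample size: {avg_n:,.0f} "
      f"({100 * (1 - avg_n / n_max):.0f}% fewer than the fixed test)")
```

With a genuine lift present, most simulated runs cross the boundary at an early look, which is the mechanism behind the headline speed-up.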

Posted in A/B testing, AGILE A/B testing, Statistics

Risk vs. Reward in A/B Tests: A/B Testing as Risk Management


What is the goal of A/B testing? How long should I run a test for? Is it better to run many quick tests or one long one? How do I know when it is a good time to stop testing? How do I choose the significance threshold for a test? Is there something special about 95%? […] Read more…
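
To make the risk-management framing concrete before diving in (all priors, power figures, and dollar costs below are invented for illustration), the significance threshold can be viewed as a dial trading the cost of shipping a false positive against the value of catching a true winner:

```python
# Treating the significance threshold as a risk-management dial.
# Hypothetical numbers throughout -- not taken from the article.

p_effect = 0.3                 # prior probability the variant truly helps
cost_false_positive = 50_000   # loss from shipping a harmful/neutral change
gain_true_positive = 200_000   # gain from shipping a genuine winner

# Assumed operating characteristics at two thresholds, for the same
# fixed sample size (the power figures are made up):
scenarios = {
    "alpha = 0.05": {"alpha": 0.05, "power": 0.70},
    "alpha = 0.10": {"alpha": 0.10, "power": 0.80},
}

for name, s in scenarios.items():
    # Expected value = P(ship a winner) * gain - P(ship a dud) * cost
    ev = (p_effect * s["power"] * gain_true_positive
          - (1 - p_effect) * s["alpha"] * cost_false_positive)
    print(f"{name}: expected value per test ~ ${ev:,.0f}")
```

Under these particular made-up numbers the looser threshold has the higher expected value, which is exactly the point: there is nothing sacred about 95%, and the right threshold depends on the risks and rewards at stake.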

Posted in A/B testing, Conversion optimization, Statistical significance, Statistics