Articles on statistical power

A Comprehensive Guide to Observed Power (Post Hoc Power)

“Observed power”, “post hoc power”, and “retrospective power” all refer to the statistical power of a significance test to detect a true effect equal to the observed effect. In a broader sense these terms may also describe any power analysis performed after an experiment has been completed. Importantly, it is the first, narrower sense that […] Read more…
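For readers who want the narrow sense made concrete, here is a minimal sketch (an editorial illustration, not code from the article) for a two-sided z-test. Observed power is then a deterministic function of the observed z statistic, and therefore of the p-value; a result landing exactly at the significance threshold always corresponds to observed power of roughly 50%:

```python
# Sketch: observed power of a two-sided z-test, i.e. power computed
# as if the true standardized effect equals the observed z statistic.
from scipy.stats import norm

def observed_power(z_obs, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    # Symmetric in z_obs, so a function of |z_obs| and hence of the p-value
    return norm.cdf(z_obs - z_crit) + norm.cdf(-z_obs - z_crit)

z_at_threshold = norm.ppf(1 - 0.05 / 2)  # z for p exactly 0.05 (~1.96)
print(observed_power(z_at_threshold))    # ~0.50
```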

Posted in Statistics

What if the Observed Effect is Smaller Than the MDE?

The above is a question asked by some A/B testing practitioners, as well as a number of their clients, when examining the outcome of an online controlled experiment. It may be raised regardless of whether the outcome is statistically significant or not. In both cases the fact that the observed effect in an A/B test is […] Read more…
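For intuition, a back-of-the-envelope sketch (with assumed, illustrative design parameters): under a one-sided z-test at α = 0.05 planned for 80% power, the smallest statistically significant observed effect sits at about two-thirds of the MDE, so a significant result with an observed effect smaller than the MDE is entirely possible:

```python
# Sketch: ratio of the smallest significant observed effect to the MDE
# for a one-sided z-test; alpha and power are assumed design values.
from scipy.stats import norm

alpha, power = 0.05, 0.80
z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)

# The design sets MDE = (z_a + z_b) * SE, while any observed effect
# above z_a * SE reaches statistical significance. Their ratio:
print(z_a / (z_a + z_b))  # ~0.66
```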

Posted in A/B testing, Statistics

Using Observed Power in Online A/B Tests

Observed power, also referred to as “post hoc power” or “retrospective power”, is the statistical power of a test to detect a true effect equal to the observed effect size. To “detect”, in the context of a statistical hypothesis test, means to produce a statistically significant outcome. Some calculators aimed at A/B testing practitioners use […] Read more…
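To make the computation such calculators perform concrete, here is a hypothetical sketch for a two-proportion z-test; the conversion counts below are made up for illustration:

```python
# Sketch: "observed power" for a two-proportion z-test, plugging the
# observed rates back in as if they were the true ones.
import numpy as np
from scipy.stats import norm

def observed_power_two_prop(x1, n1, x2, n2, alpha=0.05):
    p1, p2 = x1 / n1, x2 / n2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z_obs = (p2 - p1) / se                # observed standardized effect
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(z_obs) - z_crit) + norm.cdf(-abs(z_obs) - z_crit)

# Made-up counts: 5.0% vs 5.4% conversion rate, 20,000 users per arm
print(observed_power_two_prop(1000, 20000, 1080, 20000))  # ~0.44
```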

Posted in A/B testing, Statistics

Stop AbUsing the Mann-Whitney U Test (MWU)

The Mann-Whitney U test (MWU), also known as the Wilcoxon rank-sum test and the Mann-Whitney-Wilcoxon test, continues to be advertised as the go-to test for analyzing non-normally distributed data. In online experimentation it is often touted as the most suitable test for analyses of non-binomial metrics with typically non-normal (skewed) distributions such as average […] Read more…
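A small simulation illustrates the concern (an editorial sketch, not from the article): the test can strongly reject for two samples with practically identical means, because it responds to distributional differences in general rather than to a difference in means:

```python
# Sketch: equal means, yet the Mann-Whitney U test rejects, since the
# skewed sample's values tend to rank lower than the symmetric sample's.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=5000)      # skewed; mean 1, median ~0.69
y = rng.normal(loc=1.0, scale=0.1, size=5000)  # symmetric; mean 1

print(x.mean(), y.mean())                           # both ~1.0
print(mannwhitneyu(x, y, alternative='two-sided'))  # p-value near zero
```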

Posted in A/B testing, Statistics

False Positive Risk in A/B Testing

Have you heard that the probability of a statistically significant test outcome being a false positive is much greater than generally expected? In industry jargon: that a variant has been identified as a “winner” when it is not. In demonstrating the above, the terms “False Positive Risk” (FPR), “False Findings Rate” (FFR), […] Read more…
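Such figures typically follow from a straightforward application of Bayes' rule. A worked example (the prior proportion of true "winners" is an assumption chosen purely for illustration):

```python
# Sketch: false positive risk (FPR), the probability that a significant
# outcome is a false positive, via Bayes' rule with an assumed prior.
alpha, power, prior = 0.05, 0.80, 0.10  # prior: 10% of variants truly win

fpr = (alpha * (1 - prior)) / (alpha * (1 - prior) + power * prior)
print(round(fpr, 3))  # 0.36: over a third of "winners" are false positives
```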

Posted in A/B testing, Bayesian A/B testing, Statistics

How to Run Shorter A/B Tests?

Running shorter tests is key to improving the efficiency of experimentation, as it translates to smaller direct losses from testing inferior experiences and less unrealized revenue due to late implementation of superior ones. Despite this, many practitioners have yet to start conducting tests at the frontier of efficiency. This article presents ways to shorten […] Read more…

Posted in A/B testing, Statistics

Comparison of the statistical power of sequential tests: SPRT, AGILE, and Always Valid Inference

In A/B testing, sequential tests are gradually becoming the norm due to the increased efficiency and flexibility they grant practitioners. In most practical scenarios a sequential test offers a balance of risks and rewards superior to that of an equivalent fixed-sample test. Sequential monitoring achieves this superiority by trading statistical power for the ability […] Read more…
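For readers unfamiliar with the first of the compared designs, here is a minimal sketch of Wald's SPRT for a binomial metric (the simple hypotheses and parameter values are assumptions for illustration, and the thresholds use Wald's classic approximations):

```python
# Sketch: Wald's SPRT testing p0 vs p1 on a stream of 0/1 outcomes.
import numpy as np

def sprt(observations, p0=0.10, p1=0.12, alpha=0.05, beta=0.20):
    upper = np.log((1 - beta) / alpha)  # cross above: accept H1, stop
    lower = np.log(beta / (1 - alpha))  # cross below: accept H0, stop
    llr = 0.0                           # cumulative log-likelihood ratio
    for i, x in enumerate(observations, 1):
        llr += x * np.log(p1 / p0) + (1 - x) * np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return ("H1", i)
        if llr <= lower:
            return ("H0", i)
    return ("no decision", len(observations))

rng = np.random.default_rng(7)
print(sprt(rng.binomial(1, 0.12, size=20000)))  # typically stops well early
```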

Posted in A/B testing, AGILE A/B testing, Statistics

Statistical Power, MDE, and Designing Statistical Tests

One topic has surfaced as causing the most confusion in my ten years of developing statistical tools, consulting, and participating in discussions with CRO & A/B testing practitioners: statistical power and the related concept of minimum detectable effect (MDE). Some myths were previously dispelled in “Underpowered A/B tests – confusions, […] Read more…
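As a concrete anchor for the MDE concept, a simplified sketch using a normal approximation (the baseline rate and sample sizes are illustrative, and the baseline variance is assumed in both arms):

```python
# Sketch: MDE of a two-sided two-proportion z-test as a function of
# sample size; note that quadrupling the sample roughly halves the MDE.
import numpy as np
from scipy.stats import norm

def mde(baseline, n_per_arm, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    se = np.sqrt(2 * baseline * (1 - baseline) / n_per_arm)
    return z * se  # smallest absolute difference detectable at target power

for n in (10_000, 40_000, 160_000):
    print(n, round(mde(0.05, n) / 0.05, 3))  # relative MDE: 0.173, 0.086, 0.043
```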

Posted in A/B testing, Statistics

What Can Be Learned From 1,001 A/B Tests?

How long does a typical A/B test run for? What percentage of A/B tests result in a ‘winner’? What is the average lift achieved in online controlled experiments? How good are top conversion rate optimization specialists at coming up with impactful interventions for websites and mobile apps? This meta-analysis of 1,001 A/B tests analyzed using […] Read more…

Posted in A/B testing, AGILE A/B testing, Conversion optimization

Fully Sequential vs Group Sequential Tests

What is the best design for a statistical test with sequential evaluation of the data at multiple points in time? This is a question eventually asked by anyone who has realized that unaccounted-for peeking with intent to stop is the bane of A/B testing. So how does one go about answering it? This […] Read more…
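The premise is easy to verify by simulation (an editorial sketch, not from the article): applying an unadjusted fixed-sample test at several interim looks and stopping at the first nominal p < 0.05 inflates the false positive rate far above the nominal level:

```python
# Sketch: five unadjusted looks at data with no true effect push the
# false positive rate from a nominal 5% to roughly 14%.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n_sims, looks, n_per_look = 10_000, 5, 200
z_crit = norm.ppf(0.975)  # nominal two-sided 5% threshold

false_positives = 0
for _ in range(n_sims):
    data = rng.normal(0, 1, size=looks * n_per_look)  # no true effect
    for k in range(1, looks + 1):
        sample = data[: k * n_per_look]
        z = sample.mean() / (sample.std(ddof=1) / np.sqrt(len(sample)))
        if abs(z) > z_crit:  # "peek" with intent to stop
            false_positives += 1
            break

print(false_positives / n_sims)  # ~0.14
```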

Posted in AGILE A/B testing, Statistics

A/B Testing Statistics – A Concise Guide for Non-Statisticians

Navigating the maze of A/B testing statistics can be challenging. This is especially true for those new to statistics and probability. One reason is the obscure terminology popping up in every other sentence. Another is that the writings can be vague, conflicting, incomplete, or simply wrong, depending on the source. Articles sprinkled with advanced math, […] Read more…

Posted in A/B testing, Statistics

Underpowered A/B Tests – Confusions, Myths, and Reality

In recent years a lot more CRO & A/B testing practitioners have started paying attention to the statistical power of their online experiments, at least based on my observations. While this is a positive development to which I hope to have contributed somewhat, it comes with the inevitable confusions and misunderstandings surrounding a complex concept […] Read more…

Posted in A/B testing, Statistics