Category Archives: Statistical Significance

Designing successful A/B tests in Email Marketing

The process of A/B testing (a.k.a. online controlled experiments) is well-established in conversion rate optimization for all kinds of online properties and is widely used by e-commerce websites. On this blog I have already written in depth about the statistics involved as well as the ROI calculations in terms of balancing risk and reward for […] Read More…


Confidence Intervals & P-values for Percent Change / Relative Difference

In many controlled experiments, including online controlled experiments (a.k.a. A/B tests), the result of interest, and hence the inference made, is about the relative difference between the control and treatment group. In A/B testing as part of conversion rate optimization, and in marketing experiments in general, we use the term “percent lift” (“percentage lift”), while in […] Read More…
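For readers who want the gist before clicking through, here is a minimal Python sketch of one common approximation for a percent-lift confidence interval: the log-transform (delta-method) interval for a ratio of two proportions. The conversion counts are made up for illustration, and this is not necessarily the exact method discussed in the article.

```python
# Minimal sketch: approximate confidence interval for percent lift between two
# proportions, using the log-transform (delta-method) approximation for the ratio.
# The conversion counts below are illustrative placeholders.
from math import exp, log, sqrt
from scipy.stats import norm

def percent_lift_ci(conv_c, n_c, conv_t, n_t, confidence=0.95):
    p_c, p_t = conv_c / n_c, conv_t / n_t
    lift = (p_t - p_c) / p_c                       # observed percent lift (as a fraction)
    # Standard error of log(p_t / p_c) under the delta-method approximation
    se_log = sqrt((1 - p_c) / (n_c * p_c) + (1 - p_t) / (n_t * p_t))
    z = norm.ppf(1 - (1 - confidence) / 2)
    log_ratio = log(p_t / p_c)
    lo = exp(log_ratio - z * se_log) - 1           # lower bound for the relative difference
    hi = exp(log_ratio + z * se_log) - 1           # upper bound for the relative difference
    return lift, lo, hi

print(percent_lift_ci(200, 10_000, 240, 10_000))   # e.g. a 20% observed lift and its interval
```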


Risk vs. Reward in A/B Tests: A/B testing as Risk Management


What is the goal of A/B testing? How long should I run a test for? Is it better to run many quick tests, or one long one? How do I know when is a good time to stop testing? How do I choose the significance threshold for a test? Is there something special about 95%? […] Read More…


Statistical Significance for Non-Binomial Metrics – Revenue per User, AOV, etc.


In this article I cover the method required to calculate statistical significance for non-binomial metrics such as average revenue per user, average order value, average sessions per user, average session duration, average pages per session, and others. The focus is on A/B testing in the context of conversion rate optimization, landing page optimization and e-mail […] Read More…
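As a rough taste of the approach (the article covers the assumptions and pitfalls in detail), here is a minimal sketch using Welch's unequal-variance t-test on simulated per-user revenue. The simulated data and the SciPy call are illustrative assumptions, not the article's exact procedure.

```python
# Minimal sketch: significance test for a non-binomial metric (revenue per user)
# via Welch's unequal-variance t-test on per-user values. The zero-heavy revenue
# arrays are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated per-user revenue: most users spend nothing, a small share convert and spend.
control = np.where(rng.random(5000) < 0.05, rng.gamma(2.0, 30.0, 5000), 0.0)
variant = np.where(rng.random(5000) < 0.06, rng.gamma(2.0, 30.0, 5000), 0.0)

t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"ARPU control={control.mean():.2f}  variant={variant.mean():.2f}  p={p_value:.4f}")
```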


One-tailed vs Two-tailed Tests of Significance in A/B Testing


The question of whether one should run A/B tests (a.k.a. online controlled experiments) using one-tailed versus two-tailed tests of significance was something I didn’t even consider important, as I thought the answer (one-tailed) was so self-evident that no discussion was necessary. However, while preparing for my course on “Statistics in A/B Testing” for the ConversionXL […] Read More…
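To preview the distinction with numbers: a minimal sketch computing the one-tailed and two-tailed p-values from the same two-proportion z statistic. The conversion counts are invented for illustration.

```python
# Minimal sketch: one-tailed vs two-tailed p-values from the same pooled
# two-proportion z statistic; the conversion counts are illustrative only.
from math import sqrt
from scipy.stats import norm

def z_test_proportions(conv_c, n_c, conv_t, n_t):
    p_c, p_t = conv_c / n_c, conv_t / n_t
    p_pool = (conv_c + conv_t) / (n_c + n_t)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_t))
    return (p_t - p_c) / se

z = z_test_proportions(500, 10_000, 560, 10_000)
p_one_tailed = norm.sf(z)            # H1: the variant is better than the control
p_two_tailed = 2 * norm.sf(abs(z))   # H1: the variant differs in either direction
print(f"z={z:.3f}  one-tailed p={p_one_tailed:.4f}  two-tailed p={p_two_tailed:.4f}")
```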


The Case for Non-Inferiority A/B Tests


In this article, I explore the concept of non-inferiority A/B tests and contrast it to the broadly accepted practice of running superiority tests. I explain where non-inferiority tests are necessary and how a CRO/LPO/UX testing specialist can make use of this new approach to A/B testing to run much faster tests, and to ultimately achieve […] Read More…
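As a hedged illustration of the idea (not necessarily the exact formulation used in the article): a minimal non-inferiority z-test for two proportions, where the variant is accepted if we can reject that it is worse than the control by more than a pre-specified margin. The margin of 0.5 percentage points and the conversion counts are assumptions made up for the example.

```python
# Minimal sketch: non-inferiority z-test for two proportions. H0 is "the variant
# is worse than the control by more than the margin"; rejecting H0 supports
# non-inferiority. Margin and counts are illustrative assumptions.
from math import sqrt
from scipy.stats import norm

def non_inferiority_test(conv_c, n_c, conv_t, n_t, margin=0.005):
    p_c, p_t = conv_c / n_c, conv_t / n_t
    # Unpooled standard error of the difference in proportions
    se = sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    # H0: p_t - p_c <= -margin  vs  H1: p_t - p_c > -margin
    z = (p_t - p_c + margin) / se
    return z, norm.sf(z)  # one-sided p-value

z, p = non_inferiority_test(520, 10_000, 525, 10_000, margin=0.005)
print(f"z={z:.3f}  p={p:.4f}")  # a p-value below the chosen alpha supports non-inferiority
```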
