# Category Archives: Statistics

## Confidence Intervals & P-values for Percent Change / Relative Difference

In many controlled experiments, including online controlled experiments (a.k.a. A/B tests), the result of interest, and hence the inference made, is about the relative difference between the control and treatment groups. In A/B testing as part of conversion rate optimization, and in marketing experiments in general, we use the term “percent lift” (“percentage lift”), while in […] Read More…

Also posted in A/B Testing, Conversion Optimization, Statistical Significance | Comments closed
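The excerpt above doesn't show the article's actual derivation, but as an illustration of why inference on percent lift differs from inference on the absolute difference, here is a minimal sketch of one common approach: a delta-method confidence interval for the relative difference between two proportions. The function name `percent_lift_ci` and the independence assumption between groups are mine, not the article's.

```python
from math import sqrt
from statistics import NormalDist

def percent_lift_ci(x_c, n_c, x_t, n_t, confidence=0.95):
    """Approximate CI for percent lift (p_t - p_c) / p_c via the delta method.

    Assumes independent groups and sample sizes large enough for the
    normal approximation of the ratio p_t / p_c to hold.
    """
    p_c, p_t = x_c / n_c, x_t / n_t
    lift = p_t / p_c - 1.0
    # Variances of the two sample proportions
    var_c = p_c * (1 - p_c) / n_c
    var_t = p_t * (1 - p_t) / n_t
    # Delta-method variance of the ratio: sum of squared coefficients of variation
    cv2 = var_c / p_c**2 + var_t / p_t**2
    se = (p_t / p_c) * sqrt(cv2)
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return lift, (lift - z * se, lift + z * se)

# Example: 2.00% vs 2.30% conversion rate is a 15% lift, but with
# 10,000 users per arm the interval around it is wide
lift, (lo, hi) = percent_lift_ci(200, 10_000, 230, 10_000)
```

Note that the interval is not symmetric in general and can easily span zero even when the point estimate of the lift looks large, which is exactly why a dedicated CI for the relative difference is needed.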

## The Google Optimize Statistical Engine and Approach

Updated Sep 17, 2018: Minor spelling and language corrections, plus updates related to the role of randomization and external validity / generalizability. Google Optimize is the latest attempt from Google to deliver an A/B testing product. Previously we had “Google Website Optimizer”, then “Content Experiments” within Google Analytics, and now we have the latest iteration: […] Read More…

Also posted in A/B Testing, Conversion Optimization | Comments closed

## 20-80% Faster A/B Tests? Is it real?

I got a question today about our AGILE A/B testing calculator and the statistics behind it, and realized that I have yet to write a dedicated post explaining in more detail the efficiency gains from using the method, despite the fact that these speed gains are clearly communicated and verified through simulation results presented in our AGILE […] Read More…

Also posted in A/B Testing, AGILE A/B Testing | Comments closed

## Risk vs. Reward in A/B Tests: A/B testing as Risk Management

What is the goal of A/B testing? How long should I run a test for? Is it better to run many quick tests, or one long one? How do I know when it is a good time to stop testing? How do I choose the significance threshold for a test? Is there something special about 95%? […] Read More…

Also posted in A/B Testing, Conversion Optimization, Statistical Significance | Comments closed

## Statistical Significance for Non-Binomial Metrics – Revenue per User, AOV, etc.

In this article I cover the method required to calculate statistical significance for non-binomial metrics such as average revenue per user, average order value, average sessions per user, average session duration, average pages per session, and others. The focus is on A/B testing in the context of conversion rate optimization, landing page optimization and e-mail […] Read More…

Also posted in A/B Testing, Conversion Optimization, Statistical Significance | Comments closed
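The excerpt doesn't spell out the article's exact method, but for continuous metrics like revenue per user a standard approach is a two-sample test on the difference in means with unequal variances. Here is a minimal sketch, using a normal approximation to the Welch statistic, which is reasonable for the large per-user sample sizes typical of A/B tests; the function name `welch_z_test` is mine.

```python
from math import sqrt
from statistics import mean, variance, NormalDist

def welch_z_test(control, treatment):
    """Two-sample test for a difference in means, unequal variances allowed.

    Uses a normal approximation to the Welch t statistic. With the
    sample sizes common in online A/B tests the approximation is close;
    for small samples a proper t distribution should be used instead.
    """
    m_c, m_t = mean(control), mean(treatment)
    # Welch standard error: per-group sample variances, no pooling
    se = sqrt(variance(control) / len(control) + variance(treatment) / len(treatment))
    z = (m_t - m_c) / se
    p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_two_sided

# Example with a skewed, zero-heavy metric (revenue per user):
# most users buy nothing, a few generate all the revenue
control = [0.0] * 90 + [50.0] * 10
treatment = [0.0] * 85 + [60.0] * 15
z, p = welch_z_test(control, treatment)
```

Because revenue-type metrics are heavily skewed, the central limit theorem does eventually make the mean approximately normal, but it takes considerably more data than for a binomial conversion rate, so sample-size requirements should be checked rather than assumed.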

## One-tailed vs Two-tailed Tests of Significance in A/B Testing

The question of whether one should run A/B tests (a.k.a. online controlled experiments) using one-tailed versus two-tailed tests of significance was something I didn’t even consider important, as I thought the answer (one-tailed) was so self-evident that no discussion was necessary. However, while preparing for my course on “Statistics in A/B Testing” for the ConversionXL […] Read More…

Also posted in A/B Testing, Conversion Optimization, Statistical Significance | Comments closed
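The mechanical difference between the two kinds of test is small enough to show in a few lines. A minimal sketch (the function name `p_values` is mine): for a z statistic in the hypothesized direction, the one-tailed p-value is exactly half the two-tailed one, which is why the choice of tails directly affects how fast a result reaches a given significance threshold.

```python
from statistics import NormalDist

def p_values(z):
    """One-tailed (superiority) and two-tailed p-values for a z statistic."""
    nd = NormalDist()
    p_one = 1 - nd.cdf(z)             # H1: treatment is better than control
    p_two = 2 * (1 - nd.cdf(abs(z)))  # H1: treatment differs from control
    return p_one, p_two

# A z of about 1.645 gives a one-tailed p near 0.05,
# while the two-tailed p for the same data is near 0.10
p_one, p_two = p_values(1.6449)
```

Which alternative hypothesis is appropriate is a separate, substantive question about what claim the test is meant to support; the code only shows that the two conventions report different p-values for identical data.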