# Articles on Bayesian A/B testing

In-depth exploration of proposed Bayesian approaches to online A/B testing, with a focus on exposing their shortcomings, misrepresentations, and disparagement of alternative schools of thought and statistical methods.

## False Positive Risk in A/B Testing

Have you heard that there is a much greater probability than generally expected that a statistically significant test outcome is in fact a false positive? In industry jargon: that a variant has been identified as a “winner” when it is not. In demonstrating the above, the terms “False Positive Risk” (FPR), “False Findings Rate” (FFR), […] Read more…
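The idea behind the False Positive Risk can be illustrated with simple arithmetic. A minimal sketch, assuming a point prior on the proportion of tested variants that truly work (the function name and the example numbers are hypothetical, not from the article):

```python
def false_positive_risk(alpha, power, prior_true):
    """P(no true effect | significant result), under simple point priors.

    alpha      -- significance threshold (P(significant | no effect))
    power      -- P(significant | real effect)
    prior_true -- prior probability that a tested variant truly works
    """
    p_false = 1 - prior_true
    # Total probability of seeing a significant result, true or not.
    p_sig = alpha * p_false + power * prior_true
    return (alpha * p_false) / p_sig

# With alpha = 0.05, 80% power, and only 10% of variants truly working:
fpr = false_positive_risk(alpha=0.05, power=0.80, prior_true=0.10)
print(round(fpr, 3))  # 0.36 -- far above the 5% many practitioners expect
```

The point of the exercise: even with conventional alpha and power, a low base rate of truly working variants makes a large share of “winners” false positives.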


## Bayesian Probability and Nonsensical Bayesian Statistics in A/B Testing

Many adherents of Bayesian methods put forth claims of superiority of Bayesian statistics and inference over the established frequentist approach based mainly on the supposedly intuitive nature of the Bayesian approach. Rational thinking or even human reasoning in general is Bayesian by nature according to some of them. Others argue that proper decision-making is inherently […] Read more…

## Frequentist vs Bayesian Inference

In this article I’m revisiting the topic of frequentist vs Bayesian inference with specific focus on online A/B testing, as usual. The present discussion easily generalizes to any area where we need to measure uncertainty while using data to guide decision-making and/or business risk management. In particular, I will discuss each of the following five […] Read more…


## The Google Optimize Statistical Engine and Approach

Updated Sep 17, 2018: Minor spelling and language corrections, updates related to role of randomization and external validity / generalizability. Google Optimize is the latest attempt from Google to deliver an A/B testing product. Previously we had “Google Website Optimizer”, then we had “Content Experiments” within Google Analytics, and now we have the latest iteration: […] Read more…

## 5 Reasons to Go Bayesian in AB Testing – Debunked

As someone who spent a good deal of time on trying to figure out how to run A/B tests properly and efficiently, I was intrigued to find a slide from a presentation by VWO®’s data scientist Chris Stucchio, where he goes over the main reasons that caused him and the VWO® team to consider and […] Read more…

## Bayesian AB Testing is Not Immune to Optional Stopping Issues

Fantasy vs the Real World: Naive Bayesian AB Testing vs Proper Statistical Inference. This post is addressed to a certain camp of proponents and practitioners of A/B testing based on Bayesian statistical methods, who claim that outcome-based optional stopping, often called data peeking or data-driven stopping, has no effect on the statistics and thus inferences […] Read more…
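The effect of outcome-based optional stopping is easy to demonstrate by simulation. A minimal sketch (not the article's own code): run A/A tests with no true effect, peek after each batch of data, and stop as soon as a naive fixed-sample z-test crosses the significance threshold. The batch sizes and number of looks are illustrative assumptions.

```python
import random

Z_CRIT = 1.96  # two-sided 5% threshold for a single fixed-sample look

def peeking_trial(n_per_look=100, looks=10, rng=random):
    """One A/A test on N(0, 1) data, checking significance after each batch."""
    total, count = 0.0, 0
    for _ in range(looks):
        for _ in range(n_per_look):
            total += rng.gauss(0, 1)
            count += 1
        z = total / count ** 0.5  # z-statistic for a zero-mean null
        if abs(z) > Z_CRIT:
            return True  # declared "significant" -- a false positive
    return False

rng = random.Random(42)  # fixed seed for reproducibility
trials = 2000
false_positives = sum(peeking_trial(rng=rng) for _ in range(trials))
print(false_positives / trials)  # noticeably above the nominal 0.05
```

With ten unadjusted looks, the realized false positive rate lands far above the nominal 5%, which is precisely why sequential testing procedures adjust their thresholds for repeated looks, regardless of the inferential framework.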
