**Google Optimize** is the latest attempt from Google to deliver an A/B testing product. Previously we had “Google Website Optimizer”, then we had “Content Experiments” within Google Analytics, and now we have the latest iteration: Google Optimize. While working on the integration of our A/B Testing Calculator with Google Optimize I was curious to see what this newest product had to offer in terms of **statistics**. I knew that it would likely be using Bayesian inference, similar to its predecessor, but I was surprised by **how little was shared about the statistical method**: how they calculate statistical significance or the corresponding Bayes Factors / Posterior Probabilities, called “probability to beat baseline” and “probability to be best” in the user interface. It is essentially a black box that you have to not only trust, for some reason, but whose inner workings you have to guess at if you are to use it in practice.

What was even more surprising was the amount of **bashing at frequentist methods**, including misinterpretation / misinformation about p-values, null-hypothesis statistical tests, and frequentist inference as a whole. If you’ve read this blog for a while, you know I don’t shy away from responding to and debunking such stuff, e.g. 5 Reasons to Go Bayesian in AB Testing – Debunked and Bayesian AB Testing is Not Immune to Optional Stopping Issues. In this post I will poke at the statistical approach used in Google Optimize using the scant information provided, and also address some of the criticisms to frequentist approaches to A/B testing, which comprise a significant part of their documentation.

In this article I will use citations from the Google Optimize “Statistics and methodology FAQ” pages, as they existed at the time of writing (Apr 9, 2018).

## What statistics are used in Google Optimize?

“…we use Bayesian inference to generate our statistics” is about as much as you can learn from the FAQ. No particular methods, mathematical formulas, assumptions about distributions or “non-parametric” approaches, simulations, proof of concept, or any other kind of deeper explanation or justification is provided for what is presented.

In practice, no one outside a select few Google Optimize engineers can tell you what is under the hood. While this can be problematic for certain more advanced frequentist approaches, it is even worse for a Bayesian one, since it becomes practically impossible to untangle the statistics from the assumptions made by the engineers, and hence – to really understand what the data has to tell us.

I have elaborated on that in addressing claim #2 in 5 Reasons to Go Bayesian in AB Testing – Debunked, but I will restate why, briefly. It is because while frequentist inference works with minimal assumptions and tells us the worst-case scenario probability of observing a result as extreme as, or more extreme than, the one we observed during an A/B test:

P(X ≥ X_{0} | Var A = Var B), or in some cases P(X ≥ X_{0} | Var A = μ = Var B)

where X_{0} is some observed value of a test statistic X and μ is a specific average value, Bayesian inference, on the other hand, tells us:

*“Given that I have some prior knowledge or belief about my variant and control, and that I have observed data X_{0}, following a given procedure, how should data X_{0} change my knowledge or belief about my variant and control?”.*

Expressed using the original notation it looks like so:

P(x > Δe | null hypothesis) · P(null hypothesis) / P(x > Δe)

or in a more verbal expression:

P(Observed Difference between A & B > minimum difference of interest | Variation A = Variation B) · P(Variation A = Variation B) / P(Observed Difference between A & B > minimum difference of interest)
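To make the formula above concrete, here is a minimal sketch in Python with entirely hypothetical numbers – the prior and the two likelihoods are my own illustrative assumptions, not values used by Optimize or any other tool:

```python
# Illustrative only: hypothetical numbers plugged into the posterior
# formula above, showing how the prior is mixed with the data.

p_h0 = 0.5              # prior P(Variation A = Variation B) -- an assumption
p_data_given_h0 = 0.04  # P(observed difference > minimum of interest | H0)
p_data_given_h1 = 0.60  # P(observed difference > minimum of interest | H1)

# Total probability of the observed data under both hypotheses
p_data = p_data_given_h0 * p_h0 + p_data_given_h1 * (1 - p_h0)

# Bayes' theorem: posterior probability of the null
posterior_h0 = p_data_given_h0 * p_h0 / p_data
print(round(posterior_h0, 4))  # 0.0625
```

Change the prior `p_h0` and the posterior changes with it – which is precisely why not knowing the prior makes the output of a black-box engine hard to interpret.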

Thus, in a Bayesian approach utility or prior beliefs are mixed with the data, which causes all kinds of issues, even when the belief is about lack of knowledge (“I know nothing”) – expressing “lack of knowledge” as a probability is not as trivial as it sounds. Put simply, with a black box such as the current Optimize system, you **do not know what prior knowledge, beliefs, and assumptions are baked into your statistics**. This issue is not limited to Google Optimize, but is present in any system in which a Bayesian statistical engine is offered without enough information to understand the assumptions that went into it.

The Optimize FAQ states that “Bayesian inference is an advanced method of statistical analysis…”, but the fact that it is complicated mathematically **does not make it advanced**. Bayesian inference in fact predates modern statistical inference by over 150 years, if we count from the earliest attempts, so it is not really advanced. It was used to varying extents in the 19th and early 20th century, and its inadequacies in giving error estimates for certain inference situations, namely experiments and trials, were what prompted Sir Ronald Fisher to develop Randomized Controlled Trials and frequentist inference with p-values and statistical significance thresholds, later improved upon by Neyman, Pearson, and others. The latter remains the preferred method for inferring truths about the state of the world.

The documentation states “With respect to winners and conversion rates, we do our best to use *uninformative priors* — priors with as little influence on the experiment results as possible.”. There is no such thing as an uninformative prior, but there are minimally informative priors, and I suppose this is what is referred to here. But if Google Optimize were to use minimally informative priors, then the results should match those of an equivalent frequentist method (same hypothesis, error control, etc.), rendering **all claims of superiority null and void**. The only gains in efficiency can come through adding more assumptions in the form of informative priors.
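To illustrate how close a minimally informative prior comes to the frequentist answer, here is a sketch using a flat Beta(1, 1) prior on two conversion rates. The data and variable names are hypothetical; Optimize's actual models are, as discussed, unknown:

```python
import random

random.seed(1)

# Hypothetical data: conversions / visitors for control (A) and variant (B)
conv_a, n_a = 200, 10_000
conv_b, n_b = 230, 10_000

# With a flat Beta(1, 1) prior the posterior of each rate is
# Beta(1 + conversions, 1 + non-conversions). Monte Carlo estimate of
# the "probability to beat baseline", P(rate_B > rate_A):
draws = 100_000
wins = sum(
    random.betavariate(1 + conv_b, 1 + n_b - conv_b)
    > random.betavariate(1 + conv_a, 1 + n_a - conv_a)
    for _ in range(draws)
)
print(wins / draws)
```

For this data the estimate lands close to one minus the one-sided frequentist p-value for the same comparison, which is exactly the point: with minimally informative priors the two approaches largely agree.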

The use of uninformative priors would also render any claim of easier interpretation false – if the math is the same and the result is the same, how is adding more complexity to the interpretation any better? Have you tried explaining the so-called “non-informative” priors – over which, by the way, there is no consensus among Bayesians – to a non-statistician?

## What Bayesian models are used in a Google Optimize A/B test?

In the FAQ, we read “With Bayesian inference, however, we’re able to use different models that adapt to each and every experiment”. Sounds great, in theory, with the “minor” caveat that you don’t really know which model is used for which experiment. Furthermore: how easy are those models to understand? Who has time to study all these models and learn to interpret their output? If you think p-values and NHST are hard, I don’t think you’d appreciate the complexity of these things, even if you had the necessary information before you.

The Optimize help documentation then names three models “we’ve used”: hierarchical, contextual, and restless models, but at least to me it is not clear whether these are still in use in the engine, whether they are used alone or in combination, and what their assumptions are, aside from a one-sentence description of each. From that sentence and the extended example for the hierarchical model I can conclude that its results are likely to be heavily biased by early effects, no matter what the true reason for those effects is – “newness” or something else entirely – but that’s about all one can do with so little information.

“We do use some priors that are more informative, such as with our hierarchical models. In those models, we use priors that help experiments with really consistent performance to find results more quickly.” – from this sentence it becomes clear where at least some of the broad efficiency claims made by Optimize are coming from. There is no magic way to torture the data into telling more than is in it while retaining the same level of accuracy and objectivity.

While the assumption that the cause of early beneficial effects is some kind of a “newness” effect can result in faster testing and better prediction when it is correct, one has to ask what happens **when that assumption is incorrect**. After all, straight up assuming your variant is better than the control is the fastest and cheapest way to test one can devise: one can just **skip the A/B test altogether given a strong enough assumption**.

## Debunking the Google Optimize FAQ

There are many, many claims for the superiority of Bayesian methods and the inferiority of frequentist methods in the Google Optimize documentation. I will briefly address **12** of them.

**Claim #1**: “We can state the probability of any one variant to be the best overall, without the multiple testing problems associated with hypothesis-testing approaches.”

**Response**: So can any properly-powered frequentist method (e.g. you can choose to use “Conjunctive power” when designing an AGILE A/B test), but the cost associated with such a decision, in terms of how long a test will need to run versus the expected benefits, is enormously in favor of just looking for what is superior to the baseline. A much less costly follow-up test can then be done between the top variants, if there is significant justification for that.

**Claim #2**: “Bayesian methods allow us to compute probabilities directly, to better answer the questions that marketers actually have (as opposed to providing p-values, which few people truly understand).”

This claim is repeated as an answer to “Aren’t frequentist methods good enough?” in this way: “But most frequentist methods provide p-values and confidence intervals that ignore any other effects, and don’t directly answer questions that testers are really asking.”.

**Response**: That’s 2 for the price of 1! First, Bayesian methods do not “better answer the questions that marketers actually have” – in fact they answer questions no marketer can even dream of, unless he is also a trained statistician, in which case why would he want to use a black box? This point is very well explained while responding to claim #1 in 5 Reasons to Go Bayesian in AB Testing – Debunked.

Second, if few people truly understand p-values and NHST, which are such **simple concepts with minimal assumptions**, then how do you expect people to understand the more complex Bayesian inference, in which probabilities much like the p-value are only a building block? Sadly, this is a trite remark that continues to make zero sense to me. If you want to learn about p-values, read my guide to understanding p-values and statistical significance in A/B testing. Frequentist methods do not ignore “other effects”, as explained in the response to claim #4 below.
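For reference, the p-value itself is a simple computation. Below is a minimal sketch of a pooled two-proportion z-test for a one-sided superiority hypothesis; the function name and the data are mine, for illustration only:

```python
from math import erf, sqrt

def one_sided_p_value(conv_a, n_a, conv_b, n_b):
    """One-sided p-value for H0: rate of B <= rate of A (pooled z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Probability of a z-score at least this large if the null is true
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical data: 2.0% vs 2.3% conversion rate at 10,000 visitors per arm
print(round(one_sided_p_value(200, 10_000, 230, 10_000), 4))
```

That is the entire machinery behind the number “few people truly understand” – a handful of arithmetic operations and one well-known distribution.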

This point is further argued in the documentation with statements like “P-values are difficult to explain intuitively.” and “Even in combination with additional data, p-values can easily be misinterpreted.”, which only puts more salt in the Bayesian interpretation wound. If p-values are so hard, how can you expect someone to understand the output of the complex Bayesian machinery? A nice illustration of the difficulty can be found on Prof. Mayo’s post on interpreting Bayesian statistics here.

**Claim #3**: “With traditional testing methods, numerous assumptions are made that treat experiment results with a “one-size-fits-all” approach.”

**Response**: This sentence is a contradiction in itself. More assumptions equal a more customized approach, not less. While one can argue that traditional testing methods, by which I assume something like N-P NHST is meant, are a “one-size-fits-all” approach, that is only so if you look at the simplest methods, which have **very few assumptions**. They offer a **trade-off**: they are simple to use, understand, and communicate, and are objective insofar as they make few assumptions that need to be justified. However, nothing says an A/B testing practitioner should limit themselves to those methods and can’t insert more assumptions into the statistical evaluation of any particular A/B test. There is a **cost**, though: the statistics become harder to calculate, interpret, and communicate to stakeholders.

**Claim #4**: “In the real world, users don’t always see a variant just once and then convert. Some users see a variant multiple times, others just once. Some users visit on sale days, some visit other days. Some have interacted with your digital properties for years, others are new. Our models capture factors like these that influence your test results, whereas traditional approaches ignore them.”

**Response**: The existence of unlimited sources of variation is well understood and I’m not certain what is implied here. Is it that **randomization** doesn’t work to distribute the variation caused by all these factors about evenly among the control and variants, on average, and especially at sample sizes such as those often encountered in A/B testing? Or is it that they do some kind of automatic blocking (as in “blocked RCT”) on behalf of their users? As far as I know, a lot of the contribution of R. Fisher and frequentists as a whole is to **not ignore** those things, starting from his “Design of Experiments” book and onward. There are blocked RCTs which are designed explicitly to improve efficiency by using known factors, but as far as I understand they are mostly beneficial in experiments with small sample sizes on things that have well-understood factors – neither of which applies to your typical A/B test.
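A quick simulation (with hypothetical traffic numbers of my own choosing) shows what randomization does with a factor like “sale days”: assign visits to arms at random and the covariate ends up split about evenly between them, without any explicit modeling:

```python
import random

random.seed(42)

# Hypothetical traffic: 30% of visits happen on weekends (a known covariate)
n = 20_000
weekend = [random.random() < 0.3 for _ in range(n)]

# Randomize each visit to control (A) or variant (B) with a fair coin
arm = [random.choice("AB") for _ in range(n)]

share_a = sum(w for w, g in zip(weekend, arm) if g == "A") / arm.count("A")
share_b = sum(w for w, g in zip(weekend, arm) if g == "B") / arm.count("B")

# Both arms end up with roughly the same share of weekend traffic
print(round(share_a, 3), round(share_b, 3))
```

At typical A/B testing sample sizes the residual imbalance is small relative to the effect sizes of interest, which is exactly why randomization is the workhorse of frequentist experimental design.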

The above is followed by a couple of unsupported claims that are mainly of the form: we make a lot of assumptions, so your tests can complete faster. As said above: the fastest test is to just assume whatever you prefer and skip testing entirely…

**Claim #5**: “Experimenters want to know that results are *right*. They want to know how likely a variant’s results are to be best overall. And they want to know the magnitude of the results. P-values and hypothesis tests don’t actually tell you those things!”

**Response**: Only people who have never heard of uncertainty of measurement and uncertainty of prediction want to know that results are “*right*”. Experienced people want to **measure / estimate the uncertainty**. People new to A/B testing might want to know which variant is best, but experienced conversion rate optimization practitioners understand cost vs. benefit, risk vs. reward, and ROI calculations, so they know that in most cases they would be happy to find, in a timely manner, a variant which is better than the control, and only then, maybe, worry whether Variant C, which was a little bit worse than the winner during the A/B test, might perform better in reality.

As for “P-values and hypothesis tests don’t actually tell you those things!” – neither does Bayesian statistical inference! (I can do exclamation marks, too!). Check out the responses to claim #1 and claim #2 in 5 Reasons to Go Bayesian in AB Testing – Debunked post for more.

**Claim #6**: “Experimenters like to look at tests frequently, resulting in the “multiple look” problem. Acting on early data in a frequentist framework can result in incorrect decisions.”

**Response**: Surely they do, since in many cases it is ROI-positive to evaluate the data as it gathers. This is **not a problem** if you know your tools and don’t commit errors about which you are warned literally everywhere. For example, with an AGILE A/B test you can control for the error introduced by multiple looks at the data.

However, I am yet to see a proper, widely accepted solution to this problem in a Bayesian testing framework, which, of course, assumes that Bayesians stop arguing their methods are not affected by optional stopping (see Bayesian Inference is Not Immune to Optional Stopping Issues for more). If you have such an example, please post it in the comment section. The part of the Google Optimize FAQ that says “Because we use models that are designed to take into account changes in your results over time, it’s always OK to look at the results. Our probabilities are continually refined as we gather more data.” does not count. I can only hope they are not saying Optimize is immune to the optional stopping issue because it doesn’t alter their Bayes Factors…
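The “multiple look” problem itself is easy to demonstrate. In the sketch below (my own simulation, not Optimize’s procedure) an A/A test – two arms with identical true conversion rates – is peeked at after every batch of visitors and stopped as soon as the nominal p-value drops below 0.05. The resulting false positive rate comes out well above the nominal 5%:

```python
import random
from math import erf, sqrt

random.seed(7)

def one_sided_p(s_a, s_b, n):
    """Pooled two-proportion z-test, one-sided, equal n per arm."""
    pooled = (s_a + s_b) / (2 * n)
    if pooled in (0, 1):
        return 1.0
    se = sqrt(pooled * (1 - pooled) * 2 / n)
    z = (s_b / n - s_a / n) / se
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

# A/A test: both arms share the same true 5% conversion rate
runs, peeks, batch = 500, 10, 200
false_positives = 0
for _ in range(runs):
    s_a = s_b = n = 0
    for _ in range(peeks):
        s_a += sum(random.random() < 0.05 for _ in range(batch))
        s_b += sum(random.random() < 0.05 for _ in range(batch))
        n += batch
        if one_sided_p(s_a, s_b, n) < 0.05:  # naive, unadjusted peeking
            false_positives += 1
            break

print(false_positives / runs)  # far above the nominal 0.05
```

Sequential designs such as AGILE exist precisely to adjust the stopping boundaries so that peeking like this keeps the overall error rate at its nominal level.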

**Claim #7**: “Off-the-shelf approaches to testing assume that results aren’t affected by time, even though most experiments change as users react to new content or change behavior throughout the course of an experiment. Consequently, many experimenters find that test results don’t hold over time, even after finding a confident winner. In addition, cyclical behavior, such as differences between a weekday and weekend, often affect results, and ignoring those cycles can lead to incorrect conclusions.”

**Response**: Google themselves describe a possible solution to detecting novelty, learning, and reinforcement effects in their very nice 2015 paper “Focusing on the Long-term – it’s Good for Users and Business” by Hohnhold, O’Brien & Tang. However, the approach described in it does not appear to be used in Google Optimize, and furthermore it is only applicable to experiments that do not alter UX flows and the front end as a whole – it is suitable for ranking algorithm testing and the like. Has Google invented something revolutionary without me hearing about it?

I’d argue that the reasons most findings do not hold over time, “even after finding a confident winner”, are **poor statistical practices, issues with test analysis, issues with testing tools and data integrity**, etc., and only as a **last reason** would I put changes in user behavior over time. Illusory results that do not hold should not be a surprise to anyone.

Cyclical behavior is **super easy to avoid** by using proper 7-day cycles when planning A/B tests and analyzing data, which is recommended by me and many others in the field.

**Claim #8**: “A frequentist approach would require you to stand still and listen to it [the mobile phone] ring, hoping that you can tell with enough certainty from where you’re standing (without moving!) which room it’s in. And, by the way, you wouldn’t be allowed to use that knowledge about where you usually leave your phone.”

**Response**: This is a part of an attempt to explain the difference between Bayesian and frequentist inference in the Google Optimize FAQ. It is such a ridiculous strawman and it has been responded to so many times over the decades, including recently by Prof. Mayo in her wonderful 2010 work with Prof. Spanos “Error Statistics”, that I won’t spend any more time on this.

**Claim #9**: “Because Bayesian methods directly compute the relative performance of all variants together, and not just pairwise comparisons of variants, experimenters don’t have to perform multiple comparisons of variants to understand how each is likely to perform. Additionally, Bayesian methods don’t require advanced statistical corrections when looking at different slices of the data.”

**Response**: I can only hope this is not a claim in the spirit of “Bayesian inference is immune to optional stopping” transferred to multivariate testing and segmentation. Prof. Gelman, if I am not mistaken, wrote on the topic of why these are an issue for Bayesian inference. I am not entirely sure what the paragraph means, though – it could be just my limited knowledge of Bayesian methods, or it could be the lack of detail in the FAQ. Let me know what you think in the comments.

**Claim #10**: “So, we’re often faster when your data is consistent, particularly in low-volume environments, and more accurate when it’s not.”

**Response**: As I said already: pile up the assumptions and you can get rid of testing altogether. Why would you assume that more consistent data during the test, or early in the test, is informative of its future behavior? And why assume the opposite if it is not? Has Google Optimize found a way to extract more information than is maximally contained in a set of data? As far as I’m aware there is a good reason why N-P tests are called Uniformly Most Powerful.

**Claim #11**: “Multivariate testing: Optimize’s approach can learn about both the performance of combinations against each other, and about the performance of a variant across various combinations. As a result, we can run all combinations, but find results much more quickly than an equivalent A/B test.”

**Response**: How can one compare an “A/B test” with an MVT test? Maybe several A/B tests? And why, when an MVT is handled just fine in a frequentist A/B testing tool? How is the Google Optimize approach better than a Dunnett’s adjustment and the corresponding sample size calculations for a frequentist MVT? No simulation, no expected efficiency gains, no formulas – nothing is provided to back that claim up. It is not even clear what it is being compared to. Furthermore, if MVT is to be understood as a factorial design experiment, it is never a better option compared to a classical MVT (A/B/n), since in A/B testing we rarely care solely about the effect of a single factor. If we did, then we could indeed run much faster tests by making them factorial, but then – why not just run multiple concurrent A/B or MVT tests, which is much more flexible and efficient?
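For comparison, the frequentist adjustment the FAQ never engages with is not exotic. Here is a Šidák correction – used as a simpler, slightly conservative stand-in for Dunnett’s adjustment, which additionally exploits the correlation between comparisons sharing a single control arm (the function name and the example MVT are mine):

```python
# Šidák correction: per-comparison threshold so that the overall (familywise)
# error rate across k comparisons to the control stays at alpha.
def sidak_alpha(alpha, k):
    """Per-comparison significance threshold for k comparisons to a control."""
    return 1 - (1 - alpha) ** (1 / k)

# A hypothetical MVT with two factors of three levels each has 9 combinations,
# 8 of which are compared against the control combination
print(round(sidak_alpha(0.05, 8), 4))  # each comparison tested at ~0.0064
```

Plugging the adjusted threshold into a standard sample size calculation gives the cost of the extra comparisons explicitly – something the Optimize documentation never quantifies for its own approach.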

**Claim #12**: “The definition of p-value is: the probability of getting results at least as extreme as the ones you observed, given that the null hypothesis is correct, where the null hypothesis in A/B testing is that the variant and the control are the same.”

**Response**: The first part is obviously correct, but the second part is something many Bayesians claim, and I am starting to think it is because they have never, ever used a frequentist test in their life. There is literally no practical problem in which the null hypothesis is that the variant and the control are the same, as I argue in One-tailed vs Two-tailed Tests of Significance in A/B Testing. In a superiority A/B test the null is that the variant is worse than or equal in performance to the control, while the alternative is that the variant is better (a composite, not a point hypothesis). In a non-inferiority case, a strong-superiority case, or a reverse-risk case, these hypotheses change accordingly. While there might be vendors who offer two-tailed tests, I argue they are wrong in doing so and that no frequentist statistician worth his salt uses two-tailed tests as anything other than a basic educational device.
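To illustrate how the null changes with the question asked, here is a sketch (hypothetical data, unpooled z-test, names mine) in which the same data is evaluated against a superiority null and against a non-inferiority null with a 1 percentage point margin:

```python
from math import erf, sqrt

def p_value_vs_margin(conv_a, n_a, conv_b, n_b, margin=0.0):
    """One-sided p-value for H0: rate_B - rate_A <= margin.

    margin = 0 gives a superiority test; a negative margin (e.g. -0.01)
    gives a non-inferiority test against that margin.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)  # unpooled SE
    z = (p_b - p_a - margin) / se
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

# Same data, different nulls: superiority vs non-inferiority (1pp margin)
print(round(p_value_vs_margin(200, 10_000, 205, 10_000), 4))
print(round(p_value_vs_margin(200, 10_000, 205, 10_000, margin=-0.01), 4))
```

The same observed difference is inconclusive for superiority yet decisively establishes non-inferiority – a distinction a fixed “variant = control” point null simply cannot express.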

## Some other gems from the documentation

“Unlike frequentist approaches, Bayesian inference doesn’t need a minimum sample.”, yet, strangely, there is a 2-week minimum duration for tests in Google Optimize. Hm… And yes, I know they justify it by cyclic behavior, but if the fix is to set a 2-week minimum in Google Optimize, then why such a fix would be out of reach for frequentists, as claimed above, is a mystery to me.

“Tools that allow for predicting experiment length assume that there’s no variability or dependence on time. This is seldom true in real experiments.” – not true for any tool I know of. It is dealt with by randomization, which is highly likely to spread that variability about equally across the variants and the control. A simple visual examination of the trend of the data is enough to detect obvious time trends, while external information will be needed to decide if these are to be taken into account in the final judgement of the test results.

“with low traffic, you can still find actionable results” – same with frequentist methods, but your A/B Testing ROI might not be great, or it can be negative, prompting you to just implement without testing.

“Why does Optimize show a range of numbers for “improvement”? Most tools don’t do that.” – I literally can’t think of a major tool that doesn’t show confidence intervals or some kind of Bayesian credible interval. Can you?

## Concluding remarks

Despite what you might come off thinking after reading the above, I **don’t mind Bayesian approaches to testing in general**. I’m just yet to see a good one that a person other than the engineers who built it can use properly. I would be grateful if someone can point me to a proper Bayesian solution for modeling sequential monitoring of the data (Wald’s SPRT is frequentist), since it is one of the major hurdles in front of all solutions I have reviewed so far. I’d be interested if someone can point me to a Bayesian approach which is more efficient and equally or more precise than the fully frequentist AGILE method (numbers vs. fixed-size tests are available in the white paper), or equivalent approach. I would be happy to see documentation for a good Bayesian A/B testing tool in which the statistics are so clearly and easily explained that I can understand them well enough to be able to use them.

What I **do mind** is when methods foundational to the growth of many 20th-century sciences are not just questioned and smeared, but also straw-manned; when only the simplest of simple approaches are considered; when trite critiques to which responses have been given hundreds of times over the decades are presented as fact; and so on.

I do mind when claims of superiority, improved efficiency and more accurate results, of which the Google Optimize documentation abounds as evident above, are presented without any kind of explanation, justification, exact or even approximate comparisons to other established methods (no numbers are given to quantify the broad claims), and not a single simulation result is given. I understand that Google Optimize is not open source, so I do not expect to see all its code anytime soon, but there are ways to communicate and prove your competitive advantage other than posting your code.

I do not think it is justified to present a technology without any hint of its shortcomings and the **trade-offs** involved in its usage. You would not want to lead with those, obviously, but here we are discussing the technical FAQ, not a front page or an ad banner.

I strongly believe doing the above is doing a disservice to the larger UX, conversion optimization and A/B testing community. And I see no good reason for why it is done.

The Google Optimize Statistical Engine and Approach by Georgi Georgiev