
Accelerating Innovation with A/B Testing

Accelerate innovation using trustworthy A/B testing

Taught by a former Microsoft, Airbnb, and Amazon executive and co-author of the best-selling book Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing

All sessions are live and address your needs and questions - no pre-recorded classes

Join a cohort of executives who want to make their organizations data-driven

Dr. Ronny Kohavi

Former executive at Microsoft, Amazon, Airbnb & best-selling author

5 x 2 hr sessions

Dates: 8AM or 4PM PST (your choice of time) on December 6, 7, 9, 13, 15

Price: $1,100 per seat (expense through L&D)


Learn from a proven industry expert

Technical Fellow & VP at Airbnb

Technical Fellow & Corporate VP at Microsoft

Director of Data Mining & Personalization at Amazon

Designed for executives looking to make their organizations data-driven

Product Executives

Product executives who want to make the organization more data-driven, where the product has at least tens of thousands of users/month.

Growth Executives

Executives and senior managers focused on growth (e.g., increasing monthly active users and their usage, reducing attrition, improving monetization)

The motivation and basics of A/B testing (e.g., causality, surprising examples, metrics, interpreting results, trust and pitfalls, Twyman’s law, A/A tests)
Cultural challenges, humbling results (e.g., failing often, pivoting, iterating), experimentation platform, institutional memory and meta-analysis, ethics
Hierarchy of evidence, Expected Value of Information (EVI), complementary techniques, risks in observational causal studies

Engineering Leaders

CTOs and VPs of Engineering who use Machine Learning or plan to increase its use

Safe deployments
Triggering, especially in evaluating machine learning models
The benefits of agile product development

JOIN US

Be part of the future of online learning

Meet your instructor, Dr. Ronny Kohavi

Few people have accumulated as much experience as Ronny Kohavi when it comes to experimentation. His work at tech giants such as Amazon, Microsoft and Airbnb — just to name a few — has laid the foundation of modern online experimentation. He is a pioneer and leader of the data mining and machine learning community and wants to share his learnings with you.

Over the last 20 years, Ronny has served in numerous roles at Airbnb, Microsoft, Amazon, Blue Martini Software, and Silicon Graphics.

Ronny’s papers have over 53,000 citations. He co-authored the book Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing (with Diane Tang and Ya Xu), which is among the top-10 data mining books on Amazon. He is the most-viewed writer on A/B testing on Quora, and he received the Individual Lifetime Achievement Award for Experimentation Culture in September 2020.

Ronny also holds a PhD in Machine Learning from Stanford University, where he led the MLC++ project, the Machine Learning library in C++ used at Silicon Graphics and at Blue Martini Software.

About Ronny’s Live Cohort

A/B tests, also called online controlled experiments, are used heavily at companies like Airbnb, Amazon, Booking.com, eBay, Facebook, Google, LinkedIn, Lyft, Microsoft, Netflix, Twitter, Uber, Yahoo!/Oath, and Yandex. These companies run thousands to tens of thousands of experiments every year, sometimes involving millions of users and testing everything, including changes to the user interface (UI), relevance algorithms (search, ads, personalization, recommendations, and so on), latency/performance, content management systems, customer support systems, and more. Experiments are run on multiple digital channels: websites, desktop applications, mobile applications, and e-mail.

The theory of controlled experiments dates back to Sir Ronald A. Fisher in the 1920s, but running experiments at scale is not simply a theoretical challenge, but mostly a cultural challenge.  To paraphrase Helmuth von Moltke, ideas rarely survive contact with customers.  Objective evaluations of ideas in A/B tests show that most ideas are weaker than expected.  Organizations must change incentives and be great at failing fast and pivoting in the face of data.  We will look at real examples of cultural changes.

Setting a strategy with integrity implies tying it to an Overall Evaluation Criterion (OEC).  We will discuss good characteristics of metrics, and in particular focus on the OEC.
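
To make that concrete, here is a minimal sketch of how an OEC might be computed as a weighted combination of metric movements. The metric names, weights, and normalization are illustrative assumptions, not a recipe from the course:

```python
# Illustrative sketch: combining several metric movements into one OEC score.
# Metric names and weights are hypothetical; choosing them well is exactly
# what the course discusses.

def oec_score(metric_deltas, weights):
    """Weighted sum of relative metric changes (treatment vs. control)."""
    return sum(weights[name] * metric_deltas.get(name, 0.0) for name in weights)

# Relative changes observed in a hypothetical experiment.
deltas = {
    "sessions_per_user": +0.012,  # +1.2%
    "revenue_per_user":  +0.004,  # +0.4%
    "page_load_time":    +0.020,  # +2.0% (slower)
}

# Weights encode the strategy; the negative weight penalizes slowdowns.
weights = {
    "sessions_per_user": 0.5,
    "revenue_per_user":  0.5,
    "page_load_time":   -0.3,
}

print(f"OEC score: {oec_score(deltas, weights):+.4f}")  # +0.0020
```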

A/B tests are commonly associated with evaluating new ideas over a week or two, but when implemented with near-real-time metrics, they provide a safety net for safe deployments, detecting bad deployments and aborting them. We will discuss feature flags and why controlled experiments are so good at helping avoid severe outages.
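
As a sketch of what that safety net can look like in code, the snippet below gates a change behind a feature flag, ramps it to a small percentage of users with deterministic bucketing, and turns it off when a near-real-time guardrail metric regresses. The flag name, metric feed, and thresholds are hypothetical, not a specific platform’s API:

```python
# Hypothetical sketch: a feature-flag ramp with an automated abort when a
# near-real-time guardrail metric regresses. Names and thresholds are made up.
import hashlib

def in_treatment(user_id: str, ramp_pct: float) -> bool:
    """Deterministic bucketing: the same user always sees the same variant."""
    digest = hashlib.sha256(f"my_feature:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 10_000 < ramp_pct * 10_000

def guardrail_breached(error_rate_treatment: float, error_rate_control: float,
                       max_relative_increase: float = 0.05) -> bool:
    """Abort if the treatment error rate is more than 5% worse than control."""
    return error_rate_treatment > error_rate_control * (1 + max_relative_increase)

ramp_pct = 0.01  # start the ramp at 1% of traffic
# In a deployment loop, read near-real-time metrics and decide whether to abort:
if guardrail_breached(error_rate_treatment=0.023, error_rate_control=0.020):
    ramp_pct = 0.0  # kill switch: flip the flag off instead of rolling back code
```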

6-Dec, 4PM (PST)

Session 1 - Introduction

This session will be an introduction to both your instructor and what controlled experiments and A/B testing actually are. What are some great examples of surprising results? What are the key tenets that make A/B tests a good fit?

We will also look at the hierarchy of evidence and at using A/B tests for safe deployments (i.e., test everything and ramp up), and go through an end-to-end example from Ronny’s experience.

7-Dec, 4PM (PST)

Session 2 - Metrics, the OEC, Interpreting Results and Trust

This session will go over the metrics taxonomy, how to evaluate metrics, and the Overall Evaluation Criterion (OEC), the quantitative measure of an experiment’s objective.

We will also look at how to interpret the results of experiments: What is a p-value? What is peeking? What are the threats? Finally, we will go into Twyman’s law and experimentation trustworthiness.
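
For a concrete sense of what a p-value is in this context, here is a minimal sketch of a two-sample proportion z-test for a conversion-rate experiment; the counts are made up:

```python
# Illustrative sketch: a two-sided p-value for a conversion-rate A/B test,
# computed with a two-sample proportion z-test. The counts are made up.
from math import erf, sqrt

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Control: 500 conversions out of 10,000 users; Treatment: 560 out of 10,000.
print(f"p-value: {two_proportion_p_value(500, 10_000, 560, 10_000):.3f}")  # ~0.058
```

Recomputing a p-value like this every day and stopping as soon as it dips below 0.05 is the "peeking" problem this session covers.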

9-Dec, 4PM (PST)

Session 3 - Ideation Funnel, Machine Learning, Culture and Ethics

This session covers how the ideation funnel works with A/B testing at scale and how to manage your backlog. What are institutional memory and the Expected Value of Information (EVI)?

Furthermore, we dive into Machine Learning and AI and how A/B tests can hugely benefit evolving models. How do you evaluate such models correctly with tight triggering?
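
As a small illustration of triggering, the sketch below restricts the analysis to users who actually exercised the new model (with a counterfactual trigger logged for control) before computing the metric. The record fields are illustrative assumptions:

```python
# Hypothetical sketch of a triggered analysis: only users whose sessions actually
# exercised the new model are compared, with a counterfactual trigger logged for
# control so both sides are comparable. Field names are made up.

records = [
    # (user_id, variant, triggered, converted)
    ("u1", "treatment", True,  True),
    ("u2", "treatment", False, False),  # never reached the changed code path
    ("u3", "control",   True,  False),  # would have been scored by the new model
    ("u4", "control",   False, True),
]

def conversion_rate(variant):
    triggered = [r for r in records if r[1] == variant and r[2]]
    return sum(r[3] for r in triggered) / len(triggered) if triggered else float("nan")

# Diluting the comparison with untriggered users would shrink the apparent effect.
print(conversion_rate("treatment"), conversion_rate("control"))
```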

Finally, we will look at the evolution of data-driven cultures within businesses and what the ethics of experimentation are.

13-Dec, 4PM (PST)

Session 4 - Leakage & Interference, Complementary Techniques and Observational Causal Studies

In this session we will examine what happens when assumptions are violated (i.e., leakage) and the possible solutions. We will also explore when A/B testing cannot be used and what the complementary techniques are.

Additionally, we will look at different study designs, such as surveys, interrupted time series, and interleaved experiments (and more), covering their pitfalls and examples of studies gone wrong.
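
As a toy illustration of why such studies need care, the arithmetic below (with made-up numbers) shows a naive before/after comparison attributing a background trend to a launch, while adjusting against a comparison group removes most of the apparent lift:

```python
# Toy illustration with made-up numbers: a naive before/after comparison around
# a launch picks up a background trend; adjusting with a comparison group
# (a difference-in-differences-style estimate) removes most of the apparent lift.

launched_before, launched_after = 100.0, 110.0      # metric where the change shipped
comparison_before, comparison_after = 200.0, 218.0  # metric where nothing changed

naive_lift = launched_after / launched_before - 1               # +10.0%
background_trend = comparison_after / comparison_before - 1     # +9.0%
adjusted_lift = (1 + naive_lift) / (1 + background_trend) - 1   # ~+0.9%

print(f"naive: {naive_lift:+.1%}, adjusted: {adjusted_lift:+.1%}")
```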

15-Dec, 4PM (PST)

Session 5 - Maturity Model, Experimentation Platform and Build vs Buy

In our final session we will look at the journey businesses go through to become data-driven - from crawling to flying!

We will also explore how businesses can lower the marginal cost of experiments with a platform, and the pros and cons of building vs. buying. We will be joined by guest speakers from various experimentation companies who will also provide their perspectives.
