Accelerating innovation with
A/B testing

Your chance to engage live with a world-leading experimentation expert, Ronny Kohavi, and a cohort of driven industry professionals

5 x 2 hr sessions

8am PST; Jan 31 and February 1, 3, 7 and 10

$1,400 per seat (expense through L&D)


Your referral makes you eligible for a $50 Amazon Gift Card!

Designed for executives looking to make their organizations

Data-Driven

Product Executives

Product executives who want to make the organization more data-driven, where the product has at least tens of thousands of users/month.

Growth Executives

Executives and senior managers focused on growth (e.g., increasing monthly active users and their usage, reducing attrition, improving monetization).

The motivation and basics of A/B testing (e.g., causality, surprising examples, metrics, interpreting results, trust and pitfalls, Twyman’s law, A/A tests)
Cultural challenges, humbling results (e.g., failing often, pivoting, iterating), experimentation platform, institutional memory and meta-analysis, ethics
Hierarchy of evidence, Expected Value of Information (EVI), complementary techniques, risks in observational causal studies

Engineering Leaders

CTOs and VPs of Engineering who are using Machine Learning or planning to increase its use.

Safe deployments
Triggering, especially in evaluating machine learning models
The benefits of agile product development

Meet your Instructor

Dr. Ronny Kohavi

Few people have accumulated as much experience with experimentation as Ronny Kohavi. His work at tech giants such as Amazon, Microsoft, and Airbnb, to name just a few, has laid the foundation of modern online experimentation. He is a pioneer and leader of the data mining and machine learning community and wants to share his lessons with you.

Over the last 20 years, Ronny has served in numerous roles at Airbnb, Microsoft, Amazon, Blue Martini Software, and Silicon Graphics.

Ronny’s papers have over 53,000 citations. He co-authored the book Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing (with Diane Tang and Ya Xu), which is among the top-10 data mining books on Amazon. He is the most-viewed writer on Quora’s A/B testing topic, and he received the Individual Lifetime Achievement Award for Experimentation Culture in September 2020.

Ronny also holds a PhD in Machine Learning from Stanford University, where he led the MLC++ project, the Machine Learning library in C++ used at Silicon Graphics and at Blue Martini Software.

JOIN US

Be part of the future of online learning

About Ronny's

Live Cohort

A/B tests, also called online controlled experiments, are used heavily at companies like Airbnb, Amazon, Booking.com, eBay, Facebook, Google, LinkedIn, Lyft, Microsoft, Netflix, Twitter, Uber, Yahoo!/Oath, and Yandex. These companies run thousands to tens of thousands of experiments every year, sometimes involving millions of users and testing everything, including changes to the user interface (UI), relevance algorithms (search, ads, personalization, recommendations, and so on), latency/performance, content management systems, customer support systems, and more. Experiments are run on multiple digital channels: websites, desktop applications, mobile applications, and e-mail.

The theory of controlled experiments dates back to Sir Ronald A. Fisher in the 1920s, but running experiments at scale is not simply a theoretical challenge, but mostly a cultural challenge.  To paraphrase Helmuth von Moltke, ideas rarely survive contact with customers.  Objective evaluations of ideas in A/B tests show that most ideas are weaker than expected.  Organizations must change incentives and be great at failing fast and pivoting in the face of data.  We will look at real examples of cultural changes.

Setting a strategy with integrity implies tying it to an Overall Evaluation Criterion (OEC).  We will discuss good characteristics of metrics, and in particular focus on the OEC.
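As a rough illustration of the idea (the metric names and weights below are invented for this sketch, not taken from the course), an OEC is often expressed as a weighted combination of key metrics, with guardrail-style metrics such as attrition entering negatively:

```python
# Hypothetical sketch: an OEC as a weighted combination of per-user
# metrics. Metric names, values, and weights are illustrative only.

def oec(metrics: dict, weights: dict) -> float:
    """Combine metrics into a single Overall Evaluation Criterion.

    Each metric is assumed to be normalized so comparisons are meaningful;
    weights express the relative importance of each metric (negative
    weight = higher is worse, e.g., attrition).
    """
    return sum(weights[name] * value for name, value in metrics.items())

weights = {"sessions_per_user": 0.5, "revenue_per_user": 0.4, "attrition": -0.1}

treatment = {"sessions_per_user": 0.62, "revenue_per_user": 0.55, "attrition": 0.10}
control = {"sessions_per_user": 0.60, "revenue_per_user": 0.54, "attrition": 0.12}

# A positive delta means the treatment moved the overall objective up,
# even though no single metric tells the whole story on its own.
delta = oec(treatment, weights) - oec(control, weights)
```

Collapsing several metrics into one criterion like this is what forces the strategy discussion: the weights encode what the organization actually values.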

A/B tests are commonly associated with evaluating new ideas over a week or two, but when implemented with near-real-time metrics, they provide a safety net for safe deployments, able to detect and abort deployments.  We will discuss feature flags and why controlled experiments are so good at helping avoid severe outages.
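A minimal sketch of that safety net, assuming a single guardrail metric and a crude fixed threshold in place of the statistical test a real platform would use (all names and numbers here are illustrative):

```python
# Sketch of an experiment-based safety net for deployments: a feature
# flag gates the new code path, and a guardrail check on near-real-time
# metrics decides whether the rollout continues. Illustrative only.

def error_rate(outcomes: list) -> float:
    """Fraction of requests that errored (True = error)."""
    return sum(outcomes) / len(outcomes)

def keep_rollout(control: list, treatment: list, max_delta: float = 0.01) -> bool:
    """Decide whether to keep the feature flag on.

    Returns False (abort the deployment) when the treatment's error rate
    exceeds the control's by more than max_delta. A production platform
    would apply a proper statistical test rather than a fixed threshold.
    """
    return error_rate(treatment) - error_rate(control) <= max_delta

control = [False] * 98 + [True] * 2      # old code path: 2% errors
healthy = [False] * 98 + [True] * 2      # new path matches: keep rolling out
regressed = [False] * 90 + [True] * 10   # new path regresses: turn the flag off
```

Because the check runs continuously against live traffic, a bad deployment is caught and reverted in minutes rather than after a week of user complaints.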

6-Dec, 4PM (PST)

Session 1 - Introduction

• Introduction to controlled experiments, or A/B testing
• Interesting experiments – you’re the decision maker
• Organizational tenets
• Advantages and limitations of controlled experiments

7-Dec, 4PM (PST)

Session 2 - Metrics, the OEC, Interpreting Results and Trust

• End-to-end example: Running and Analyzing an Experiment
• Metrics and the OEC (large section with break)
• P-values and statistical power (short)

9-Dec, 4PM (PST)

Session 3 - Ideation Funnel, Machine Learning, Culture and Ethics

• Interesting examples
• Experiment example: speed matters
• Twyman’s Law and Trustworthy Experimentation
• Cultural challenge, Institutional memory, maturity model, prioritization/EVI, ethics

13-Dec, 4PM (PST)

Session 4 - Leakage & Interference, Complementary Techniques and Observational Causal Studies

• AI/Machine Learning model, and triggering
• Observational causal studies
• Pitfalls in observational studies
• Leakage and interference

15-Dec, 4PM (PST)

Session 5 - Maturity Model, Experimentation Platform and Build vs. Buy

• Build vs. Buy
• Three A/B vendor presentations with Q&A
• Experimentation Platform
• Challenges
• Requested Topics

You’ll be joining a world-leading expert along with a diverse cohort from companies including