Designed for executives looking to make their organizations Data Driven
Product Leaders
Program managers and product managers who focus on metrics such as growth and revenue, and on prioritization decisions
Data Science Managers and Data Scientists
Analysts who help map strategic decisions to actionable experimental designs and then interpret the results in a trustworthy manner



Engineering Leaders
Engineering managers, directors, VPs, and CTOs who want to make their organizations data-driven with metrics and A/B tests



Meet your Instructor
Ronny Kohavi




Few people have accumulated as much experience with experimentation as Ronny Kohavi. His work at tech giants such as Amazon, Microsoft, and Airbnb has laid the foundation of modern online experimentation. He is a pioneer and leader of the data mining and machine learning community, and he wants to share his learnings with you.
Over the last 20 years, Ronny has served in numerous roles at Airbnb, Microsoft, Amazon, Blue Martini Software, and Silicon Graphics.
Ronny’s papers have over 53,000 citations. He co-authored the book Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing (with Diane Tang and Ya Xu), which is among the top 10 data mining books on Amazon. He is the most-viewed writer on A/B testing on Quora, and he received the Individual Lifetime Achievement Award for Experimentation Culture in September 2020.
Ronny also holds a PhD in Machine Learning from Stanford University, where he led the MLC++ project, the machine learning library in C++ used at Silicon Graphics and Blue Martini Software.
What you get out of This Course



Join a diverse and experienced alumni community from Leading Companies

About Ronny's Live Cohort
A/B tests, also called online controlled experiments, are used heavily at companies like Airbnb, Amazon, Booking.com, eBay, Facebook, Google, LinkedIn, Lyft, Microsoft, Netflix, Twitter, Uber, Yahoo!/Oath, and Yandex. These companies run thousands to tens of thousands of experiments every year, sometimes involving millions of users and testing everything, including changes to the user interface (UI), relevance algorithms (search, ads, personalization, recommendations, and so on), latency/performance, content management systems, customer support systems, and more. Experiments are run on multiple digital channels: websites, desktop applications, mobile applications, and email.
The theory of controlled experiments dates back to Sir Ronald A. Fisher in the 1920s, but running experiments at scale is not simply a theoretical challenge; it is mostly a cultural one. To paraphrase Helmuth von Moltke, ideas rarely survive contact with customers. Objective evaluation of ideas in A/B tests shows that most ideas are weaker than expected. Organizations must change incentives and become great at failing fast and pivoting in the face of data. We will look at real examples of cultural changes.
Setting a strategy with integrity implies tying it to an Overall Evaluation Criterion (OEC). We will discuss the characteristics of good metrics, with a particular focus on the OEC.
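As a small illustration of what an OEC can look like in practice, here is a minimal Python sketch that combines several per-user metrics into a single score. The metric names and weights are made up for this sketch; they are not the ones taught in the course.

```python
# Hypothetical illustration: an OEC as a weighted combination of per-user
# metrics, so a single number summarizes how a variant performed.
# Metric names and weights are invented for this sketch.

from dataclasses import dataclass

@dataclass
class UserMetrics:
    sessions_per_week: float   # engagement proxy
    revenue_per_user: float    # short-term value
    bounced: bool              # guardrail-style penalty signal

def oec(users: list[UserMetrics],
        w_sessions: float = 0.5,
        w_revenue: float = 0.5,
        bounce_penalty: float = 0.2) -> float:
    """Average a weighted per-user score across every user in a variant."""
    if not users:
        raise ValueError("no users in variant")
    total = 0.0
    for u in users:
        score = w_sessions * u.sessions_per_week + w_revenue * u.revenue_per_user
        if u.bounced:
            score -= bounce_penalty
        total += score
    return total / len(users)

# Compare variants by their OEC; a real analysis would also test whether the
# difference is statistically significant.
control = [UserMetrics(3.0, 1.2, False), UserMetrics(1.0, 0.0, True)]
treatment = [UserMetrics(3.5, 1.4, False), UserMetrics(1.2, 0.1, False)]
print(oec(treatment) - oec(control))
```

The interesting work is choosing the components and weights so the OEC reflects long-term value rather than what is easiest to move, which is part of what the course covers.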
A/B tests are commonly associated with evaluating new ideas over a week or two, but when implemented with near-real-time metrics, they also provide a safety net for deployments, detecting problems and aborting bad deployments quickly. We will discuss feature flags and why controlled experiments are so effective at helping avoid severe outages.
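To make the deployment safety net concrete, here is a minimal sketch of hash-based feature-flag assignment with a single guardrail check. The function names, metric, and threshold are illustrative assumptions, not taken from any specific experimentation platform.

```python
# Minimal sketch, assuming hash-based assignment and one guardrail metric.
# Names (is_enabled, should_abort) and thresholds are illustrative only.

import hashlib

def is_enabled(user_id: str, flag: str, treatment_pct: float = 0.5) -> bool:
    """Deterministically bucket a user into treatment for a given flag."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return bucket < treatment_pct

def should_abort(control_error_rate: float,
                 treatment_error_rate: float,
                 max_relative_regression: float = 0.10) -> bool:
    """Near-real-time guardrail: flag a deployment whose error rate regresses."""
    if control_error_rate == 0:
        return treatment_error_rate > 0
    relative = (treatment_error_rate - control_error_rate) / control_error_rate
    return relative > max_relative_regression

# Usage: ramp the flag to a small percentage, watch guardrail metrics in near
# real time, and turn the flag off (abort) if a regression shows up.
if is_enabled(user_id="user-123", flag="new-checkout"):
    pass  # serve the new code path

if should_abort(control_error_rate=0.010, treatment_error_rate=0.013):
    print("Guardrail regression detected: abort the deployment")
```

Hashing the user ID together with the flag name keeps each user's assignment stable across requests, which is what lets the same mechanism serve as both a feature flag and a controlled experiment.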
Introduction
Introduction to controlled experiments, or A/B testing
Interesting experiments – you’re the decision maker
Organizational tenets
Advantages and limitations of controlled experiments
Metrics, the OEC, Interpreting Results and Trust
End-to-end example: Running and Analyzing an Experiment
Metrics and the OEC (large section with break)
P-values and statistical power (short)
Ideation funnel, Machine Learning, Culture and Ethics
Interesting examples
Experiment example: speed matters
Twyman’s Law and Trustworthy Experimentation
Cultural challenge, Institutional memory, maturity model, prioritization/EVI, ethics
Leakage & Interference, Complementary Techniques and Observational Causal Studies
AI/Machine Learning model, and triggering
Observational causal studies
Pitfalls in observational studies
Leakage and interference
Maturity Model, Experimentation Platform and Build vs Buy
Build vs. Buy
Three A/B vendor presentations with Q&A
Experimentation Platform
Challenges
Requested Topics
Still have questions?
We’re here to help!
Do I have to attend all of the sessions live?

You don’t! We record every live session in the cohort and make each recording and the session slides available on our portal for you to access anytime.
Will I receive a certificate upon completion?

Each learner receives a certificate of completion, which is sent to you at the end of the cohort (along with access to our Alumni portal!). Additionally, ScholarSite is listed as a school on LinkedIn, so you can display your certificate in the Education section of your profile.
Is there homework?

Throughout the cohort, there may be take-home questions that pertain to subsequent sessions. These are optional, but allow you to engage more with the instructor and other cohort members!
Can I get the course fee reimbursed by my company?

While we cannot guarantee that your company will cover the cost of the cohort, we are accredited by the Continuing Professional Development (CPD) Standards Office, meaning many of our learners are able to expense the course via their company or team’s L&D budget. We even provide an email template you can use to request approval.
I have more questions, how can I get in touch?

Please reach out to us via our Contact Form with any questions. We’re here to help!