Practical A/B Testing
A/B testing is a powerful technique that lets businesses measure the causal impact of product changes on their users and on overall business outcomes. Running successful A/B tests requires proper setup, careful execution, and the ability to thoughtfully analyze results. This course equips learners with the analysis and interpretation skills needed for practical A/B testing, even in complicated scenarios.
Course taught by expert instructors

Stephanie Pancoast
Senior Data Science and Analytics Manager
Stephanie is a Senior Manager of Data Science and Analytics. She recently led the improvement of A/B testing at Strava through platform enhancements, education, and process implementation. Prior to Strava, Stephanie worked at Airbnb as a Senior Data Scientist, assisting with over 100 A/B tests and co-developing an internal course on common A/B test interpretation mistakes. Stephanie also received a Ph.D. in Electrical Engineering from Stanford University, specializing in Applied Machine Learning.
The course
Learn and apply skills with real-world projects.

Who this course is for:
- Data Scientists working on experimentation platforms and strategies for their organization
- Product or Data Analysts performing A/B tests on their products
- ML Engineers involved in A/B testing their deployed models

Prerequisites: foundational knowledge of Python programming (variables, functions, lists, loops).
Week 1
- Learn
- Why A/B test?
- What exactly is A/B testing?
- Building blocks of interpretation: p-values and confidence intervals
- Determining run-time (power analysis + assignment rate; see the sketch after this list)
- Picking the right metrics
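
As a taste of the run-time material, here is a minimal sketch of a power analysis for a two-proportion test. The baseline rate, target lift, daily traffic, and 50/50 assignment rate are hypothetical placeholders, not course data.

```python
# A minimal sketch of a run-time calculation via power analysis for a
# two-proportion z-test. All inputs below are hypothetical placeholders.
from scipy.stats import norm

def required_sample_size(p_base, p_treat, alpha=0.05, power=0.80):
    """Per-variant sample size for a two-sided two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for significance level
    z_beta = norm.ppf(power)           # quantile needed to reach target power
    p_bar = (p_base + p_treat) / 2
    effect = abs(p_treat - p_base)
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p_base * (1 - p_base) + p_treat * (1 - p_treat)) ** 0.5) ** 2
         / effect ** 2)
    return int(n) + 1

# Hypothetical scenario: 10% baseline conversion, want to detect a lift to 11%,
# 20,000 eligible users/day assigned 50/50 to control and treatment.
n_needed = required_sample_size(0.10, 0.11)
daily_per_variant = 20_000 * 0.5
print(f"{n_needed} users per variant "
      f"~ {n_needed / daily_per_variant:.1f} days of run-time")
```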
Project: Get your hypothetical product team up and running
- What metric should they focus on? What should the guardrail(s) be?
- Write the brief (hypothesis, run time) for 2 tests
- Analyze and summarize the findings of the 2 tests
Week 2
- Learn
- Different types of A/B tests: decision, measurement, defensive
- Outliers: how to handle them
- Multiple hypothesis testing, including segmentation (see the correction sketch after this list)
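
To illustrate why segmented results demand care, here is a minimal sketch of a Bonferroni correction applied to per-segment p-values. The segments and p-values are hypothetical placeholders; Bonferroni is shown for simplicity, though other corrections (e.g. Benjamini-Hochberg) exist.

```python
# A minimal sketch of why segmented results need a multiple-testing
# correction. Segment names and p-values are hypothetical placeholders.
segment_pvals = {
    "all users":   0.21,
    "iOS":         0.04,  # looks significant in isolation...
    "Android":     0.48,
    "new users":   0.09,
    "power users": 0.31,
}

alpha = 0.05
m = len(segment_pvals)
corrected_alpha = alpha / m  # each of the m tests must clear a stricter bar

for segment, p in segment_pvals.items():
    naive = "significant" if p < alpha else "not significant"
    corrected = "significant" if p < corrected_alpha else "not significant"
    print(f"{segment:12s} p={p:.2f}  naive: {naive:16s} corrected: {corrected}")

# With five segments the "significant" iOS result (p=0.04) no longer clears
# the corrected threshold of 0.01 - a classic false-positive trap.
```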
Project: Analyze 2 test results that just came in and prepare the briefs (experiment design) for 3 others
- No detectable effect overall, but segmenting the results shows there is one
- A p-value close to 0.05, and getting used to the idea that, more often than not, there is no detectable impact
- More practice with hypothesis generation and power analysis
Week 3
- Learn
- The importance of setup: imbalance and dilution (see the SRM check sketch after this list)
- Beware of early results: why early (often negative-looking) readings can differ from the actual impact, and how to handle them
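
One standard way to catch the setup problems above is a sample ratio mismatch (SRM) check: a chi-square goodness-of-fit test on assignment counts. This minimal sketch uses hypothetical placeholder counts for a test designed as a 50/50 split.

```python
# A minimal sketch of a sample ratio mismatch (SRM) check, a common way to
# catch assignment imbalance before trusting any metric movement.
from scipy.stats import chisquare

observed = [50_900, 49_100]            # users actually assigned to each arm
total = sum(observed)
expected = [total * 0.5, total * 0.5]  # what the 50/50 design should yield

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"SRM check p-value: {p_value:.2g}")
if p_value < 0.001:  # a strict threshold, since SRM signals a broken setup
    print("Assignment looks imbalanced - investigate before reading metrics.")
else:
    print("No evidence of imbalance.")
```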
Project: The tests you designed last week have just started, and early results show one test strongly negative and another positive. Plan how you would interact and communicate with the product teams.
- Dig into those two tests, figure out what's going on, and make a recommendation
- Analyze the results of those tests, plus the unaffected one, and summarize the findings
Week 4
- Learn
- Concurrent tests and how to approach them (a simple interaction check is sketched below)
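
Here is a minimal sketch of one simple interaction check for concurrent tests: compare test A's lift within each arm of test B. All counts are hypothetical placeholders.

```python
# A minimal sketch of an interaction check for two concurrent tests:
# compare test A's lift within each arm of test B.
cells = {  # (test_A_arm, test_B_arm): (conversions, exposed_users)
    ("control", "control"): (1_000, 10_000),
    ("treat",   "control"): (1_100, 10_000),
    ("control", "treat"):   (1_050, 10_000),
    ("treat",   "treat"):   (1_400, 10_000),
}

def rate(cell):
    conversions, exposed = cells[cell]
    return conversions / exposed

lift_when_b_off = rate(("treat", "control")) - rate(("control", "control"))
lift_when_b_on  = rate(("treat", "treat")) - rate(("control", "treat"))

print(f"Lift of A when B is off: {lift_when_b_off:+.3f}")
print(f"Lift of A when B is on:  {lift_when_b_on:+.3f}")
# If these lifts differ materially (+0.010 vs +0.035 here), the tests
# interact and cannot be read independently.
```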
Project: Bring your own test results to analyze, OR
- Analyze two tests that interact, one of which has an outlier
Real-world projects
Work on projects that bring your learning to life.
Designed to be directly applicable to your work.
Live access to experts
Sessions and Q&As with our expert instructors, along with real-world projects.
Network & community
Peer reviews and study groups. Share experiences and learn alongside a global network of professionals.
Support & accountability
We have a system in place to make sure you complete the course and to nudge you along the way.
Get reimbursed by your company
More than half of learners get their Courses and Memberships reimbursed by their company.
Hundreds of companies have dedicated L&D and education budgets that have covered the costs.