Four Common Mistakes When A/B Testing and How to Solve Them


Enhance Your A/B Testing Skills: Addressing Four Key Errors for Better Results


A/B testing is like Jenga: a delicate balance of interconnected pieces that form the foundation of a successful experiment. Just as removing the wrong block can bring the entire tower down, an A/B test relies on multiple components working together. Each piece represents a crucial element of the test, and if any of them fails, the integrity of the experiment is compromised, leading to inaccurate results or missed opportunities.

And in my experience, I've seen great experiment ideas crumble because of very common mistakes that many data scientists, myself included, have made. So I want to walk you through four of the most common mistakes when A/B testing (and how to solve them!).

If you’re not familiar with A/B testing and you’re interested in pursuing a career in data science, I strongly recommend you at least familiarize yourself with the concept.

You can check out my article below if you'd like a primer on A/B testing:

With that said, let’s dive into it!

To recap, statistical power is the probability of correctly detecting a true effect, or, more precisely, the conditional probability of rejecting the null hypothesis given that it is false. Power is inversely related to the probability of committing a Type 2 error (a false negative): power = 1 − β, where β is the Type 2 error rate.
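To make that definition concrete, here is a minimal simulation sketch (the group sizes, effect size, and significance level are illustrative assumptions, not values from a real experiment). It estimates power empirically: when a true effect exists, power is simply the fraction of repeated experiments in which the test rejects the null.

```python
# A minimal Monte Carlo sketch of statistical power (illustrative values):
# simulate many A/B tests where a true effect exists, and count how often
# a two-sample t-test correctly rejects the null at alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_simulations = 5_000  # number of simulated experiments (assumed)
n_per_group = 100      # users per variant (assumed)
true_lift = 0.3        # true effect in standard-deviation units (assumed)
alpha = 0.05           # significance level

rejections = 0
for _ in range(n_simulations):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=true_lift, scale=1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(control, treatment)
    if p_value < alpha:
        rejections += 1

power = rejections / n_simulations  # empirical power = 1 - Type 2 error rate
print(f"Estimated power: {power:.2f}")
```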

Generally, it's common practice to set the power at 80% when designing a study. Given the definition above, this means that when the null hypothesis is false, you would still fail to reject it 20% of the time. In simpler terms, if there were true effects in 100 conducted experiments, you would, on average, detect only 80 of them.
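In practice, the 80% target is usually baked into a sample-size calculation before the experiment starts. Here is a hedged sketch using statsmodels (the effect size is an assumption for illustration; your own minimum detectable effect will differ):

```python
# Sample-size calculation for a two-sample t-test with statsmodels.
# The effect size below (Cohen's d = 0.2) is an illustrative assumption.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_required = analysis.solve_power(
    effect_size=0.2,          # minimum detectable effect (Cohen's d), assumed
    alpha=0.05,               # significance level
    power=0.8,                # the conventional 80% power target
    alternative="two-sided",
)
print(f"Users needed per variant: {math.ceil(n_required)}")  # 394
```

Running this shows why the convention matters: detecting even a small effect at 80% power requires roughly 394 users per variant here, so an underpowered test with fewer users would miss real effects far more often than 20% of the time.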


