The Importance of Effect Size in A/B Testing
Read time: 2 minutes
In the world of data analysis and A/B testing, understanding the concept of effect size is critical. While statistical significance plays a key role in interpreting test results, it doesn’t provide a complete picture. Effect size, on the other hand, offers valuable insight into the practical significance of our findings.
What is Effect Size?
Simply put, effect size quantifies the difference between two groups. In the context of A/B testing, it could be the difference in a chosen metric (like click-through rates) between the control and experiment groups.
The beauty of effect size is that it’s not influenced by sample size. This makes it a reliable measure of the magnitude of an observed effect, something that statistical significance (p-value) alone may not communicate.
Real-Life Examples
Let’s illustrate this with some examples. Consider these two A/B test outcomes:
- 7.85% growth with confidence interval [0.2, 15.5]
- 6.5% growth with confidence interval [6.4, 6.6]
At first glance, outcome 1 seems more desirable because it shows higher growth. However, outcome 2 may actually be better in practical terms: its confidence interval is far tighter (and outcome 1’s interval barely clears zero), so the observed growth is much more likely to be ‘real’ or ‘true’.
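To put a number on “tighter”, here is a quick Python sketch (the variable names are purely illustrative) that computes the half-width of each interval:

```python
# Quick sketch: compare how tight each confidence interval is.
# The numbers are the two outcomes discussed above (in percentage points).
outcomes = {
    "outcome 1": (7.85, (0.2, 15.5)),
    "outcome 2": (6.5, (6.4, 6.6)),
}

for name, (growth, (low, high)) in outcomes.items():
    half_width = (high - low) / 2
    print(f"{name}: growth={growth}%, CI half-width={half_width:.2f} pp")

# outcome 1: growth=7.85%, CI half-width=7.65 pp
# outcome 2: growth=6.5%, CI half-width=0.10 pp
```

Outcome 2’s interval is dozens of times narrower, which is exactly what makes its growth figure more trustworthy.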
Now consider these two scenarios:
- 1.03% increase with confidence interval [0.42, 1.65]
- 1.03% decrease with confidence interval [-1.65, -0.42]
Here, the absolute percentage change is the same and the intervals are mirror images of each other, so simply eyeballing the numbers doesn’t tell us how reliable either result is.
This is where calculating effect size comes in handy: it quantifies how ‘real’ a change is in a single number.
Here’s the formula we’re using for effect size:
effect size = (% change between experiment and control) / (confidence interval half-width)
where the half-width is (upper bound - lower bound) / 2.
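As a minimal sketch of that formula in Python (the function name and structure are my own illustration, not from any particular library), applied to the two mirrored scenarios above:

```python
def effect_size(change_pct: float, ci_low: float, ci_high: float) -> float:
    """Effect size = observed % change divided by the CI half-width."""
    half_width = (ci_high - ci_low) / 2
    return change_pct / half_width

# The two mirrored scenarios from above: same magnitude of change,
# same interval width, and therefore the same magnitude of effect size.
print(round(effect_size(1.03, 0.42, 1.65), 2))     # ~1.67
print(round(effect_size(-1.03, -1.65, -0.42), 2))  # ~-1.67
```

The effect size confirms that the two mirrored scenarios are equally trustworthy; only the direction of the change differs.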
Applying this formula, consider the following cases (a short script that reproduces these calculations follows the list):
- -1.02% with CI [-1.68, -0.42], Effect Size (ES)=-1.68
- -1.02% with CI [-2.03, -0.05], Effect Size (ES)=-1.05
- 1.02% with CI [-1.02, 3.1], Effect Size (ES)=0.51
- 1.02% with CI [0.05, 2.03], Effect Size (ES)=1.05
- 1.02% with CI [0.42, 1.65], Effect Size (ES)=1.68
- 1.02% with CI [0.82, 1.25], Effect Size (ES)=4.9
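The sketch below recomputes the effect size for each case. Because the point estimates and interval bounds quoted above are rounded, the computed values land close to, though not always exactly on, the listed figures:

```python
# Recompute the effect size for each case: % change / CI half-width.
# Each entry is (observed % change, (ci_low, ci_high)), taken from the list above.
cases = [
    (-1.02, (-1.68, -0.42)),
    (-1.02, (-2.03, -0.05)),
    ( 1.02, (-1.02,  3.10)),
    ( 1.02, ( 0.05,  2.03)),
    ( 1.02, ( 0.42,  1.65)),
    ( 1.02, ( 0.82,  1.25)),
]

for change, (low, high) in cases:
    es = change / ((high - low) / 2)
    print(f"change={change:+.2f}%, CI=[{low}, {high}], ES={es:+.2f}")
```

However the rounding falls, the ordering is the same: the tighter the interval relative to the observed change, the larger the effect size.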
With effect sizes calculated, it’s clear that the last scenario offers the strongest evidence that the observed growth is real.
Conclusion
Effect size is a powerful tool in the data analyst’s arsenal. It allows us to measure the size of an effect in a way that is not influenced by sample size, providing a more consistent and reliable measure than statistical significance alone. When reporting results from A/B tests, consider reporting both the statistical significance and the effect size for a more comprehensive understanding of your results.