This article is Part 1 of n (depending on how much I end up rambling on) in a series of articles about using quasi-experiments for causal inference. Briefly, Part 1 will explain the whys and hows of quasi-experiments, as well as the nuances involved when applying approaches like propensity score matching (PSM). In Part 2, I will talk more about the limitations of quasi-experiments and what you should be cautious about when making decisions based on them. I will also propose a framework for heterogeneous impact estimation that can help overcome extrapolation bias. In Part 3… I’m still not sure yet.
You may have come across other articles explaining quasi-experiments, but I’m still going to try explaining them my way. Give it a read.
The cost of developing and launching products and features is ultimately justified by the positive impact on the consumer. It is thus unsurprising to hear product managers make all sorts of claims, such as “We are thrilled to announce that our latest feature launch has led to an impressive 12% increase in revenue!”
Sounds fabulous, and, to be honest, most senior managers are more than happy to accept such statements at face value. My goal today is to convince you to take a deeper look at the methods of causal inference that (should) lie behind these claims. With a better grasp of causal inference, you will be better positioned to evaluate the impact that products and features bring to your users and your company.
Let us see what ChatGPT has to say about why causal inference is needed for products:
Causal inference empowers product teams with the ability to move beyond simply observing correlations in data and to establish a deeper understanding of the causal mechanisms driving product performance. (unsurprisingly already more succinctly expressed than anything I could produce)
One aspect really worth mentioning here is the idea of correlation and causality.
Correlation does not imply causation. (don’t roll your eyes just yet)
Let’s be honest, so many of us say it and think we know what it means. When someone asks us…