Scott here from ClearBrain - the ML engineer who built the underlying model behind our causal analytics platform.<p>We’re really excited to release this feature after months of R&D. Many of our customers want to understand the causal impact of their products, but are unable to iterate quickly enough running A/B tests. Rather than taking the easy path of serving correlation-based insights, we took the harder approach of automating causal inference through what's known as an observational study, which can simulate A/B experiments on historical data and eliminate spurious effects. This involved a mix of linear regression, PCA, and large-scale custom Spark infra. Happy to share more about what we did behind the scenes!
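At a very high level, the adjustment step looks something like the sketch below (simplified Python on synthetic stand-in data rather than our actual Spark pipeline; the variable names are placeholders, not our real schema):

    # Simplified sketch: PCA-compressed covariates + linear regression adjustment.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Stand-in data: in production this is a per-user table built on Spark.
    rng = np.random.default_rng(0)
    covariates = rng.normal(size=(10_000, 200))       # behavioral/user features
    treatment = rng.binomial(1, 0.3, size=10_000)     # 0/1 flag for the analyzed action
    outcome = rng.binomial(1, 0.1, size=10_000)       # downstream conversion

    # Compress the wide, correlated covariate matrix with PCA.
    components = PCA(n_components=50).fit_transform(
        StandardScaler().fit_transform(covariates))

    # Regress the outcome on the treatment plus the principal components;
    # the treatment coefficient is the confounder-adjusted lift estimate.
    design = sm.add_constant(np.column_stack([treatment, components]))
    fit = sm.OLS(outcome, design).fit()
    print("adjusted lift estimate:", fit.params[1])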
Very exciting to see causal theory being productionized!<p>From the article, this seems like a normal regression to me. Would be interesting to know what makes it causal (or at least better) compared to plain OLS. PCA has been used for a long time to select the features to use in a regression. Would it be accurate to say that the innovation is in how the regression is calculated rather than in the statistical methodology?<p>Either way, it would be interesting to test this approach against an A/B test and check how much the observational estimates differ from the A/B estimates, and how sensitive the approach is to including (or excluding) a given set of features. It would also be interesting to compare it to other quasi-experimental methodologies, such as propensity score matching.<p>Is there a more extended document explaining the approach?<p>Good luck!
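For instance, a propensity-score-matching comparison could be as simple as the following (rough sketch on synthetic stand-in data; X, t, and y stand in for a user-level covariate matrix, a 0/1 treatment flag, and an outcome):

    # Rough sketch of a propensity-score-matching estimate (illustrative only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    # Stand-in data: X = user covariates, t = 0/1 treatment flag, y = outcome.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5_000, 20))
    t = rng.binomial(1, 0.3, size=5_000)
    y = rng.normal(size=5_000)

    # Fit a propensity model and match each treated user to the closest control.
    propensity = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    treated, control = np.flatnonzero(t == 1), np.flatnonzero(t == 0)
    nn = NearestNeighbors(n_neighbors=1).fit(propensity[control].reshape(-1, 1))
    _, idx = nn.kneighbors(propensity[treated].reshape(-1, 1))
    matched = control[idx.ravel()]

    # Mean treated-minus-matched difference ~ effect on the treated; this number
    # (and the regression-based estimate) could be checked against an A/B readout.
    print("PSM estimate:", (y[treated] - y[matched]).mean())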
I only skimmed it, so forgive me if I got this wrong. The causal model used here makes some incredibly strong assumptions that are unlikely to hold even approximately. Are these results valid if there are unobserved confounders or selection bias?
I have been involved in causal inference analysis since 2015. We use a mixed model of decision trees and fixed-effects regressions. I read your paper and could not find a reference explaining why, when one cannot run an A/B test to verify a relationship, an observational analysis can be used to do so instead. Could you share a reference, please?
Thank you for this insightful article!
Cool stuff, thanks for sharing publicly.<p>Did you all consider using Double Selection [1] or Double Machine Learning [2]?<p>The reason I ask is that your approach is very reminiscent of a Lasso-style regression where you first run Lasso for feature selection and then re-run a normal OLS with only those controls included (Post-Lasso). This is somewhat problematic because Lasso has a tendency to drop too many controls if they are too correlated with one another, introducing omitted variable bias. Compounding the issue, some of those variables may be correlated with the treatment variable, which increases the chance they will be dropped.<p>The proposed solution is to run two separate Lasso regressions, one with the original dependent variable and another with the treatment variable as the dependent variable, recovering two sets of potential controls, and then using the union of those sets as the final set of controls (a rough sketch of this is below the references). This is explained in simple language at [3].<p>Now, you all are using PCA, not Lasso, so I don't know if these concerns apply or not. My sense is that you still may be omitting variables if the right variables are not included at the start, which is not a problem that any particular methodology can completely avoid. Would love to hear your thoughts.<p>Also, you don't show any examples or performance testing of your method. An example would be demonstrating, in a situation where you "know" the "true" causal effect (via an A/B test, perhaps), that your method is able to recover a similar point estimate. As presented, how do we / you know that this is generating reasonable results?<p>[1] <a href="http://home.uchicago.edu/ourminsky/Variable_Selection.pdf" rel="nofollow">http://home.uchicago.edu/ourminsky/Variable_Selection.pdf</a>
[2] <a href="https://arxiv.org/abs/1608.00060" rel="nofollow">https://arxiv.org/abs/1608.00060</a>
[3] <a href="https://medium.com/teconomics-blog/using-ml-to-resolve-experiments-faster-bd8053ff602e" rel="nofollow">https://medium.com/teconomics-blog/using-ml-to-resolve-exper...</a>
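To make the double-selection idea concrete, here is a minimal sketch on synthetic stand-in data (not ClearBrain's pipeline; X, d, and y are just a candidate-control matrix, a treatment variable, and an outcome):

    # Minimal double-selection sketch: two Lassos, then OLS with the union of controls.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.linear_model import LassoCV

    # Stand-in data: X = candidate controls, d = treatment, y = outcome.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2_000, 100))
    d = (X[:, 0] + rng.normal(size=2_000) > 0).astype(float)
    y = 0.5 * d + X[:, 0] + rng.normal(size=2_000)

    # Step 1: Lasso of the outcome on the controls.
    keep_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
    # Step 2: Lasso of the treatment on the controls.
    keep_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)

    # Step 3: OLS of the outcome on the treatment plus the UNION of selected
    # controls; the treatment coefficient is the double-selection estimate.
    keep = np.union1d(keep_y, keep_d)
    design = sm.add_constant(np.column_stack([d, X[:, keep]]))
    fit = sm.OLS(y, design).fit()
    print("double-selection estimate:", fit.params[1])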
An analytics platform without a privacy policy? :(<p>404: <a href="https://www.clearbrain.com/privacy" rel="nofollow">https://www.clearbrain.com/privacy</a><p>404: <a href="https://www.clearbrain.com/terms" rel="nofollow">https://www.clearbrain.com/terms</a>