Cool stuff, thanks for sharing publicly.<p>Did you all consider using Double Selection [1] or Double Machine Learning [2]?<p>The reason I ask is that your approach is very reminiscent of a Lasso-style regression where you first run Lasso for feature selection, then re-run an ordinary OLS with only those controls included (Post-Lasso). This is somewhat problematic because Lasso has a tendency to drop too many controls when they are highly correlated with one another, introducing omitted variable bias. Compounding the issue, some of those variables may be correlated with the treatment variable, which increases the chance they will be dropped.<p>The proposed solution is to run two separate Lasso regressions, one with the original outcome as the dependent variable and another with the treatment variable as the dependent variable, recovering two sets of potential controls, and then to use the union of those sets as the final set of controls. This is explained in simple language at [3].<p>Now, you all are using PCA, not Lasso, so I don't know whether these concerns apply. My sense is that you may still be omitting variables if the right variables are not included at the start, which is not a problem that any particular methodology can completely avoid. Would love to hear your thoughts.<p>Also, you don't show any examples or performance testing of your method. An example would be demonstrating, in a situation where you "know" the "true" causal effect (via an A/B test, perhaps), that your method recovers a similar point estimate. As presented, how do we / you know that this is generating reasonable results?<p>[1] <a href="http://home.uchicago.edu/ourminsky/Variable_Selection.pdf" rel="nofollow">http://home.uchicago.edu/ourminsky/Variable_Selection.pdf</a>
[2] <a href="https://arxiv.org/abs/1608.00060" rel="nofollow">https://arxiv.org/abs/1608.00060</a>
[3] <a href="https://medium.com/teconomics-blog/using-ml-to-resolve-experiments-faster-bd8053ff602e" rel="nofollow">https://medium.com/teconomics-blog/using-ml-to-resolve-exper...</a>
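<p>For concreteness, here is a minimal sketch of the double-selection procedure described above, using scikit-learn on simulated data where the true treatment effect is known by construction (all variable names and the data-generating process are my own illustration, not anything from your post):

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 500, 30
X = rng.normal(size=(n, p))                  # candidate controls
d = X[:, 0] + rng.normal(size=n)             # treatment, confounded by X[:, 0]
y = 2.0 * d + X[:, 0] + rng.normal(size=n)   # true causal effect of d is 2.0

# Step 1: Lasso of the outcome y on the controls X
sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)

# Step 2: Lasso of the treatment d on the controls X
sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)

# Step 3: OLS of y on d plus the UNION of the selected controls
controls = sorted(set(sel_y) | set(sel_d))
Z = np.column_stack([d, X[:, controls]])
effect = LinearRegression().fit(Z, y).coef_[0]
print(f"estimated effect: {effect:.2f}")     # should be close to 2.0
```

The point of step 2 is exactly the failure mode described above: a control that predicts the treatment but only weakly predicts the outcome would be dropped by a single outcome-side Lasso, and its omission would bias the treatment coefficient.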