
ML Beyond Curve Fitting: An Intro to Causal Inference and Do-Calculus

184 points by dil8, almost 7 years ago

9 comments

smallnamespace, almost 7 years ago
Something to note about this formulation is the explicit assumption that in p(y|do(x)), the "do" operation is completely independent of prior observed variables, i.e. the doers are "unmoved movers" [1].

That fits the model where you randomly "do" one thing or another (e.g. blinded testing); however, this is *not* the same thing as p(y|do'(x)), where do' is your empirical observation of the occasions when you yourself set X=x in a more natural context.

E.g. suppose you always turn on the heat when it's cold outside. P(cold outside | do(turn on heat)) = P(cold outside), because turning on the heat does not affect the temperature outdoors.

However, P(cold outside | do'(turned on heat)) > P(cold outside), because empirically you only *choose* to turn on the heat when it's cold outdoors.

Both of these are also different from P(cold outside | heat was turned on), since *someone else* might have access to the thermostat.

In reality, our choices and actions are themselves products of the initial state (including our own beliefs, and our knowledge of what would happen if we did x). Our actions move the world, but we are also moved by the world.

Does do-calculus have a careful treatment of "mixed" scenarios where actions are both causes *and* effects of other causes?

[1] https://en.wikipedia.org/wiki/Unmoved_mover
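A minimal Python sketch of the thermostat example above (illustrative, not from the article; the probabilities are made up): conditioning on "heat is on" shifts our belief about the weather, while do(heat on) does not, because the intervention severs the arrow from weather to heat.

```python
import random

random.seed(0)
N = 100_000

def observe():
    # Observational world: cold weather causes you to turn on the heat.
    cold = random.random() < 0.3
    heat = random.random() < (0.9 if cold else 0.05)
    return cold, heat

def intervene():
    # do(heat=on): the intervention sets heat regardless of its causes,
    # so the weather is sampled exactly as before.
    cold = random.random() < 0.3
    return cold, True

obs = [observe() for _ in range(N)]
do = [intervene() for _ in range(N)]

p_cold = sum(c for c, _ in obs) / N
p_cold_given_heat = sum(c for c, h in obs if h) / sum(1 for _, h in obs if h)
p_cold_do_heat = sum(c for c, _ in do) / N

print(f"P(cold)            ~ {p_cold:.2f}")            # ~0.30
print(f"P(cold | heat on)  ~ {p_cold_given_heat:.2f}")  # ~0.89: conditioning
print(f"P(cold | do(heat)) ~ {p_cold_do_heat:.2f}")     # ~0.30: intervening
```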
Darmani, almost 7 years ago
For those trying to understand the difference between action and observation, here's a good example from a friend:

Every bug you fix in your code increases your chances of shipping on time, but provides evidence that you won't.
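A small simulation (purely illustrative; the numbers are invented) makes the point concrete: the latent total bug count confounds "fixes observed" and "shipping on time", so observing many fixes is bad news even though each fix causally helps.

```python
import random

random.seed(1)
rows = []
for _ in range(200_000):
    bugs = random.randint(0, 20)                             # latent total bugs (confounder)
    fixes = sum(random.random() < 0.5 for _ in range(bugs))  # you find and fix ~half of them
    ship = random.random() < 0.9 ** (bugs - fixes)           # each unfixed bug hurts shipping
    rows.append((bugs, fixes, ship))

def p_ship(sample):
    return sum(s for *_, s in sample) / len(sample)

# Observation: many fixes are evidence of many bugs, hence *lower* shipping odds.
print(f"P(ship | few fixes)  ~ {p_ship([r for r in rows if r[1] <= 2]):.2f}")
print(f"P(ship | many fixes) ~ {p_ship([r for r in rows if r[1] >= 8]):.2f}")

# Action: holding the latent bug count fixed (a backdoor adjustment),
# each additional fix raises the chance of shipping.
ten = [r for r in rows if r[0] == 10]
print(f"P(ship | bugs=10, fixes<=4) ~ {p_ship([r for r in ten if r[1] <= 4]):.2f}")
print(f"P(ship | bugs=10, fixes>=6) ~ {p_ship([r for r in ten if r[1] >= 6]):.2f}")
```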
phkahler, almost 7 years ago
I really enjoyed the humility the author showed in the introduction to this piece. He paused, took a hard look at what seemed like harsh or arrogant criticism of his field, and found insight in it.
thadk, almost 7 years ago
Here is a paper explaining the essentials of how 45+ years of causal inference applies to ML: http://www.nber.org/chapters/c14009.pdf

If you're not in the mood for an academic paper, a podcast by the same author explains the potential for the two worlds to share lessons: http://www.econtalk.org/archives/2016/09/susan_athey_on.html
gowld, almost 7 years ago
How does someone *use* do-calculus? It's a nice mathematization of Goodhart's law (https://en.wikipedia.org/wiki/Goodhart%27s_law), but how would it help an algorithm make better predictions?

Sure, the reason a person turns on the heat affects our belief about the outside weather (were they feeling cold, or were they just trolling?), but how do you *know* the reason a person turned on the heat, and couldn't you learn which reasons are predictive by measuring correlations with other observables? If you *know* the reason directly ("I'm just playing with the dial because I'm 4 years old"), that's a data point you could throw into your ML model *without* explicitly knowing it's a *reason*.
mlthoughts2018, almost 7 years ago
I am interested in a companion phenomenon to the recent interest in causal models in machine learning: namely, that at least in computer vision this is not new at all, and has been an important idea for many decades.

One of the original sources that took this approach, "The Ecological Approach to Visual Perception" (1979) [0] by James Gibson, discussed at length the idea of "affordances" of an algorithmic model, similar in some respects to topics in reinforcement learning as well. Affordances represent the information about outcomes you gain by varying your degrees of observational freedom: you learn to generalize beyond occluded objects by moving your head a little to the left or right and seeing how the visual input varies. This lets you get food, or hide from a predator that's partially blocked by a tree, and so over time generalizing past occlusions gets better and better. That is much more interesting than a naive approach such as augmenting a labeled data set with synthetically occluded variations, as is often done to improve rotational invariance.

This idea was then extended with a lot of formality in the mid-to-late 00s by Stefano Soatto in his papers on "Actionable Information" [1].

I wish more effort had been made by e.g. Pearl to look into this and unify his approach with what had already been thought of. It turns me off when someone tries to create a "whole new paradigm" and it starts to feel like they want to generate marketing hype, rather than saying this is an extension of, or a connection to, an older idea *already within machine learning*. Instead it comes across as, "We over here in causal-inference world already know so much more about what to do... so now let's apply it to your domain, where you never thought of this." Pearl has a history of this, as in his earlier debates with Gelman about Bayesian models. It almost feels like he is shopping around for a sexy application area where his one-upmanship approach will catch on, to give him a shot at the hype gravy train.

[0]: https://en.wikipedia.org/wiki/James_J._Gibson#Major_works

[1]: http://www.vision.cs.ucla.edu/papers/soatto09.pdf
carapace, almost 7 years ago
Worth mentioning, perhaps, that cybernetics originated from the study of "circular loops of causality": systems where, e.g., A causes B, B causes C, and in turn C causes A, etc.
thanatropism, almost 7 years ago
This is really sexy.
offpolicy, almost 7 years ago
Nothing to see here. The do-calculus is just fancy notation for what reinforcement learning is already doing: trying different possible actions and trying to maximize reward. If you know possible actions in advance, this is basically minimizing regret of wrong policy actions.
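A toy sketch of the analogy this comment draws (illustrative only; the arms and reward probabilities are invented): an epsilon-greedy bandit's per-arm reward averages estimate E[reward | do(arm)] directly, precisely because the agent *sets* its own actions rather than conditioning on actions someone else chose for unknown reasons.

```python
import random

random.seed(2)
true_means = [0.2, 0.5, 0.8]  # hypothetical reward probability per arm
counts = [0] * 3
totals = [0.0] * 3

for _ in range(20_000):
    if random.random() < 0.1:  # explore: a randomized "do"
        arm = random.randrange(3)
    else:                       # exploit the current interventional estimates
        arm = max(range(3), key=lambda a: totals[a] / counts[a] if counts[a] else 0.0)
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    totals[arm] += reward

# Per-arm averages approximate E[reward | do(arm)], i.e. true_means.
print([round(totals[a] / counts[a], 2) for a in range(3)])
```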