
The Causal Revolutionary – Interview with Judea Pearl

78 points by onuralp over 6 years ago

7 comments

celrod over 6 years ago
For those interested in Causal Inference, here is Judea Pearl's fairly accessible intro "An Introduction to Causal Inference": http://ftp.cs.ucla.edu/pub/stat_ser/r354-corrected-reprint.pdf
mathgenius over 6 years ago
> The equations of physics are algebraic and symmetrical, whereas causal relationships are directional.

I don't agree with this. Quantum measurements are projective, and they are very much one-way. People seem to want to dismiss this as merely "epistemic", the way entropy in thermodynamics is one-way but not fundamentally so: entropy increases only because we can't see all the details. Quantum measurements are not like that.
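A worked equation makes the claimed asymmetry concrete. This is standard quantum mechanics, not anything from the interview: a projective measurement has no inverse map, while unitary evolution does.

```latex
% Why a projective measurement is one-way, unlike unitary evolution.
% A projector satisfies
\[
  P^2 = P, \qquad P^\dagger = P,
\]
% and a measurement outcome updates the state by
\[
  \rho \;\longmapsto\; \frac{P \rho P}{\operatorname{tr}(P \rho P)} .
\]
% P annihilates everything in its kernel, so distinct pre-measurement
% states can map to the same post-measurement state: no inverse map
% exists. Unitary evolution \( \rho \mapsto U \rho U^\dagger \) is
% undone by \( U^\dagger \), so the one-way character of measurement
% sits in the update rule itself, not in coarse-grained ignorance.
```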
thallukrish over 6 years ago
This looks very interesting. I have always felt that most of ML addresses one aspect of intelligence: prediction from existing data. While this may work for specific tasks like driving a car or analysing a scanned image, human intelligence has always been about the "Why?" of anything. Answering the "Why?" over a pile of data by connecting the dots with a causal model is also prediction in a sense, but a more generic one than prediction over a bunch of specific classes or outcomes. For example, a self-driving-car algorithm trained on pre-classified data to detect obstacles is vastly different from an algorithm that can answer why something is an obstacle, and I would guess the latter is much more effective.
YeGoblynQueenne over 6 years ago
>> JP: Correct. Formally, Bayesian networks are just efficient evidence-to-hypothesis inference machines. However, in retrospect, their success emanated from their ability to "secretly" represent causal knowledge. In other words, they were almost always constructed with their arrows pointing from causes to effect, thus achieving modularity. It is only due to our current understanding of causality that we can reflect back and speculate on why they were successful; we did not know it then.

Hang on a sec. If Bayesian networks are perfectly capable of representing causality relations, and in fact they've been doing just that all along (albeit "secretly"), then why the hell do we need a different formalism to represent causality?

To give an analogy: if we can represent context-free languages with regular automata, then what's the point of context-free languages? Instead, we classify languages that can be represented by both regular automata and pushdown automata as regular, and reserve the context-free designation for languages that *cannot* be represented by regular (or finite) automata.

In the same way, if causality relations can be represented by Bayesian networks, then higher-order representations are not really needed, or must be reserved for some object that Bayesian networks can't represent.

In any case, this is just a huge piece of ret-con. Bayesian networks always represented causality relations, only they did so "secretly"! That's up there with the original Klingons' flat heads being the result of a virus infection, or how Jean Grey didn't really die and it was the Phoenix Force that had taken her form.
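For concreteness, here is a minimal Python sketch (with made-up numbers) of the usual answer to that question: two Bayesian networks with opposite arrow directions can encode exactly the same joint distribution, so observational data alone cannot tell them apart, yet they give different answers under intervention. The do-operator is the extra thing the causal formalism adds.

```python
# Sketch: two Bayesian networks, opposite arrows, identical joint
# distribution, different answers under intervention. Numbers are
# hypothetical.

# Model A: X -> Y
P_x = {0: 0.5, 1: 0.5}
P_y_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}

# Joint distribution implied by model A: P(x, y) = P(x) * P(y | x)
joint = {(x, y): P_x[x] * P_y_given_x[x][y] for x in (0, 1) for y in (0, 1)}

# Model B: Y -> X, fitted to the *same* joint via Bayes' rule
P_y = {y: sum(joint[(x, y)] for x in (0, 1)) for y in (0, 1)}
P_x_given_y = {y: {x: joint[(x, y)] / P_y[y] for x in (0, 1)} for y in (0, 1)}

# Observationally indistinguishable: both factorizations reproduce
# exactly the same joint table.
joint_b = {(x, y): P_y[y] * P_x_given_y[y][x] for x in (0, 1) for y in (0, 1)}
assert all(abs(joint[k] - joint_b[k]) < 1e-12 for k in joint)

# Under the intervention do(X = 1), the models diverge.
# Model A (X causes Y): forcing X propagates along the arrow into Y.
p_modelA = P_y_given_x[1][1]   # P(Y=1 | do(X=1)) = 0.8
# Model B (Y causes X): forcing X severs the Y -> X arrow,
# so Y keeps its marginal distribution.
p_modelB = P_y[1]              # P(Y=1 | do(X=1)) = P(Y=1) = 0.45

print(p_modelA, p_modelB)      # 0.8 vs 0.45
```

The arrow direction is invisible to the joint distribution but decisive for do(.) queries, which seems to be what Pearl means by networks "secretly" carrying causal knowledge only when they happen to be built cause-to-effect.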
YeGoblynQueenne over 6 years ago
>> The astonishing success of big-data and machine learning reflects our under-estimating how much can be achieved by the low hanging fruits of model-free curve-fitting. But when we look at the limitations unveiled by the calculus of causation we understand that human-level AI requires two more layers: intervention and counterfactuals.

Yes, well, the problem with that is that the vast majority of researchers in AI know very well that human-level AI is still many, many years away. Whereas those "low-hanging fruit"? They're just hanging there, ripe for the picking, and large companies are very eager to throw a shitload of money at people who can pick them, *right now*.

And, let's be fair: anyone who knows how to do this "model-free curve-fitting" that the good professor so despises has a brilliant career of upwards of 30 years laid out for them, and those are 30 years in which they won't have to think about causality or Judea Pearl *even once*.
woodandsteel over 6 years ago
Very interesting. As someone with a background in philosophy, I wonder whether causality was excluded from the sciences because of mistaken metaphysical and epistemological assumptions made when they were originally developed.
mark_l_watson over 6 years ago
Good interview. I also recommend his very latest book: very approachable material.