
To build truly intelligent machines, teach them cause and effect

80 points by sonabinu about 2 years ago

13 comments

dwheeler about 2 years ago
This article *really* needs a "(2018)" marker.

This article predates GPT-3 and GPT-2; it even predates the essay "The Bitter Lesson" <http://www.incompleteideas.net/IncIdeas/BitterLesson.html>.

It might be true long-term, but it's certainly not written with the current advances in mind.
Analemma_ about 2 years ago
This article feels like it came from some alternate universe where the history of AI is exactly the opposite of where it is in ours, and specifically where "The Bitter Lesson" [0] is not true. In our world, AI *was* stuck in a rut for decades because people kept trying to do exactly what this article suggests: incorporate modeling and how people *think* consciousness works. And then it broke out of that rut because everyone went fuck it and just threw huge data at the problem and told the machines to just pick the likeliest next token based on their training data.

All in all this reads like someone who is deeply stuck in their philosophy department and hasn't seen anything that has happened in AI in the last fifteen years. The symbolic AI camp lost as badly as the Axis powers and this guy is like one of those Japanese holdouts who didn't get the memo.

[0]: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
pyrolistical about 2 years ago
Even us so-called intelligent beings only have correlation.

The closest thing we have to "causation" is the scientific method, and even that is only one counterexample away from disproving entire theories.

So why do we need AI to understand causation when even we don't have it?

It's correlation all the way down. We should strive for truth but know that we will never fully achieve it.
gibsonf1 about 2 years ago
Fully agree with this article. Our definition of intelligence: "Intelligence is conceptual awareness capable of real-time causal understanding and prediction about space-time." [1]

[1] https://graphmetrix.com/trinpod
LordDragonfang about 2 years ago
> Mathematics has not developed the asymmetric language required to capture our understanding that if X causes Y that does not mean that Y causes X.

X ⟹ Y

This seems like sophistry: it brings up the fact that algebra is symmetric while totally ignoring the existence of the above.
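To spell out the point, here is a worked instance of the asymmetry that material implication already captures (the rain example is added here for illustration and is not from the thread):

```latex
\[
  (X \Rightarrow Y) \;\not\vdash\; (Y \Rightarrow X)
\]
% e.g. "rain => wet street" holds, but "wet street => rain" does not
% (a sprinkler also wets the street), so the arrow is already directional.
```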
darosati about 2 years ago
I don't understand why very large neural networks can't model causality in principle.

I also don't understand the argument that even if NNs can model causality in principle, they are unlikely to do so in practice (things I've heard: spurious correlations are easier to learn, the learning space is too large to expect causality to be learned from data, etc.).

I also don't understand why people aren't convinced that LLMs can demonstrate causal understanding in settings where they have been used for things like control, e.g. decision transformers... what else is expected here?

Please enlighten me.
throwawaaarrgh about 2 years ago
We don't need intelligent machines, for the most part. We just need machines that are less shitty. Making an AI seems like the move of a lazy person who doesn't want to work harder to make a less shitty machine.
Simon_O_Rourke about 2 years ago
Try explaining to a cause-and-effect machine why lots of folks have been let go from tech companies, while the management who misjudged the market get kept on and still get their bonuses.
photochemsyn about 2 years ago
> "You live on an island called causality," the voice says. "A small place, where effect follows cause like a train on rails. Walking forward, step by step, in the footprints of a god on a beach."

- Hannu Rajaniemi, "The Causal Angel" (2014)
cubefox about 2 years ago
[2018]

His views about AI are largely disproved now. GPT-style systems turned out to be perfectly capable of reasoning about causation, as they can reason about any other relation, despite working entirely within the established machine learning paradigm.

For many years, Pearl was considered the top intellectual critic of machine learning. His point was this: machine learning is, at its core, just correlational, but true AI would also need to reason about causation. That ability would have to be provided by systems which work entirely differently, using some form of the theory of causal networks which he co-invented.

Now it turns out that causal reasoning is not a major difficulty for classical machine learning, and that causal graphs are likely as useless for AI as formal logic turned out to be.

Well, at least causal networks are useful for statistics, the kind of explicit inference human scientists do.
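As a concrete illustration of the distinction being argued about here, the sketch below simulates a toy causal network in plain Python (the network, variable names, and numbers are hypothetical, chosen only for illustration; no causal-inference library is used). It shows how the observational quantity P(Y=1 | X=1) and Pearl's interventional quantity P(Y=1 | do(X=1)) come apart when a confounder Z drives both X and Y:

```python
import random

random.seed(0)
N = 100_000

def simulate(do_x=None):
    """Simulate the toy causal network Z -> X, Z -> Y, X -> Y.

    If do_x is given, X is set by intervention (ignoring Z),
    which is what Pearl's do-operator denotes.
    """
    samples = []
    for _ in range(N):
        z = random.random() < 0.5                                    # confounder
        x = (random.random() < (0.8 if z else 0.2)) if do_x is None else do_x
        y = random.random() < (0.1 + 0.3 * x + 0.5 * z)              # X and Z both raise Y
        samples.append((z, x, y))
    return samples

obs = simulate()
# Observational: P(Y=1 | X=1) also picks up the influence of the confounder Z.
p_y_given_x1 = sum(y for _, x, y in obs if x) / sum(1 for _, x, _ in obs if x)

# Interventional: P(Y=1 | do(X=1)) breaks the Z -> X arrow by forcing X.
intv = simulate(do_x=True)
p_y_do_x1 = sum(y for _, _, y in intv) / N

print(f"P(Y=1 | X=1)     ~ {p_y_given_x1:.3f}")   # inflated by confounding
print(f"P(Y=1 | do(X=1)) ~ {p_y_do_x1:.3f}")      # the causal effect
```

With these numbers the observational estimate comes out near 0.80 while the interventional one is near 0.65; the gap is the confounding through Z, which correlational statistics alone does not separate from the causal effect of X.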
is_true about 2 years ago
Most politicians lack this too.
hollerith about 2 years ago
What could go wrong?
mrwnmonm about 2 years ago
God, I hate these titles. The same science news business site published this before: https://www.quantamagazine.org/videos/qa-melanie-mitchell-video

I have no problem if they say x thinks y. But putting it as if it were a fact, like "To Build Truly Intelligent Machines, Teach Them Cause and Effect" and "The Missing Link in Artificial Intelligence", just to get more hits is disgusting.