This article *really* needs a "(2018)" marker.

This article predates GPT-3 and GPT-2; it even predates the essay "The Bitter Lesson" <http://www.incompleteideas.net/IncIdeas/BitterLesson.html>.

It might be true long-term, but it's certainly not written with the current advances in mind.
This article feels like it came from some alternate universe where the history of AI is exactly the opposite of ours, and specifically where “The Bitter Lesson” [0] is not true. In our world, AI *was* stuck in a rut for decades because people kept trying to do exactly what this article suggests: build in explicit models of how people *think* thinking works. And then it broke out of that rut because everyone said “fuck it”, threw huge amounts of data at the problem, and told the machines to just pick the likeliest next token based on their training data.

All in all, this reads like someone who is deeply stuck in their philosophy department and hasn’t seen anything that has happened in AI in the last fifteen years. The symbolic AI camp lost as badly as the Axis powers, and this guy is like one of those Japanese holdouts who didn’t get the memo.

[0]: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
Even we so-called intelligent beings only have correlation.

The closest thing we have to “causation” is the scientific method, and even that is only one counterexample away from overturning an entire theory.

So why do we need AI to understand causation when even we don’t have it?

It’s correlation all the way down. We should strive for truth but know that we will never fully achieve it.
Fully agree with this article. Our definition of intelligence: "Intelligence is conceptual awareness capable of real-time causal understanding and prediction about space-time." [1]

[1] https://graphmetrix.com/trinpod
> Mathematics has not developed the asymmetric language required to capture our understanding that if X causes Y that does not mean that Y causes X.

X ⟹ Y

This seems like sophistry: it brings up the fact that algebraic equality is symmetric while totally ignoring the existence of the above.
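To make the asymmetry concrete, here's a throwaway Python check (nothing here is from the article; it just enumerates the truth table for material implication):

    from itertools import product

    # Material implication: X => Y is defined as (not X) or Y.
    implies = lambda a, b: (not a) or b

    for x, y in product([False, True], repeat=2):
        print(x, y, implies(x, y), implies(y, x))

    # The row X=False, Y=True gives X => Y True but Y => X False,
    # so "=>" is asymmetric, unlike "=".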
I don’t understand why very large neural networks can’t model causality in principle.

I also don’t understand the argument that, even if NNs can model causality in principle, they are unlikely to do so in practice (things I’ve heard: spurious correlations are easier to learn, the learning space is too large to expect causality to be learned from data, etc.).

I also don’t understand why people aren’t convinced that LLMs can demonstrate causal understanding in settings where they have been used for things like control, e.g., decision transformers… like, what else is expected here?

Please enlighten me.
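For what it's worth, here's the standard toy example behind the "spurious correlations are easier to learn" worry, as a quick Python simulation (the variable names and numbers are mine, purely illustrative): a hidden confounder Z drives both X and Y, so X predicts Y in observational data even though intervening on X does nothing.

    import random

    random.seed(0)

    def sample(do_x=None):
        # Structural model: Z -> X and Z -> Y; X has no effect on Y.
        z = random.random() < 0.5
        x = do_x if do_x is not None else (random.random() < (0.9 if z else 0.1))
        y = random.random() < (0.9 if z else 0.1)
        return x, y

    # Observational: P(Y=1 | X=1) ~ 0.82 -- X looks predictive of Y.
    obs = [sample() for _ in range(100_000)]
    ys_given_x1 = [y for x, y in obs if x]
    print(sum(ys_given_x1) / len(ys_given_x1))

    # Interventional: P(Y=1 | do(X=1)) ~ 0.50 -- forcing X changes nothing.
    ys_do_x1 = [sample(do_x=True)[1] for _ in range(100_000)]
    print(sum(ys_do_x1) / len(ys_do_x1))

A purely predictive learner fit to the observational data will happily use X; whether a given architecture can also represent the interventional quantity is the "in principle" question.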
We don't need intelligent machines, for the most part. We just need machines that are less shitty. Making an AI seems like the move of a lazy person who doesn't want to do the harder work of making a less shitty machine.
Try explaining to a cause-and-effect machine why lots of folks have been let go from tech companies, while the managers who misjudged the market are kept on and still get their bonuses.
> "You live on an island called causality," the voice says. "A small place, where effect follows cause like a train on rails. Walking forward, step by step, in the footprints of a god on a beach."<p>- Hannu Rajaniemi, "The Causal Angel" (2014)
[2018]

His views about AI are largely disproved now. GPT-style systems turned out to be perfectly capable of reasoning about causation, as they can reason about any other relation, despite working entirely within the established machine learning paradigm.

For many years, Pearl was considered the top intellectual critic of machine learning. His point was this: machine learning is, at its core, just correlational, but true AI would also need to reason about causation. That ability would have to be provided by systems which work entirely differently, using some form of the theory of causal networks which he co-invented.

Now it turns out that causal reasoning is not a major difficulty for classical machine learning, and that causal graphs are likely as useless for AI as formal logic turned out to be.

Well, at least causal networks are useful for statistics, the kind of explicit inference human scientists do.
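Concretely, the "explicit inference" in that last sentence is things like the back-door adjustment: given the graph Z -> X, Z -> Y, you can recover an interventional probability from purely observational ones. A minimal sketch in Python (all numbers made up for illustration):

    # Back-door adjustment:
    # P(Y=1 | do(X=1)) = sum over z of P(Y=1 | X=1, Z=z) * P(Z=z)
    p_z = {0: 0.5, 1: 0.5}            # P(Z=z)
    p_y_given_x1z = {0: 0.1, 1: 0.9}  # P(Y=1 | X=1, Z=z)

    p_do = sum(p_y_given_x1z[z] * p_z[z] for z in (0, 1))
    print(p_do)  # 0.5, even though naive P(Y=1 | X=1) can be far higher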
God, I hate these titles. The same science news site previously published this: https://www.quantamagazine.org/videos/qa-melanie-mitchell-video

I have no problem if they say "x thinks y". But phrasing it as if it were a fact, like "To Build Truly Intelligent Machines, Teach Them Cause and Effect" and "The Missing Link in Artificial Intelligence", just to get more hits, is disgusting.