None of Gary’s comments were original either. I don’t know what I’d call it, but I’ve seen similar behavior elsewhere: this weird “flag planting” to try to get credit without doing any actual work, while disregarding all prior work. Normally the “predictions” are so vague they could be applied to anything. It seems borderline like a mental illness of some sort, but I’m not a mental health professional.
Wow, Gary Marcus just Schmidhubered Yann LeCun.

The ironic thing, of course, is that Yann has not been at the forefront of AI for many, many years (and Gary, of course, never has). Facebook's research has failed to rival Google Brain, DeepMind, OpenAI, and groups at top universities.

So to the extent that Yann is copying Gary's opinions, it's because they both converge at a point far behind the leaders in the field. Yann should be much more concerned than Gary about that.
Not taking any sides one way or the other regarding whatever debate exists between Yann and Gary. But for what it's worth, I'd just like to point out that this overall notion of "neural symbolic" integration is fairly old by this point in time. It's gone a little bit in and out of vogue (sort of like neural networks in general, but not to the same degree) over the years. Outside of Gary, the other "big name" I'd cite who has spoken about this topic is Ron Sun. See:

* https://books.google.com/books?id=n7_DgtoQYlAC&dq=Connectionist+Symbolic+Integration.+Lawrence+Erlbaum+Associates,+1997.&source=gbs_navlinks_s

* https://link.springer.com/book/10.1007/10719871

* https://www.amazon.com/Integrating-Connectionism-Robust-Commonsense-Reasoning/dp/0471593249/

* https://sites.google.com/site/drronsun/reason
Gary Marcus' contribution to the field is to post the same rant about how it's not real intelligence every six months. Why does he keep getting upvoted?
Old enough to remember when Marcus was picking out-of-scope fights with parallel distributed processing models and scholars. On the one hand, he's right: symbol manipulation is different in kind, not degree. On the other, we've known that since the dawn of neural networks. To claim credit for theoretical gaps that others try to fill in practice seems petty and myopic.
I expected this to be a smear / petty argument article. In fact, it's a concise, highly specific, quote-by-quote critique.

I don't have enough context to take a side, but this is not just a rant.

Beyond their interpersonal disagreements, I do wonder if LeCun is seeing diminishing marginal returns to deep learning at FB...
This is fully pathetic. I expect poor quality from Marcus, but this really takes the cake.

> LeCun, 2022: Reinforcement learning will also never be enough for intelligence; Marcus, 2018: “it is misleading to credit deep reinforcement learning with inducing concept[s]”

> LeCun, 2022: “I think AI systems need to be able to reason”; Marcus, 2018: “Problems that have less to do with categorization and more to do with commonsense reasoning essentially lie outside the scope of what deep learning is appropriate for, and so far as I can tell, deep learning has little to offer such problems.”

> LeCun, 2022: Today's AI approaches will never lead to true intelligence (reported in the headline, not a verbatim quote); Marcus, 2018: “deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.”

These are LeCun's supposed great transgressions? Vague statements that happen to be vaguely similar to Marcus' vague statements?

Marcus also trots out random tweets to show how supported his position is, and one mentions a Marcus paper with 800 citations as being "engaged in the literature". But a paper like "Attention Is All You Need" currently has over 40,000 citations. THAT is a paper the community is engaged with. Not something with less than 1/50th the citations.

This is a joke...
Marcus' moaning gets old, especially when his criticism is so self-referential; he's hardly the only voice against AI hype, though no doubt he's one of the loudest.

However, he does have legitimate complaints about the echo chamber the big names seem to be operating in.
Is Marcus trying to create the impression that somehow he is a more impactful AI contributor than LeCun? It's going to be a tough sell because I know LeCun's name from his technical work whereas I know Marcus' name from him constantly moaning about LeCun on social media. In what _tangible_ ways did Marcus contribute?
These guys know better than to rev the tachometer up in the lay press talking about AGI and "achieve human level intelligence" and stuff. This fluff, unfortunately, sells, and so when you've got an ego big enough to be talking this way in the first place, I suppose you feel like you have to do it?

Machine learning researchers optimize "performance" on "tasks", and while those terms are *still* tricky to quantify or even define in many cases, they're a *damned sight* closer to rigorous, which is why people like Hassabis who get shit done actually talk about them in the lay press, when they deal with the press at all.

We can't agree when an embryo becomes a fetus becomes a human with anything approaching consensus. We can't agree which animals "feel pain" or are "self-aware". We can sort of agree how many sign language tokens silverbacks can remember, and that dolphins exhibit social behavior.

Let's keep it to "beats professionals at Go", or "scores such-and-such on a Q&A benchmark", or "draws pictures that people care to publish", something somehow tethered to reality.

I've said it before and I'll say it again: lots of luck with *either* of the words "artificial" *or* "intelligent". Give me a break on both in the same clause.
So the idea is that statistical language modelling is not enough: you need a model based on logic too for "real" artificial intelligence. I wonder what the evidence for this claim is? Because the inferences and reasoning GPT-3 is already capable of are incredible and beat most expert systems that I know of. And GPT-4 is around the corner; Stable Diffusion was published only a few months ago. I don't see why more compute, more training data, and better network architectures couldn't lead to leaps and bounds of model improvements, at least for a few more years.
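(Aside, since several comments here argue about what "a model based on logic too" would even mean in practice: one common neuro-symbolic pattern is to let the statistical model propose candidate answers and have a symbolic component verify them. Below is a minimal sketch in Python. The language_model function is a hypothetical stub standing in for a real model, and the whole thing illustrates the general pattern, not LeCun's or Marcus's actual proposal.)

    import ast
    import operator

    # Hypothetical stub for a statistical language model: returns candidate
    # answer strings for a question. A real system would query an actual model.
    def language_model(question):
        return ["17 + 25 = 43", "17 + 25 = 42"]

    # Symbolic side: evaluate simple integer arithmetic exactly by walking
    # the parsed expression tree, rather than trusting the model's text.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

    def evaluate(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, int):
            return node.value
        raise ValueError("unsupported expression")

    def verified_answer(question):
        # Keep only the candidates the symbolic checker can confirm.
        for candidate in language_model(question):
            lhs, rhs = candidate.split("=")
            if evaluate(ast.parse(lhs, mode="eval").body) == int(rhs):
                return candidate
        return None

    print(verified_answer("What is 17 + 25?"))  # prints: 17 + 25 = 42

The claim under debate is whether the neural part alone, scaled up enough, eventually makes the verifying part unnecessary; the sketch just shows what "adding" a symbolic component looks like in the smallest possible case.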
Yann LeCun’s Facebook post from a few days ago now makes more sense to me:

https://www.facebook.com/722677142/posts/pfbid035FWSEPuz8YqeWKLb55K22bEozYnFLwx7FQFuJdA6uCQEthnh8b84ZxWbcMRQAyfGl
> LeCun, 2022: Today's AI approaches will never lead to true intelligence (reported in the headline, not a verbatim quote); Marcus, 2018: “deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.”

I swear the same thing was being said 10+ years ago.
Isn't this just a case of over-fitting? Recent LeCun has perhaps been over-fitted to Marcus's past writing. Maybe some augmentation (with new ideas) will resolve the issue?
Time will tell if we need symbolic representations or if continuous ones are sufficient. In the meantime, it would be more productive to present alternative methods, or at least benchmarks where deep learning models are outperformed, instead of arguing about who said what first and criticising without offering quantitative evidence or alternatives.
This is frustrating. Consider this:

> LeCun, 2022: Today's AI approaches will never lead to true intelligence (reported in the headline, not a verbatim quote); Marcus, 2018: “deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.”

How can that be something that LeCun did not give Marcus credit for? It is borderline self-evident, and people have been saying similar things since neural networks were invented. This would only be news if LeCun had said that "neural nets are all you need" (literally, not as a reference to the title of the transformers paper).

And furthermore, if LeCun *had* said that, there are literally dozens of people who have also said that you need to combine the approaches.

He cites a single line: "LeCun spent part of his career bashing symbols; his collaborator Geoff Hinton even more so. Their jointly written 2015 review of deep learning ends by saying that “new paradigms are needed to replace rule-based manipulation of symbolic expressions.”"

Well, sure, because symbol processing alone is not the answer either. We need to replace it with some hybrid. How is this a contradiction?

To summarize: people have been looking for a productive way to combine symbolic and statistical systems -- there are in fact many such systems proposed, with varying degrees of success. LeCun agrees with this approach (no one has anything to lose by endorsing *adding* things to any model), but Marcus insists he came up with it and that he should be cited.

Ugh.
Gary Marcus is the definition of petty. He brands himself as an AI skeptic, but in reality he's just a clout chaser, more obsessed with being right and with his own image than anything else.

In his mind he is always right. Every single tweet he has made, every single sentence he has said, is never wrong. He is 100% right; everyone else is 100% wrong.
As far as I know, our brains have been mostly unchanged for thousands of years. So any novel idea anyone has is the result of standing on the shoulders of giants, idea-wise and technology-wise. It seems rather silly, then, to give any individual the lion's share of the credit for any new idea of any kind, anywhere.
I think I prefer the Emily Bender approach of asserting that no one should be allowed to train deep learning models at all. If you're going to claim some sort of authority over a technology you don't actually develop, you might as well go hard.