
2 comments

aithrowawaycomm · 7 months ago
AI researchers need to read more cognitive science. It is genuinely embarrassing how often you see "Thinking Fast and Slow" + some 50-year-old paper as the only citations, because this statement:

    In human cognition theory, human thinking is governed by two systems: the fast and intuitive System 1 and the slower but more deliberative System 2.

is intuitive, psychologically seductive, and *blatantly wrong.* [1] There is no scientific distinction between System 1 and System 2; the very idea is internally incoherent and contradicts the evidence. Yet tons of ignorant people believe it. And apparently AI researchers sincerely believe "ANN inference = System 1 thinking." This is ridiculous: ANN inference = Pavlovian response, as found in nematodes and jellyfish. But System 1 thinking is related to the common sense found in all vertebrates, and absent from all existing AI. We don't have a clue how to make a computer capable of System 1 thinking.

This isn't just pedantry: the initial "System 1 = inference" error makes "System 2 = chain-of-thought" especially flawed. CoT in transformer LLMs helps solve O(n) problems but struggles with O(n^2). The observation that an O(n^2) problem can be broken down into n separate O(n) problems is ultimately due to *System 1* reasoning: it is obviously true. But it is only obviously true to smart things like humans and pigeons. Transformers do not seem smart enough to grasp it: System 2 thinking must be "glued together" by tautologies or axioms, and we can only recognize tautologies or discover axioms because of System 1. If the problem is more complex than O(n), these tautologies and axioms must be provided to the LLM, either with a careful prompt or exhaustive data.

Kahneman's book has been largely repudiated on the science. That doesn't mean it isn't a useful way to understand the kinds of errors humans make in decision-making. But it does make the book useless for AI researchers: I believe AGI is well over 200 years away, because going all the way back to Alan Turing, AI has simply refused to engage with the challenges of cognitive science, preferring fairy tales which confirm intuitions and trivialize human minds.

[1] https://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(18)30024-X and https://www.psychologytoday.com/intl/blog/a-hovercraft-full-of-eels/202103/the-false-dilemma-system-1-vs-system-2
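(Editor's note: the decomposition the comment refers to can be made concrete with a small, hypothetical example, not taken from the thread. Counting inversions in a list is naively O(n^2), but it splits into n independent O(n) subproblems, one per index; the "obvious" part is seeing that the split is valid.)

```python
# Hypothetical illustration: an O(n^2) problem expressed as n separate O(n) subproblems.

def inversions_for_index(a, i):
    """O(n) subproblem: count elements after position i that are smaller than a[i]."""
    return sum(1 for x in a[i + 1:] if x < a[i])

def count_inversions(a):
    """O(n^2) problem: total inversions, solved as n independent O(n) passes."""
    return sum(inversions_for_index(a, i) for i in range(len(a)))

print(count_inversions([3, 1, 4, 1, 5, 9, 2, 6]))  # -> 8
```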
hislaziness · 7 months ago
As I understand it, the LLM uses the techniques of Searchformer (https://arxiv.org/abs/2402.14083) to do "slow thinking": performing an A* search using a transformer.
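(Editor's note: for context, the linked Searchformer paper trains a transformer on execution traces of a symbolic A* planner. The sketch below is a minimal, generic A* on a grid maze, not the paper's implementation; names and the maze are illustrative only.)

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 2D grid (0 = free, 1 = wall) with a Manhattan-distance heuristic.
    A Searchformer-style setup would log the sequence of expansions as a training trace."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path so far)
    seen = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(open_heap, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(maze, (0, 0), (2, 0)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```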