> four key cognitive behaviors -- verification, backtracking, subgoal setting, and backward chaining -- that both expert human problem solvers and successful language models employ.<p>As we make AI better, perhaps we'll inadvertently find ways to make HI (human intelligence) better too.<p>I had a personal experience with this when I was studying for an exam recently. As I read over practice questions, I spoke aloud, replicating the reasoning methods/personality of Deepseek R1. By spending a lot of time reading long verbose R1 outputs, I've essentially fine-tuned my brain for reasoning tasks. I believe this method contributed to my excellent score on that exam.
"models primed with incorrect solutions containing proper reasoning patterns achieve comparable performance to those trained on correct solutions"<p>One of the parts most worth a replication study.
I sometimes see these reddit threads of people talking about the experience of having an internal monologue. I have no such monologue, at least not one that is accessible to the part of my mind that calls itself 'me', but I have often wondered if that monologue is something like a 'chain of thought'. I feel like without access to that 'idea feed', my planning and executive functioning may be less effective than some other people's. I do find myself noticeably more effective at those sorts of tasks when I keep a little 'chain of thought' notepad.<p>I also suspect I spend less time ruminating, second-guessing myself, and engaging in other anxious behaviours that I imagine would come with having someone talking in your ear all day, but that's probably off topic.
True, but a problem is that self-improving AI leads to a somewhat troubling mode of thinking. AIs switch to an internal babbling-type language that makes no sense to us but clearly still conveys meaning to the AIs, think in that language (if it's a language; not sure what else it could be), and then produce correct results.<p>Worse, when you use multiple agents to get LLMs talking to one another, all of them switch to this internal language and make progress despite no human understanding what the hell is happening. This seems very bad.<p>Illustration:<p>> How many r in strawberry?<p>I'm asked how many r in strawberry. I can just spell the word and a;dklsjaw;
a;ewjraqwpeouypaads;lq
qepwiouryaqeopw
qewrpoiuyoiauysdqw145124rfa.nkjlwh
;45a8345a894ya4a
q4p58q45jaq;lkjas;dlfkja;j<p><answer>There are 3 (three) r's in strawberry</answer>
> four key cognitive behaviors -- verification, backtracking, subgoal setting, and backward chaining -- that both expert human problem solvers and successful language models employ.<p>On what basis do they claim that expert human problem solvers use these methods?