The actual prompt is just a chain-of-thought prompt; the headline is clickbait.

The article also gets CoT wrong:

> What "reasoning" they do (and "reasoning" is a contentious term among some, though it is readily used as a term of art in AI) is borrowed from a massive data set of language phrases scraped from books and the web. That includes things like Q&A forums, which include many examples of "let's take a deep breath" or "think step by step" before showing more carefully reasoned solutions. Those phrases may help the LLM tap into better answers or produce better examples of reasoning or solving problems from the data set it absorbed into its neural network weights.

Chain of thought has nothing to do with "tapping into better answers". It simply asks the model to break the problem into smaller steps and write them out, which gives it more output tokens (and therefore more computation) to spend before committing to an answer. There's a minimal sketch below.

CoT is not new or novel. Hell, it's even listed in OpenAI's own prompt engineering guide as a strategy for improving prompts.
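For the curious, "zero-shot" CoT is literally one extra sentence appended to the prompt (the "Let's think step by step" trick from Kojima et al., 2022). A minimal sketch using the openai Python client; the model name and question here are placeholder assumptions on my part, not anything from the article:

    # Zero-shot chain-of-thought: append the magic phrase so the model
    # writes out intermediate steps (more output tokens = more compute
    # spent before it commits to a final answer).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = ("A bat and a ball cost $1.10 in total. The bat costs "
                "$1.00 more than the ball. How much does the ball cost?")

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": question + "\n\nLet's think step by step."}],
    )
    print(resp.choices[0].message.content)

Drop the appended sentence and many models blurt out the intuitive wrong answer ($0.10); with it, they tend to walk through the algebra first. That's the whole technique.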