TE
Tech Echo (科技回声)
Home · 24-Hour Hot List · Newest · Best · Ask · Show · Jobs
GitHub · Twitter

Tech Echo is a tech news platform built with Next.js, offering global tech news and discussion.


Resources

Hacker News API · Original Hacker News · Next.js

© 2025 Tech Echo (科技回声). All rights reserved.

Telling AI model to “take a deep breath” causes math scores to soar in study

8 points, by peterbonney, over 1 year ago

1 comment

patleeman, over 1 year ago
The actual prompt is just a chain-of-thought prompt, and the headline is just clickbait.

Also, in the article the author gets CoT wrong:

> What "reasoning" they do (and "reasoning" is a contentious term among some, though it is readily used as a term of art in AI) is borrowed from a massive data set of language phrases scraped from books and the web. That includes things like Q&A forums, which include many examples of "let's take a deep breath" or "think step by step" before showing more carefully reasoned solutions. Those phrases may help the LLM tap into better answers or produce better examples of reasoning or solving problems from the data set it absorbed into its neural network weights.

Chain of thought has nothing to do with "tapping into better answers". It simply asks the model to break its output up into smaller tasks, giving it more time and space to reason.

CoT is not new or novel. Hell, it's even listed in OpenAI's prompting guides as a strategy for improving prompts.
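The commenter's point — that chain-of-thought is just restructuring the prompt to ask for intermediate steps before the final answer — can be sketched as a plain prompt-wrapping function. This is a minimal illustration, not the study's actual prompt; the function name and instruction wording are hypothetical.

```python
# Minimal sketch of chain-of-thought prompting: no model magic, just an
# added instruction asking for intermediate steps before the answer.

def make_cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought style instruction
    (wording is illustrative, not taken from the study)."""
    return (
        f"{question}\n\n"
        "Let's think step by step. Break the problem into smaller "
        "sub-tasks, solve each one, and only then state the final answer."
    )

plain = "If a train travels 60 km in 40 minutes, what is its speed in km/h?"
cot = make_cot_prompt(plain)
print(cot)
```

The wrapped string would then be sent to the model in place of the bare question; the extra instruction is the entire technique.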