科技回声 (Tech Echo)

Ask HN: Why are language-models so bad at math?

1 point, by drita, over 2 years ago
I've interacted with a number of "sophisticated" language models like ChatGPT, GPT-3, Jasper, and others. They all fail at the most simple math questions. Sometimes they are not even able to count a list accurately, and they contradict themselves when asked the same question repeatedly.

I've looked at some resources to answer this question, but nothing really explains why they sometimes get the answer right or somewhat right, and sometimes get it incredibly wrong.

I'm curious to hear from people with more domain knowledge.

1 comment

soueuls, over 2 years ago
Because it's primarily being fed a corpus of text.

There is a finite number of articles on the internet.

There is an even smaller number of articles talking about Joan of Arc or copywriting.

But numbers are infinite.

Not many people write articles about why 2 + 17 is 19.

Not many people write about why 33 + 4 is 37 either.
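The comment's argument can be caricatured with a toy sketch (purely illustrative, not how transformers actually work): a "model" that only reproduces text it has literally seen can answer exactly the arithmetic present in its corpus and nothing else, whereas the space of possible sums is unbounded.

```python
# Toy illustration: a corpus-driven "model" that memorizes question/answer
# pairs from its training text. No arithmetic is ever computed.
corpus = [
    "2 + 17 = 19",
    "33 + 4 = 37",
    "1 + 1 = 2",
]

# "Training": record every question -> answer pair seen in the corpus.
memory = {}
for line in corpus:
    question, answer = line.split(" = ")
    memory[question] = answer

def predict(question):
    # Pure pattern lookup -- the model succeeds only on sums that
    # happened to appear verbatim in its training text.
    return memory.get(question, "<no idea>")

print(predict("2 + 17"))   # seen in the corpus, so it "knows" 19
print(predict("2 + 18"))   # one digit off and never seen: no answer
```

Real language models generalize far better than a lookup table, but the underlying issue the comment points at is the same: they learn statistical patterns over text, and most specific arithmetic facts are rare or absent in that text.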