
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Ask HN: Why are language-models so bad at math?

1 point by drita over 2 years ago
I've interacted with a number of "sophisticated" language models like ChatGPT, GPT-3, Jasper and others. They all fail at the most simple math questions. Sometimes they are not even able to count a list accurately, and contradict themselves when asked the same question repeatedly.

I've looked at some resources to answer this question, but nothing really explains why they sometimes get the answer right or somewhat right, and sometimes incredibly wrong.

I'm curious to hear from people with more domain knowledge.

1 comment

soueuls over 2 years ago
Because it’s primarily being fed a corpus of texts.

There is a finite number of articles on the internet.

There is an even smaller number of articles talking about Joan of Arc or copywriting.

But numbers are infinite.

Not many people write articles about why 2 + 17 is 19.

Not many people write about why 33 + 4 is 37 either.
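The memorization point above can be sketched in a few lines of Python. This is a deliberately crude toy, not how LLMs actually work internally, but it shows why a system that only recalls arithmetic facts it has literally seen in its training text can never cover the infinitely many possible sums:

```python
# Toy illustration of pure memorization (hypothetical corpus, not real
# training data): the "model" can only answer sums it has seen verbatim.
corpus = [
    "2 + 17 is 19",
    "33 + 4 is 37",
]

# Build a lookup table from question to answer, mimicking memorization.
memory = {}
for sentence in corpus:
    question, answer = sentence.rsplit(" is ", 1)
    memory[question] = answer

def memorized_answer(question: str) -> str:
    # Returns the remembered answer, or a marker for unseen questions.
    return memory.get(question, "<never seen it>")

print(memorized_answer("2 + 17"))   # seen in the corpus -> "19"
print(memorized_answer("512 + 7"))  # unseen -> "<never seen it>"
```

Since there are infinitely many possible arithmetic questions and only finitely many written down anywhere, most queries fall into the "unseen" bucket; a real model then interpolates from statistically similar text instead of computing, which is why its answers are sometimes close and sometimes wildly wrong.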