I've interacted with a number of "sophisticated" language models like ChatGPT, GPT-3, Jasper, and others. They all fail at the simplest math questions. Sometimes they can't even count the items in a list accurately, and they contradict themselves when asked the same question repeatedly.

I've looked at some resources to answer this question, but nothing really explains why they sometimes get the answer right, or somewhat right, and sometimes incredibly wrong.

I'm curious to hear from people with more domain knowledge.
Because it's primarily being fed a corpus of text.

There is a finite number of articles on the internet.

There is an even smaller number of articles about Joan of Arc or copywriting.

But numbers are infinite.

Not many people write articles about why 2 + 17 is 19.

Not many people write about why 33 + 4 is 37 either.
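A back-of-the-envelope sketch of that point: even restricting to simple two-operand addition, the number of distinct "facts" explodes far past anything a text corpus could cover. The corpus size below is a loudly hypothetical order-of-magnitude figure, just for comparison.

```python
def addition_facts(max_digits: int) -> int:
    """Count distinct a + b pairs where both operands have up to max_digits digits."""
    n = 10 ** max_digits  # operands range over [0, n)
    return n * n

# Hypothetical ballpark: pretend a web-scale corpus holds ~10^12 sentences.
corpus_sentences = 10 ** 12

for digits in range(1, 7):
    facts = addition_facts(digits)
    print(f"{digits}-digit addition facts: {facts:,} "
          f"(~{facts / corpus_sentences:g}x the whole corpus)")
```

By six-digit operands the fact space alone already matches the assumed corpus size, and almost none of those sums are ever written down, so a model trained purely on text has to interpolate rather than compute.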