The best way to look at problems like this is as an exponential process. The number of values you can represent with n digits grows exponentially; each additional digit multiplies your precision by a factor of 10. With 15 digits, imagine multiplying by 10 over and over, 15 times: it's a very big number.<p>The word "quadrillion" is rarely used in English, because you very rarely need numbers that large, and when you do, being off by a few digits usually doesn't matter. Calculators commonly display only 8-10 digits, for example.<p>This applies to programming, since computers have a limited number of bits, which is why programmers often complain about floating point. One of the interesting things about neural networks is that they don't actually need that many bits of precision, since they are by nature very "fuzzy". We can build bigger/cheaper computers by sacrificing a lot of bits.<p>But one of the problems is that when you add a bunch of small numbers together, the result gets rounded to the nearest representable value every time, and the inaccuracy builds up: an increment smaller than the rounding step can be rounded away entirely, so it never contributes at all. To really take advantage of lower precision, we need hardware that can do <i>stochastic rounding</i>, where it sometimes rounds up and sometimes rounds down, with probabilities chosen so that the expected result is correct.
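<p>A toy sketch of the effect (my own illustration, not from any particular hardware): pretend the accumulator can only hold whole numbers, so the rounding step is 1.0, and repeatedly add 0.3. Round-to-nearest loses every increment; stochastic rounding recovers the right total on average.

```python
import random

def round_nearest(x, step=1.0):
    # Deterministic round-to-nearest on a grid with spacing `step`.
    return round(x / step) * step

def round_stochastic(x, step=1.0):
    # Round up with probability equal to the fractional distance past
    # the lower grid point, down otherwise; expected value equals x.
    lo = (x // step) * step
    frac = (x - lo) / step
    return lo + step if random.random() < frac else lo

random.seed(0)
det = sto = 0.0
for _ in range(1000):
    det = round_nearest(det + 0.3)      # 0.3 is rounded away every time
    sto = round_stochastic(sto + 0.3)   # rounds up ~30% of the time

print(det)   # stuck at 0.0 forever
print(sto)   # close to the true sum of 300
```

The deterministic accumulator never moves, while the stochastic one lands near 300 because the rounding errors are unbiased and average out.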