In my view, this is the endgame, really. Take any numerical technique: at the level of the computer we always work with discrete bits. So you can reformulate any numerical problem on floats (such as finding a probability distribution) in terms of operations on individual bits, i.e. as a purely symbolic calculation.

However, doing so can very quickly lead to intractable satisfiability problems. So until we manage to tame NP-hard problems somehow (either by generating only easy instances, or by proving P=NP), we will always have to add some linearity assumptions (i.e. use numerical quantities) somewhere, and it will always be a bit of a mystery whether that actually helped to solve the problem or not.

In other words, we use statistics to overcome (inherent?) intractability, but in the process we add bias (as a trade-off). This is not necessarily bad, since it can help to actually solve a real problem. However, for any new problem, we will have to understand the trade-offs again.
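
To make the first paragraph concrete, here is a toy sketch (my own illustration, in Python): 4-bit addition expressed purely as boolean operations on individual bits, i.e. a ripple-carry adder. Apply the same construction to a full float computation and hand the resulting formula to a SAT solver, and you get exactly the kind of satisfiability problem that blows up.

    # Toy sketch: a numeric operation rewritten as pure bit-level boolean logic.
    # 4-bit unsigned addition via a ripple-carry adder (XOR/AND/OR on single bits).

    def bit_add(a_bits, b_bits):
        """Add two little-endian bit lists using only boolean operations."""
        out, carry = [], 0
        for a, b in zip(a_bits, b_bits):
            out.append(a ^ b ^ carry)            # sum bit of a full adder
            carry = (a & b) | (carry & (a ^ b))  # carry bit of a full adder
        return out

    def to_bits(n, width=4):
        return [(n >> i) & 1 for i in range(width)]

    def from_bits(bits):
        return sum(b << i for i, b in enumerate(bits))

    print(from_bits(bit_add(to_bits(5), to_bits(6))))  # prints 11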