The precision argument is so amazingly bogus that I won't refute it except with a link[1], so here are refutations of some other parts:<p>> My kids sometimes ask me how high I can count. I’ve noticed that they stop asking this question once they reach a certain age, usually around six or seven. This is because the question does not make sense once you understand what a number is. If there’s a single highest number you can count to, you don’t really grok numbers. The difference between computers and humans doing math is a bit like the difference between the younger kids who think that “how high you can count” is a real thing and the older kids who have successfully understood how numbers work.<p>There actually is an answer to this: most people's short-term memory holds fewer than a dozen digits, so the author likely can't count higher than about 1e12, since they won't be able to go from e.g. 374841483741 to 374841483742 without getting one of the digits in the middle wrong.<p>> The representation of the numbers that occurs only in the mind of the human is conflated with the execution of a particular program that takes place in the computer<p>This merely argues that computer math can be different from human math, which I'm not willing to concede either, since it would imply, e.g., that the mathematicians behind the IEEE 754 standard were unaware of the implications of floating-point arithmetic.
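If anything, the opposite is true: the famous floating-point “quirks” are specified, deterministic consequences of the representation, thoroughly understood by the standard's authors. A quick Python illustration (any IEEE 754 binary64 implementation gives these exact values):<p><pre><code>from decimal import Decimal
from fractions import Fraction

# 0.1 has no finite binary expansion, so an IEEE 754 double stores the
# nearest representable value instead -- a documented, deliberate choice.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# The exact value actually stored for the literal 0.1:
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
print(Fraction(0.1))  # 3602879701896397/36028797018963968
</code></pre><p>None of this is a surprise to anyone who has read the standard; it's the predictable result of rounding to 53 significand bits.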
> To us the transition between the theoretical and the actual happens almost instantly and unnoticeably.<p>Indeed, it happens almost instantly and (being generous to the author) unnoticed throughout this paper: concepts are conflated to the point where one could “demonstrate” that almost anything is beyond the reach of computers, even things that are clearly well within it!<p>1: <a href="https://github.com/stylewarning/computable-reals">https://github.com/stylewarning/computable-reals</a>
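On that link: the library is Common Lisp, but the idea behind exact real arithmetic is small enough to sketch. Here's a minimal Python rendition of the standard construction (the representation and helper names are mine, not the library's API): a real x is a function that, given n, returns an integer within 1 of x·2^n, so you can ask for as many bits as you like and there is no fixed precision ceiling.<p><pre><code>import math
from decimal import Decimal, getcontext
from fractions import Fraction

# A computable real x is a function approx(n) returning an integer a
# with |a - x * 2**n| <= 1, i.e. x correct to n bits after the point.

def const(q):
    """Embed an exact rational as a computable real."""
    q = Fraction(q)
    return lambda n: round(q * 2**n)

def add(x, y):
    """Sum of two computable reals: request two extra bits from each
    operand so the combined error (<= 2) drops below 1 after the
    final division by 4."""
    return lambda n: round(Fraction(x(n + 2) + y(n + 2), 4))

def sqrt2():
    """sqrt(2) via the integer square root of 2 * 4**n."""
    return lambda n: math.isqrt(2 * 4**n)

# 30 decimal digits of sqrt(2) + 1/3, from a 100-bit approximation:
x = add(sqrt2(), const(Fraction(1, 3)))
n = 100
getcontext().prec = 30
print(Decimal(x(n)) / Decimal(2**n))  # 1.7475468957064283821350220575...
</code></pre><p>Since the error of x(100) is at most 2^-100 (about 8e-31), every printed digit is correct; precision is whatever you ask for, which is exactly why the precision argument falls apart.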