>> But this program is representable by a neural net; after all, neural nets are turing complete. [1]

This is indeed evidence of an interesting phenomenon. It seems that many of the hare-brained things people say lately are conclusions they have drawn starting from the premise that neural nets are somehow magickal and mysterious, so they can do anything and everything anyone could imagine, and we don't even need to come up with any other explanation for those wonders than "it's a neural net!".

So, for example, the author can claim that "there's some sort of fuzzy arithmetic engine at the heart of GPT-3" without having to explain what, exactly, a "fuzzy arithmetic engine" is (it's just "some sort" of thing, who cares?) and why we need such a device to explain the behaviour of a language model.

Then again, what's the point? People write stuff on the internets. Now we have language models trained on that nonsense. Things can only get worse.

_______________

[1] The link in the article points to a paper on the computational capabilities of Recurrent Neural Nets (RNNs), not "neural nets" in general. The Transformer architecture, used to train GPT-3's model, is not an RNN architecture. In any case, the linked paper, and papers like it, only show that one can simulate any Turing machine with a specially constructed net, i.e. one whose weights are set by hand rather than learned (a toy sketch of what "specially constructed" means follows below). To *learn* a neural net that simulates any Turing machine (i.e. without hand-crafting), one would have to train it on Turing machines, and probably on *all* Turing machines. GPT-3's model, besides not being an RNN, was trained on text, not Turing machines, so a few layers of strong assumptions are needed before one can claim that it somehow, magickally, turned into a model of a Turing machine.

Anyway, the Turing-complete networks discussed in the linked paper, and similar work, inherit the undecidability of Universal Turing Machines, so it is impossible in general to predict the value of any activation at any point in time. This means that, if a neural net ever really went Turing complete, we wouldn't be able to tell whether its training has converged, or whether it ever will (a sketch of the reduction follows below). So that's an interesting paper, one the author clearly didn't read. I guess there are too many scary maths for a "layman". Claiming that GPT-3 has "some sort of fuzzy arithmetic engine" doesn't need any maths.
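To make the hand-crafting point concrete, here is a minimal sketch in Python of a "specially constructed" recurrent net. Every weight and threshold is set by hand; nothing is trained. It only simulates a two-state finite automaton (the parity of 1s in its input), which is vastly simpler than the tape machinery in the Siegelmann-Sontag-style constructions (the function names and the choice of automaton are mine, purely for illustration), but the flavour is the same: the net computes because someone wired it to, not because it learned to.

    def step(z):
        # Heaviside step activation: the "neuron" fires iff its
        # weighted input exceeds zero.
        return 1 if z > 0 else 0

    def parity_cell(s, x):
        # One recurrent cell with hand-picked weights and biases that
        # computes s' = s XOR x out of threshold units:
        a = step(s + x - 0.5)     # OR(s, x)
        b = step(s + x - 1.5)     # AND(s, x)
        return step(a - b - 0.5)  # OR minus AND = XOR

    def run(bits):
        # Unroll the recurrence over the input string; the final state
        # is 1 iff the number of 1s is odd.
        s = 0
        for x in bits:
            s = parity_cell(s, x)
        return s

    assert run([1, 0, 1, 1]) == 1  # three 1s: odd parity
    assert run([1, 1]) == 0        # two 1s: even parity

Nothing here generalises: the same cell is useless for any other language, and extending the trick to a full Universal Turing Machine is exactly the hand-crafted construction work the linked paper does.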
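And to spell out why Turing completeness bites back: the reduction behind the convergence remark is essentially one line. Assume, as a hedged reading of the Siegelmann-Sontag-style constructions (this framing is mine, not the paper's exact statement), a net N_M whose activations reach a fixed point iff the simulated machine M halts on input w. In LaTeX shorthand:

    % Hypothetical decider for activation convergence:
    \mathrm{CONV}(N, x) =
      \begin{cases}
        1 & \text{if the activations of } N \text{ on input } x \text{ reach a fixed point} \\
        0 & \text{otherwise}
      \end{cases}

    % Composing it with the construction M \mapsto N_M gives
    \mathrm{HALT}(M, w) = \mathrm{CONV}(N_M, \mathrm{enc}(w))

    % so a total, computable CONV would decide the halting problem,
    % which is impossible. Hence no general test for "have this
    % Turing-complete net's dynamics converged?" can exist.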