I think we commenters are all on the same page here :)

The author almost makes it seem like models are reality and that people believe they are. They're not, and I don't think anyone has ever thought they were.

Further, and as other comments already mentioned, the brain is thought of and treated as a Turing machine, not a digital computer. It's done this way because the brain can be mapped onto the definition of a Turing machine.

And I have to defend von Neumann. In The Computer and the Brain, he explored Turing equivalences between the brain and the computer concepts of his time used to implement digital Turing machines; he didn't actually think the brain mapped one-to-one onto a digital computer. He knew the difference between models and reality.

Even the historical models the author mentions (hydraulics, automata, etc.) all contain some Turing equivalences if implemented correctly; people were simply using the language and examples of their day to express the same idea.

The author also mangles the ideas of modeling, abstraction, and equivalence throughout the article. As for his 'uniqueness problem': information loss is modeled digitally for a reason. Just because humans are lossy doesn't mean we can't model them that way. Think of a compressed image file (see the sketch at the end of this comment).

I don't think there's a single researcher worth their salt who treats the 'IP metaphor' as gospel. Assuming so is just grossly unscientific.

We're all free to choose any model or collection of models we wish to approximate reality, but some work better than others, and the brain is a complicated thing to model.

The author is dramatizing a triviality.
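
To make the lossy-modeling point concrete, here's a minimal Python sketch (my own illustration, not from the article; the names and numbers are made up): quantization throws information away, just as image compression does, yet the digital model still usefully approximates the original.

    # Lossy digital model: quantize a signal to a handful of levels.
    # Information is lost (as in JPEG compression), but the
    # reconstruction is still a useful approximation.
    signal = [0.12, 0.48, 0.53, 0.91, 0.27]

    levels = 4  # coarser model = lossier
    quantized = [round(x * (levels - 1)) for x in signal]  # encode (lossy)
    restored  = [q / (levels - 1) for q in quantized]      # decode

    print(quantized)  # [0, 1, 2, 3, 1]
    print(restored)   # [0.0, 0.33..., 0.66..., 1.0, 0.33...]

Nothing about being lossy stops this from being a perfectly good digital model; you just pick the fidelity your question requires.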