It's worse than that. Neural network models are a pile of biases that (sometimes) seem to understand because (1) biases are on balance partially true, (2) our own understanding is rife with biases, and (3) we tend to see ourselves mirrored in the environment.

https://www.themarysue.com/things-that-look-like-faces-pareidolia/

Just as the neural net cult has abandoned the other fundamental principles of computer science, basic principles of cybernetics such as Ashby's Law have been forgotten:

https://www.edge.org/response-detail/27150

Real understanding involves multiple dimensions, but one of them is a process like SAT solving that checks the consistency of a proposed "understanding" against the system's database of world knowledge. Even in the early 1970s the symbolic AI community had some grasp of what that "gap" was; today there is blind faith that if you throw enough computational power at them, neural networks will overcome it, with no consideration of what structural features are necessary.
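To make the consistency-checking idea concrete, here is a minimal sketch, entirely my own illustration rather than anything from the linked articles: world knowledge and a candidate claim are encoded as propositional clauses, and the claim is accepted only if adding it to the knowledge base leaves the whole set satisfiable. The clause encoding, the `satisfiable` and `consistent_with_kb` helpers, and the toy rain/wet knowledge base are all hypothetical names chosen for the example.

```python
# Sketch: "understanding" as consistency checking against world knowledge.
# Clauses are lists of (atom, polarity) literals in CNF; a clause is satisfied
# when at least one literal matches the truth assignment.

from itertools import product

def satisfiable(clauses, atoms):
    """Brute-force SAT check: try every truth assignment over the atoms."""
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(any(assignment[a] == polarity for a, polarity in clause)
               for clause in clauses):
            return True
    return False

def consistent_with_kb(kb, claim, atoms):
    """A claim is coherent with the knowledge base iff KB + claim is satisfiable."""
    return satisfiable(kb + claim, atoms)

# Toy world knowledge: "if it rains the ground is wet", "it is raining".
KB = [
    [("rains", False), ("wet", True)],   # rains -> wet  (i.e. NOT rains OR wet)
    [("rains", True)],                   # rains
]
ATOMS = ["rains", "wet"]

print(consistent_with_kb(KB, [[("wet", True)]], ATOMS))    # True: fits the KB
print(consistent_with_kb(KB, [[("wet", False)]], ATOMS))   # False: contradicts it
```

A real system would use an industrial SAT/SMT solver and a far richer knowledge representation, but the structural point is the same: the check is an explicit search for contradictions, not something a feed-forward pass gives you for free.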