I thought that article was going to be about Leibniz, and his "let us calculate" approach to decision making.

"What the Frog's Eye Tells the Frog's Brain" (http://neuromajor.ucr.edu/courses/WhatTheFrogsEyeTellsTheFrogsBrain.pdf) is still worth reading. It's the first paper on what is now called "early vision".

I'm painfully familiar with that world view. I went through Stanford CS in 1983-1985, when logic-based AI was, in retrospect, having its last gasp. I took "Dr. John's Mystery Hour", Epistemological Problems in Artificial Intelligence, from John McCarthy. The logicians were making progress on solving problems once they'd been hammered into just the right predicate calculus form, but were getting nowhere in translating the real world into predicate calculus.

For computer program verification, though, that stuff works. For a time, I was fascinated by Boyer-Moore theory and their theorem prover. They'd redone Russell and Whitehead with machine proofs. Constructive mathematics maps well to what computers can do. I got the Boyer-Moore theorem prover (powerful, could do induction, but slow) hooked up to the Oppen-Nelson theorem prover (limited, only does arithmetic up to multiplication by constants, but fast) and used the combination to build a usable proof-of-correctness system for a dialect of Pascal. It worked fine; I used to invite people to put a bug into a working program and watch the system find it. (A toy sketch of that kind of bug hunt, in modern terms, is at the end of this comment.)

But it was clear that approach wasn't going to map to the messiness of the real world. Working on proof of correctness for real programs made it painfully clear how brittle formal logic systems are. Nobody was going to get to common sense that way. The logicians were in denial about this for a long time, which resulted in the "AI winter" from 1985 to 2000 or so.

Then came the machine learning guys, and progress resumed. Science progresses one funeral at a time.
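
For anyone curious what that sort of bug-finding looks like, here is a minimal modern analogue, not the Pascal system described above: the Z3 solver's Python bindings (a descendant of the Nelson-Oppen style of combined decision procedures) checking a verification condition for a tiny straight-line program. The program, the postcondition, and the planted bug are all invented for illustration.

    # Toy sketch: encode a verification condition and let an SMT solver
    # hunt for a counterexample. Requires: pip install z3-solver
    from z3 import Int, If, And, Or, Not, Solver, sat

    x, y = Int("x"), Int("y")

    # Buggy program: the intent was "r := x if x > y else y",
    # but the else branch mistakenly returns x.
    r = If(x > y, x, x)

    # Intended postcondition: r is the maximum of x and y.
    post = And(r >= x, r >= y, Or(r == x, r == y))

    # Ask the solver for an input that violates the postcondition.
    s = Solver()
    s.add(Not(post))

    if s.check() == sat:
        print("Bug found; counterexample:", s.model())  # e.g. x = 0, y = 1
    else:
        print("Postcondition holds for all inputs.")

The division of labor is the same idea as back then: something powerful but slow handles the overall proof structure, while a fast decision procedure for arithmetic grinds out the individual verification conditions.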