You can tell this article was written by someone who doesn't follow artificial intelligence and neural networks.<p>How? Because people in the field of neural networks and AI would never claim that Minsky "pioneered neural networks". To the contrary (and as Minsky's Wikipedia article – I'm sure the source of this claim – obliquely notes), Minsky's pessimism about the abilities of neural network computing led to the abandonment of artificial neural networks as a major research topic.<p>That alone should make one skeptical about this author's depth of knowledge about artificial intelligence.<p>Beyond that, this article and the quotes therein are just flat-out incorrect. There are people who are attempting to analyze behavior, model it, and build systems that mimic this behavior. They're called cognitive scientists. This approach is taken by linguists, psychologists, and philosophers alike.<p>But this stuff is incredibly difficult to analyze, let alone model correctly. It annoys me to hear the opinions of the panelists reduced to "oh gee, why isn't anyone doing more holistic research".<p>When I read the actual quotes by Minsky, Partee, and Chomsky, I hear the three things I expected to hear, and that each academic has been saying for years.<p>1) Chomsky, an old-school linguist, doesn't like systems that we can't introspect and verify as correctly modeling human behavior.
2) Partee, who is responsible for recognizing the power and importance of Montague semantics and linguistic pragmatics, states that AI requires world/state modeling that is equivalent in complexity to that required for robust natural language processing (a position I agree with).
3) Minsky thinks nobody is trying hard enough, and that the constraints placed on researchers by actual implementation have led us down a blind alley.<p>Lastly, Sydney Brenner complains that neuroscientists can't see the forest for the trees. I guess he's not familiar with all the research in cognitive psychology trying to model cognitive faculties like memory, language use, decision making, attention switching, and more.<p>That we haven't "solved" AI or made thinking machines is a misleading claim that is contrary to all of the awesome stuff that humans have built in the past 10 years. Look at all of the stuff that Google has built and tell me that we don't have thinking machines that can understand (or, if you'd like to be more circumspect, predict) what we want. Tell me that Watson wasn't a marvel not just of engineering but of modeling intelligence.<p>The major editorial thrust of this article is an incorrect platitude, unsupported by reality or by the assertions and claims made by the panelists (each of whom I respect for the work they have contributed to the broader field of cognitive science), and it annoys me that this claptrap pastiche is being passed off as journalism.<p>We have made progress, and we will continue to make progress.