The question comes down to what the research project is. Are we modelling intelligence, or are we simply trying to find a solution to a specific problem?<p>I take intelligence to be "what you do when you do not know what you're doing", or equivalently, a class of algorithms which build conceptualisations of an environment, of one's capacities (and so on), which "furnishes the mind" with the appropriate building blocks to begin the traditional narrow/weak AI-style algorithmic techniques.<p>Few, if any, departments are actually researching intelligence in this sense (I'd say a few peripheral areas in zoological neuroscience, etc.).<p>In the "specific problem" case, brute-forcing a well-defined outcome space is obviously superior to any intelligence-based strategy. Intelligence, in this sense, is an extremely expensive waste of time.<p>This isn't much of a bitter lesson, except to those who naively think intelligence is actually a superior method of solving problems. In general, it's a terrible method; brute force wins only, of course, when the problem has effectively already been solved by the provision of a well-defined outcome space.<p>Recently, Yann LeCun said (iirc, via LinkedIn) that the frontal lobe engages in conceptualisation and the sensory-motor system in action-planning. If that's your view, then you still haven't learned this bitter lesson.<p>The situation is exactly the opposite. Cognition, as performed by the frontal lobe, is what you do when you already have the right "furniture of the mind" (approximate outcome spaces, etc.). It is the body, via the sensory-motor system, which must go to great expense to get you there.<p>This is why computer scientists are forever disappointed by brute search: they're in the wrong research project.