TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

The Bitter Lesson (2019)

45 points by pierre, about 1 year ago

2 comments

mjburgess, about 1 year ago
The question comes down to what the research project is. Are we modelling intelligence, or are we simply trying to find a solution to a specific problem?

I take intelligence to be "what you do when you do not know what you're doing", or equivalently, a class of algorithms which build conceptualisations of an environment, of one's capacities, and so on, which "furnishes the mind" with the appropriate building blocks to begin the traditional narrow/weak AI-style algorithmic techniques.

Few, if any, departments are actually researching intelligence in this sense (I'd say a few peripheral areas in zoological neuroscience, etc.).

In this "specific problem" case, brute-forcing a well-defined outcome space is obviously superior to any intelligence-based strategy. Intelligence, in this sense, is an extremely expensive waste of time.

This isn't much of a bitter lesson, except to those who naively think intelligence is actually a superior method of solving problems. In general, it's a terrible method; it is only superior, of course, once the problem has effectively been solved by being able to provide a well-defined outcome space.

Recently, Yann LeCun said (IIRC via LinkedIn) that the frontal lobe engages in conceptualisation and the sensory-motor system in action-planning. If that's your view, then you still haven't learned this bitter lesson.

The situation is exactly the opposite. Cognition, as performed by the frontal lobe, is what you do when you already have the right "furniture of the mind" (approximate outcome spaces, etc.). It is the body, via the sensory-motor system, which must go to great expense to get you there.

This is why computer scientists are forever disappointed by brute search. They're in the wrong research project.
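A minimal sketch of what "brute-forcing a well-defined outcome space" means in practice (an illustration, not the commenter's code; the function names are invented): exhaustive minimax over tic-tac-toe, whose outcome space is small enough to enumerate completely. Perfect play falls out of raw search with no conceptualisation of the game at all.

```python
from functools import lru_cache

# The eight winning index triples on a 3x3 board stored as a 9-char string.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Exhaustively score a position: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if not moves:
        return 0  # board full, no winner: draw
    nxt = "O" if player == "X" else "X"
    scores = [value(board[:i] + player + board[i + 1:], nxt) for i in moves]
    return max(scores) if player == "X" else min(scores)

# Searching the entire game tree from the empty board shows optimal
# play is a draw — no game knowledge was built in.
print(value("." * 9, "X"))  # -> 0
```

The search visits every reachable position; memoisation only speeds it up. Nothing resembling "intelligence" is encoded, which is exactly the comment's point about well-defined outcome spaces.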
sgt101, about 1 year ago
The standard reading is: "hardware wins".

But there is a second part:

"The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done."

This is proving quite contentious at the moment. Do we want agents that discover like we can? The clattering impact of rogue models, even when they are being used just to make pretty pictures, as opposed to engineering diagrams, patient care plans, fraud investigations, and so on, is making me, and many others, wonder. In fact, might some of the projects that support precision discovery, like AlphaFold, be the most promising?