TechEcho

A Meta Lesson

74 points, by bibyte, about 6 years ago

6 comments

7373737373, about 6 years ago
What confuses me about the state of the art of machine learning is that at both training and execution time, there is no notion of system resources.

These systems _cannot_ runtime-optimize themselves like the human brain, because attention, boredom, frustration and other mechanisms cannot arise 'naturally' without such a notion.

Also, if the system cannot infer its own boundaries (by feeding its output into itself, at least indirectly), it cannot develop a notion of self and engage in meta-learning.

Given that the brain is a 20 W system evolved towards learning things over a long time span in a changing environment, I'd really like to see an energy-consumption comparison with current neural nets. With AutoML it may be necessary to separate network construction, feature learning and inference.

Is there a Moore's law equivalent here? Maybe launch a competition: "You are given ten 10 Watt-years; create the best system for this task (e.g. image segmentation)." With current electricity prices, training a system equivalent to a 10-year-old brain will cost around $100 (not including I/O and training environment).
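The commenter's $100 figure checks out as back-of-envelope arithmetic. Here is a minimal sketch; the $0.06/kWh electricity price is an assumption chosen to match the estimate, not a value given in the comment:

```python
# Back-of-envelope check of the commenter's estimate:
# a ~20 W "brain" running for 10 years, billed at grid electricity prices.

POWER_W = 20          # approximate power draw of the human brain, per the comment
YEARS = 10            # training horizon in the comment
PRICE_PER_KWH = 0.06  # assumed electricity price in USD/kWh (not from the comment)

hours = YEARS * 365 * 24                # 87,600 hours
energy_kwh = POWER_W * hours / 1000     # watt-hours -> kilowatt-hours
cost_usd = energy_kwh * PRICE_PER_KWH

print(f"{energy_kwh:.0f} kWh, about ${cost_usd:.0f}")
```

Under these assumptions the total is about 1,752 kWh, i.e. roughly $100, consistent with the comment.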
anonytrary, about 6 years ago
> Rodney Brooks (approx.): No, human ingenuity is actually responsible for progress in AI. We can't just solve problems by throwing more compute at them.

I find this amusing considering nature just kept throwing more time at the universe until humans emerged. That said, I don't think Brooks is wrong. I can't recall where in the lectures this is, but I remember Feynman going on a rant about how poorly designed the human eye was. Human ingenuity really is an essential piece of the puzzle, seeing as Nature, in and of itself, isn't very smart.
vicpara, about 6 years ago
The nice thing about philosophical views is that they don't really matter. Whatever gets the ball rolling in beating the SotA, we'll accept it.

In science we shouldn't have to abide by philosophical views; they only hurt. Theorems, proofs and empirical evidence should settle it. Can you believe there was a time when the world was against Yann LeCun because everyone speculated neural networks don't work for complex scenarios?

Both points of view, Sutton's and Brooks's, are utter speculation, as we cannot generalize or learn anything meaningful from them. Both of them are saying: "Since I don't know for sure how we're going to improve AI in the long run, here's what I suspect the right approach is." They are even looking at the same history of AI and seeing different things. Go figure.

In mathematics, old tricks only get you so far. The hard problems at a particular moment in time can only be overcome by deploying new tricks. Why wouldn't that be the case in the AI space?

Hard-coded rules got us started. Then more computation made the next step. Then we mixed the two. Then we created more purposefully handcrafted architectures such as CNNs. Then we manually annotated millions of data points. Then GANs came along to fix some stability issues.

What's the next trick now? Don't worry: since no one knows, both authors are just speculating.
gtr32x, about 6 years ago
Reading both of the posts makes me believe that Sutton has stated a more global outlook on the progression of complexity than Brooks did, or that Brooks is simply trying to keep encouraging the current generation of AI research.

My naive take on each of their arguments, which are seemingly obvious but nonetheless profound:

Sutton: advancement in computation capacity > specifically devised methods

Brooks: building specific tools helps in solving the problem

You see, neither of them is wrong. However, what Brooks is arguing for is essentially: hey, we invented paper, but we have no computer yet, so let's make some lined paper and graph paper to increase our productivity, hooray! Then what Sutton is saying is: dude, show me how your method will continue to be productive when computers are invented.

I do also want to propose my takeaway from these pieces, though. From Brooks I take that building tools/methods is essential to local optimization, and that tools/methods can be extended to fit new global advancements. And to Sutton's point, we are in a state of ever-progression by the extension of the essence of Moore's Law.
mannykannot, about 6 years ago
Sutton's position might be described as Darwinian (our brains evolved with nothing but natural selection guiding the process), while Brooks's position might be called Chomskyan (we are born with a rich set of rules in place). These are not, of course, mutually exclusive.
marmaduke, about 6 years ago
Perhaps a more meta lesson is that a single viewpoint is rarely sufficient.

I'd also be curious to see how hard researchers like these would push their differences if publication were anonymous, or if, as with code, contributors were acknowledged but we referred to the project name: Linux, not Torvalds et al.