I don't trust papers out of “Top Labs” anymore

94 points by zeebeecee about 3 years ago

14 comments

Isinlor about 3 years ago
Eleuther.ai is just a bunch of random but smart people, without capital, who decided on Twitter to recreate GPT-3.

Recently they released GPT-NeoX-20B. They mainly coordinate on Discord, and they got compute from a company for free.

https://www.eleuther.ai/

Another group, called BigScience, got a grant from France to use a public institution's supercomputer to train a large language model in the open. They are 71% done training their 176-billion-parameter open-source language model, called "BLOOM".

> During one year, from May 2021 to May 2022, 900 researchers from 60 countries and more than 250 institutions are creating together a very large multilingual neural network language model and a very large multilingual text dataset on the 28 petaflops Jean Zay (IDRIS) supercomputer located near Paris, France.

https://bigscience.huggingface.co/

If there is a will, there is a way.

BTW - people close to EleutherAI are looking for people who want to play around with open-source machine learning for biology. You just need to start contributing on their Discord: https://twitter.com/nc_znc/status/1530545001557643265
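For anyone curious what playing with these open models looks like in practice, here is a minimal sketch that samples from EleutherAI's GPT-NeoX-20B checkpoint via the Hugging Face transformers library. It assumes the transformers and accelerate packages are installed, and that you have enough memory for a 20B-parameter model (roughly 40GB in fp16); the prompt is arbitrary.

```python
# Minimal sketch: sampling from the open GPT-NeoX-20B checkpoint with
# Hugging Face transformers. device_map="auto" (requires accelerate)
# spreads the layers across whatever GPUs/CPU memory are available.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b",
    device_map="auto",    # place model shards automatically
    torch_dtype="auto",   # use the checkpoint's native precision
)

inputs = tokenizer("If there is a will", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```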
saiojd about 3 years ago
The real ugliness in jealousy comes from how it deceives the self. Consider the last part of this post:

"Is this really what we're comfortable with as a community? A handful of corporations and the occasional university waving their dicks at everyone because they've got the compute to burn and we don't"

I honestly think this kind of comment can only come from a place of jealousy. If someone is willing to spend a lot of money on an experiment, shouldn't you be glad it was done? A scientific field is not an athletic competition, where the rules are picked to measure your "worth as a competitor" and the playing field has to be fair. The point is to move things forward. Many scientific fields have large technical hurdles which require expensive equipment; if anything, computer science is a rare niche where it sometimes does not. If you want to build a career in a subfield where compute is important, you should do your best to get access to compute. If you are unable to do so while others are, you might feel anger, shame, or jealousy. But those feelings are really a problem you have with yourself, not with the field of study.
Strilanc about 3 years ago
They explicitly say they trust the results. They're complaining that top labs use lots of compute, so the results aren't relevant to someone who can't. They give an example where a paper used 18K TPU core-hours; it's easy to find papers that use millions of core-hours.

IMO, asking AI people not to use expensive compute is like asking astronomers to please stop using expensive telescopes. The opposite side of this argument is "Gee, it looks like increasing compute helps AI a lot. Why the heck have we been spending so little on compute?" [0].

[0]: https://www.gwern.net/Scaling-hypothesis
ezoe about 3 years ago
CIFAR-10's test set consists of 10,000 images, so 0.03% of CIFAR-10 is 3 images.

At this tiny number, randomness starts to affect the scores, like human labeling mistakes in the test data. Maybe training the SotA model with a different random seed makes its score 0.03% better or worse.

Hell, 17,810 TPU core-hours is a huge number; you can't ignore the work of randomness. What if a cosmic ray hit a specific memory cell and caused a soft memory error, producing a single wrong calculation that ultimately made the final trained model 0.03% different?

So it's more like: "Jeff Dean spent enough money to feed a family of four for half a decade to get a 0.03% winning lottery ticket on CIFAR-10."
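As a quick sanity check of that arithmetic, a sketch using only the figures quoted in the comment above:

```python
# Back-of-envelope: what a 0.03% change means on the CIFAR-10 test set.
test_images = 10_000              # CIFAR-10 test split
delta = 0.0003                    # a 0.03% change in accuracy
extra_correct = delta * test_images
print(extra_correct)              # -> 3.0 images

tpu_core_hours = 17_810           # figure quoted in this thread
print(tpu_core_hours / extra_correct)  # ~5,937 core-hours per extra image
```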
togaen about 3 years ago
"Jeff Dean spent enough money to feed a family of four for half a decade to get a 0.03% improvement on CIFAR-10."

Nailed it.
phkahler about 3 years ago
It does seem like better algorithms that get similar results from smaller models should be prioritised.

Rather than throwing more compute at a problem for a 0.03% better score, show me one tenth the compute with a loss of 0.03% in score. That would be impressive, and far more useful.
mjburgess about 3 years ago
Modern "AI" models have c. 200bn parameters, say. At 32 bits per parameter, that's c. 0.8TB of raw weights; at 6 bytes per word, that's on the order of 130bn words, or roughly a million books' worth of text.

NNs, and models of this kind, are just search engines: they store a compression of everything ever written, and prediction is just googling through it.

Performance that simply scales with parameter count should just be ignored by research. That category of result is already established; "more compute and more historical data stored" isn't an interesting research finding.
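A back-of-envelope version of that size estimate, as a sketch (the parameter count and bytes-per-word are the round numbers from the comment above; the 100,000 words per book is an assumed typical book length):

```python
# Back-of-envelope: raw storage of a 200B-parameter model, and the
# equivalent volume of text at ~6 bytes per word.
params = 200e9                  # c. 200bn parameters
bytes_per_param = 4             # 32-bit floats
size_bytes = params * bytes_per_param
print(size_bytes / 1e12)        # -> 0.8 (TB of raw weights)

bytes_per_word = 6              # rough average for English text
words = size_bytes / bytes_per_word
print(words / 1e9)              # -> ~133 (billions of words)
print(words / 100_000)          # -> ~1.3M books, at an assumed 100k words/book
```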
dredmorbius about 3 years ago
https://teddit.net/r/MachineLearning/comments/uyratt/d_i_dont_really_trust_papers_out_of_top_labs/

(HN's rewrite rule for Reddit apparently doesn't catch unqualified domain references. Mods have been contacted.)
robertlagrant about 3 years ago
I don't trust CERN-based studies either. Anyone who needs a large hadron collider is just showing off.
randomifcpfan about 3 years ago
Jeff Dean has responded on the original Reddit thread. He clarified the experiment's purpose and results, and pointed out that researchers could conduct the experiment at a much lower cost than the OP's estimate.
musicale about 3 years ago
Experimental results become credible when they can be reproduced consistently.

Theoretical results become more credible when they are independently verified.
j7ake about 3 years ago
It's even worse in biology, where some labs consistently publish in Nature, Science, and Cell. Some of the papers are outright fraudulent; don't even trust the numbers.

At least for ML you can mostly reproduce the results, even if they're not that interesting.
ta988 about 3 years ago
You shouldn't "trust" papers; stay critical and verify, wherever they come from. There is a lot of politics: grad students eager to graduate who cut corners, cheating PIs, cheating statisticians... (I've witnessed each of these during my career.) What you should trust is when things get built upon by other works (from other groups), or when a result simply gets reproduced. This doesn't eliminate the risk of fraud or error, but it greatly reduces it. In the same way, do not trust claims from companies based on a single paper, especially if the company is run by one of the authors. Again, it's just my limited experience, but most of the ones I've seen were full of overblown claims, and the founders simply hoped they could jump ship before that was discovered.
avinassh about 3 years ago
Jeff Dean responded to OP:

(The paper mentioned by OP is https://arxiv.org/abs/2205.12755, and I am one of the two authors, along with Andrea Gesmundo, who did the bulk of the work.)

The goal of the work was not to get a high-quality cifar10 model. Rather, it was to explore a setting where one can dynamically introduce new tasks into a running system and successfully get a high-quality model for the new task that reuses representations from the existing model and introduces new parameters somewhat sparingly, while avoiding many of the issues that often plague multi-task systems, such as catastrophic forgetting or negative transfer. The experiments in the paper show that one can introduce tasks dynamically with a stream of 69 distinct tasks from several separate visual task benchmark suites and end up with a multi-task system that can jointly produce high-quality solutions for all of these tasks. The resulting model is sparsely activated for any given task, and the system introduces fewer and fewer new parameters for new tasks the more tasks the system has already encountered (see figure 2 in the paper). The multi-task system introduces just 1.4% new parameters for incremental tasks at the end of this stream of tasks, and each task activates on average 2.3% of the total parameters of the model. There is considerable sharing of representations across tasks, and the evolutionary process helps figure out when that makes sense and when new trainable parameters should be introduced for a new task.

You can see a couple of videos of the dynamic introduction of tasks and how the system responds here:

https://www.youtube.com/watch?v=THyc5lUC_-w

https://www.youtube.com/watch?v=2scExBaHweY

I would also contend that the cost calculations by OP are off and mischaracterize things, given that the experiments were to train a multi-task model that jointly solves 69 tasks, not to train a model for cifar10. From Table 7, the compute used was a mix of TPUv3 cores and TPUv4 cores, so you can't just sum up the number of core hours, since they have different prices. Unless you think there's some particular urgency to train the cifar10+68-other-tasks model right now, this sort of research can very easily be done using preemptible instances, which are $0.97/TPUv4 chip/hour and $0.60/TPUv3 chip/hour (not the "you'd have to use on-demand pricing of $3.22/hour" cited by OP). With these assumptions, the public Cloud cost of the computation described in Table 7 in the paper is more like $13,960 (using the preemptible prices for 12861 TPUv4 chip hours and 2474.5 TPUv3 chip hours), or about $202/task.

I think that having sparsely-activated models is important, and that being able to introduce new tasks dynamically into an existing system that can share representations (when appropriate) and avoid catastrophic forgetting is at least worth exploring. The system also has the nice property that new tasks can be automatically incorporated into the system without deciding how to do so (that's what the evolutionary search process does), which seems a useful property for a continual learning system.
Others are of course free to disagree that any of this is interesting.

Edit: I should also point out that the code for the paper has been open-sourced at https://github.com/google-research/google-research/tree/master/muNet

We will be releasing the checkpoint from the experiments described in the paper soon (just waiting on two people to flip approval bits; the process for this was started before the reddit post by OP).

---

source: https://old.reddit.com/r/MachineLearning/comments/uyratt/d_i_dont_really_trust_papers_out_of_top_labs/iacwmpb/
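For reference, the ~$13,960 figure in the reply above falls out directly from the chip-hours and preemptible prices it quotes; a quick sketch of that arithmetic:

```python
# Reproducing the cost estimate from the reply above, using the
# Table 7 chip-hours and preemptible Cloud TPU prices it quotes.
tpu_v4_hours = 12_861
tpu_v3_hours = 2_474.5
price_v4 = 0.97          # $/TPUv4 chip/hour, preemptible
price_v3 = 0.60          # $/TPUv3 chip/hour, preemptible

total = tpu_v4_hours * price_v4 + tpu_v3_hours * price_v3
print(round(total))      # -> 13960  (~$13,960 total)
print(round(total / 69)) # -> 202    (~$202 per task, across 69 tasks)
```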