
Zero-3 Offload: Scale DL models to trillion parameters without code changes

97 points by ghosthamlet about 4 years ago

13 comments

FL33TW00D about 4 years ago
Huggingface has been working on implementing this into their library, and it has some pretty amazing effects on the size of models you can train on a simple Colab.

https://huggingface.co/blog/zero-deepspeed-fairscale
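For a sense of what that integration looks like in practice, here is a minimal sketch assuming the `transformers` Trainer and a hand-written DeepSpeed config. The config keys follow the DeepSpeed ZeRO stage-3 schema as I understand it; model, dataset, and file names are placeholders, and a real run is normally launched with the `deepspeed` launcher.

```python
# Hypothetical sketch: ZeRO-3 with CPU offload through the Hugging Face Trainer.
import json
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

# Minimal ZeRO stage-3 config with optimizer and parameter offload to CPU;
# "auto" lets the Trainer fill in the batch size from its own arguments.
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},
        "offload_param": {"device": "cpu"},
    },
    "train_micro_batch_size_per_gpu": "auto",
}
with open("ds_zero3.json", "w") as f:
    json.dump(ds_config, f)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tiny dummy dataset so the example is self-contained.
enc = tokenizer(["hello world"] * 8, return_tensors="pt")
train_data = [{"input_ids": ids, "labels": ids} for ids in enc["input_ids"]]

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    deepspeed="ds_zero3.json",  # hands sharding/offload over to DeepSpeed
)
Trainer(model=model, args=args, train_dataset=train_data).train()
```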
stephenroller about 4 years ago
Support for this was also added to [Fairscale](https://fairscale.readthedocs.io/en/latest/) and [Fairseq](https://github.com/pytorch/fairseq) last week. In particular, the Fairscale implementation can be used in any PyTorch project without requiring the use of the DeepSpeed trainer.
Comment #26449737 not loaded
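For reference, a rough sketch of that standalone FairScale route, assuming the `fairscale.nn.FullyShardedDataParallel` API and a `torch.distributed` process group initialized via something like `torchrun`; details vary by FairScale version.

```python
# Hypothetical sketch: ZeRO-3-style parameter/gradient/optimizer-state sharding
# in plain PyTorch via FairScale FSDP, with no DeepSpeed trainer involved.
import torch
import torch.distributed as dist
from fairscale.nn import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")  # assumes launch via torchrun / torch.distributed
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 1024)
).cuda()

# Wrap the model so each rank holds only a shard of the flattened parameters;
# FairScale also exposes a CPU-offload flag for ZeRO-Offload-like behavior.
model = FSDP(model)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # built after wrapping

x = torch.randn(8, 1024).cuda()
loss = model(x).sum()
loss.backward()
optimizer.step()
```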
ansk about 4 years ago
Question for someone knowledgeable about this: if I have a model which is large -- but small enough that I can fit a single training example on GPU -- does this approach offer speedups compared to simple gradient accumulation? Or is this only useful for models which are so large that the model parameters themselves are overwhelming GPU memory?
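For context, the baseline being compared against here is plain gradient accumulation, which trades extra compute passes for activation memory but does nothing about the memory taken by parameters, gradients, and optimizer state. A minimal sketch of that baseline:

```python
# Gradient accumulation: simulate a large batch by summing gradients over
# several small micro-batches before taking a single optimizer step.
import torch

model = torch.nn.Linear(512, 512)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
accum_steps = 8

optimizer.zero_grad()
for step in range(accum_steps):
    x = torch.randn(1, 512)                       # one example at a time fits in memory
    loss = model(x).pow(2).mean() / accum_steps   # scale so the accumulated sum is an average
    loss.backward()                               # gradients add up in .grad
optimizer.step()                                  # one update for the whole effective batch
```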
joshlk about 4 years ago
GPT-NeoX is an example project that is using DeepSpeed and ZeRO-3 offloading. The wider project intends to train a GPT-3-sized model and release it freely to the world.

https://github.com/EleutherAI/gpt-neox
Comment #26447544 not loaded
dataangel about 4 years ago
ELI5? All this technobabble just sounds like "it's faster because we optimized it". What are the nontrivial, new fundamental tricks?
Comment #26450397 not loaded
Comment #26448632 not loaded
bevenky about 4 years ago
This is also being added to PyTorch:

https://github.com/pytorch/pytorch/pull/46750
Comment #26450637 not loaded
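The linked PR appears to cover the optimizer-state-sharding piece (ZeRO stage 1) in core PyTorch. A minimal sketch assuming the `ZeroRedundancyOptimizer` API that eventually shipped in `torch.distributed.optim`, run under an initialized process group:

```python
# Hypothetical sketch: optimizer-state sharding with ZeroRedundancyOptimizer.
# Assumes launch with torchrun so the process group can be initialized.
import torch
import torch.distributed as dist
from torch.distributed.optim import ZeroRedundancyOptimizer

dist.init_process_group("gloo")  # "nccl" on GPU clusters

model = torch.nn.Linear(2048, 2048)
optimizer = ZeroRedundancyOptimizer(
    model.parameters(),
    optimizer_class=torch.optim.Adam,  # each rank holds only its shard of Adam state
    lr=1e-4,
)

loss = model(torch.randn(4, 2048)).sum()
loss.backward()     # in real training the model would be wrapped in DDP to sync grads
optimizer.step()    # updates the local shard, then broadcasts updated parameters
```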
alphagrep12345 about 4 years ago
Simple 10-minute overview/tutorial (official) if someone is interested: https://www.youtube.com/watch?v=ovQC7FqXHXk
The_rationalist about 4 years ago
See also zeroth-order backpropagation, which allows 300x faster training while not reducing accuracy that much: https://arxiv.org/abs/2011.08895 How much does ZeRO-3 affect accuracy?

See also https://github.com/microsoft/fastformers
vladf about 4 years ago
Alternatively, one could get rid of the memory used by optimizer state entirely by switching to vanilla SGD.

I haven't tried this on transformers, and maybe that's what breaks down here, but in "classic" supervised settings I've found SGD with schedule tuning just as fast as Adam.
Comment #26448858 not loaded
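The memory claim is easy to check directly: Adam keeps two extra FP32 tensors (first and second moments) per parameter, while plain SGD without momentum keeps no per-parameter state. A quick sketch of how one might measure that:

```python
# Compare optimizer-state memory: Adam stores exp_avg and exp_avg_sq for each
# parameter tensor; vanilla SGD (momentum=0) stores no extra tensors.
import torch

def state_bytes(optimizer):
    return sum(
        t.numel() * t.element_size()
        for per_param_state in optimizer.state.values()
        for t in per_param_state.values()
        if torch.is_tensor(t)
    )

model = torch.nn.Linear(4096, 4096)
loss = model(torch.randn(2, 4096)).sum()
loss.backward()

for opt_cls in (torch.optim.Adam, torch.optim.SGD):
    opt = opt_cls(model.parameters(), lr=1e-3)
    opt.step()  # optimizer state is allocated lazily on the first step
    print(f"{opt_cls.__name__}: {state_bytes(opt) / 2**20:.1f} MiB of optimizer state")
```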
andrewprock about 4 years ago
How much data do you need to mitigate the risk of overfitting a trillion-parameter model?
Comment #26448876 not loaded
singhrac about 4 years ago
For those searching: DeepSpeed is implemented as a set of C++/CUDA extensions on top of PyTorch (compiled using their JIT).
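To illustrate the mechanism (this is a toy example, not DeepSpeed's actual kernels): PyTorch can JIT-compile C++/CUDA sources at runtime via `torch.utils.cpp_extension`, which is how such fused ops get built on first use. It requires a local C++ toolchain.

```python
# Toy illustration of the JIT-extension mechanism: compile a tiny C++ op at
# runtime and call it from Python. Real DeepSpeed ops also ship CUDA sources.
import torch
from torch.utils.cpp_extension import load_inline

cpp_source = r"""
#include <torch/extension.h>

torch::Tensor fused_scale_add(torch::Tensor a, torch::Tensor b, double alpha) {
  // Placeholder for what would be a fused CUDA kernel in a real extension.
  return a + alpha * b;
}
"""

ops = load_inline(
    name="toy_fused_ops",
    cpp_sources=cpp_source,
    functions=["fused_scale_add"],  # auto-generates the pybind11 bindings
)

x, y = torch.randn(4), torch.randn(4)
print(ops.fused_scale_add(x, y, 0.5))
```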
bionhoward about 4 years ago
Please hook this up to JAX!
mchusma about 4 years ago
This is super impressive. I could not figure out for a while who exactly was running this project, but it looks like it's Microsoft. Great work!