
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts (2017)

60 points by georgehill over 1 year ago

4 comments

filterfiber over 1 year ago
> Previous State-of-the-Art: [...] The number of parameters in the LSTM layers of these models vary from 2 million to 151 million.

> We present model architectures in which a MoE with up to 137 billion parameters

Back in 2017 most models were well under 1B; GPT-2 (2019) was one of the first "big" non-MoE models at 1.5B parameters. People weren't sure how well or how far they would scale.

The CoralAI TPU had a mere 8 MB of SRAM in 2019!

GPT-3 was 175B in 2020.

Now nearly all LLMs are at least 1B, and dense 70B models are common.
Comment #38574779 not loaded
dang over 1 year ago
Discussed at the time:

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-Of-Experts Layer - https://news.ycombinator.com/item?id=13518039 - Jan 2017 (81 comments)

Outrageously large neural networks: the sparsely-gated mixture-of-experts layer - https://news.ycombinator.com/item?id=12963364 - Nov 2016 (2 comments)
gryfft over 1 year ago
[2017]
Comment #38573015 not loaded
Comment #38573502 not loaded
legel over 1 year ago
In response to a very impressive set of open LLM weights released today by Mistral AI, called "Mixtral 8x7B", I was reminded of this amazing publication on the origin of the "sparse mixture of experts" from none other than Geoffrey Hinton and Jeff Dean.

The Sparse Mixture of Experts neural network architecture is actually an absolutely brilliant move here. It scales fantastically when you consider that (1) GPU RAM is far too expensive in financial terms, (2) SSD / CPU RAM is relatively cheap, and (3) you can have "experts" running on their own computers, i.e. it's a natural distributed-computing partitioning strategy for neural networks.

I did my M.S. thesis on large-scale distributed deep neural networks in 2013, and I'm delighted to point out where this came from.

In 2017, it emerged from a Geoffrey Hinton / Jeff Dean / Quoc Le publication called "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer".

Here is the abstract: "The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost."

So, here's a big A.I. idea for you: what if we all get one of these sparse Mixtures of Experts (MoEs) that's 100 GB on our SSDs, contains all of the "outrageously large" neural network insights that would otherwise take specialized computers, and is designed to run effectively on a normal GPU or even smaller hardware (e.g. a smartphone)?
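
To make the gating idea quoted in the abstract concrete, here is a minimal sketch of a sparsely-gated MoE layer in PyTorch. The `SparseMoE` module name and its dimensions are invented for illustration; the paper's actual layer also uses noisy top-k gating and auxiliary load-balancing losses, which are omitted here.

```python
# Minimal sketch of a sparsely-gated Mixture-of-Experts layer (illustrative only).
# A trainable gating network scores all experts, keeps the top-k per example,
# and mixes only those experts' outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        # Each expert is a small feed-forward sub-network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )
        # The gating network produces one score per expert for each input.
        self.gate = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model)
        scores = self.gate(x)                            # (batch, num_experts)
        top_vals, top_idx = scores.topk(self.k, dim=-1)  # keep only k experts per example
        weights = F.softmax(top_vals, dim=-1)            # renormalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            idx = top_idx[:, slot]                       # expert chosen for this slot
            w = weights[:, slot].unsqueeze(-1)
            for e in idx.unique().tolist():
                mask = idx == e
                # Only the examples routed to expert e are computed, so the cost
                # grows with k, not with the total number of experts.
                out[mask] += w[mask] * self.experts[e](x[mask])
        return out


# Example: route a batch of 4 vectors through 8 experts, 2 active per example.
moe = SparseMoE(d_model=512, d_hidden=2048, num_experts=8, k=2)
y = moe(torch.randn(4, 512))
```

Because only k of the experts run per example, total parameter count can grow with the number of experts while per-example compute stays roughly constant, which is the "conditional computation" trade-off the abstract describes.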