4T transistors, one giant chip (Cerebras WSE-3) [video]

118 points by asdfasdf1 about 1 year ago

24 comments

cs702 about 1 year ago
According to the company, the new chip will enable training of AI models with up to 24 trillion parameters. Let me repeat that, in case you're as excited as I am: *24. Trillion. Parameters.* For comparison, the largest AI models currently in use have around 0.5 trillion parameters, around 48x smaller.

Each parameter is a *connection between artificial neurons*. For example, inside an AI model, a linear layer that transforms an input vector with 1024 elements into an output vector with 2048 elements has 1024 × 2048 = ~2M parameters in a weight matrix. Each parameter specifies how much each element in the input vector contributes to or subtracts from each element in the output vector. Each output vector element is a weighted sum (AKA a linear combination) of the input vector elements.

A human brain has an estimated 100-500 trillion synapses connecting biological neurons. Each synapse is quite a complicated biological structure[a], but if we oversimplify things and assume that every synapse can be modeled as a single parameter in a weight matrix, then the largest AI models in use today have approximately 100T to 500T ÷ 0.5T = 200x to 1000x fewer connections between neurons than the human brain. If the company's claims prove true, this new chip will enable training of AI models that have only 4x to 20x fewer connections than the human brain.

We sure live in interesting times!

---

[a] https://en.wikipedia.org/wiki/Synapse

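A minimal NumPy sketch of that bookkeeping, using the 1024→2048 layer from the comment above (the sizes are just the commenter's illustration, not anything Cerebras-specific):

```python
import numpy as np

# Layer sizes taken from the comment above: 1024 inputs -> 2048 outputs.
d_in, d_out = 1024, 2048

W = np.random.randn(d_out, d_in).astype(np.float32)  # weight matrix: one parameter per connection
x = np.random.randn(d_in).astype(np.float32)         # input vector

print(W.size)    # 2097152 parameters, i.e. ~2M connections
y = W @ x        # each output element is a weighted sum (linear combination) of the inputs
print(y.shape)   # (2048,)

# The same bookkeeping at chip scale: claimed trainable parameters vs. today's largest models.
print(24e12 / 0.5e12)  # 48.0 -> roughly 48x more parameters than a 0.5T-parameter model
```
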
brucethemoose2 about 1 year ago
Reposting the CS-2 teardown in case anyone missed it. The thermal and electrical engineering is absolutely nuts:

https://vimeo.com/853557623

https://web.archive.org/web/20230812020202/https://www.youtube.com/watch?v=pzyZpauU3Ig

(Vimeo/Archive because the original video was taken down from YouTube)

fxj about 1 year ago
It has its own programming language, CSL:

https://www.cerebras.net/blog/whats-new-in-r0.6-of-the-cerebras-sdk

"CSL allows for compile time execution of code blocks that take compile-time constant objects as input, a powerful feature it inherits from Zig, on which CSL is based. CSL will be largely familiar to anyone who is comfortable with C/C++, but there are some new capabilities on top of the C-derived basics."

https://github.com/Cerebras/csl-examples

RetroTechie about 1 year ago
If you were to add up all transistors fabricated worldwide up until <year>, such that the total roughly matches the # on this beast, at what year would you arrive? Hell, throw in discrete transistors if you want.

How many early supercomputers / workstations etc. would that include? How much progress did humanity make using all those early machines (or *any* transistorized device!) combined?

ortusdux about 1 year ago
Not trying to sound critical, but is there a reason to use 4B,000 vs 4T?
imbusy111 about 1 year ago
I wish they dug into how this monstrosity is powered. Assuming 1V and 24kW, that's 24kAmps.

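A back-of-the-envelope check of that figure; the 1 V supply and 24 kW draw are the commenter's assumptions, not published specs:

```python
power_w = 24_000    # assumed total power draw, watts
voltage_v = 1.0     # assumed core supply voltage, volts

current_a = power_w / voltage_v   # I = P / V
print(f"{current_a:,.0f} A")      # 24,000 A, i.e. 24 kA delivered across the wafer
```
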
asdfasdf1 about 1 year ago
https://www.cerebras.net/press-release/cerebras-announces-third-generation-wafer-scale-engine

https://www.cerebras.net/product-chip/

Rexxar about 1 year ago
Is there a reason it's not roughly a disc if they use the whole wafer? They could have 50% more surface.

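A rough check of that claim, assuming a standard 300 mm wafer and the largest square die that fits inside it (the exact WSE dimensions aren't given here, so this is only geometry):

```python
import math

wafer_diameter_mm = 300  # assuming a standard 300 mm wafer
wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2

# The largest inscribed square has a diagonal equal to the wafer diameter.
square_side = wafer_diameter_mm / math.sqrt(2)
square_area = square_side ** 2

print(f"full disc:        {wafer_area:,.0f} mm^2")   # ~70,686 mm^2
print(f"inscribed square: {square_area:,.0f} mm^2")  # 45,000 mm^2
print(f"extra area:       {wafer_area / square_area - 1:.0%}")  # ~57% more if the whole disc were usable
```
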
modeless about 1 year ago
As I understand it, WSE-2 was kind of handicapped because its performance could only really be harnessed if the neural net fit in the on-chip SRAM. Bandwidth to off-chip memory (normalized to FLOPS) was not as high as Nvidia's. Is that improved with WSE-3? Seems like the SRAM is only 10% bigger, so that's not helping.

In the days before LLMs, 44 GB of SRAM sounded like a lot, but these days it's practically nothing. It's possible that novel architectures could be built for Cerebras that leverage its unique capabilities, but the inaccessibility of the hardware is a problem. So few people will ever get to play with one that it's unlikely new architectures will be developed for it.

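For a sense of scale on the 44 GB SRAM point, a small sketch (the 70B model below is a hypothetical example, not a Cerebras benchmark):

```python
sram_bytes = 44e9            # WSE-3 on-chip SRAM, per the figure discussed above
bytes_per_param_fp16 = 2

on_chip_params = sram_bytes / bytes_per_param_fp16
print(f"{on_chip_params / 1e9:.0f}B")  # ~22B fp16 parameters fit entirely in SRAM

# A hypothetical 70B-parameter model needs ~140 GB just for fp16 weights,
# before activations, gradients, or optimizer state, so it has to stream from off-chip memory.
print(70e9 * bytes_per_param_fp16 / 1e9, "GB")  # 140.0 GB
```
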
imtringued about 1 year ago
One thing I don't understand about their architecture is that they have spent so much effort building this monster of a chip, but if you are going to do something crazy, why not work on processing in memory instead? At least for transformers you will primarily be bottlenecked on matrix multiplication and almost nothing else, so you only need to add a simple matrix-vector unit behind your address decoder and then almost every AI accelerator will become obsolete overnight. I wouldn't suggest this to a random startup though.

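A rough sketch of the arithmetic-intensity argument behind that point; the layer size is a made-up example:

```python
d_in, d_out = 8192, 8192         # hypothetical transformer-sized weight matrix

flops = 2 * d_in * d_out         # one multiply + one add per weight, per token
bytes_moved = d_in * d_out * 2   # each fp16 weight must be fetched once per token

print(flops / bytes_moved)  # 1.0 FLOP per byte: token-at-a-time decoding is memory-bound,
                            # which is exactly the traffic processing-in-memory tries to eliminate
```
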
marmaduke about 1 year ago
Hm, let's wait and see what the gemm/W perf is, and how many programmer hours it takes to implement, say, an MLP. Wafer-scale data flow may not be a solved problem?

tivert about 1 year ago
Interesting. I know there are a lot of attempts to hobble China by limiting their access to cutting-edge chips and semiconductor manufacturing technology, but could something like this be a workaround for them, at least for datacenter-type jobs?

Maybe it wouldn't be as powerful as one of these, due to their less capable fabs, but something that's good enough to get the job done in spite of the embargoes.

asdfasdf1 about 1 year ago
White paper: Training Giant Neural Networks Using Weight Streaming on Cerebras Wafer-Scale Clusters

https://f.hubspotusercontent30.net/hubfs/8968533/Virtual%20Booth%20Docs/CS%20Weight%20Streaming%20White%20Paper%20111521.pdf

asdfasdf1 about 1 year ago
- Interconnect between WSE-2's chips in the cluster was 150GB/s, much lower than NVIDIA's 900GB/s.

- Non-sparse fp16 on WSE-2 was 7.5 PFLOPS (about 8 H100s, 10x worse performance per dollar).

Does anyone know the WSE-3 numbers? The datasheet seems to be lacking loads of details.

Also, 2.5 million USD for 1 x WSE-3, why just 44GB tho???

holoduke about 1 year ago
Better sell all nvidia stocks. Once these chips are common there is no need anymore for GPUs in training super large AI models.
TradingPlaces about 1 year ago
Near-100% yield is some dark magic.
api about 1 year ago
I'm surprised we haven't seen wafer-scale many-core CPUs for cloud data centers yet.

beautifulfreak about 1 year ago
So it's increased from 2.6 to 4 trillion transistors over the previous version.

tedivm about 1 year ago
The missing numbers that I really want to see:

* Power usage

* Rack size (the last one I played with was 17U)

* Cooling requirements

tibbydudeza about 1 year ago
Wow - it's bigger than my kitchen tiles - who uses them??? NSA???

pgraf about 1 year ago
Related discussion (2021): https://news.ycombinator.com/item?id=27459466

hashtag-til about 1 year ago
Any idea what the yield is on these chips?

wizardforhire about 1 year ago
But can it run Doom?

AdamH12113 about 1 year ago
Title should be either "4,000,000,000,000 Transistors" (as in the actual video title) or "4 Trillion Transistors" or maybe "4T Transistors". "4B,000" ("four billion thousand"?) looks like 48,000 (forty-eight thousand).
