
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Tesla Packs 50B Transistors onto D1 Dojo Chip

84 points | by crishoj | over 3 years ago

12 comments

ksec | over 3 years ago
Big numbers are good for a headline, but they don't put anything in context.

Die size is 645 mm² on 7nm. This is important because we know the reticle limit, which is around ~800 mm².

Nvidia's AI chip has 54 billion transistors on an 826 mm² die, also on 7nm.

I recently saw a TED Talk: "If content is king, then context is god." I think it captures everything that is wrong in today's society.
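As a rough illustration of the context the commenter is asking for, the figures quoted above imply a transistor density for each die (all numbers are approximate, as quoted in the comment):

```python
# Transistor-density comparison using the figures quoted in the comment.
# Both chips are fabbed on TSMC 7nm; all numbers are approximate.
chips = {
    "Tesla D1":    {"transistors": 50e9, "die_mm2": 645},
    "Nvidia chip": {"transistors": 54e9, "die_mm2": 826},
}

for name, c in chips.items():
    density = c["transistors"] / c["die_mm2"]  # transistors per mm^2
    print(f"{name}: {density / 1e6:.1f} MTr/mm^2")
```

By these numbers the D1 is somewhat denser per mm² (~77 vs ~65 MTr/mm²), so the headline count mostly reflects a large die near the reticle limit rather than an unusual process advantage.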
TekMol | over 3 years ago
The article starts with this statement:

    Artificial intelligence (AI) has seen a broad adoption over the past couple of years.

And continues:

    At Tesla, who as many know is a company that works on electric and autonomous vehicles, AI has a massive value to every aspect of the company's work.

Who writes like this? And why?

What would Tom's Hardware lose if they left out this kind of cheap filler?

Should I also start writing like this?

Is this kind of "reader-hostile writing" a new thing, or have newspapers always written like this?

These are not rhetorical questions. I am honestly confused.
Lio | over 3 years ago
This is made using TSMC's 7nm fab process, so surely the number of transistors on this chip is either enabled or limited by that process, isn't it?

Honest question: how much is chip design a factor separate from the fab process?
greesil | over 3 years ago
I'm always curious about the decision-making process when someone decides to make their own ASIC when there are somewhat reasonable commercial alternatives. What was the advantage here for Tesla?
minhazm | over 3 years ago
Tesla actually has a lot of chip-design expertise in Pete Bannon and, formerly, Jim Keller. I think most people know who Jim Keller is, but if not you can read his Wikipedia page [1]. Pete Bannon is also an industry giant who worked with Jim Keller at PA Semi and subsequently at Apple on the A-series chips. These two have decades of experience designing chips that went into tens of millions of devices, and Tesla's FSD computer is in hundreds of thousands of cars. They know what they're doing.

[1] https://en.wikipedia.org/wiki/Jim_Keller_(engineer)
gautamcgoel | over 3 years ago
Question: how many chips does Tesla need to buy in order to get a reasonable unit price per chip? Obviously <10k is too small, but is 100k reasonable? 1M?
m3kw9 | over 3 years ago
Black-box numbers would be better: physical size, power usage, and comparable training/inference times. Everything else is hype.
sonium | over 3 years ago
The whole point of the die-on-silicon approach seems to be that it maximizes interface bandwidth and minimizes latency between the dies. If this is true, the next step would be to bring the multi-die modules as close together as possible in three dimensions, ultimately building a Borg-cube-like structure in zero-g with a power source at its core.
mrtnmcc | over 3 years ago
I wonder how their neural-network structures informed the hardware design, such as the dimensions of tensor products. Or is Dojo aiming to be as general-purpose for ML as possible? I imagine there is a tension between the software and hardware teams, where Karpathy's team is always changing things while the hardware team wants specs/reqs.

The "tiles of tiles" chip architecture seems like an Elon-obvious "let's just scale what we have" approach. Do their neural networks map well to that multiscale tiling?
jstandard | over 3 years ago
Non-hardware person here. How does the D1 compare to the Cerebras WSE-2 wafer-scale chip with 2.6 trillion transistors?

The WSE-2 is much larger, obviously, but I would also think it can deliver a large performance boost given that everything is on a single chip.
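For scale, the transistor counts quoted in the thread can be compared directly (both figures are approximate vendor numbers, not measurements):

```python
# Ratio of transistor counts as quoted in the thread (approximate).
wse2_transistors = 2.6e12  # Cerebras WSE-2, wafer-scale engine
d1_transistors = 50e9      # Tesla D1, a single reticle-sized die

ratio = wse2_transistors / d1_transistors
print(f"WSE-2 has ~{ratio:.0f}x the transistors of one D1 die")
```

So one WSE-2 carries roughly the transistor budget of ~50 D1 dies; the open question the commenter raises is how much of that on-wafer integration translates into real training throughput versus yield and cooling costs.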
throwaway4good | over 3 years ago
What process are these chips made with? It says TSMC 7nm; is that DUV or EUV lithography?
jeffbee | over 3 years ago
Is that a lot?