
Trillium TPU Is GA

178 points by gok, 5 months ago

9 comments

xnx, 5 months ago
> We used Trillium TPUs to train the new Gemini 2.0,

Wow. I knew custom Google silicon was used for inference, but I didn't realize it was used for training too. Does this mean Google is free of dependence on Nvidia GPUs? That would be a huge advantage over AI competitors.
lanthissa, 5 months ago
Okay, I really don't understand this. Nvidia has a $3.4T market cap; Google has $2.4T post run-up, and its P/E is roughly 38 vs. 25, so Nvidia trades at a higher multiple on the business too. It appears that making the best AI chip is a better business than Google's entire conglomerate.

If TPUs are really that good, why on earth would Google not sell them? People say it's better to rent, but how can that be true when you look at the value of Nvidia?
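One way to read those numbers is to back out the earnings each valuation implies. A quick sketch, using only the approximate figures quoted in the comment above (they are illustrative, not current market data):

```python
# Back-of-envelope: earnings implied by market cap and P/E.
# All inputs are the rough figures from the comment above.

def implied_earnings(market_cap_trillions: float, pe_ratio: float) -> float:
    """Annual earnings implied by price = P/E * earnings, in billions of dollars."""
    return market_cap_trillions * 1000 / pe_ratio

nvidia = implied_earnings(3.4, 38)    # ~ $89B
alphabet = implied_earnings(2.4, 25)  # ~ $96B

print(f"Nvidia implied earnings:   ~${nvidia:.0f}B")
print(f"Alphabet implied earnings: ~${alphabet:.0f}B")
# Despite the smaller market cap, Alphabet's implied earnings are larger;
# Nvidia's premium is a growth multiple, not larger current profit.
```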
blackeyeblitzar, 5 months ago
So Google has Trillium, Amazon has Trainium, Apple is working on a custom chip with Broadcom, etc. Nvidia's moat doesn't seem that big.

Plus, big tech companies have the data and the customers, and will probably be the only surviving big AI training companies. I doubt startups can survive this game: they can't afford the chips, can't build their own, don't have existing products to leech data off of, and don't control distribution channels like operating systems or app stores.
randomcatuser, 5 months ago
How good is Trillium/TPU compared to Nvidia? The stats seem to be: TPU v6e achieves ~900 TFLOPS per chip (fp16), while the Nvidia H100 achieves ~1800 TFLOPS per GPU (fp16)?

Would be neat if anyone has benchmarks!!
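Taking the commenter's figures at face value (they are unverified, and peak TFLOPS ignores memory bandwidth, interconnect, and real utilization), the paper comparison is simple arithmetic:

```python
# Quick ratio check using the numbers quoted in the comment above.
# Both figures are the commenter's claims, not verified vendor specs.

tpu_v6e_fp16_tflops = 900   # per TPU v6e chip, as quoted
h100_fp16_tflops = 1800     # per H100 GPU, as quoted

ratio = h100_fp16_tflops / tpu_v6e_fp16_tflops
print(f"H100 peak fp16 is ~{ratio:.1f}x a TPU v6e chip on paper")
# Real training throughput is set by cost per chip-hour and pod-scale
# interconnect, so a peak-FLOPS ratio alone doesn't settle the question.
```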
WanderPanda, 5 months ago
Crazy conglomerate discount on Alphabet if you see TPUs as the only real Nvidia competitor for training. Breaking up Alphabet seems more profitable than ever.
teleforce, 5 months ago
It's beyond me why processors with a dataflow architecture are not being used for ML/AI workloads, not even in a minority [1]. A native dataflow processor will hands-down beat a Von Neumann-based architecture in performance and efficiency for ML/AI workloads, and the GPU will be left to graphics processing instead of being the default co-processor or accelerator for ML/AI [2].

[1] Dataflow architecture: https://en.wikipedia.org/wiki/Dataflow_architecture

[2] The GPU is not always faster: https://news.ycombinator.com/item?id=42388009
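For readers unfamiliar with the term, here is a toy sketch of the fire-when-ready execution model the comment is referring to: each node fires as soon as all of its operands are available, with no program counter ordering the work. The graph and node names are made up for illustration; real dataflow hardware does this scheduling in silicon.

```python
# Minimal software sketch of dataflow execution (illustrative only).
from collections import deque

graph = {
    # node: (function, list of input node names)
    "a":   (lambda: 2.0, []),
    "b":   (lambda: 3.0, []),
    "mul": (lambda a, b: a * b, ["a", "b"]),
    "add": (lambda m, b: m + b, ["mul", "b"]),
}

def run(graph):
    values = {}
    # Source nodes (no inputs) are ready immediately.
    ready = deque(n for n, (_, ins) in graph.items() if not ins)
    consumers = {n: [m for m, (_, ins) in graph.items() if n in ins] for n in graph}
    while ready:
        node = ready.popleft()
        fn, ins = graph[node]
        values[node] = fn(*(values[i] for i in ins))  # fire: all operands ready
        for c in consumers[node]:
            if c not in values and all(i in values for i in graph[c][1]):
                ready.append(c)
    return values

print(run(graph))  # {'a': 2.0, 'b': 3.0, 'mul': 6.0, 'add': 9.0}
```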
LittleTimothy, 5 months ago
The question I'd like to know the answer to is: "What was the total cost of training Gemini 2.0, and how does it compare to the total cost of training equivalent-capability models on Nvidia GPUs?" I'd be fascinated to know, and there must be someone at Google who has the data to actually answer it. I suspect it's politically savvy for everyone at Google to pretend that question doesn't exist or can't be answered (because it would be an existential threat to the huge TPU project), but it would be absolutely fascinating. In the same way that Amazon eventually had to answer the "Soo... how much money is this Alexa division actually making?" question.
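The comparison the comment asks for reduces to chip-hours times an effective hourly rate. A hypothetical back-of-envelope model; every number below is an invented placeholder, not a real figure from Google or Nvidia:

```python
# Hypothetical training-cost model. All parameters are placeholders
# chosen for illustration; none are actual Gemini 2.0 figures.

def training_cost(num_chips: int, days: float, dollars_per_chip_hour: float) -> float:
    """Total cost in dollars for a run of `num_chips` over `days`."""
    return num_chips * days * 24 * dollars_per_chip_hour

tpu_run = training_cost(num_chips=8192, days=90, dollars_per_chip_hour=1.0)
gpu_run = training_cost(num_chips=8192, days=90, dollars_per_chip_hour=2.5)

print(f"TPU scenario: ~${tpu_run / 1e6:.0f}M, GPU scenario: ~${gpu_run / 1e6:.0f}M")
# The honest version needs Google's internal effective chip-hour cost
# (amortized silicon + power + datacenter), which is exactly the number
# outsiders don't have.
```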
amelius, 5 months ago
Are the Gemini models open?
Hilift, 5 months ago
"we constantly strive to enhance the performance and efficiency of our Mamba and Jamba language models."

... "The growing importance of multi-step reasoning at inference time necessitates accelerators that can efficiently handle the increased computational demands."

Unlike others, my main concern with AI is that any savings we got from converting petroleum generating plants to wind/solar were blasted away by AI power consumption months or even years ago. Maybe Microsoft is on to something with the TMI revival.