Quantifying the performance of the TPU, our first machine learning chip

110 points by fhoffa, about 8 years ago

7 comments

joe_the_user, about 8 years ago
Since "sharing the benefits with everyone" could mean just letting people rent time on Google's cloud, we can still ask whether the chips themselves will ever be available for purchase.
iandanforth, about 8 years ago
"This first generation of TPUs targeted inference ..."

Makes me wonder if there are more recent generations that target training.
wyldfire, about 8 years ago
From the paper:

> if the TPU were revised to have the same memory system as the K80 GPU, it would be about 30X - 50X faster than the GPU and CPU.

Is it "hard" to interface with GDDR5/HBM? Layout challenges? Or do they need the capacity more than the speed? Why *wouldn't* they have used faster memory than DDR3?
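The quoted claim follows from a roofline-style argument: for a memory-bound workload, attainable throughput is capped by bandwidth times arithmetic intensity, so swapping DDR3 for GDDR5-class memory raises the cap proportionally. A minimal sketch, with illustrative numbers rather than the paper's measured figures:

```python
# Roofline sketch: attainable throughput = min(peak compute,
# memory bandwidth * arithmetic intensity). All numbers below are
# illustrative placeholders, not the paper's exact measurements.

def attainable_tops(peak_tops: float, bandwidth_gbs: float,
                    intensity_flops_per_byte: float) -> float:
    """Attainable tera-ops/s under the roofline model."""
    memory_bound_tops = bandwidth_gbs * intensity_flops_per_byte / 1000.0
    return min(peak_tops, memory_bound_tops)

# Hypothetical accelerator: high peak compute, DDR3-class bandwidth.
with_ddr3 = attainable_tops(peak_tops=92.0, bandwidth_gbs=34.0,
                            intensity_flops_per_byte=100.0)
# Same chip with GDDR5-class bandwidth: the memory roof rises,
# so the attainable throughput rises with it.
with_gddr5 = attainable_tops(peak_tops=92.0, bandwidth_gbs=180.0,
                             intensity_flops_per_byte=100.0)
print(with_ddr3, with_gddr5)  # the ratio shows the bandwidth-limited speedup
```

With these placeholder numbers the chip is memory-bound in both cases, so the speedup from faster memory is simply the bandwidth ratio, which is the shape of the 30X-50X claim in the paper.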
dicroce, about 8 years ago
Is this device optimized for forward passes, backward passes, or both?

It seems to me that Google engineers could use Teslas or other high-end GPUs for training and development, but then deploy those models on hardware optimized for forward passes...
pc2g4d, about 8 years ago
Maybe it's just me misunderstanding, but to me "inference" and "training" are one and the same. But the article defined it thus:

This first generation of TPUs targeted inference (the use of an already trained model, as opposed to the training phase of a model, which has somewhat different characteristics)

This Nvidia article treats them differently, too: https://blogs.nvidia.com/blog/2016/08/22/difference-deep-learning-training-inference-ai/

But the definition of "statistical inference" on Wikipedia says "Statistical inference is the process of deducing properties of an underlying distribution by analysis of data" which seems exactly like training.
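The distinction the article draws can be made concrete in code: inference is a forward pass through frozen weights, while training adds a backward pass that computes gradients and updates those weights. A toy sketch (not the TPU's actual workload) using a one-layer linear model:

```python
import numpy as np

# Toy one-layer linear model. "Inference" uses the weights read-only;
# "training" computes gradients and rewrites them.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # trained weights (here just random)

def forward(x, W):
    """Inference: a single forward pass with frozen weights."""
    return x @ W

def train_step(x, y, W, lr=0.1):
    """Training: forward pass, backward pass, weight update."""
    pred = x @ W                       # forward pass
    grad = x.T @ (pred - y) / len(x)   # backward pass: gradient of MSE loss
    return W - lr * grad               # updated weights

x = rng.normal(size=(8, 4))
y = rng.normal(size=(8, 3))
W2 = train_step(x, y, W)
print(forward(x, W).shape)
```

The asymmetry is why hardware can target inference alone: it only ever needs the forward path, with weights that never change during execution, which permits lower precision and simpler dataflow than the read-modify-write loop of training.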
wangqufei, about 8 years ago
This is a very, very bad idea. So-called AI is still changing, far from stable. Software can change; hardware cannot.
bsamuels, about 8 years ago
So basically they're ASICs?

Would love some tech details, but it seems the paper won't be published until 5pm today.