MTIA v1: Meta’s first-generation AI inference accelerator

110 points by thinxer about 2 years ago

13 comments

bhouston about 2 years ago
Comparing MTIA v1 vs Google Cloud TPU v4:

MTIA v1's specs: the accelerator is fabricated in TSMC's 7nm process and runs at 800 MHz, providing 102.4 TOPS at INT8 precision and 51.2 TFLOPS at FP16 precision. It has a thermal design power (TDP) of 25 W and up to 128 GB of LPDDR5 RAM.

Google's Cloud TPU v4: 275 TFLOPS (bf16 or int8), 90/170/192 W, 32 GiB of HBM2 RAM at 1200 GBps. From here: https://cloud.google.com/tpu/docs/system-architecture-tpu-vm#tpu_v4

So it seems the Google Cloud TPU v4 has the advantage in per-chip compute and memory bandwidth, while the Meta part is much more efficient (2x to 4x, it is hard to tell) and has more RAM, though slower RAM.
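A quick back-of-the-envelope on those figures (numbers taken from the comment above; the TPU v4 power spec is quoted as a range, so both bounds are shown):

```python
# Perf-per-watt comparison using only the figures quoted above.
mtia_tops, mtia_tdp_w = 102.4, 25.0          # INT8 TOPS, watts
tpu_tops = 275.0                             # int8/bf16 TOPS
tpu_tdp_low_w, tpu_tdp_high_w = 90.0, 192.0  # quoted power range

mtia_eff = mtia_tops / mtia_tdp_w            # ~4.1 TOPS/W
tpu_eff_best = tpu_tops / tpu_tdp_low_w      # ~3.1 TOPS/W
tpu_eff_worst = tpu_tops / tpu_tdp_high_w    # ~1.4 TOPS/W

print(f"MTIA v1: {mtia_eff:.1f} TOPS/W")
print(f"TPU v4:  {tpu_eff_worst:.1f}-{tpu_eff_best:.1f} TOPS/W")
print(f"MTIA advantage: {mtia_eff / tpu_eff_best:.1f}x to {mtia_eff / tpu_eff_worst:.1f}x")
```

That works out to roughly a 1.3x-2.9x efficiency advantage for MTIA, depending on which TPU power figure applies.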
htrp about 2 years ago
This looks like a customized ASIC specializing solely in recommendation systems, possibly focused on ads ranking.

> We found that GPUs were not always optimal for running Meta's specific recommendation workloads at the levels of efficiency required at our scale. Our solution to this challenge was to design a family of recommendation-specific Meta Training and Inference Accelerator (MTIA) ASICs.
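For context, the workloads in question are DLRM-style recommendation models: large, memory-bandwidth-bound embedding-table lookups feeding a comparatively small dense MLP, a mix that maps poorly onto GPUs. A minimal, purely illustrative PyTorch sketch (all sizes made up; not Meta's actual model):

```python
import torch
import torch.nn as nn

class TinyRecModel(nn.Module):
    """Illustrative DLRM-style model: sparse embedding lookups + small MLP."""

    def __init__(self, num_ids=1_000_000, emb_dim=64, dense_dim=13):
        super().__init__()
        # The embedding table dominates memory; lookups are bandwidth-bound.
        self.emb = nn.EmbeddingBag(num_ids, emb_dim, mode="sum")
        # The dense compute is tiny by comparison.
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim + dense_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, sparse_ids, offsets, dense):
        pooled = self.emb(sparse_ids, offsets)  # pool sparse features per example
        return torch.sigmoid(self.mlp(torch.cat([pooled, dense], dim=1)))

model = TinyRecModel()
ids = torch.randint(0, 1_000_000, (50,))  # 50 sparse IDs across the batch
offsets = torch.tensor([0, 10, 25, 40])   # batch of 4 examples
dense = torch.randn(4, 13)
print(model(ids, offsets, dense).shape)   # torch.Size([4, 1])
```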
seydor about 2 years ago
It's curious that nobody is selling these systems yet.
sebzim4500 about 2 years ago
Why does the headline mention only inference when the acronym also mentions training?

Is it primarily for inference, with training just an afterthought?
gmm1990 about 2 years ago
They designed it in 2020. Does that mean it has likely been in use for a while, or is there a lag of a few years between design and deployment?
rektide about 2 years ago
Can OpenXLA/IREE target it? Supposedly PyTorch 2.0's big shift was a switch to these new systems. Curious to know whether that has actually happened here.

Side note: the chip says Korea on it, and I thus expected it was Samsung... but it's a TSMC-made chip? What's up with that?
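On the PyTorch 2.0 point: torch.compile does expose pluggable compile backends, so a vendor toolchain could in principle register its own. A sketch of the mechanism (the "mtia" backend name below is a hypothetical placeholder, not a confirmed upstream target):

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU())

# Backends registered in this PyTorch install ("inductor" is the default).
print(torch._dynamo.list_backends())

# Standard path today: compile via TorchInductor.
compiled = torch.compile(model, backend="inductor")

# A vendor accelerator would register its own backend and be selected the
# same way; "mtia" here is hypothetical:
# compiled = torch.compile(model, backend="mtia")

print(compiled(torch.randn(4, 128)).shape)  # torch.Size([4, 64])
```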
ramshanker about 2 years ago
>>>> fabricated in TSMC 7nm process and runs at 800 MHz, providing 102.4 TOPS at INT8 precision and 51.2 TFLOPS at FP16 precision. It has a thermal design power (TDP) of 25 W.

So two generations of immediate process improvement are already available (TSMC has since shipped 5 nm and 3 nm).
notfried about 2 years ago
Have there been any rumors or statements from Facebook about eventually stepping into selling cloud compute? I'd be surprised if they were investing in building hardware accelerators just for their own services.
two_in_one about 2 years ago
I want one. This thing could run LLaMA 65B at int8 easily.

Meta is going to use it in its datacenters; it's much more efficient than NVIDIA's general-purpose GPUs. They are serious about putting AI everywhere.
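A rough capacity check on that claim, assuming the 65B parameter count and the "up to 128 GB of LPDDR5" figure quoted earlier in the thread:

```python
# Do 65B parameters at int8 (1 byte each) fit in 128 GB of device memory?
params = 65e9
weights_gb = params * 1 / 1e9                      # int8 => 1 byte/param
print(f"weights: ~{weights_gb:.0f} GB of 128 GB")  # ~65 GB, so yes

# Caveats: KV cache and activations need additional memory, and LPDDR5
# bandwidth (not capacity) would likely bound tokens per second.
```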
brooksbp about 2 years ago
Why are there so many Mini SMP (?) connectors on the board? (video time 1:21)
villgax about 2 years ago
Just missed FP8 implementation on hardware
tartavull about 2 years ago
How do they compare to TPUs?
0zemp2c about 2 years ago
Just as incredible is the corresponding announcement of their RSC, which is purportedly one of the world's most powerful clusters.

Amazing times! Private companies now have compute resources that previously showed up only in government labs, in many cases built with novel components like MTIA.

This feels like the start of a golden age; in a few years we will have incredible results and breakthroughs.