
Google breaks AI performance records in MLPerf using TPUv4

9 points by asparagui almost 5 years ago

1 comment

cinntaile almost 5 years ago
Under figure 1 it says "Comparisons are normalized by overall training time regardless of system size, which ranges from 8 to 4096 chips. Taller bars are better."

Does this really make sense? The new TPU should have lots of chips and therefore finish training faster, which would make comparing like this kind of pointless? Am I misunderstanding something here?
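
A rough sketch of the comparison being questioned, using made-up chip counts and training times: normalizing by overall wall-clock time rewards sheer chip count, whereas a chip-minutes view separates scale from per-chip efficiency.

```python
# Hypothetical numbers only, to illustrate the normalization the comment questions.
systems = {
    # name: (number of chips, end-to-end training time in minutes) -- made-up values
    "System A": (256, 30.0),
    "System B": (4096, 2.5),
}

baseline_name, (baseline_chips, baseline_time) = next(iter(systems.items()))

for name, (chips, minutes) in systems.items():
    # "Taller bars are better" style metric: speedup in overall training time,
    # regardless of how many chips each system used.
    speedup = baseline_time / minutes
    # An alternative view: total chip-minutes consumed, which penalizes
    # winning purely by throwing more chips at the problem.
    chip_minutes = chips * minutes
    print(f"{name}: {chips} chips, {minutes} min, "
          f"speedup vs {baseline_name} = {speedup:.1f}x, chip-minutes = {chip_minutes:.0f}")
```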