
First true exascale supercomputer?

62 points, by AliCollins, almost 3 years ago

15 comments

jpsamaroo, almost 3 years ago

This is exciting news! What's also exciting is that it's not just C++ that can run on this supercomputer; there is also good (currently unofficial) support for programming those GPUs from Julia, via the AMDGPU.jl library (note: I am the author/maintainer of this library). Some of our users have been able to run AMDGPU.jl's testsuite on the Crusher test system (an attached testing system with the same hardware configuration as Frontier), as well as their own domain-specific programs that use AMDGPU.jl.

What's nice about programming GPUs in Julia is that you can write code once and execute it on multiple kinds of GPUs, with excellent performance. The KernelAbstractions.jl library makes this possible for compute kernels by acting as a frontend to AMDGPU.jl, CUDA.jl, and soon Metal.jl and oneAPI.jl, allowing a single piece of code to be portable to AMD, NVIDIA, Intel, and Apple GPUs, as well as CPUs. Similarly, the GPUArrays.jl library allows the same behavior for idiomatic array operations, and will automatically dispatch calls to vendor-provided BLAS, FFT, RNG, linear solver, and DNN libraries when appropriate.

I'm personally looking forward to helping researchers get their Julia code up and running on Frontier so that we can push scientific computing to the max!

Library link: https://github.com/JuliaGPU/AMDGPU.jl
inasio, almost 3 years ago

There's a bit of drama in that there are unofficial reports of two systems in China with higher performance [0]. The arXiv paper listed below describes a 40-million-core system with roughly double the theoretical performance of Frontier, and there's apparently a second system online with similar performance. I personally suspect that they didn't submit benchmarks to the TOP500 simply because the benchmarks don't run well enough on those systems.

[0] https://arxiv.org/pdf/2204.07816.pdf
seiferteric, almost 3 years ago

> and relies on gigabit ethernet for data transfer.

This seems surprising to me; I would have expected 10Gb at least, if not something like InfiniBand.
rektide, almost 3 years ago

> *8,730,112 total cores*

This must include the GPUs; otherwise it'd be 136,408 sockets. For a 42U rack of 4P 1U servers (not that that's what's in use, but to give an understandable napkin figure), that'd be 812 racks.

Frontier's own page says 74 "cabinets"/racks, and this is just for the compute (and perhaps switching and/or power? storage is elsewhere). It is made up of 9,408 nodes with 4 MI250X GPU accelerators each, those accelerators being dual-chip monsters with 8x HBM2e memory apiece. From AnandTech [1], we can see the liquid-cooled half-width sleds are dual socket, and packed packed packed.

[1] https://www.anandtech.com/show/17074/amds-instinct-mi250x-ready-for-deployment-at-supercomputing
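The napkin math above is easy to check in a few lines of Python. Note the 64-cores-per-socket figure is an assumption (it matches the 64-core EPYC parts used in this class of system), as is the hypothetical 4-socket 1U server density:

```python
import math

total_cores = 8_730_112        # figure quoted in the comment above
cores_per_socket = 64          # assumption: 64-core EPYC CPUs
sockets = total_cores // cores_per_socket

# Napkin figure: a 42U rack filled with 4-socket 1U servers
sockets_per_rack = 42 * 4
racks = math.ceil(sockets / sockets_per_rack)

print(sockets)  # 136408
print(racks)    # 812
```

Both numbers from the comment fall out directly, which is what makes the 74 actual cabinets (with most of the FLOPS coming from the GPUs) so striking.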
jmpman, almost 3 years ago

Back in the 2010 timeframe, there were articles about how an exascale supercomputer might be impossible. It would be interesting if someone could go back and assess where those predictions were wrong and where they held, and how the architecture changed to get around the true scaling limits.
CoastalCoder, almost 3 years ago

I used to be really excited about supercomputers. It's part of why I pursued HPC-related work.

But I think that having no interest in their actual applications has curbed my enthusiasm. I wish I could make a good living in something that interested me more.
jakear, almost 3 years ago

The spec sheet mentions they're moving from CUDA, which powered their prior supercomputer, to "HIP" for this one. This is the first I've heard of HIP; does anyone have experience with it? My impression was that GPU programming tended to mean CUDA, which isn't cross-platform (as opposed to HIP).

https://developer.amd.com/resources/rocm-learning-center/fundamentals-of-hip-programming/
marcodiego, almost 3 years ago

I remember in the early 2000s trying to convince people to use Linux and being mocked that it was a "toy" or "not professional enough". While at the time I tried to argue that it was more stable, more secure, and performed better than the competition, even arguing that it was improving continuously, some people still made fun of me. It is a good thing I've been able, for a long time, to see this: https://www.top500.org/statistics/list/ (choose Category: "Operating system family" and click "Submit").
einpoklum, almost 3 years ago

> "This HPE Cray EX system is the first US system with a peak performance exceeding one ExaFlop/s."

So, it's not actually the first one? And another one already exists outside the US?
sriram_malhar, almost 3 years ago

21 MW of power! Insane.

Interestingly, the second one is 30 MW.
peter303, almost 3 years ago

Onward to zettaflops around 2037, assuming an order of magnitude every five years. That's been pretty much the case for 60 years.
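The projection above is straightforward to reproduce. A quick sketch, taking Frontier's 2022 debut at roughly 1 EFLOP/s as the starting point (both figures come from this thread):

```python
import math

start_year, start_flops = 2022, 1e18   # Frontier: ~1 EFLOP/s in 2022
target_flops = 1e21                    # 1 zettaFLOP/s

# One order of magnitude every five years
years_needed = 5 * math.log10(target_flops / start_flops)
print(round(start_year + years_needed))  # 2037
```

Three orders of magnitude at five years each gives fifteen years, landing right on the 2037 figure.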
kvetching, almost 3 years ago

If they truly wanted to solve world problems, they would let an AGI company like DeepMind or OpenAI use it. The people using it now are likely wasting a great deal of money on outdated technologies.
causi, almost 3 years ago

It feels like it's been a long time since supercomputers were interesting. They're just oodles of identical processors connected together like Legos. "We can afford more bricks than the next guy" is not exciting. When was the last time we had a "fastest supercomputer" that could do something the second-fastest couldn't also do?
nabla9, almost 3 years ago

For comparison, the 2000-era SP Power3 375 MHz system at Oak Ridge National Laboratory delivered the same order of magnitude of GFlops as iPhones with the A14 chip can do.
linsomniac, almost 3 years ago

TL;DR: Wow! ~9 million cores, 21 megawatts, >2x the performance of #2 while pulling less power (21 MW vs. 30 MW). #3 is 0.15 EFLOPS, but also only 3 MW.
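For a rough efficiency comparison, the figures in this thread can be converted to GFLOPS per watt. A sketch under stated assumptions: the system names and the ~1.1 and ~0.44 EFLOP/s Rmax values are my reading of the June 2022 TOP500 list, not stated in the comments:

```python
# (approximate EFLOP/s, MW) -- #1 from the announcement; #2/#3 power from the comment above
systems = {
    "#1 Frontier": (1.1, 21),
    "#2 Fugaku":   (0.44, 30),
    "#3 LUMI":     (0.15, 3),
}

for name, (eflops, megawatts) in systems.items():
    gflops_per_watt = (eflops * 1e9) / (megawatts * 1e6)
    print(f"{name}: {gflops_per_watt:.1f} GFLOPS/W")
```

On these numbers, #1 and #3 come out several times more power-efficient than #2, which is the point the TL;DR is making.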