Given that there are essentially no architectural details here other than bandwidth estimates, and the release timeline is in 2023, how exactly does this count as "unveiling"? The headline should read "NVidia working on new ARM chip due in two years", or something similarly bland.
Based on a future ARM Neoverse core, so basically nothing much to see here from the CPU perspective. What really stands out are the ridiculous numbers from its memory system and interconnect.

CPU: LPDDR5X with ECC at 500+ GB/s memory bandwidth. (Something Apple may dip into. R.I.P. for Macs with upgradeable memory.)

GPU: HBM2e at 2000 GB/s. Yes, three zeros, this is not a typo.

NVLink: 500 GB/s

This will surely further solidify CUDA's dominance. Not entirely sure how Intel's Xe with oneAPI and AMD's ROCm are going to compete.
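To put those figures in perspective, here's a quick back-of-envelope sketch in C. The bandwidth numbers are the ones quoted above; the 1 TB working-set size is just an arbitrary illustration, not anything from the announcement:

```c
#include <stdio.h>

/* Back-of-envelope: time to stream a (hypothetical) 1 TB working set once
 * at each of the bandwidth figures quoted above. GB here means 1e9 bytes. */
int main(void) {
    const double working_set_bytes = 1e12; /* 1 TB, arbitrary example */

    const struct { const char *name; double gb_per_s; } links[] = {
        { "LPDDR5X (CPU memory)", 500.0 },
        { "HBM2e (GPU memory)",  2000.0 },
        { "NVLink (CPU<->GPU)",   500.0 },
    };

    for (int i = 0; i < 3; i++) {
        double seconds = working_set_bytes / (links[i].gb_per_s * 1e9);
        printf("%-22s %6.0f GB/s -> %.2f s per full 1 TB sweep\n",
               links[i].name, links[i].gb_per_s, seconds);
    }
    return 0;
}
```

At 500 GB/s a single pass over 1 TB takes two seconds; at 2000 GB/s it takes half a second.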
The fact that they are using a Neoverse core licensed from ARM seems to imply that there won’t be another generation of NVidia’s Denver/Carmel microarchitectures. Somewhat of a shame, because those microarchitectures were unorthodox in some ways, and it would have been interesting to see where that line of evolution would have led.

I believe this leaves Apple, ARM, Fujitsu, and Marvell as the only companies currently designing and selling cores that implement the ARM instruction set. That may drop to 3 in the next generation, since it’s not obvious that Marvell’s ThunderX3 cores are really seeing enough traction to be worth the non-recurring engineering costs of a custom core. Are there any others?
It'd be interesting to know if NVidia are going for an ARMv9 core, in particular whether they'll have a core with an SVE2 implementation.

It may be that they don't want to detract from the focus on GPUs for vector computation, so they prefer a CPU without much vector muscle.

Also interesting that they're picking up an ARM core rather than continuing with their own design. Something to do with the potential takeover (the merged company would only want to support so many micro-architectural lines)?
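For anyone wondering what an SVE/SVE2 implementation buys you on the CPU side, here's a minimal, vector-length-agnostic SAXPY sketch using the Arm C Language Extensions intrinsics (assuming an SVE-capable toolchain such as GCC or Clang with the right -march flag; just an illustration, nothing NVIDIA-specific):

```c
#include <arm_sve.h>
#include <stdint.h>

/* y[i] = a * x[i] + y[i], written so the same binary runs on any SVE
 * vector length (128 to 2048 bits) without recompilation. */
void saxpy_sve(float a, const float *x, float *y, int64_t n) {
    for (int64_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32(i, n);      /* predicate covers the tail */
        svfloat32_t vx = svld1_f32(pg, x + i);  /* predicated loads */
        svfloat32_t vy = svld1_f32(pg, y + i);
        vy = svmla_n_f32_x(pg, vy, vx, a);      /* vy += vx * a */
        svst1_f32(pg, y + i, vy);               /* predicated store */
    }
}
```

The point of SVE over NEON is exactly this vector-length-agnostic style: the silicon vendor picks the width (Fujitsu's A64FX uses 512-bit vectors, for example) and the same code scales to it.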
So is ARM the future at this point? After seeing how well Apple's M1 performed against a traditional AMD/Intel CPU, it has me wondering. I used to think that ARM was really only suited for smaller devices.
Tangent: Apple should bring back the Xserve with their M1 line, or alternatively license the M1 core IP to another company to produce a differently-branded, server-oriented chip. The performance of that thing is mind-blowing, and I don't see how this would compete with or harm their desktop and mobile business.
Looks like NVidia broke up with IBM's POWER and made their own chip.

They have interconnects from Mellanox, GPUs, and their own CPUs now.

I suspect the supercomputing lists will be dominated by NVidia now.
I like the sound of a non-Apple ARM chip for workstations. Given my positive experience with the M1, I'd be perfectly happy never using x86 again once this market niche is filled.
"Grace, in contrast, is a much safer project for NVIDIA; they’re merely licensing Arm cores rather than building their own ..."

NVIDIA is buying ARM.
There are a lot of interconnects brewing (CCIX, CXL, OpenCAPI, NVLink, GenZ). Nvidia going big is, hopefully, a move that will prompt some uptake from the other chip makers. A 900 GB/s link, more than main memory: big numbers there.
Side note: I miss AMD being actively involved with interconnects. Infinity Fabric seems core to everything they are doing, but back in the HyperTransport days it was something known that folks could build products for and interoperate with. Not many did, but it's still frustrating to see AMD keeping its cards so much closer to the chest.
Real business-class features we want to know about:

Will they auto-detect workloads and cripple performance (like the mining stuff recently)? Will they only work through special drivers with extra licensing fees depending on the name of the building they're in (data center vs. office)?
I know we are going to hear from the Apple haters soon, or those who don't like what Apple is doing (modular, upgradeable systems going away), BUT it seems like Apple is moving in a similar direction as Nvidia.

Apple is also, I think, going to soldered-on / close-in RAM. Nvidia looks to be doing this too: CPU / GPU / RAM all close together, and it doesn't look like there are any upgrade options. Some thinking was that Apple was also continuing to increase durability / reliability etc. with their RAM move.

Does anyone know the requirements for the LPDDR5X type of RAM mentioned here? Does it require soldering things? (You obviously get lots more control if you spec the chips yourself and solder them on.)
Honestly the bottom down-voted comment has it right. What AI application is actually driving demand here? What can't be accomplished now (or with reasonable expenditures) that can be accomplished by this one CPU that will be released in 2 yrs? What AI applications will need this 2 yrs from now that don't need it now?

I understand the here-and-now AI applications. But this is smelling more like Big AI Hype than Big AI need.
Is anyone but Apple making big investments in ARM for the desktop? This is another ARM-for-the-datacenter design.

If other companies don't make genuine investments in ARM for the desktop, there's a real chance that Apple will get a huge and difficult-to-assail application performance advantage as application developers begin to focus on making Mac apps first and port to x86 as an afterthought.

Something similar happened back in the day when Intel was the de facto king, and everything on other platforms was a handicapped afterthought.

I wouldn't want my desktops to be 15 to 30% slower than Macs running the same software, simply because of emulation or a lack of local optimizations.

So I'm really looking forward to ARM competition on the desktop.
Super-parallel ARM chips: could that not be a future thing for Nvidia or another chip manufacturer? A normal CPU die with thousands of independent cores.