NVIDIA continues to vertically integrate their datacenter offerings. They bought Mellanox to get InfiniBand. They tried to buy Arm - that didn't work. But they're building and bundling CPUs anyway. I guess when you're so far ahead on the compute side, it's all the peripherals that hold you back, so they're putting together a complete solution.
This leads me to wonder about the microprocessor shortage.

So many computing devices, such as the Nvidia Jetson and the Raspberry Pi, are simply not available anywhere. What's the point of bringing out new products when the existing ones can't be purchased? Won't the new products also simply not be available?
This is interesting. Without explicitly targeting the cloud/server market for their CPU - an approach that often ends in a chicken-and-egg problem, with hyperscalers designing their own chips - Nvidia manages to enter the server CPU market by leveraging its GPU and AI workloads.

All of a sudden there is a real choice of ARM CPUs for servers. (What will happen to Ampere?) The LPDDR5X used here will also be the first to come with ECC. And they can cross-sell it with Nvidia's ConnectX-7 SmartNICs.

Hopefully it will be price competitive.

Edit: Rather than downvoting, maybe explain why or what you disagree with?
What are people's experiences of developing with NVIDIA? I know what Linus thinks: https://www.youtube.com/watch?v=iYWzMvlj2RQ
So... would something like this be a viable option for a non-Mac desktop, similar to the Mac Studio? It definitely seems targeted at cloud vendors and large labs, but it'd be great to have a box like that which could run Linux.
Given how larger non-mobile chips are jumping to the LPDDR standard, what is the point of having a separate DDR standard? Is there something about LPDDR5 that makes upgradable DIMMs impossible?
Anyone have a sense of how much these will cost? Is this more akin to the Mac Studio that costs $4k, or an A100 GPU that costs upwards of $30k? Looking for an order of magnitude.
Who bets that the amount of detailed information they'll officially[1] release about it is "none" or close to that? I still think of Torvalds' classic video whenever I hear about Nvidia. The last thing the world needs is more proprietary crap that's probably destined to become unreusable e-waste in less than a decade.

[1] https://news.ycombinator.com/item?id=30550028
I think we're all missing the forest because all the cores are in the way:

The contention on that memory means that only segregated, non-cooperative work (i.e., not joint parallelism doing atomics on the same memory) will scale better per watt on this hardware than on a vanilla 4-core Xeon from 2018.

So you might as well buy 20 Jetson Nanos and connect them over the network.

Let that sink in... NOTHING is improving at all... there is ZERO point to any hardware that CAN be released for eternity at this point.

Time to learn JavaSE and roll up those sleeves... electricity prices are never coming down (in real terms) no matter how high the interest rate.

As for GPUs, I'm calling it now: nothing will dethrone the 1030 in Gflops/W in general, and below 30 W in particular; DDR4 or DDR5, it doesn't matter.

Memory has been the latency bottleneck since DDR3.

Please respect the comment-on-downvote principle. Otherwise you don't really exist; in a quantum physical way, anyway.
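The contention claim is something you can poke at with a toy JavaSE experiment. Below is a minimal sketch (not a rigorous benchmark; the class name, thread count, and iteration count are arbitrary illustration values): threads hammering one shared atomic counter, versus threads each writing their own padded slot, which is roughly the "cooperative vs. segregated" distinction above.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy sketch: shared-atomic (contended) vs. per-thread (segregated) counting.
// Numbers are illustrative only; real results depend heavily on the machine.
public class ContentionSketch {
    static final int THREADS = 8;
    static final long ITERS = 20_000_000L;

    public static void main(String[] args) throws InterruptedException {
        System.out.printf("shared atomic:   %d ms%n", runShared());
        System.out.printf("per-thread sums: %d ms%n", runSegregated());
    }

    // All threads increment the same AtomicLong: every increment contends
    // for the same cache line, so adding cores mostly adds coherence traffic.
    static long runShared() throws InterruptedException {
        AtomicLong total = new AtomicLong();
        Thread[] ts = new Thread[THREADS];
        long start = System.nanoTime();
        for (int i = 0; i < THREADS; i++) {
            ts[i] = new Thread(() -> {
                for (long j = 0; j < ITERS; j++) total.incrementAndGet();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Each thread increments its own slot (padded to avoid false sharing):
    // no shared writes, so the work scales roughly with the core count.
    static long runSegregated() throws InterruptedException {
        long[] sums = new long[THREADS * 16];
        Thread[] ts = new Thread[THREADS];
        long start = System.nanoTime();
        for (int i = 0; i < THREADS; i++) {
            final int slot = i * 16;
            ts[i] = new Thread(() -> {
                for (long j = 0; j < ITERS; j++) sums[slot]++;
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

On most machines the segregated version finishes far faster, which is the scaling distinction the comment is gesturing at; whether that generalizes to "no new hardware helps" is a much stronger claim.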
"Grace?"<p>After 13 microarchitectures given the last names of historical figures, it's really weird to use someone's first name. Interesting that Anandtech and Wikipedia are both calling it Hopper. What on Earth are the marketing bros thinking?