
GPUs can now use PCIe-attached memory or SSDs to boost VRAM capacity

53 points by ohmyblock 11 months ago

4 comments

jauntywundrkind 11 months ago
It's CXL, not PCIe. With CXL the latency is much more like a NUMA hop, which makes this much more likely to be useful than trying to use host memory over PCIe.

CXL 3.1 was the first spec to add any way for a host CPU to share its own memory (host to host) and itself be part of RDMA. It seems like that's not going to look exactly like any other CXL memory device, so it'll take some effort before other hosts, or even the local host, can take advantage of it. https://www.servethehome.com/cxl-3-1-specification-aims-for-big-topologies/
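As background, a minimal sketch of how that tends to look in practice, assuming a Linux host with a recent kernel: CXL Type 3 memory expanders are generally exposed as CPU-less, memory-only NUMA nodes, so software sees them as a far NUMA hop rather than a block device. The snippet below uses the standard sysfs node layout; labeling a memory-only node as "CXL" is an assumption, since any CPU-less node would match.

    # Sketch: list NUMA nodes and flag memory-only ones, which is how
    # CXL-attached memory is typically surfaced on Linux.
    import glob
    import os

    for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        node = os.path.basename(node_dir)
        with open(os.path.join(node_dir, "cpulist")) as f:
            cpus = f.read().strip()
        with open(os.path.join(node_dir, "meminfo")) as f:
            # First line looks like: "Node 0 MemTotal:  263859044 kB"
            mem_kb = int(f.readline().split()[3])
        kind = "memory-only (candidate CXL expander)" if not cpus else "has CPUs"
        print(f"{node}: cpus=[{cpus or '-'}] mem={mem_kb // 1024} MiB ({kind})")

Allocations can then be steered onto such a node with numactl --membind (or libnuma), trading latency for capacity much like a remote NUMA hop.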
RecycledEle 11 months ago
Good job decreasing latency.

Now work on the bandwidth.

A single HBM3 module has the bandwidth of half a dozen data-center-grade PCIe 5.0 x16 NVMe drives. A single DDR5 DIMM has the bandwidth of a pair of PCIe 5.0 x4 NVMe drives.
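For a rough sense of the numbers behind that comparison, here is a back-of-envelope sketch; every figure is an assumption taken from commonly quoted peak spec numbers (PCIe 5.0 at roughly 3.94 GB/s per lane, DDR5-4800 at 38.4 GB/s per DIMM, an HBM3 stack at roughly 819 GB/s with 6.4 Gb/s pins), and real drives and modules vary with speed grade and link width.

    # Back-of-envelope peak bandwidths in GB/s; all figures are assumed
    # spec-sheet numbers, not measurements.
    PCIE5_PER_LANE = 3.94              # 32 GT/s with 128b/130b encoding
    pcie5_x4 = 4 * PCIE5_PER_LANE      # typical NVMe drive link
    pcie5_x16 = 16 * PCIE5_PER_LANE    # full x16 link
    ddr5_dimm = 38.4                   # DDR5-4800, 64-bit channel
    hbm3_stack = 819.0                 # 1024-bit stack at 6.4 Gb/s per pin

    print(f"DDR5-4800 DIMM ~ {ddr5_dimm / pcie5_x4:.1f}x a PCIe 5.0 x4 drive")
    print(f"HBM3 stack     ~ {hbm3_stack / pcie5_x16:.1f}x a PCIe 5.0 x16 link")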
karmakaze 11 months ago
Perhaps this would be a good application for 3D XPoint memory that was seemingly discontinued due to lack of a compelling use case.
p1esk 11 months ago
Using CPU memory to extend GPU memory seems like a more straightforward approach. Does this method provide any benefits over it?
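For reference, one existing form of that "use CPU memory" approach is CUDA managed (unified) memory, where allocations can exceed VRAM and pages are migrated between host RAM and the GPU over PCIe on demand. A minimal sketch, assuming an NVIDIA GPU with CUDA and the CuPy library:

    # Minimal sketch (assumes an NVIDIA GPU and CuPy): route allocations
    # through CUDA managed memory so arrays can be larger than VRAM, with
    # pages migrated between host RAM and the GPU as they are touched.
    import cupy as cp

    cp.cuda.set_allocator(cp.cuda.MemoryPool(cp.cuda.malloc_managed).malloc)

    n = 4 * 1024**3 // 8                  # ~4 GiB of float64, purely illustrative
    x = cp.zeros(n, dtype=cp.float64)     # may exceed physical VRAM
    x += 1.0                              # driver pages data in on access
    print(float(x.sum()))

The tradeoff, as the top comment notes, is that this paging traffic crosses the PCIe link and competes with the CPU for host DRAM, which is the gap CXL-attached memory pools are trying to close.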