It's CXL, not PCIe. With CXL the latency is much closer to a NUMA hop, which makes this far more likely to be useful than trying to use host memory over PCIe.

CXL 3.1 was the first spec to add any way for a host CPU to share its own memory (host to host) and itself take part in RDMA. It seems like it's not going to look exactly like any other CXL memory device, so it'll take some effort before other hosts, or even the local host, can take advantage of it. https://www.servethehome.com/cxl-3-1-specification-aims-for-big-topologies/
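To make the "NUMA hop" framing concrete: Linux can online a CXL Type 3 expander as a CPU-less NUMA node (via dax/kmem), so software reaches it with ordinary NUMA APIs rather than an I/O path. A minimal C sketch, assuming libnuma and assuming the CXL memory shows up as node 2 (that node number is hypothetical):

    /* Sketch only: assumes a CXL Type 3 expander that the kernel has onlined
     * as a CPU-less NUMA node (node 2 here is hypothetical), and libnuma.
     * The point: CXL memory is plain load/store memory on a far node, not a
     * block device behind an I/O stack.  Build: gcc cxl_node.c -lnuma
     */
    #include <numa.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this system\n");
            return 1;
        }

        int cxl_node = 2;            /* assumed node number for the CXL expander */
        size_t len = 1UL << 30;      /* 1 GiB */

        void *buf = numa_alloc_onnode(len, cxl_node);
        if (!buf) {
            perror("numa_alloc_onnode");
            return 1;
        }

        memset(buf, 0xab, len);      /* faults pages in on the CXL-backed node */
        printf("1 GiB resident on NUMA node %d\n", cxl_node);

        numa_free(buf, len);
        return 0;
    }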
Good job decreasing latency.

Now work on the bandwidth.

A single HBM3 module has the bandwidth of half a dozen data-center-grade PCIe 5.0 x16 NVMe drives. A single DDR5 DIMM has the bandwidth of a pair of PCIe 5.0 x4 NVMe drives.
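For anyone who wants to sanity-check the ratios, here's a rough back-of-envelope in C. The figures are assumed peak rates (PCIe 5.0 at ~3.94 GB/s per lane after 128b/130b encoding, a DDR5-4800 DIMM, an HBM3 stack at 6.4 Gb/s over a 1024-bit interface), and real drives deliver less than their link's line rate:

    /* Back-of-envelope bandwidth comparison.  All figures are assumed peak
     * rates: PCIe 5.0 = 32 GT/s per lane with 128b/130b encoding, DDR5-4800
     * DIMM = 4800 MT/s x 8 bytes, HBM3 stack = 6.4 Gb/s/pin x 1024 pins.
     */
    #include <stdio.h>

    int main(void) {
        double pcie5_lane = 32.0 * 128.0 / 130.0 / 8.0;   /* ~3.94 GB/s per lane */
        double nvme_x4    = 4.0  * pcie5_lane;            /* ~15.8 GB/s */
        double nvme_x16   = 16.0 * pcie5_lane;            /* ~63.0 GB/s */
        double ddr5_dimm  = 4800e6 * 8.0 / 1e9;           /*  38.4 GB/s */
        double hbm3_stack = 6.4e9 * 1024.0 / 8.0 / 1e9;   /* 819.2 GB/s */

        printf("DDR5-4800 DIMM: %5.1f GB/s  (~%.1fx a Gen5 x4 link)\n",
               ddr5_dimm, ddr5_dimm / nvme_x4);
        printf("HBM3 stack:     %5.1f GB/s  (~%.1fx a Gen5 x16 link)\n",
               hbm3_stack, hbm3_stack / nvme_x16);
        return 0;
    }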