The unanswered question is: will it be cheaper?

Price per GB of DRAM hasn't actually fallen much over the past 10 years of progression. [1] LPDDR is still over $3/GB. UDIMM is still ~$3/GB, about the same as in 2010/2011. In other words, despite what you may have heard about the DRAM price collapse in 2019, the price *floor* of DRAM has stayed essentially flat for a decade.

Every other kind of silicon has gotten cheaper (NAND, logic ICs), just not DRAM. And yet our need for DRAM is forever increasing, from in-memory datastores on servers to mobile phones with cameras shooting rapid bursts of 4K images.

Compared to NAND, or foundries like TSMC, where there are clear roadmaps for where cost is heading and what reduction we can expect in the next five years, there is nothing of the sort in DRAM. At least I don't see anything to suggest we will get $2/GB DRAM, let alone lower. I don't see how EUV is going to help either: there won't even be enough EUV TwinScan machines to go around for foundries in the next three years, let alone for NAND and DRAM.

The only good news is that low/normal-capacity ECC DRAM has finally fallen to ~$5/GB. (It used to be $10-20/GB.)

[1] https://secureservercdn.net/166.62.107.55/ff6.d53.myftpupload.com/wp-content/uploads/2019/11/2019-11-20-DRAM-Spot-2012-19.jpg
One thing I'd like to understand better about DDR5 is how well the built-in ECC will work to improve reliability. DDR5 comes with "chip level ECC" [1], whose main purpose is to let vendors sell highly complex memory chips despite minor defects.

But as a consequence, as I understand it, it *will* allow correction of single-bit memory flips. With regular DDR4 or previous generations, you don't get any error correction: any bit error in your DDR4 modules has the potential to corrupt data. If you want to be protected from that, you need to buy ECC memory.

Unfortunately, anything with "ECC" in hardware gets labeled, for unfortunate reasons, with an "enterprise" sticker. And that means a certain price level and a certain power consumption. (Yes, I know you can get Ryzen boxes that work with ECC, but that's still PC-sized hardware for hundreds of dollars.)

If DDR5 can bring error correction to the masses, like in single-board computers, 10W NAS boxes, and smartphones, that would be pretty cool. But I'm not sure whether my reading of that is correct.

[1]: https://www.anandtech.com/comments/15912/ddr5-specification-released-setting-the-stage-for-ddr56400-and-beyond/708566
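To make the single-bit-correction idea concrete, here is a minimal Python sketch using a textbook Hamming(7,4) code. This illustrates only the principle; DDR5's actual on-die ECC scheme is vendor-specific and operates on much wider words (e.g. 128 data bits plus 8 check bits).

```python
# Hamming(7,4): encodes 4 data bits into 7 bits, corrects any single bit flip.
# Toy illustration of the principle behind single-bit error correction.

def encode(d):                      # d: list of 4 data bits
    c = [0, 0, d[0], 0, d[1], d[2], d[3]]   # positions 1..7; parity at 1, 2, 4
    c[0] = c[2] ^ c[4] ^ c[6]               # p1 covers positions 1, 3, 5, 7
    c[1] = c[2] ^ c[5] ^ c[6]               # p2 covers positions 2, 3, 6, 7
    c[3] = c[4] ^ c[5] ^ c[6]               # p4 covers positions 4, 5, 6, 7
    return c

def decode(c):                      # c: 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4         # names the flipped bit position
    if syndrome:
        c[syndrome - 1] ^= 1                # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]         # extract the data bits

data = [1, 0, 1, 1]
word = encode(data)
word[5] ^= 1                        # simulate a single bit flip in memory
assert decode(word) == data         # the flip is detected and corrected
```

The syndrome directly encodes the position of the flipped bit, which is why a single flip is cheap for the chip to fix in hardware on every read.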
I like what IBM has done with their latest POWER chip: effectively making the whole memory interface upgradable.

https://www.nextplatform.com/2020/09/03/the-memory-area-network-at-the-heart-of-ibms-power10/

"the shift from dedicated DDR4 memory controllers to Serdes-based, high speed differential signaling mixed with buffer chips on memory modules that can be taught to speak DDR4, DDR5, GDDR6, 3D XPoint, or whatever, is an important shift in system design and one that we think, ultimately, the entire industry will get behind eventually."
To my hardware colleagues on HN: what prevents something similar to Dennard scaling on DRAM?

My very naive textbook knowledge is that each DRAM bit uses a single transistor and a capacitor (1T1C), whereas an SRAM cell uses six transistors.

How is it, then, with all the scaling so far, that traditional SRAM hasn't caught up with DRAM capacities? A single DRAM chip is huge compared to the total die size of any microprocessor.

As the sibling comment asks about cheaper DRAM, I'm trying to understand why SRAM hasn't caught up from a price/GB perspective.
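For a rough sense of the gap, here is a back-of-envelope comparison using commonly cited ballpark cell sizes: ~6F² for a 1T1C DRAM cell and ~120-150F² for a 6T SRAM cell, where F is the process's minimum feature size. The exact figures are assumptions for illustration, not vendor data.

```python
# Back-of-envelope cell-area comparison (ballpark figures, not vendor data).
# Cell sizes are in F^2, where F is the minimum feature size of the process.

DRAM_CELL_F2 = 6      # 1T1C DRAM cell: the long-standing 6F^2 layout
SRAM_CELL_F2 = 130    # 6T SRAM cell: ~120-150 F^2 is typical; 130 assumed

ratio = SRAM_CELL_F2 / DRAM_CELL_F2
print(f"An SRAM cell is ~{ratio:.0f}x larger than a DRAM cell")
# -> ~22x: at equal die area and equal F, DRAM stores ~20x more bits,
#    before even counting SRAM's leakage and routing overhead.
```

The capacitor stores charge in the third dimension (a deep trench or tall stack), so a DRAM cell needs far less planar area per bit than six cross-coupled transistors ever can, which is why SRAM never closes the price/GB gap.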
Will we get performance increases, and how big will they be in the average case (not just for specific code with low cache hit ratios on large datasets), attributable solely to the bandwidth increase and not to architectural IPC improvements?
Does anybody have a recommendation for a benchmarking suite that shows, in-app, where your bottlenecks are?

E.g., something as simple as CPU utilization, GPU utilization, and RAM-speed utilization?
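Not a full suite, but as a crude first probe you can time a bandwidth-bound operation against a compute-bound one yourself. A minimal Python sketch, assuming numpy is installed; a real profiler (perf, VTune, etc.) attributes bottlenecks far more precisely:

```python
# Crude bottleneck probe: compare DRAM-bandwidth-limited copy throughput
# against CPU-limited transcendental math on the same data.
import time
import numpy as np

N = 256 * 1024 * 1024 // 8          # 256 MiB of float64, far larger than cache
a = np.random.rand(N)
b = np.empty_like(a)

t0 = time.perf_counter()
np.copyto(b, a)                     # streaming copy: limited by DRAM bandwidth
t1 = time.perf_counter()
print(f"copy bandwidth: {2 * a.nbytes / (t1 - t0) / 1e9:.1f} GB/s")  # read+write

t0 = time.perf_counter()
s = np.sin(a).sum()                 # transcendental math: limited by the CPU
t1 = time.perf_counter()
print(f"compute: {a.nbytes / (t1 - t0) / 1e9:.1f} GB/s of input processed")
```

If your real workload's data rate sits near the copy number, faster RAM should help; if it sits near the compute number, it likely won't.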
When will this be available with AMD? I'm currently shopping for a new build, and willing to wait 6 months or so, but beyond that I'm not sure ...
Although this question is more academic in nature: how "difficult" is memory training/initialization compared to DDR4? I recall that an active microcontroller needs to calibrate the DRAM on startup for DDR4.
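For intuition, the core of one training step is a delay sweep: try each delay-tap setting, record which ones read back a known pattern correctly, and park the sampling point in the middle of the passing window (the "eye"). A toy Python sketch; the pass/fail oracle and tap range here are made up, and real DDR4/DDR5 training runs per lane and adds write leveling, Vref training, and more:

```python
# Toy model of one memory-training step: sweep a programmable delay tap
# and center the sampling point inside the window of taps that read
# a known pattern back correctly.

def bit_ok(tap):
    # Hypothetical oracle: in hardware this is a test read of a known
    # pattern; here we pretend taps 12..27 fall inside the data eye.
    return 12 <= tap <= 27

passing = [tap for tap in range(64) if bit_ok(tap)]
center = (passing[0] + passing[-1]) // 2
print(f"eye spans taps {passing[0]}..{passing[-1]}, centering at {center}")
```

DDR5's higher data rates shrink that eye, which is broadly why training gets harder with each generation.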
I haven't been able to find any specs on latency, and whether it has improved or not. I assume it hasn't, because it doesn't tend to, but does anyone know for sure?
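The arithmetic behind that assumption: CAS latency (CL) is counted in memory-clock cycles, and the clock rises along with the transfer rate, so the absolute latency in nanoseconds has stayed roughly flat across generations. A quick check using JEDEC-standard timings:

```python
# First-word CAS latency in wall-clock time. CL is in memory-clock cycles,
# and the clock runs at half the transfer rate (double data rate), so:
#   latency_ns = 2000 * CL / data_rate_in_MT_per_s

def cas_ns(cl, mt_per_s):
    return 2000 * cl / mt_per_s

print(f"DDR4-3200 CL22: {cas_ns(22, 3200):.2f} ns")   # 13.75 ns
print(f"DDR5-4800 CL40: {cas_ns(40, 4800):.2f} ns")   # 16.67 ns
```

So at launch timings, DDR5's absolute latency is, if anything, slightly worse; the gains are in bandwidth, not latency.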
Hooray for FINALLY putting a local DC/DC converter ON THE DIMM, so the motherboard can feed it high-voltage/low-current power instead of low-voltage/high-current. The latter has become increasingly impractical (and noisy!).
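The win is easy to quantify: conduction loss in the power-delivery path scales as I²R, so raising the feed voltage cuts the loss quadratically. A quick illustration; the wattage and path resistance below are assumed values, not measurements:

```python
# Why 12V-to-the-DIMM helps: for the same delivered power, loss in the
# board's traces and connector scales with current squared (P = I^2 * R).

P_dimm = 15.0      # watts drawn by a loaded DIMM (assumed)
R_path = 0.005     # ohms of trace/connector resistance (assumed)

for v in (1.1, 12.0):
    i = P_dimm / v
    loss_mw = i**2 * R_path * 1000
    print(f"{v:>4} V feed: {i:5.2f} A, {loss_mw:6.1f} mW lost in the path")
# 1.1 V feed -> ~930 mW lost; 12 V feed -> ~8 mW lost: over 100x less.
```

That is the same reason DDR5 moves the PMIC onto the module: the board only has to deliver 12V at about an amp, and the final step down to ~1.1V happens right next to the chips.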