So many misconceptions here about DRAM. DRAM is miraculously cheap. The process probably costs about $1.50 to $2 per GB; the rest is indeed profit. That nets them maybe $4,000 per wafer, and that includes all the testing, slicing, packaging, etc. An average CPU chip in your laptop is about the same size as maybe 3 DRAM chips, which cost around $20.

DRAM runs on a separate process that is dominated by the difficulty of building the capacitors. These are roughly the shape of a pencil (long, narrow hexagons), where the central structure that holds the capacitor needs to be etched to perfection in a process that can take days. The transistors underneath are, at that scale, about as large as the chad from a paper hole punch. The capacitors are just about as narrow as materials science can make them (the limit is voltage arcing through the insulation layers), so progress in shrinking DRAM further is glacially slow. Meanwhile the transistors, like the lines needed to join the rows and columns, are at the extreme limits of resolution for liquid-immersion lithography. Getting those perfect requires very specialized and competent processing.

They are not easy, second-rate circuits. They are a completely separate branch of the silicon world. Unfortunately, since they don't scale much any more (current design methods were mature 8 years ago), the only way you get more of them is to build new factories. That means it is a seller's market in a game where building another fab costs $10B and will only succeed if staffed by really expert people. So it is generally profitable. The 3 vendors cannot easily undercut each other, since they all have roughly the same limits, and any attempt to flood the market takes 4 years to build and everyone can see it coming.

So there you are. DRAM is the pivotal technology of the current computer era. Fixing that will most likely require breakthroughs in fundamental memory technology - or a reason for demand to collapse.
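To put very rough numbers on the per-wafer figure, here is a quick back-of-envelope sketch in Python. The wafer size, die area, density, and prices are my own assumptions for illustration, not vendor data:

    import math

    # Rough, assumed inputs - not vendor figures.
    WAFER_DIAMETER_MM = 300   # standard 300 mm wafer
    DIE_AREA_MM2 = 65         # assumed die size for a 16 Gb class DRAM die
    GB_PER_DIE = 2            # 16 Gb = 2 GB
    COST_PER_GB = 1.75        # midpoint of the $1.50-$2/GB estimate above
    PRICE_PER_GB = 3.0        # rough market price mentioned elsewhere in the thread

    wafer_area_mm2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    dies_per_wafer = int(wafer_area_mm2 / DIE_AREA_MM2)   # crude: ignores edge loss and yield
    gb_per_wafer = dies_per_wafer * GB_PER_DIE

    print(f"dies/wafer    ~ {dies_per_wafer}")                     # ~1100
    print(f"GB/wafer      ~ {gb_per_wafer}")                       # ~2200 GB
    print(f"cost/wafer    ~ ${gb_per_wafer * COST_PER_GB:,.0f}")   # ~$3,800
    print(f"revenue/wafer ~ ${gb_per_wafer * PRICE_PER_GB:,.0f}")  # ~$6,500

With those guesses the per-wafer cost lands in the same few-thousand-dollar ballpark as the comment above, and the gap to a ~$3/GB selling price is the margin being described.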
>Combined with die stacking, which allows for up to 8 dies to be stacked as a single chip, then a 40 element LRDIMM can reach an effective memory capacity of 2TB. Or for the more humble unbuffered DIMM, this would mean we’ll eventually see DIMM capacities reach 128GB for your typical dual rank configuration.

So on an 8-channel, 16-DIMM-per-socket server you could fit a theoretical 32 TB of memory. That is an insane amount of memory and great for in-memory databases. (How is Intel Optane going to compete?)

This makes me wonder: what makes DRAM so expensive? It is still hovering at a median price of around $3/GB, compared to NAND at less than $0.10/GB.
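Working backwards from the quoted capacities, the arithmetic checks out; the per-rank chip count and the ECC guess below are my assumptions, not the article's:

    # Derive per-die capacity from the quoted 128 GB dual-rank UDIMM,
    # assuming x8 parts: 2 ranks x 8 chips = 16 dies (my assumption).
    GB_PER_DIE = 128 // 16            # -> 8 GB, i.e. a 64 Gbit die
    DIES_PER_STACK = 8                # max stack height from the quote
    GB_PER_PACKAGE = GB_PER_DIE * DIES_PER_STACK   # 64 GB per stacked package

    LRDIMM_GB = 2 * 1024              # the quoted 2 TB LRDIMM
    DIMMS_PER_SOCKET = 16             # 8 channels x 2 DIMMs per channel

    print(LRDIMM_GB / GB_PER_PACKAGE)           # 32 packages' worth of data; whether the
                                                # other 8 of the 40 elements are for ECC
                                                # is my guess
    print(DIMMS_PER_SOCKET * LRDIMM_GB / 1024)  # 32.0 TB per socket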
>All the while, there are several smaller changes [...], such as [...] on-die ECC

This means we don't have to worry about ECC support by CPU/motherboard anymore, right?
I wonder if this spec increases or decreases memory access latency. The article doesn’t say, which makes me suspicious.

After all, DDR4 has higher latency than DDR3 running at the same clock speed.
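For a concrete comparison, first-word CAS latency in nanoseconds is just CL cycles divided by the memory clock, which is half the transfer rate. The module timings below are typical JEDEC-class examples, not a claim about any specific part:

    def cas_latency_ns(transfer_rate_mt_s: int, cl_cycles: int) -> float:
        # DDR transfers twice per clock, so clock (MHz) = transfer rate / 2.
        return cl_cycles / (transfer_rate_mt_s / 2) * 1000

    print(cas_latency_ns(1600, 11))   # DDR3-1600 CL11 -> ~13.8 ns
    print(cas_latency_ns(3200, 22))   # DDR4-3200 CL22 -> ~13.8 ns
    print(cas_latency_ns(4800, 40))   # DDR5-4800 CL40 -> ~16.7 ns

Cycle counts have grown roughly in step with transfer rates, so absolute access latency has tended to stay about flat, or get slightly worse, from generation to generation rather than improve.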
« The big change here is that the command and address bus is being shrunk and partitioned, with the pins being reallocated to the data bus for the second memory channel. Instead of a single 24-bit CA bus, DDR5 will have two 7-bit CA busses, one for each channel »

If there are two 32-bit data busses rather than one 64-bit bus, arithmetic suggests they shouldn't need to find extra pins from somewhere.

So maybe the rationale for shrinking the CA busses (to 7 rather than 12) is something different?
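The pin arithmetic, using only the widths quoted above:

    # CA and data widths straight from the quoted text.
    ddr4_ca_pins = 24             # single 24-bit CA bus
    ddr5_ca_pins = 2 * 7          # two 7-bit CA busses -> 14 pins
    ddr4_data_pins = 64           # one 64-bit data bus (non-ECC)
    ddr5_data_pins = 2 * 32       # two 32-bit data busses -> still 64 pins

    print(ddr4_ca_pins - ddr5_ca_pins)      # 10 CA pins freed
    print(ddr5_data_pins - ddr4_data_pins)  # 0 extra data pins needed

So for a non-ECC module the split data bus is width-neutral and the narrower CA bus actually frees pins, which is exactly the puzzle raised above; where those pins actually end up going is a fair question.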
Wonder if this spec will make it easier for embedded systems to catch up. It always seems like they lag behind what's cutting edge. Maybe that's just a cost/benefit trade-off.
This may be a stupid point, but for personal use of computers in their current form, how much memory do you really need? I’m still a little baffled as to why Chrome requires GBs of memory. Can we have lean software, please?