Heh, I thought "wait a minute, didn't IBM do this 20 years ago, and wasn't it a total flop?" And yes, it was, and it's one of the same people involved.

Cryogenic computing wasn't a bust because the technology didn't work. It worked. It was a bust because silicon *really* hates transitioning between cryogenic temperatures and room temperature. If you transition it quickly (say you pull a card out of the liquid nitrogen and start working on it), it will crack as it warms up unevenly. As a result, you needed anywhere from 20 to 48 hours to bring a card from cryo temperature up to room temperature, and while you could cool it a bit faster, it still took longer than just dumping it in LN2. So repairs and maintenance were multi-day affairs. Compare that to a modern AWS, Google, or Azure data center, where when a system fails a tech can skate out to it with a new motherboard, pull the old one, put the new one in, and poof, you're back online in under 30 minutes.

As a result, cryo computers either had to have failure rates so low that a repair requiring a cold/warm/cold cycle rarely happened, or you had to have enough extra hardware to carry your base load while part of it was slowly going through that cold/warm/cold cycle.

I don't know how much IBM spent, but it was a *lot*, and they never cracked that nut.
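
To make the "extra hardware" tradeoff concrete, here's a rough back-of-the-envelope sketch (the fleet size, failure rate, and cycle times below are hypothetical, not IBM's numbers): the average number of boards sitting in the repair pipeline scales with repair time, so a multi-day thermal cycle needs on the order of 100x more standing spare capacity than a 30-minute swap at the same failure rate.

    # Back-of-the-envelope spare-capacity estimate; all numbers are illustrative.
    # Average units simultaneously out of service ~= fleet size * failure rate * repair time
    # (a steady-state approximation in the spirit of Little's law).

    def expected_units_down(fleet_size: int, annual_failure_rate: float, mttr_hours: float) -> float:
        """Average number of units in the repair pipeline at any moment."""
        failures_per_hour = fleet_size * annual_failure_rate / 8760.0  # 8760 hours per year
        return failures_per_hour * mttr_hours

    FLEET = 1000   # hypothetical number of boards
    AFR = 0.05     # hypothetical 5% annual failure rate per board

    # Conventional data center: ~0.5 h motherboard swap, as above.
    print(expected_units_down(FLEET, AFR, 0.5))   # ~0.003 boards down on average

    # Cryo system: 20-48 h warm-up plus a comparable cool-down, call it ~3 days total.
    print(expected_units_down(FLEET, AFR, 72.0))  # ~0.4 boards down on average

Either way you pay: you buy the spares up front and keep them cold, or you eat multi-day outages.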