Liquid cooling next-gen servers getting hands-on

55 points by vanburen almost 4 years ago

16 comments

KaiserPro almost 4 years ago
> Data center liquid cooling is going to happen

I mean yeah, but I doubt it's going to be mainstream anytime soon.

At the moment it's cheaper to just use forced air and yolo it. Running at half density is expensive, but not as expensive as backfilling everything with liquid cooling.

Also, the fact that we've been able to run two-socket blades at full bore without liquid cooling kinda suggests it's not actually needed.

Having radiators directly on the front/back of the rack works really well. We had it on our render farm. Combine that with enforced hot aisle/cold aisle and you can reduce the need for aircon dramatically, without multiplying your leak risk by at least 96 times.

The massive problem here is that it's really difficult to hot-swap anything. Those cooling pipes need to be removed before you can pull out the server. Unless they are self-sealing (like hydraulic lines), you need to drain the loop first. That costs a shitload of money at scale.
ChuckMcM almost 4 years ago
I cannot help but chuckle at how what is old is new again here. In the '70s and early '80s all of IBM's mainframes supported liquid cooling. Basically, when a computer "uses" X kW of power it really means that it generates that many kW of heat while it is operating. Removing heat at scale has been a thing for a long time.

And as the video alludes to, the thermal mass of air kind of sucks. So the old design of chilling air down to 67°F, filling a room with it so that it can circulate around electronics putting out prodigious amounts of heat, and then collecting and re-cooling it, is not nearly as efficient as one would like.

Cooling water, piping it to a heat exchanger in the back door of the rack, and then (unlike the video's idea) sucking air through it first and pushing the cooled air over the electronics to "re-heat" it, works better. Then you don't really care what temperature the air in the data center itself is, as long as the heat exchanger can remove X watts of heat from it before it gets blown over the computers. Suck air in from the floor (the coolest air) and blow it out the top (where it continues on to the ceiling).

Still, that only doubles the power capacity of the racks (maybe 2.5x if the heat exchanger is fed actively chilled water).

Prior to heat-exchanger doors, people would have "cold" aisles and "hot" aisles. The cold air from the CRAC units would come up from the floor behind the servers, get sucked through them, and be exhausted forward into the "hot" aisle. There is a whole little mini-industry of "cold air containment" that builds doors and covers for the cold aisle so that all of that air is sucked through the servers.
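A rough back-of-the-envelope sketch of that point about thermal mass (my own numbers, not from the video; textbook heat-capacity values and a 10 K coolant temperature rise assumed):

```python
# Back-of-the-envelope: coolant flow needed to carry Q watts of heat away
# at a temperature rise dT, using Q = mdot * cp * dT.

Q = 1000.0      # heat load, W (roughly one 1 kW server)
dT = 10.0       # allowed coolant temperature rise, K

cp_air, rho_air = 1005.0, 1.2        # J/(kg*K), kg/m^3 near room temperature
cp_water, rho_water = 4186.0, 998.0  # J/(kg*K), kg/m^3

mdot_air = Q / (cp_air * dT)         # kg/s of air
mdot_water = Q / (cp_water * dT)     # kg/s of water

vol_air = mdot_air / rho_air         # m^3/s
vol_water = mdot_water / rho_water   # m^3/s

print(f"Air:   {mdot_air:.3f} kg/s  (~{vol_air * 2118.88:.0f} CFM)")
print(f"Water: {mdot_water:.4f} kg/s (~{vol_water * 60000:.1f} L/min)")
print(f"Volumetric flow ratio, air/water: ~{vol_air / vol_water:.0f}x")
```

At the same temperature rise, water carries the same kilowatt in roughly 1/3000th of the volumetric flow of air, which is the whole argument for rear-door heat exchangers: move the heat in water as far as possible and use air only for the last few centimetres.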
devwastaken almost 4 years ago
I'm curious why liquid cooling for computers still uses compression fittings and other odd methods. Compression fittings are widely out of favor due to their tendency to leak, and nowadays everything is copper, PEX, or metal with flared fittings. I wonder if there's a PC running PEX-A tubing.

Further thoughts:

Brake lines use flared fittings, with either metal tubing or plastic tubing with metal ends. They use a special bolt that lets fluid pass through the line into the caliper, and it is removable.

I would imagine something like that could work to make the lines to individual servers serviceable without relying on flaky plastic "quick connects".
opwieurposiu almost 4 years ago
A lot of high-power plasma processing equipment is water-cooled: RF and DC generators, matching networks, etc. Stuff that requires a coax cable as thick as your arm. The switch from air to water cooling happens around 0.5-1 kW, which is the same range they seem to be targeting for datacenter stuff.

Aside from leaks, the building maintenance staff really needs to stay on top of the cooling-water chemistry. Cooling loops tend to grow slime, and if you let it get out of control it clogs up everything and becomes a real problem to clean.
20100thibault almost 4 years ago
Liquid cooling also enables heat recovery and free cooling all year long. Here's a project using liquid cooling to recover energy from data centers to heat greenhouses: https://www.qscale.com/
jeffbee almost 4 years ago
It seems odd to discuss liquid cooling but spend the first part of the article talking about AC PDUs and their issues. If you are able to use liquid cooling in a data center, aren't you also sophisticated enough for modern DC-to-point-of-load power delivery?

I can see why Supermicro needs to pursue this direction, but it seems inevitable that their business will get eaten up by OCP and integration at the rack, row, and building level.
jmcguckin almost 4 years ago
One place I worked used Fluorinert to cool the circuit cards of an integrated circuit tester.

Each card has a "cold plate". The outside of the cold plate has a brass tube brazed to it in a serpentine pattern, leading to two quick-disconnect connectors that automatically engage/disengage when the card is inserted into or removed from the card cage.

The other side of the cold plate (the "inside") has spring-loaded fingers to pull heat away from the chips. We had some chips that dissipated over 100 W.

This worked great; you'd lose maybe one drop of fluid per card insertion. We used Fluorinert, but DI water, mineral oil, or water with ethylene glycol could be used as well.
Const-me almost 4 years ago
Technically, the best solution is to distribute: compute stuff on users' own devices, as opposed to moving everything to these centralized clouds controlled by just a few large corporations.

Too bad that's unlikely to happen.
alberth almost 4 years ago
What makes this time different?

I've literally heard that this is coming to the datacenter for 20 years, and it hasn't happened yet.

Liquids like mineral oil, etc. were supposed to be the cooling agent, yet it never gained meaningful adoption.

And with so many servers now centrally managed by just a few major cloud providers (AWS, Azure, etc.), unless you can break into those few accounts, how will this time be any different from the past?

From 2003: https://www.hitachi.com/New/cnews/E/2003/0217/index.html
titzer almost 4 years ago
Cool stuff. I remember seeing liquid cooling solutions that immersed the entire system in some non-conductive fluid, like mineral oil. Of course, that's a bit messy, but if all components were designed to be immersed, I wonder whether it'd be feasible to treat the entire server enclosure as the watertight unit, e.g. in a rack, rather than running hoses to individual servers.
bob1029 almost 4 years ago
Direct liquid cooling makes a shitload of sense from a purely academic standpoint.

At what point in power density would it not even matter whether the datacenter was at meat-locker temperatures? I feel like we are getting pretty damn close.

The efficiency gains at scale must be really good. I can see why this is not super popular though.
bastardoperator almost 4 years ago
Data centers have been using chilled water to cool air for a long time. It would be interesting to see cooling delivered directly to the CPU/GPU from the chiller.
cyberge99 almost 4 years ago
I think datacenters will opt for more ARM architectures before introducing liquid at scale. Things like Graviton and Apple Silicon are changing the landscape there.
Havoc almost 4 years ago
Is there a reason the AI accelerators etc. can't just be parked inside the Arctic Circle?

Not every workload is that latency-sensitive, presumably.
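For a sense of what "latency-sensitive" means at those distances, a rough propagation-delay estimate (my own numbers, ignoring routing detours and queueing; light in fiber travels at roughly c/1.47):

```python
# Rough round-trip propagation delay to a far-away (e.g. Arctic) data center.
# Ignores routing detours, switching and queueing delays.

C_FIBER_KM_PER_S = 299_792 / 1.47   # ~204,000 km/s, speed of light in fiber

def rtt_ms(distance_km: float) -> float:
    """Round-trip time in milliseconds over an idealized straight fiber run."""
    return 2 * distance_km / C_FIBER_KM_PER_S * 1000

for km in (500, 2000, 4000):
    print(f"{km:>5} km -> ~{rtt_ms(km):.0f} ms RTT")
# ~5 ms, ~20 ms, ~39 ms: irrelevant for batch training jobs,
# noticeable for interactive serving.
```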
zamalek almost 4 years ago
> need to handle cooling at >1 kW/U

I wonder if this waste heat could be used to power active/phase-change cooling.
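To put a number on >1 kW/U, a quick sketch with assumed figures (42U rack, 1 kW per U, 10 K water temperature rise; none of these are from the article):

```python
# What ">1 kW/U" adds up to at rack scale, and the water flow that carries it.

RACK_UNITS = 42          # assumed full-height rack
POWER_PER_U_W = 1000.0   # 1 kW per rack unit
DELTA_T_K = 10.0         # allowed water temperature rise
CP_WATER = 4186.0        # J/(kg*K)

rack_heat_w = RACK_UNITS * POWER_PER_U_W        # 42 kW of heat per rack
mdot = rack_heat_w / (CP_WATER * DELTA_T_K)     # ~1.0 kg/s of water
litres_per_min = mdot * 60                      # water is ~1 kg per litre

print(f"Rack heat load: {rack_heat_w / 1000:.0f} kW")
print(f"Water flow:     {mdot:.2f} kg/s (~{litres_per_min:.0f} L/min)")
# Roughly 60 L/min of warm water per rack, which is also why heat-recovery
# schemes like the greenhouse project mentioned upthread become interesting.
```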
swayvil almost 4 years ago
Why is that guy wearing a mask? Either servers are infectious, or wearing a mask is now just "expected".

Do any of you do a double take when you see people on TV without a mask?

Have you noticed characters in cartoons wearing masks? Check out the weather frog, sitting alone in the middle of a forest, wearing a mask.

This is how we become Morlocks.