Linux added a busy polling feature for high-performance networking. Most Linux software does not use it, but software used in datacenters (e.g. by CDNs) that does use it makes the system very energy inefficient when things are not busy. This patch gives the kernel the ability to turn busy polling off when the system is not busy, regaining energy efficiency until things become busy again.

The article title is somewhat misleading, since it makes it sound like this would also apply to desktop workloads. The article says it is for datacenters, and that is true, but it would have been better had the title ended with the words “in datacenters” to avoid confusion.
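For context on the knob involved: busy polling has long been opt-in per socket (or via sysctls). Here is a rough C sketch that sets the existing SO_BUSY_POLL / SO_PREFER_BUSY_POLL socket options; it is not the new mechanism from the patch (which adapts polling per NAPI instance inside the kernel), just the kind of configuration that mechanism interacts with, and the values are purely illustrative.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Fallback values from include/uapi/asm-generic/socket.h for older libc headers. */
    #ifndef SO_BUSY_POLL
    #define SO_BUSY_POLL 46
    #endif
    #ifndef SO_PREFER_BUSY_POLL
    #define SO_PREFER_BUSY_POLL 69
    #endif

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        int busy_usecs = 50;  /* illustrative value, not a tuning recommendation */
        int prefer = 1;

        /* Spin in the driver for up to busy_usecs before sleeping on this socket. */
        if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &busy_usecs, sizeof(busy_usecs)) < 0)
            perror("SO_BUSY_POLL");

        /* Prefer busy polling over interrupts for this socket's queue (kernel >= 5.11). */
        if (setsockopt(fd, SOL_SOCKET, SO_PREFER_BUSY_POLL, &prefer, sizeof(prefer)) < 0)
            perror("SO_PREFER_BUSY_POLL");

        close(fd);
        return 0;
    }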
One thing I didn’t see mentioned here yet: a lot of high-performance data center workloads don’t actually go through the Linux kernel’s network stack at all.

Instead, they use DPDK, XDP, or userspace stacks like Onload or VMA, often with SmartNICs doing hardware offload. In those cases, this patch wouldn’t apply, since packet processing happens entirely outside the kernel.

That doesn’t mean the patch isn’t valuable: it clearly helps in setups where the kernel is in the datapath (e.g., CDNs, ingress nodes, VMs, embedded Linux systems). But it probably won’t move the needle for workloads that already bypass the kernel for performance or latency reasons. So the 30% power reduction headline is likely very context-dependent.
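For anyone unfamiliar with XDP: it hooks in at the driver, before the regular kernel network stack, which is why this patch doesn’t really touch that path. A minimal, illustrative XDP program (nothing from the article) looks like the sketch below; a real deployment would rewrite, drop, or redirect packets here instead of just passing them on.

    /* Minimal XDP program: runs at the driver hook, before the kernel's
     * regular network stack. XDP_PASS hands the packet on to the normal
     * stack; a filter or bypass program would make its decision here.
     * One common way to build: clang -O2 -g -target bpf -c xdp_pass.c -o xdp_pass.o
     */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_pass_prog(struct xdp_md *ctx)
    {
        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";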
For a more detailed look at this change: https://lwn.net/Articles/1008399/
This is really cool. As a high-performance computing professional, I've often wondered how much energy is wasted due to inefficient code and how much of a problem that is as planetary compute scales up.

For me, it feels like a moral imperative to make my code as efficient as possible, especially when a job will take months to run on hundreds of CPUs.
Does this mean that "adaptive interrupt mitigation" is no longer a thing in the kernel? I haven't really messed with it in ~15+ years, but it used to be that the kernel would adapt: if the network rate was low it would use interrupts, but above a certain point it would switch to turning off interrupts and using polling instead.

The issue I was trying to resolve was sudden, dramatic changes in traffic. Think: a loop being introduced in the switching, and the associated packet storm. In that case, interrupts could start coming in so fast that the system couldn't get enough non-interrupted time to disable the interrupts, UNLESS you have more CPUs than busy networking interfaces. So my solution then was to make sure that the Linux routers had more cores than network interfaces.
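For anyone who wants to check their own hardware: adaptive interrupt coalescing is still exposed per driver through ethtool, and NAPI still switches between interrupt-driven and polled operation. A rough C sketch for querying the coalescing settings (assuming the NIC driver implements ETHTOOL_GCOALESCE; the interface name is just an example):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(int argc, char **argv)
    {
        const char *ifname = argc > 1 ? argv[1] : "eth0";  /* example interface name */
        struct ethtool_coalesce ec = { .cmd = ETHTOOL_GCOALESCE };
        struct ifreq ifr;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ec;

        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
            perror("SIOCETHTOOL");
            close(fd);
            return 1;
        }

        /* use_adaptive_rx_coalesce is what "ethtool -c <if>" reports as adaptive-rx. */
        printf("adaptive-rx: %s\n", ec.use_adaptive_rx_coalesce ? "on" : "off");
        printf("rx-usecs: %u  rx-frames: %u\n",
               ec.rx_coalesce_usecs, ec.rx_max_coalesced_frames);
        close(fd);
        return 0;
    }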
The flip side of this is Meta having a hack that keeps their GPUs busy so that the power draw is more stable during LLM training (e.g. they don't want a huge power drop when synchronizing batches).
Off topic: glad to read about Joe Damato again — such a blast from the past. I haven't read anything from him since I first read James Gollick's posts about tcmalloc and then learned about packagecloud.io, which eventually led me to Joe's amazing posts.
Key paragraph:
"This energy savings comes with a caveat. “It is sort of a best case because the 30 percent applies to the network stack or communication part of it,” Karsten explains. “If an application primarily does that, then it will see 30 percent improvement. If the application does a lot of other things and only occasionally uses the network, then the 30 percent will shrink to a smaller value.”"
This brings back memories.

https://didgets.substack.com/p/finding-and-fixing-a-billion-bug
When a machine is large enough with enough tenants, there should be enough ambient junk going on to poll on 1 or a few cores and still be optimized for power. This is the tradeoff described in the 2019 "SNAP" paper from Google, in the section about the compacting engine.

The "up to 30%" figure is operative when you have a near-idle application that's busy polling, which is already dumb. There are several ways to save energy in that case.
> When I grew up in computer science in the 90s, everybody was concerned about efficiency... Somehow in the last 20 years, this has gotten lost. Everybody’s completely gung-ho about performance without any regard for efficiency or resource consumption. I think it’s time to pivot, where we can find ways to make things a little more efficient.

This is the sort of performance efficiency I want to keep seeing on this site, from distinguished experts who have contributed to critical systems such as the Linux kernel.

Unfortunately, in the last 10-15 years we have seen the worst technologies paraded around due to cargo-cultish behaviour: companies ask candidates to implement the most efficient solution to a problem in interviews, but then choose extremely inefficient technologies for their own problems because so-called software shops are racing for that VC money. Money that goes to hundreds of k8s instances on many over-provisioned servers instead of a few.

Performance efficiency *critically* matters, and it is the difference between having enough runway for a sustainable business and having none at all.

And no, AI agents / vibe coders could not have come up with a solution as correct as the one in the article.
I love how I can update Linux and expect it to get faster, not slower. Exception to be made for Nvidia drivers, but at least their performance is consistent.
Oh, as someone who has used DPDK-like technologies that busy-poll and bypass the kernel to process network packets, I must say that a lot of power may have been wasted...