Linux Network Performance Ultimate Guide

196 points, by bratao, 10 months ago

7 comments

c0l0, 10 months ago
This would have been *such* a great resource for me just a few weeks ago!

We wanted to finally encrypt the L2 links between our DCs and got quotes from a number of providers for hardware appliances, and I was like, "no WAY this ought to cost that much!", and went off to try to build something myself that hauled Ethernet frames over a WireGuard overlay network at 10 Gbps using COTS hardware. I did pull it off after about ten days of work, undercutting the cheapest offer by about 70% (and the most expensive one by about 95% or so...), but there was a *lot* of intricate reading and experimentation involved.

I am looking forward to validating my understanding against the content of this article - it looks very promising and comprehensive at first and second glance! Thanks for creating and posting it.
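
For readers curious how Ethernet frames can be carried over a routed WireGuard tunnel at all: one common approach (not necessarily what the commenter built) is to run a VXLAN interface on top of the wg device and bridge it into the local L2 segment. A rough sketch driving `ip` from Python; the interface names, VNI, peer address, and MTU are hypothetical placeholders, and everything here needs root:

```python
import subprocess

def sh(cmd: str) -> None:
    """Run an ip command, raising on failure."""
    subprocess.run(cmd.split(), check=True)

# Assumes wg0 is an already-configured WireGuard tunnel between the two sites;
# 10.0.0.2 is the peer's WireGuard-internal address (hypothetical).
PEER_WG_IP = "10.0.0.2"

# VXLAN interface that encapsulates Ethernet frames into UDP over the tunnel.
sh(f"ip link add vxlan0 type vxlan id 100 dev wg0 remote {PEER_WG_IP} dstport 4789")

# Bridge the VXLAN interface with the local L2 uplink so frames cross the overlay.
sh("ip link add br0 type bridge")
sh("ip link set vxlan0 master br0")
sh("ip link set eth1 master br0")   # eth1: the local L2 uplink (hypothetical)

# MTU matters at 10 Gbps: the WireGuard and VXLAN headers must fit inside the
# underlay MTU (e.g. 1420 - 50 = 1370 over a 1500-byte path), or fragmentation
# will destroy throughput.
sh("ip link set vxlan0 mtu 1370")

for ifname in ("vxlan0", "br0"):
    sh(f"ip link set {ifname} up")
```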

hyperman1, 10 months ago
I wonder if it's worth it, with this amount of tunables, to write software to tune them automatically, gradient-descent style: choose a parameter from a whitelist at random and slightly increase or decrease it, inside a permitted range. Measure performance for a while, then undo if things got worse, or do some more if things got better.
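
A minimal sketch of that idea, assuming a reachable iperf3 server to use as the benchmark; the whitelisted sysctls, ranges, and step size are illustrative, not recommendations, and writing to /proc/sys needs root:

```python
import json
import random
import subprocess
from pathlib import Path

# Hypothetical whitelist: sysctl file -> permitted (min, max) range.
TUNABLES = {
    "/proc/sys/net/core/rmem_max":      (212_992, 67_108_864),
    "/proc/sys/net/core/wmem_max":      (212_992, 67_108_864),
    "/proc/sys/net/core/netdev_budget": (300, 2_000),
}

IPERF_SERVER = "192.0.2.1"  # hypothetical benchmark target

def read_sysctl(path: str) -> int:
    return int(Path(path).read_text().split()[0])

def write_sysctl(path: str, value: int) -> None:
    Path(path).write_text(str(value))          # requires root

def measure() -> float:
    """Run iperf3 briefly and return the sender throughput in bits per second."""
    out = subprocess.run(
        ["iperf3", "-c", IPERF_SERVER, "-t", "10", "-J"],
        capture_output=True, check=True, text=True,
    )
    return json.loads(out.stdout)["end"]["sum_sent"]["bits_per_second"]

def tune(steps: int = 50, factor: float = 1.1) -> None:
    best = measure()
    for _ in range(steps):
        path = random.choice(list(TUNABLES))
        lo, hi = TUNABLES[path]
        old = read_sysctl(path)
        # Nudge the value up or down by ~10%, clamped to the permitted range.
        new = int(old * random.choice((factor, 1 / factor)))
        write_sysctl(path, max(lo, min(hi, new)))
        if measure() >= best:
            best = measure()                   # keep the change
        else:
            write_sysctl(path, old)            # undo if things got worse

if __name__ == "__main__":
    tune()
```

A real version would want longer measurement windows and repeated runs per step, since individual benchmark runs are noisy.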

dakiol, 10 months ago
I find this cool, but as a software engineer I rarely get the chance to run any of the commands mentioned in the article. The reason: our systems run in containers that are stripped-down versions of some Linux, and I don't have shell access to production systems (and usually reproducing a bug in a dev or QA environment is useless because they are very different from prod in terms of load and the like).

So the only chance of running any of the commands in the article is when playing around with my own systems. I guess they would be useful too if I were working as a platform engineer.

betaby, 10 months ago
"net.core.wmem_max: the upper limit of the TCP send buffer size. Similar to net.core.rmem_max (but for transmission)."

And then we have `net.ipv4.tcp_wmem`, which brings up two questions: 1. why is there no IPv6 equivalent, and 2. what's the difference from `net.core.wmem_max`?
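
For reference: `net.core.wmem_max` caps the send-buffer size an application may request explicitly via setsockopt(SO_SNDBUF), while `net.ipv4.tcp_wmem` is a min/default/max triple that drives TCP's send-buffer autotuning and, despite the prefix, also applies to IPv6 TCP sockets. A small sketch that reads both from /proc/sys:

```python
from pathlib import Path

def sysctl(name: str) -> str:
    """Read a sysctl by its dotted name from /proc/sys."""
    return Path("/proc/sys", name.replace(".", "/")).read_text().strip()

# Hard cap on the buffer size applications may request with SO_SNDBUF.
wmem_max = int(sysctl("net.core.wmem_max"))

# min / default / max bytes used by TCP send-buffer autotuning;
# despite the "ipv4" prefix this also governs IPv6 TCP sockets.
tcp_min, tcp_default, tcp_max = map(int, sysctl("net.ipv4.tcp_wmem").split())

print(f"net.core.wmem_max = {wmem_max}")
print(f"net.ipv4.tcp_wmem = {tcp_min} / {tcp_default} / {tcp_max}")
```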

totallyunknown, 10 months ago
What's missing a bit here is debugging and tuning for >100 Gbps throughput. Serving HTTP at that scale often requires kTLS, because the first bottleneck that appears is memory bandwidth. Tools like AMD μProf are very helpful for debugging this. eBPF-based continuous profiling is also helpful to understand exactly what's happening in the kernel and user space. But overall, a good read!
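
Not necessarily the commenter's tooling, but as a flavour of the eBPF side: a toy bcc sketch that counts tcp_sendmsg() calls per process. Real continuous profilers sample stack traces rather than counting calls, but the mechanics (attach a small program to a kernel function, read the results from a map in user space) are the same. Requires root and the bcc Python bindings:

```python
from time import sleep
from bcc import BPF

# eBPF program: a hash map keyed by PID, incremented on every tcp_sendmsg() call.
# The kprobe__ prefix makes bcc auto-attach the function as a kprobe.
prog = r"""
BPF_HASH(counts, u32, u64);

int kprobe__tcp_sendmsg(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    counts.increment(pid);
    return 0;
}
"""

b = BPF(text=prog)
print("Counting tcp_sendmsg() calls for 10s...")
sleep(10)

# Read the map from user space and print the busiest senders first.
for pid, count in sorted(b["counts"].items(), key=lambda kv: -kv[1].value):
    print(f"pid {pid.value:>7}  tcp_sendmsg calls: {count.value}")
```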

rjgonza, 10 months ago
This seems pretty cool, thanks for sharing. So far, at least in my career, whenever we need "performance" we start with kernel bypass.

hnaccountme, 10 months ago
Thank you