
Uncovering performance regressions in the TCP SACKs vulnerability fixes

32 points by rxin over 5 years ago

1 comment

djhworld over 5 years ago
This is a fun write-up to read, because I experienced exactly the same problem in a similar system dealing with lots of writes to S3: sporadic 15-minute timeouts for no immediate reason, especially as the files were only a few megabytes in size.

It led me on a similar journey of diving deep into the stack, right down to doing diffs on the kernel tree to work out what had changed between kernel versions. Eventually I came to the same conclusion, and only recently has the problem been patched in CentOS 6/RHEL 6: https://access.redhat.com/errata/RHSA-2019:2736

Interestingly, after identifying this problem, I also noticed similar behaviour on AWS Lambda shortly after June 20th, with the TCPWQueueTooBig metric spiking and causing Lambda timeouts. It took a few rounds through AWS support (and our account managers) to get them to look at it, but they eventually fixed it.

I think the common thread between this post and my experience is that we are both using a Java/JVM-based stack. When trying to reproduce the bug for Amazon I could only reproduce it with a simple Java example, whereas my attempt with Golang seemed to run fine, so I'm not really sure why that was.

Maybe I'll write a similar blog post about my findings; at least I learnt a lot from it!
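The TCPWQueueTooBig metric mentioned above corresponds to the kernel's TcpExt "TCPWqueueTooBig" counter, exposed in /proc/net/netstat on kernels that carry the SACK fixes. A minimal sketch of polling it, assuming a Linux host whose kernel exports the counter (the field name, function name, and polling interval here are illustrative assumptions, not part of the original thread):

# Hypothetical sketch: watch the TcpExt "TCPWqueueTooBig" counter in
# /proc/net/netstat and report whenever it increases.
import time

def read_tcp_wqueue_too_big(path="/proc/net/netstat"):
    """Return the TCPWqueueTooBig counter, or None if this kernel doesn't export it."""
    with open(path) as f:
        lines = f.readlines()
    # Each section is a pair of lines: "TcpExt: <names...>" then "TcpExt: <values...>".
    for names_line, values_line in zip(lines, lines[1:]):
        if names_line.startswith("TcpExt:") and values_line.startswith("TcpExt:"):
            fields = dict(zip(names_line.split()[1:], values_line.split()[1:]))
            if "TCPWqueueTooBig" in fields:
                return int(fields["TCPWqueueTooBig"])
            return None
    return None

if __name__ == "__main__":
    previous = read_tcp_wqueue_too_big()
    while previous is not None:
        time.sleep(5)
        current = read_tcp_wqueue_too_big()
        if current is None:
            break
        if current > previous:
            print(f"TCPWqueueTooBig rose by {current - previous} (now {current})")
        previous = current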
Comment #21023556 not loaded
Comment #21022781 not loaded