TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Debugging a TCP socket leak in a Kubernetes cluster

61 points by alberteinstein about 7 years ago

3 comments

mbrumlow about 7 years ago
I have read this nearly four times now. I can't really find any substance -- it appears to be an advertisement for a product wrapped in an unsolved issue.

No mention of lsof, netstat, or tcpdump, the normal tools used for troubleshooting this sort of problem. Without trying to sound too snarky, I find it highly concerning that the industry is now working with tools like Docker and Kubernetes and we somehow just throw out the fact that these sit on top of Linux.

Not to mention that kubelet's ability to spot one of many tunables reaching a max still would not have solved this problem. "Fundamentally, the node was unhealthy" is not a proper answer to the problem -- what was done to resolve the memory issue is. That could be increasing tcp_mem to support the workload, or finding a user-space program that is acting faulty -- all of which we have no clue about, because no real tools for troubleshooting this were used.

I mainly write this gripe because this appears to be a problemtisement, or a blogtisement. A "helpful" but not informative blog post that exists simply to advertise your company's service as the final blurb, leaving us with no real solution, resolution, or closing to the mystery of why tcp_mem usage was higher than expected.
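For anyone landing here with the same symptom, the node-level triage mbrumlow alludes to looks roughly like this. This is only a sketch for a Linux node: `ss` largely replaces `netstat` on modern systems, and the `<peer-ip>` placeholder is yours to fill in.

```shell
# Node-level triage for a suspected TCP socket/memory leak (Linux).

# Kernel-wide socket accounting; the TCP "mem" figure is in pages.
cat /proc/net/sockstat

# tcp_mem thresholds (low, pressure, high), also in pages -- once usage
# crosses "pressure", the kernel starts throttling TCP memory allocation.
cat /proc/sys/net/ipv4/tcp_mem

# Socket-state summary; a leak shows up as counts that only ever grow.
command -v ss >/dev/null && ss -s

# Rough view of which endpoints hold the most connections
# (run as root with -p to get process names as the last field).
command -v ss >/dev/null && ss -tn | awk 'NR>1 {print $NF}' | sort | uniq -c | sort -rn | head

# Packet-level view once a culprit connection is suspected (needs root):
#   tcpdump -ni any host <peer-ip>
```

Comparing the `sockstat` "mem" figure against the `tcp_mem` pressure threshold tells you directly whether the kernel is in TCP memory pressure, instead of inferring it from flaky connectivity.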
kronin about 7 years ago
Seems that metrics providing visibility into the "network connectivity was flaky" part, like looking at response times (particularly 95/99 percentile) and digging into the pod, which gives you the node, would have isolated the problem pretty quickly to a single node. If a problem is isolated to a node, the first thing to look at would be node logs. Would that pattern not have worked in this case?
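The per-node percentile view described above can be sketched from plain access logs. The file name and the `<node> <latency_ms>` record format here are hypothetical, made up for illustration; adapt the fields to whatever your logs actually contain.

```shell
# latencies.txt: "<node> <latency_ms>", one line per request
# (hypothetical format -- substitute your own access-log fields).
printf '%s\n' \
  'node-a 12' 'node-a 15' 'node-a 14' \
  'node-b 11' 'node-b 480' 'node-b 510' > latencies.txt

# Nearest-rank p99 per node: sort each node's latencies ascending,
# then take the ceil(0.99 * n)-th sample for that node.
sort -k1,1 -k2,2n latencies.txt |
awk '{ v[$1, ++n[$1]] = $2 }
     END {
       for (node in n) {
         r = int(0.99 * n[node]); if (r < 0.99 * n[node]) r++
         print node, "p99=" v[node, r] "ms"
       }
     }' > node_p99.txt

cat node_p99.txt
```

A node whose p99 sits an order of magnitude above its peers (node-b in this toy data) is the one whose logs to read first.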
Thaxll about 7 years ago
Checking the logs should be on everyone's mind when dealing with issues.