
Show HN: Kubetail – Web-based real-time log viewer for Kubernetes

70 points by andres over 1 year ago
Hi Everyone!

Kubetail is a new project I've been working on. It's a private, real-time log viewer for Kubernetes clusters. You deploy it inside your cluster and access it via a web browser, like the Kubernetes Dashboard.

Using Kubetail, you can view logs in real-time from multiple workload containers simultaneously. For example, you can view all the logs from the pod containers running in a Deployment, and the UI will update automatically as the pods come into and out of existence. Kubetail uses your in-cluster Kubernetes API, so your logs are always in your possession and it's private by default.

Currently you can filter logs based on node properties such as availability zone, CPU architecture, or node ID, and we have plans for a lot more features coming up.

Here's a live demo: https://www.kubetail.com/demo

Check it out and let me know what you think!

Andres
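For a concrete sense of what "uses your in-cluster Kubernetes API" means in practice: the log-follow endpoint a viewer like this builds on is reachable from client-go. The sketch below is not Kubetail's implementation, just a minimal Go illustration (the namespace and the `app=my-deployment` label selector are made up) of following logs from every pod behind a Deployment at once:

    package main

    import (
        "bufio"
        "context"
        "fmt"
        "sync"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Runs inside the cluster, like Kubetail: use the in-cluster service account.
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        ctx := context.Background()
        // Hypothetical namespace and label selector matching the Deployment's pods.
        ns, selector := "default", "app=my-deployment"

        pods, err := clientset.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            panic(err)
        }

        var wg sync.WaitGroup
        for _, pod := range pods.Items {
            wg.Add(1)
            go func(p corev1.Pod) {
                defer wg.Done()
                // Follow the first container's logs; a real viewer would also watch
                // for pods appearing and disappearing and cover every container.
                req := clientset.CoreV1().Pods(ns).GetLogs(p.Name, &corev1.PodLogOptions{
                    Container: p.Spec.Containers[0].Name,
                    Follow:    true,
                })
                stream, err := req.Stream(ctx)
                if err != nil {
                    fmt.Println("stream error:", err)
                    return
                }
                defer stream.Close()
                sc := bufio.NewScanner(stream)
                for sc.Scan() {
                    fmt.Printf("[%s] %s\n", p.Name, sc.Text())
                }
            }(pod)
        }
        wg.Wait()
    }

Kubetail presumably layers the web UI and the node-property filtering on top of this same API, which is what keeps the logs inside the cluster.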

12 comments

akhenakh over 1 year ago
There is an existing project named kubetail, which is quite popular (3.2K stars): https://github.com/johanhaleby/kubetail
swozey over 1 year ago
This is really nice. I usually use stern for logs ON a cluster, but to aggregate all of the logs we usually use something like fluentd into Elasticsearch, which, next to something like this, is super complex for what we usually use it for, though it does of course let you search everything.
remram over 1 year ago
I don't know, at that point I think I'd rather use an actual log aggregation system like Loki or Kibana. That way I can search, including on pods that are now gone.

The niche between "easy to get but single-container and no search" on one side and "install with helm but search all containers, including historical, with full text and metrics" on the other... seems like a tiny niche to me.

edit: oh, you need to install Kubetail cluster-wide too. At least no DaemonSet, I guess.
piterrro over 1 year ago
Congrats on the launch, nice project! I recently launched https://logdy.dev (OSS: https://github.com/logdyhq/logdy-core), which attempts to address the problem in a broader space: any kind of process stdout -> web UI. You can run it with k8s (kubectl logs -f). I'm actually writing a blog post about it as we speak and will definitely mention Kubetail as well. Of course, your project addresses the problem more specifically; I just thought to mention Logdy in case somebody is looking for a Swiss-army-knife solution for all kinds of logs.
smock over 1 year ago
How is privacy enforced? Are you planning on maintaining this:

https://github.com/kubetail-org/kubetail

as an open-source repo?
flashgordon over 1 year ago
Wow, what a small world. I've been looking to build a tool EXACTLY like this (even after seeing Johan's kubetail project) but kept thinking it was too obvious, that somebody must have already built it or the k8s ecosystem already had something like this and I was just too much of a noob to find it! Only diff: my frontend was going to be htmx :). Kudos.

I suppose it is never too late :)
nodesocket over 1 year ago
Awesome project. I run a Kubernetes cluster on my homelab on 4x Raspberry Pi 4Bs. Gonna set this up tonight.

I believe there is no persistence, or does it cache in local storage or anything on the client? It would be awesome to have that option for client-side storage for perhaps 24 hours.
hobofan over 1 year ago
The demo worked fine until I added the kubetail-demo pod as a source, which crashed my browser tab. Copying the same URL into a new tab loaded the page but got stuck on "Loading logs" until that tab crashed too.
smcleod over 1 year ago
Nice, needs a screenshot in the readme.
distracteddev90 over 1 year ago
Does this work well when set up to use my local machine and my personal credentials?
cryptonector over 1 year ago
I've written a [proprietary, though I have permission to open source it] `tailfhttpd`, which is a tiny, trivial HTTP/1.1 server that supports only `HEAD`s and `GET`s of regular files, but with a twist:

  - it supports `ETag`s, with ETags derived from a file's st_dev, st_ino, and inode generation number
  - it supports setting *some* response headers via xattrs (e.g., ETag, Content-Type, Vary, Cache-Control, etc.)
  - it supports conditional requests (i.e., `If-Match:`, `If-None-Match:`, `If-Modified-Since:`)
  - it supports `Range:` requests
  - for `Range: bytes=${offset}-` `GET`s, the response does not finish (i.e., the final chunk is not sent) until the file is unlinked, the file is renamed, or the server is terminated, using inotify to find out about file unlinks/renames

It does this using `epoll`, `inotify`, and `sendfile()`, with multiple fully evented, async-I/O processes, each process being single-threaded. It is written in C in continuation-passing style (CPS) with tiny continuations, so its memory footprint per client is also tiny. As a result it is blazingly fast, though it sadly needs to be fronted with a reverse proxy for HTTPS (e.g., Nginx, Envoy, ...), but maybe I could teach it to use kssl.

I use it for tailing logs remotely, naturally, and as a poor man's Kafka. Between regular-file byte offsets, ETags, and conditional requests, one can build a reliable event publication system with this `tailfhttpd`. For example, an event stream can name the next instance ({local-part, ETag}), then be renamed out of the way to end in-progress `GET`s, and clients can resume from the new file.

With a few changes it could "tail" (watch) directories, and even allow `POST`ing events (which could be done by writing to a pipe whose reader routes events to files that get served by `tailfhttpd`).

Because `tailfhttpd` just serves files, and because of the ETag handling, conditional requests, and xattrs, it's very easy to build more complex systems on top of it -- even shell scripts will suffice.

This chunked-encoding, "hanging-GET" thing is so unreasonably effective and cheap that I'm surprised how few systems support it.

I have visions of rewriting it in Rust and supporting H2 and especially H3/QUIC to reduce the per-client load (think of TCP TCBs and buffers) even more, and of using io_uring instead of epoll for even better performance.

Oh, and this approach is fully standards-compliant. It's just a chunked-encoding, indefinite-end ("hanging") GET with all the relevant (but optional) behaviors (ETags, conditional requests, range requests; even the right end of the byte range being left unspecified is within spec!).
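The hanging-GET idea is easy to prototype in a higher-level stack too. The sketch below is not `tailfhttpd` (no epoll/inotify/sendfile, no xattrs or conditional requests); it's a rough Go illustration of the core trick: serve a file from a `Range: bytes=<offset>-` offset and hold the chunked response open, polling once per second in place of inotify, until the path no longer resolves to the same file. The path and port are made up.

    package main

    import (
        "io"
        "net/http"
        "os"
        "strconv"
        "strings"
        "time"
    )

    func tailHandler(w http.ResponseWriter, r *http.Request) {
        path := "/var/log/app.log" // hypothetical log file

        // Only open-ended ranges ("bytes=<offset>-") matter for this sketch.
        var offset int64
        if h := r.Header.Get("Range"); strings.HasPrefix(h, "bytes=") && strings.HasSuffix(h, "-") {
            offset, _ = strconv.ParseInt(strings.TrimSuffix(strings.TrimPrefix(h, "bytes="), "-"), 10, 64)
        }

        f, err := os.Open(path)
        if err != nil {
            http.Error(w, err.Error(), http.StatusNotFound)
            return
        }
        defer f.Close()
        // Remember the identity of the file we started serving. (The real server
        // derives a strong ETag from st_dev/st_ino/generation; omitted here.)
        opened, err := f.Stat()
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        if _, err := f.Seek(offset, io.SeekStart); err != nil {
            http.Error(w, err.Error(), http.StatusRequestedRangeNotSatisfiable)
            return
        }
        flusher, _ := w.(http.Flusher)

        for {
            // Ship whatever has been appended since the last pass.
            if _, err := io.Copy(w, f); err != nil {
                return // client went away
            }
            if flusher != nil {
                flusher.Flush()
            }
            // Finish the response once the file is renamed or unlinked,
            // i.e. the path no longer points at the same inode.
            cur, err := os.Stat(path)
            if err != nil || !os.SameFile(opened, cur) {
                return
            }
            time.Sleep(time.Second) // crude stand-in for inotify
        }
    }

    func main() {
        http.HandleFunc("/tail", tailHandler)
        http.ListenAndServe(":8080", nil)
    }

Since no Content-Length is ever set, net/http frames the body with chunked encoding automatically, which is exactly what keeps the GET "hanging" until the handler returns.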
arczaover 1 year ago
Nice job