> My first attempt uses the small alpine image, which already packages thttpd:<p><pre><code> # Install thttpd
RUN apk add thttpd
</code></pre>
Wouldn't you want to use the --no-cache option with apk, e.g.:<p><pre><code> RUN apk add --no-cache thttpd
</code></pre>
It seems to slightly help with the container size:<p><pre><code> REPOSITORY       TAG      IMAGE ID       CREATED          SIZE
 thttpd-nocache   latest   4a5a1877de5d   7 seconds ago    5.79MB
 thttpd-regular   latest   655febf218ff   41 seconds ago   7.78MB
</code></pre>
It's a bit like cleaning up after yourself in apt-based container builds as well, for example (although this might not <i>always</i> be necessary):<p><pre><code> # Apache web server
RUN apt-get update && apt-get install -y apache2 libapache2-mod-security2 && apt-get clean && rm -rf /var/lib/apt/lists /var/cache/apt/archives
</code></pre>
But hey, that's an interesting goal to pursue! Personally, though, I gave up on Alpine and similar slim solutions and decided to just base all my containers on Ubuntu instead: <a href="https://blog.kronis.dev/articles/using-ubuntu-as-the-base-for-all-of-my-containers" rel="nofollow">https://blog.kronis.dev/articles/using-ubuntu-as-the-base-fo...</a>
I <i>love</i> stuff like this.<p>People will remark about how this is a waste of time, others will say it is absolutely necessary, and even more will laud it just for the fun of doing it. I'm in the middle camp. I wish software/systems engineers would spend more time optimising for size and performance.
While this is a remarkably good hack and I learned quite a bit from reading the post, I'm simply curious about the motivation behind it. A Docker image, even if it's a few MBs with Caddy/NGINX, should ideally be pulled once on the host and sit there cached. Assuming this is OP's personal server and there's not much churn, this image could stay in the cache forever until a new tag is pushed/pulled. So, from a "hack" perspective, I totally get it, but from a more pragmatic POV, I'm not quite sure.
I love it! Can you add SSL though? Does it support gzip compression? What about Brotli? I like that it's small and fast, so in addition to serving static files, can it act as a reverse proxy? What about configuration? I'd like to be able to serve multiple folders instead of just one.<p>Where can I submit a feature request ticket?
If you use "-Os" instead of "-O2", you save 8kB!<p>However, Busybox also comes with an httpd... it may be 8.8x bigger, but you also get that entire assortment of apps to let you troubleshoot, run commands in an entrypoint, run commands from the httpd/cgi, etc. I wouldn't run it in production.... but it does work :)
Redbean is just 155KB, with no need for Alpine or any other dependency. You just copy the Redbean binary and your static assets; no complicated build steps or hundred-MB downloads necessary. Check it out: <a href="https://github.com/kissgyorgy/redbean-docker" rel="nofollow">https://github.com/kissgyorgy/redbean-docker</a>
For static websites, is there any reason not to host them on GitHub?<p>Since GitHub Pages lets you attach a custom domain, it seems like the perfect choice.<p>I would expect their CDN to be pretty awesome. And updating the website with a simple git push seems convenient.
The only thing I would change: I would use Caddy and not thttpd. This way the actual binary doing the serving is memory-safe. It may well require more disk space, but it is a worthwhile tradeoff I think. You can also serve over TLS this way.
How many requests can thttpd handle simultaneously, compared to, say, nginx? It's a moot point being small if you then have to instantiate multiple containers behind a load balancer to handle concurrent requests.
Is it smaller than darkhttpd?<p><a href="https://unix4lyfe.org/darkhttpd/" rel="nofollow">https://unix4lyfe.org/darkhttpd/</a>
I used this as a base image for a static site, but then needed to return a custom status code, and decided to build a simple static file server with go. It's less than 30 lines, and image size is <5MB. Not as small as thttpd but more flexible.
Well, this will definitely serve an <i>unchanging</i> static website. But <i>unchanging</i> static websites are just archives. Most static websites have new .html and other files added on a whim regularly.
I do something similar at work for internal only static docs.<p>The image is a small container with an http daemon. It gets deployed as a statefulset and I mount a volume into the pod to store the static pages (they don't get put into the image). Then I use cert-manager and an Istio ingress gateway to add TLS on top.<p>Updating the sites (yes, several at the same domain) is done via kubectl cp, which is not the most secure but good enough for our team. I could probably use an rsync or ssh daemon to lock it down further, but I have not tried that.
Seems pretty silly. That being said, I did the exact same thing a couple years ago for work. My first attempt was to use busybox's built-in httpd, but it didn't support restarts. I vaguely recall settling on the same alpine + thttpd solution. The files being served were large, so the alpine solution was good enough.
I assume the author would then publish this behind a reverse proxy that implements TLS? Seems like an unnecessary dependency, given that Docker is perfect for solving dependency issues.
Tbh, the moment the author decided to self-host anything to serve static pages, it was already too much effort.<p>There are free ways to host static pages, and extremely inexpensive ways to host static pages that are visited millions of times per month, using services built for exactly that.
Is nothing sacred? The KuberDocker juggernaut leaves no stone unturned. Laughable given that Docker was originally designed for managing massive fleets of servers at FAANG-scale.
there are services specifically for static site hosting. I'd let them do the gritty devops work personally.<p>Netlify, Amplify, Cloudflare Pages, etc.