Seems to be a pretty decent overview; it covers the usual suspects (multi-stage builds, FROM scratch, non-scratch minimal images, ldd to check libraries), with some nice bits I'd not seen before (busybox:glibc). I'd be curious to see how these base images stack up against Google's "distroless" base images (https://github.com/GoogleContainerTools/distroless). I also appreciate that they call out Alpine's compatibility issues (on account of musl) but still leave it as something that can be good if you use it right. (Personally I'm quite fond of Alpine, but I don't bother when using binaries that expect glibc.)
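For comparison, here's a minimal multi-stage sketch targeting distroless (the Go builder, image tags, and the binary name `server` are assumptions for illustration):

```dockerfile
# Hedged sketch: multi-stage build onto a distroless base.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# Static binary, so the runtime image needs no libc at all.
RUN CGO_ENABLED=0 go build -o /server .

# distroless/static ships CA certs, tzdata, and a passwd file, but no
# shell or package manager. For binaries that expect glibc, swap in
# gcr.io/distroless/base-debian12 instead.
FROM gcr.io/distroless/static-debian12
COPY --from=build /server /server
ENTRYPOINT ["/server"]
```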
While such articles are usually helpful, I'd caution that making individual image sizes as small as possible shouldn't really be your goal.

As a simple example: if you have 100 unique images in your system, 100 images of 1 GB each where 99% is derived from a common layer take up far less space overall than images "optimized" down to 100 MB each at the cost of giving up the shared base layers.
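To put rough numbers on that (using the comment's own figures): one shared ~1 GB base plus 100 × ~10 MB of unique layers is about 2 GB on disk, while 100 fully independent 100 MB images are the full 10 GB. Docker can report how much your local images actually share:

```sh
# Per-image SHARED SIZE vs UNIQUE SIZE breakdown; shared layers are
# stored once by the storage driver, not once per image.
docker system df -v
```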
Why is image size important? Should we instead optimise for speed of build?

If storage is cheap and CPU costs CO2, does it make sense to spend more time and more energy to save disk space?
It's worth noting that golang builds can be smaller than that with `GOOS=linux go build -ldflags="-s -w" .` (assuming a build on macOS for Linux). From there I usually run `upx --ultra-brute -9 program` before dropping it into a `scratch` Docker container (plus whatever other deps it needs).
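A sketch of that workflow as a single multi-stage build (the upx install step, Go version, and binary name `program` are assumptions; `CGO_ENABLED=0` is added so the binary is fully static and actually runs in `scratch`):

```dockerfile
FROM golang:1.22 AS build
# upx is packaged as upx-ucl on Debian-based images (assumption).
RUN apt-get update && apt-get install -y --no-install-recommends upx-ucl
WORKDIR /src
COPY . .
# Strip symbol and DWARF tables, then compress the result.
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o /program . \
 && upx --ultra-brute -9 /program

FROM scratch
COPY --from=build /program /program
ENTRYPOINT ["/program"]
```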
This is indeed a good overview.

It would have been *a great* overview had it started by briefing readers on why (or when) image size should bother us at all.