I use Hugo as my static site generator: a single binary, no dependencies, generating hundreds of pages in milliseconds ... so reading this feels so wrong, I want to call it JavaScript masochism.<p>Taking a simple concept like a static site and adding a ton of complex tooling around it because it is the trend now?<p>Why would you even need a Docker image to run a static website? The best thing about a static website is that you can host it anywhere without requiring any extra resources, like putting the files directly on some CDN.
This post says a lot more about the javascript ecosystem than Docker. Multi-stage image builds are nothing new or extraordinary, and in fact it's Docker 101. However, being forced to install 500MB worth of tooling and dependencies just to deploy a measly 30MB static website on nginx is something unbelievable.
I'd suggest changing this:<p><pre><code> COPY package.json .
RUN npm install
</code></pre>
to this:<p><pre><code> COPY package.json package-lock.json ./
RUN npm ci
</code></pre>
`npm ci` installs the exact dependency versions pinned in the lockfile, so transitive dependencies that were upgraded via `npm audit fix` are guaranteed to be installed.
Because the lockfile is now copied into the image, the install layer is also rebuilt whenever a transitive dependency changes; copying only the package.json wouldn't do that.
It also errors if the lockfile and the package.json are inconsistent.<p><a href="https://docs.npmjs.com/cli/ci.html" rel="nofollow">https://docs.npmjs.com/cli/ci.html</a>
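For what it's worth, a sketch of how that interacts with Docker's layer cache (the file layout here is an assumption, not taken from the article):<p><pre><code> # this layer only rebuilds when package.json or the lockfile change
 COPY package.json package-lock.json ./
 RUN npm ci
 # editing site content invalidates only the layers from here on
 COPY . .
 RUN npm run build
</code></pre>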
3 minutes to build a static blog (with a grand total of five posts) that doesn’t look any different from decade-old blogs. Pulling hundreds of MB from the Internet in the process. Wow.<p><a href="https://github.com/herohamp/eleventy-blog/tree/master/posts" rel="nofollow">https://github.com/herohamp/eleventy-blog/tree/master/posts</a>
You can probably shrink it even more. The Caddy alpine image is 14MB compressed.<p><a href="https://hub.docker.com/_/caddy/" rel="nofollow">https://hub.docker.com/_/caddy/</a><p>You also get automatic TLS certificate management and tons of other goodies that nginx doesn't offer out of the box.
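A minimal sketch of what that looks like (paths illustrative; `dist/` stands for whatever the site generator outputs):<p><pre><code> FROM caddy:2-alpine
 COPY dist/ /usr/share/caddy/
</code></pre>The official image's default Caddyfile serves /usr/share/caddy with file_server, so a plain static site needs no extra configuration.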
in my eyes this is all madness. deploying sites via github to some docker shit.<p>how about good old ftp and a cheap shared webhost?
like it's been done for 30 years.
Sometimes you can get space savings on docker images from seemingly odd sources. For example, I found that running a chown command on files after they've been COPY'd in bloats the image size significantly (100s of MB). However, at some point Docker added a "--chown" flag to the COPY command which brings it back in line.
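A rough before/after (illustrative paths):<p><pre><code> # ownership changed in a separate layer: the chown duplicates every file it touches
 COPY dist/ /usr/share/nginx/html/
 RUN chown -R nginx:nginx /usr/share/nginx/html

 # ownership set during the copy: no extra layer, no duplication
 COPY --chown=nginx:nginx dist/ /usr/share/nginx/html/
</code></pre>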
Why even have a docker image for a static site? If the site has to be built like this one then just put the output of the build process behind some webserver. We were doing this back in the 90s and didn't have to write blog posts about how <i>not</i> to make your site 400MB.
You could even use nginx on scratch to remove alpine as well.<p><a href="https://github.com/gunjank/docker-nginx-scratch/blob/master/Dockerfile" rel="nofollow">https://github.com/gunjank/docker-nginx-scratch/blob/master/...</a>
This approach is akin to installing all of the build tooling inside of Docker, <i>then</i> generating the build artifact. I'd think it'd be even slimmer to generate the build artifact first, then just copy that into the container.<p>Is there an advantage to building inside of Docker?
Aside: one should not use package.json alone to install dependencies. Use either package-lock.json (and the command "npm ci") or yarn.lock (and... I forget). Keep the lock file as part of the repo too, or each build could be different.
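For reference, the lockfile-respecting install commands look roughly like this (the Yarn flag is from classic Yarn 1; newer Yarn versions use --immutable instead):<p><pre><code> # npm: install exactly what package-lock.json pins, fail if it disagrees with package.json
 npm ci

 # yarn classic: fail instead of silently updating yarn.lock
 yarn install --frozen-lockfile
</code></pre>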
Can we put "Docker image" in the title somewhere? Otherwise, it seems like the article is talking about the site itself (i.e. having less JavaScript, optimizing the images, …)
Back in 2015, when cloud offerings were still marginally new, a lot of big providers were getting into the game with Docker offerings (e.g. IBM Bluemix) where the charge was based entirely on RAM*Hours.<p>Naturally this led to me gaming the system and making my docker images' RAM usage as small as possible. In the end I even abandoned SSH as too heavy and switched to shadowsocks (2MB resident) for networking the docker instances together.
> This docker image resulted in a 419MB final image and took about 3 minutes to build. There are some obvious issues with this. For instance every-time I change any file it must go through and reinstall all of my node_modules.<p>He doesn't say whether the changes to the Dockerfile actually improved the build time. Will the builder docker image layers get cached and thus reduce the build and deployment time?
If you're using multiple stages anyway, there's no need to use the nginx base image for every stage.<p><pre><code> FROM nginx:1.17.10-alpine as npmpackages
RUN apk add --update nodejs npm
</code></pre>
Just do:<p><pre><code> FROM node:10
RUN npm [...]</code></pre>
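Putting the two stages together, a sketch of the whole Dockerfile (not the article's exact file; the _site output folder is an assumption based on Eleventy's default):<p><pre><code> FROM node:10 as build
 WORKDIR /app
 COPY package.json package-lock.json ./
 RUN npm ci
 COPY . .
 RUN npm run build

 FROM nginx:1.17.10-alpine
 COPY --from=build /app/_site /usr/share/nginx/html
</code></pre>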
tl;dr: the author discovers multi-stage builds to throw away useless nodejs dependencies.<p>The same also applies to Java: Maven downloads tons of stuff these days, and you may only be using a single static string from a dependency.