> # install chromedriver<p>> RUN apk update<p>> RUN apk add chromium chromium-chromedriver<p>This kind of poor example leads to a huge amount of waste when using Docker, because people learning it are not taught how layers interact, leading to ridiculous things like 'COPY' followed by 'RUN chown'.<p>Layers are a _core_ part of how Docker works, so why not make this example "correct" by doing:<p>> RUN apk --no-cache add chromium chromium-chromedriver<p>Then just add a comment like this: "it's important to group layers where possible to reduce your image size. By using `--no-cache`, apk will update at the same time as installing".
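A minimal sketch of the difference (the base image tag here is just an assumption, not from the original tutorial):

```dockerfile
FROM alpine:3.19

# Instead of two layers:
#   RUN apk update
#   RUN apk add chromium chromium-chromedriver
# one layer does both, and --no-cache avoids baking the apk index
# cache into the image at all:
RUN apk --no-cache add chromium chromium-chromedriver
```

Because each `RUN` produces a layer, the two-step version permanently ships the index cache that `apk update` wrote, even if a later layer deletes it.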
I'll admit it took me a while to realise that Docker containers aren't magical apparatuses that can run software. They are just an OS running like a VM. Yes, there are differences, but I really wish learning resources began with that.<p>I think a lot of it has to do with experts accidentally talking past beginners, missing a lot of the basics before getting into teaching abstractions.<p>It also reminds me of the feeling I'm experiencing now while learning Elasticsearch. I'm amazed at just how few JSON examples I can find online for the API. It was amazing how much it helped for a peer to say, "an index is a table, a document is a record, and it's kind of like MongoDB."<p>Furthermore, this all reminds me of the wrong atomic models taught in high school. Please just teach me a really simple but wrong explanation, then slowly work out the details.
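For what it's worth, the "index is a table, a document is a record" analogy maps directly onto the API. A hedged sketch, assuming a local Elasticsearch on port 9200 (the index and field names are made up):

```shell
# Create (or overwrite) document 1 in the "books" index --
# roughly INSERT INTO books
curl -X PUT 'localhost:9200/books/_doc/1' \
  -H 'Content-Type: application/json' \
  -d '{"title": "Moby-Dick", "author": "Herman Melville"}'

# Fetch it back -- roughly SELECT * FROM books WHERE id = 1
curl 'localhost:9200/books/_doc/1'
```

(Not runnable without an Elasticsearch instance listening on that port.)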
> The actual way containers work is a complex topic that I will not get into here, but overall the concept is simple: give me an operating system (OS) level virtualization so that I can play around with different stuff in isolation.<p>Sad to see he gave up before he started, and instead of explaining what Docker is, went off into the docker and docker-compose CLI commands. What an opaque explanation too :(<p>Docker is hard to explain, and the official documentation won't help you understand it. I'm sad that so few people are self-aware enough to combine just the right, minimal depth of concepts about kernels, operating systems, systemd, namespaces, and the fact that this all only works on Linux, to make a truly approachable explanation. Most developers are really bad at teaching; they only describe things they already know, vs actually trying to teach something.
Docker is 1) namespaces and chroot for separating processes, and 2) cgroups to limit hardware resources (CPU/RAM). Together these provide the "packaging" and a kind of "sandbox". On top of that, it adds image storage via a layered filesystem.<p>Please watch this awesome presentation: <a href="https://www.youtube.com/watch?v=zGw_xKF47T0" rel="nofollow">https://www.youtube.com/watch?v=zGw_xKF47T0</a>
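You can poke at the same primitives from the shell, which makes the point concrete. A rough sketch, assuming Linux, root, cgroup v2, and a minimal rootfs unpacked at a hypothetical /srv/rootfs:

```shell
# 1) namespaces + chroot: new PID/mount/UTS namespaces with their
#    own filesystem root -- the shell inside sees itself as PID 1
unshare --pid --fork --mount-proc --uts chroot /srv/rootfs /bin/sh

# 2) cgroups: from another terminal, cap that shell's subtree at
#    100 MB of RAM ($SHELL_PID is the PID of the shell above)
mkdir /sys/fs/cgroup/demo
echo 100M > /sys/fs/cgroup/demo/memory.max
echo "$SHELL_PID" > /sys/fs/cgroup/demo/cgroup.procs
```

That's most of a container; Docker wraps this plumbing in a daemon, an image format, and a nice CLI.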
The title was promising but it seems that the author hasn't noticed that Docker the company uses the same name for many different things. On Mac and Windows, Docker is a VM. On Linux it's what this article discusses. I don't even know what Docker Enterprise is.
> We are going to containerize our app, use container orchestration tools for deployments, and we have to install Docker.<p>You do not. You may also install Podman. Docker does not "own" containers; there is an open standard for containers that any vendor may implement.
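For example (assuming Podman is installed; images from any OCI registry work unchanged, since both tools speak the same standard):

```shell
# Podman's CLI is intentionally compatible with docker's,
# but it is daemonless and can run rootless; many people just alias it
alias docker=podman

podman run --rm alpine echo "hello from an OCI container"
```

(Not runnable here without Podman and registry access.)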
The post emphasizes that Docker (or, more properly, a container) provides weaker isolation compared to a VM, but I wish the author would expand on this topic a little.<p>Also, I'm disappointed that good old chroot is not mentioned, nor the BSD jail system.
> therefore, you will be externalizing these values and decouple them from the application, which will give you great flexibility in the long term<p>I think this is one of the worst "Best Practices" ideas that are parroted by people who haven't thought deeply about the issue. It's really a bad legacy from the era when most software was actually <i>distributed</i>. Now that most software runs in environments that are controlled by the same organization that developed the software, the principle is far less valuable.<p>Nowadays, most software should have most of its configuration information - paths, DB URLs, HTTP endpoints, etc - hard-coded into it. This strategy follows the "convention over configuration" philosophy, and it gives you a range of benefits. First of all, you can run tests on your config to make sure everything is working properly (check that various files are present, do a SELECT * LIMIT 1 from DB tables, etc). You can catch config errors at compile time, e.g. by using enums like prod/dev/qa to represent environment names. And it prompts you to apply a refactoring mindset to your config - when you notice that your config code is repeating itself extensively, you'll be able to take steps to refactor, standardize, and simplify it.
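A minimal sketch of the closed-set idea in shell (the environment names and URLs are made up): config lives in code, and an unknown environment name fails fast at startup instead of limping along half-configured.

```shell
set -eu

# Hypothetical closed set of environments -- the shell equivalent
# of an enum; anything outside it is a startup error
ENV_NAME="${1:-dev}"
case "$ENV_NAME" in
  prod) DB_URL="postgres://db.prod.internal/app" ;;
  qa)   DB_URL="postgres://db.qa.internal/app" ;;
  dev)  DB_URL="postgres://localhost/app" ;;
  *)    echo "unknown environment: $ENV_NAME" >&2; exit 1 ;;
esac

echo "using $DB_URL"
```

The same shape works in any language; with a real enum type the compiler catches the bad name, here the `case` fallthrough does.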
I'm currently working on implementing Docker support for a multi-tenant service where users can log in to their IDE from a Chromebook or whatnot... It seems Docker was not designed for that use case. And every tutorial out there assumes you are running as root and have the Docker daemon installed on your local system...
> What Exactly is Docker?<p>For 99% of the world, the answer is "a file format like .tar.gz except composable".<p>These guys are really missing their target audience's needs by a mile.
Docker is bad because while Docker images crudely compose sequentially (the fs layers), Dockerfiles don't compose at all.<p>The goal of Nix and Nixpkgs is to have efficient recipes for building <i>everything ever, in all configurations</i>. The Docker ecosystem could never get there.<p>Now, containers do make sense for deployment, but that has little to do with Docker, as those Docker replacements for Kubernetes demonstrate.
I thought Docker was a common standard that defines things: the archive format, the build file format, the config format, and what an operating system needs to support in order to be fully compatible with the docker CLI. Not just a CLI tool.<p>After all, there are already standalone Docker implementations built with completely different technology, like Docker on Windows (the one that runs .exe files).
I have a SQL library that I maintain, and it supports Postgres, MariaDB, MySQL and SQLite. I use Docker for integration testing and it works really well. I have no need for these DB engines otherwise, so making them completely go away when I'm not working on the library is excellent.
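That workflow looks roughly like this (image tag, container name, and password are assumptions; requires a running Docker daemon):

```shell
# Throwaway Postgres just for this test run; --rm means the
# container and its data vanish when it stops
docker run --rm -d --name test-pg \
  -e POSTGRES_PASSWORD=test -p 5432:5432 postgres:16

# ... run the integration test suite against localhost:5432 ...

docker stop test-pg   # everything is gone; nothing installed on the host
```

The same pattern repeats per engine (mariadb, mysql images), and SQLite needs no container at all.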