I've known about Docker for a long time, but I've never used it in serious projects. I find it hard to find good and especially up-to-date tutorials. I'm especially interested in a tutorial on how to run (feature-branch) containers which spin up on Bitbucket Pipelines/GitHub Actions for manual and automated testing of PHP and Node applications (the containers should exist for as long as the feature branch exists). I found some random YouTubers who give a nice intro/demo of some features, but not really a deep dive.
I would start with understanding what containers are. Read up on what namespaces and cgroups are. Understand first what a container is, what it gives you, and how Docker (vs other containerizers) fits into the picture. The first fundamental thing to understand is that containers are merely processes that have some sandboxing and perhaps limits applied to them: mem_cg, CFQ throttling, etc.

Once you have that under your belt it's not hard to work out how Docker itself works and how you can use it to fulfill the sort of CI/CD objectives you have outlined. Docker itself isn't important; the semantics of containerization are.

Something that Docker (and Docker-like things) take massive advantage of is overlay filesystems like AUFS and overlayfs; you would do well to understand these (at least skin deep).

Finally, networking becomes really important when you start playing with network namespaces. You should be somewhat familiar with at least the Linux bridge infrastructure and how Linux routing works.

Good luck!
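To make the "containers are just processes" point concrete, here is a rough shell sketch (assuming a Linux host with util-linux's `unshare` and Docker installed; the image and names are arbitrary):

```sh
# A process in its own PID/mount/UTS/network namespaces, no Docker involved
sudo unshare --pid --fork --mount-proc --uts --net /bin/bash
hostname sandbox   # (inside the new shell) changes the hostname only in this namespace
ps aux             # (inside the new shell) only shows processes in this PID namespace
exit

# The same idea via Docker, with cgroup limits layered on top
docker run -d --name limited --memory 256m --cpus 0.5 nginx
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited
docker top limited  # the container's processes are ordinary processes on the host
```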
The problems you describe have very little to do with Docker itself; they sound more like integration challenges.

For example, Docker has absolutely zero knowledge of a branch's lifetime, or even of branches at all. This is something you have to design using the existing capabilities of Docker together with features or existing integrations provided by GitHub or Bitbucket.

Of course, knowing Docker more deeply will help you understand these boundaries better and use them.

One secret is that there is actually not much to it: most things are just variations of docker run and various tricks within docker build, sprinkled with some volume and image management like tagging and pruning. Other orchestrators like GitHub Actions, Compose, Kubernetes, etc. can be seen as building around these basic blocks.

If you already know these basics, you are probably going to learn faster by getting your hands dirty and trying to solve the scenarios you need, rather than binge-watching tutorial #187 on YouTube.
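For a sense of how far "variations of docker run and docker build" gets you, here is a rough sketch of the day-to-day commands (the image name and registry are made up):

```sh
docker build -t myapp:feature-login .                  # build an image from a Dockerfile
docker run --rm -p 8080:80 myapp:feature-login         # run it as a throwaway container
docker tag myapp:feature-login registry.example.com/myapp:feature-login
docker push registry.example.com/myapp:feature-login   # share it with CI or a server
docker volume create myapp-data                        # persistent state lives in volumes
docker image prune -f                                  # clean up dangling layers
docker system prune --volumes                          # more aggressive cleanup
```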
Docker starts to become super useful when you have an application you are deploying that has a few `service` dependencies. Typical deployments include something like:

1) your reverse proxy, Nginx/Caddy

2) your "app", or API, whatever. Pick whatever you want: a Rails API, a Phoenix microservice, a Django monolithic app, whatever you want.

3) your database. Postgres, whatever.

4) Redis - not just for caching. You can use it for anything that requires some level of persistence, or any message bus needs. They even have some plugins you can use (iirc, limited to enterprise plans... maybe?) like RedisGraph.

5) Elasticsearch, if you need real-time indexing and search capabilities. Alternatively you can just spin up a dedicated API that leverages full-text search for your database container from 3).

6) ??? (sky is the limit!)

I prefer Docker Compose to Kubernetes because I am not a megacorp. You just define your different services, let them run, expose the right ports, and then things should "just work".

Sometimes you need to specifically *name* your containers (like naming the Redis container `redis`, and then in your code you will have to use `redis` as the hostname instead of `localhost`, for example).

basically That's It (tm)
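A minimal Compose sketch of that kind of stack might look like this (service names, images, and credentials are all made up; not a production config):

```yaml
# compose.yml
services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    build: .
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app   # service names double as hostnames
      REDIS_URL: redis://redis:6379
    depends_on:
      - db
      - redis
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
    volumes:
      - db-data:/var/lib/postgresql/data
  redis:
    image: redis:7
volumes:
  db-data:
```

Inside the Compose network each service is reachable by its service name, which is the "use `redis` instead of `localhost`" point above.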
It sounds to me like you have a project that involves on-demand deployment of containers. Like it or not, the current standard for that is Kubernetes. As someone is already saying on here, there will still be Dockerfiles and container technology involved, but not necessarily "docker compose" or "docker swarm". Just my guess, but I think part of why you're struggling to find the tutorials you're looking for is that since 2020-ish (idk, someone can correct my chronology, maybe 2018? 2021?) they've increasingly become k8s tutorials, not "docker" tutorials.

That's just my guess though :) Happy hacking!
If your goal is to "learn Docker", I have around 100+ free blog posts and ad-free YouTube videos at: https://nickjanetakis.com/blog/tag/docker-tips-tricks-and-tutorials

https://github.com/nickjj/docker-node-example is an up-to-date Node example app[0] that's ready to go for development and production and sets up GitHub Actions. Its readme links to a DockerCon talk from about a year ago that covers most of the patterns used in that project, and if not, some of my more recent blog posts cover the rest.

None of my posts cover feature-branch deployments though. That's a pretty different set of topics, mostly unrelated to learning Docker. Implementing this also greatly depends on how you plan to deploy Docker. For example, are you using Docker Compose or Kubernetes, etc.

[0]: You can replace "node" in the GitHub URL with flask, rails, django and phoenix for other example apps in other tech stacks.
I use Docker from time to time, but one question I have is more about workflow. How are people editing files within existing containers without resorting to one of the following:

1. Rebuilding the entire container which often involves stopping and starting it, etc.

2. Manually running commands that copy the files into the container. This is irritating because if I forget which files I changed or forget to run the copy command I end up with a "half updated" container.

3. SSHing into the container. This is irritating because I have to modify the port layout and permissions of the container and later remember to "restore" them when I'm "done" making the container.

Thanks!
I'm not a fan of video tutorials, but Bret Fisher's[0] Docker stuff is the exception. The production quality is flawless, the content is straight to the point, and his instruction is amazing. I cannot recommend it enough.

[0] https://www.udemy.com/user/bretfisher/
You're really asking about CI patterns and practices, which are not specific to Docker. The question is the same if using VMs.

You want to learn more about your CI system and then try things out until you hit the harder / edge cases.

Some things to try or think about:

- Push two commits quickly, so the second starts while the first is running.

- Rebuild a commit while the current build is executing. Which one writes the final image to the registry? How do you know?

- How do you tag your images? If by branch name, how do you know which build produced an image? If by commit, how do you know which branch? (A tagging sketch follows below.)

- Do you want to run the entire system per commit, shutting it down at the end of a build? Do you want to run supporting systems for the life of a branch? How do you clean up resources and not blow up your cloud budget? Do you clean up old containers each build (from old commits on this branch)? How do you clean up containers after a branch is deleted?

- Build a CI process that triggers subjobs, because eventually you may want to split things up. If you push a commit before the last build's subjob triggers, does it get the original commit or the latest commit? CI systems have nuances: Jenkins always fetches the latest commit when a job starts for a branch, so you may not be testing the code you think you are.

- Do you use a polyrepo or monorepo setup? For poly, how do you gather the right version of components for your commit? For mono, how do you build only what is necessary while still running a full integration test?

- Should you be doing integration testing inside or outside of the build system?

One of the reasons content that addresses these questions is harder to find is that the answers are highly dependent on the situation and tools. My solutions to many are handled with a mix of CUE and Python. You'll be writing code in most solutions.
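On the tagging question specifically, a hedged sketch of one common answer is to tag every build with both the commit SHA (immutable) and the branch name (moving), so both questions stay answerable (the registry and image name are made up):

```sh
IMAGE=registry.example.com/myapp
SHA=$(git rev-parse --short HEAD)
BRANCH=$(git rev-parse --abbrev-ref HEAD | tr '/' '-')

docker build \
  --label org.opencontainers.image.revision="$SHA" \
  -t "$IMAGE:$SHA" -t "$IMAGE:$BRANCH" .

docker push "$IMAGE:$SHA"      # immutable: always tells you which build produced it
docker push "$IMAGE:$BRANCH"   # moving: always points at the branch's latest build
```

The label means you can still recover the commit from a branch tag later with `docker inspect`.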
I feel like you don't need a deep dive for what you're describing.

Start step by step.

Before building on GitHub Actions, build locally.

See if you can build and tag an image with the git SHA.
Then run your automated test command against the image/container.

Then see if you can write a GitHub Action doing exactly what you did locally.

Random blog posts have been more helpful in my experience vs YouTube videos.
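A minimal sketch of the "same thing, but as a workflow" step, assuming a Node app and a hypothetical `myapp` image name:

```yaml
# .github/workflows/ci.yml
name: ci
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build an image tagged with the commit SHA
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run the test suite inside the image
        run: docker run --rm myapp:${{ github.sha }} npm test
```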
Step 0: Start with a device that "fully" supports Docker.

This is the reason I gave up on learning Docker properly. I had 3 devices at my disposal: an M1 Mac, a Windows 10 PC and an RPi. The random errors I was getting made me quite frustrated. Keep a code diary and document your mistakes and solutions.

Also, get a VPS. Never ever try a serverless solution when trying to properly learn Docker. Also, do not try to do anything that involves GPU processing.
I would also be interested to know what the typical dev workflow is like these days when working with containers.

Like do you just run e.g. nodejs or javac locally and then "deploy" to a container, or do you have a development container where you code "in it", or is a new container built on every file change and redeployed?

At my current place of work, all of this is totally abstracted away so no idea how real world people do it!
It's funny that you asked this, because just the other day I was thinking about starting a series of posts about it, but then I thought "what the hell, it's 2023, everybody should know how to use Docker already" and dismissed the idea.
My Docker Mastery course just had 17 videos added on GitHub Actions, my fav automation tool. It has vids and working YAML for container build/test workflows (the YAML is open source at github.com/bretfisher). I'm actually doing a workshop next week in Tampa at Civo Navigate called "Docker 101 in 2023" lol
Course coupons at bretfisher.com
> I find it hard to find good and especially up to date tutorials.

It is even hard to find examples of Docker usage that are unquestionably good across the board. Many people do many things in different ways, some better, some not so good. One can often find good aspects of Docker usage in projects though, like "What kind of environment variables should one let the user pass in, to avoid having to hardcode them in the image and to keep things configurable?", or "How to use multi-stage builds?". It is up to the thoughtful observer to identify those and adapt one's own process of creating Docker images.

I don't see Docker as the kind of thing that one sits down with for a few evenings and then fully knows. It's more like a thing one picks up over time. One runs into a problem, then searches for answers on how to solve that problem in a Docker scenario, then finds several answers and picks one that seems appropriate, and then learns later on whether that choice was a good one. Until then, it works for as long as that solution works. It is not like Docker is some kind of scientific thing where there is one correct answer to every question. Many things in Docker are rather ad-hoc developed solutions to problems. Just look at the language that makes up a Dockerfile and you will see the ad-hoc-ness of it all. Then there are limitations that seem just as arbitrary, for example the limited number of layers (stemming from being afraid of too much recursion not being supported by Go and not "externalizing the stack"), or not being able to change most of a container's attributes (like labels) while the container is running.

As for questions of CI and so on: I think they are separate issues, which are solved by having a good workflow for the version control system of choice. One could, for example, configure the CI to do things for specific branches, like deploying only the master branch or deploying a test branch to another machine/server. But this has nothing to do with Docker.
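Since multi-stage builds came up: here is a hedged sketch of what one can look like for a Node app (the base images, paths and build commands are assumptions, not a recommendation):

```dockerfile
# Stage 1: build with the full toolchain
FROM node:20 AS build
WORKDIR /src
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only what is needed at runtime
FROM node:20-slim
WORKDIR /app
# Keep configuration out of the image; let it be overridden at runtime
ENV PORT=3000
COPY --from=build /src/dist ./dist
COPY --from=build /src/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]
```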
It seems that you are mainly interested in how to build preview environments for your app. These posts describe an approach to get there:

- https://softwaremill.com/preview-environment-for-code-reviews-part-1-concept/

- https://softwaremill.com/preview-environment-for-code-reviews-part-2-ci-cd-pipelines-for-the-cloud/

- https://softwaremill.com/preview-environment-for-code-reviews-part-3-reverse-proxy-configuration/

While the examples use GitLab, it shouldn't be very hard to port the same idea to Bitbucket.
Julia Evans' How Containers Work! zine: https://jvns.ca/blog/2020/04/27/new-zine-how-containers-work/

It demystified a lot of Docker features for me.
I have a recent course on LinkedIn Learning that digs into the basics of Docker! Check it out if you have a subscription: https://www.linkedin.com/learning/learning-docker-17236240

I'm in the process of making a follow-up to this that covers more advanced topics. Stay tuned.

I also have a course that shows you how to use Docker for the build-test-deploy loop, though some of it is a little stale. Check that out here: https://www.linkedin.com/learning/devops-foundations-your-first-project
I'm not sure whether spinning up a Docker image through a GitHub Action is possible, or whether that makes sense for CI/CD, but here is an example repo with Node [1]. There are two actions: one for unit tests and one that builds a prod image and pushes it to my Docker Hub account. I have a compose.dev.yml file to start the containerized services, and a compose.yml to do the same in production. For prod it all depends on the cloud service you want to deploy your container to (e.g. Google Cloud Run), and there are GitHub Actions for them.

[1] https://github.com/vasilionjea/node-docker-template
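For reference, a build-and-push job of that kind often looks roughly like this (a hedged sketch; the secrets, image name and action versions are assumptions, not taken from the linked repo):

```yaml
# .github/workflows/release.yml
name: release
on:
  push:
    branches: [main]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            myuser/myapp:latest
            myuser/myapp:${{ github.sha }}
```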
There is a good series of blog posts on Linux containers: https://www.schutzwerk.com/en/blog/linux-container-intro/
Go have a look at GitLab's "Review Apps". This provides a mechanism that spins up branch-specific environments and destroys them when the branch merges, which sounds like what you are after. This is not so much about Docker (per se) but more about how to set up CI/CD-related infrastructure.

We use this mechanism with AWS, the Serverless Framework and some Terraform. It works well. With us, the only thing remotely container-related is the runtime context for the CI/CD pipeline.

That being said, you could make this work against a k8s cluster, Fargate, or just some build servers.
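The GitLab side of a review app is fairly small. A hedged sketch (the deploy/teardown scripts and the URL pattern are placeholders for whatever your infrastructure does):

```yaml
# .gitlab-ci.yml excerpt
deploy_review:
  stage: deploy
  script:
    - ./deploy.sh "review-$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
    on_stop: stop_review
  rules:
    - if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != "main"

stop_review:
  stage: deploy
  script:
    - ./teardown.sh "review-$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  rules:
    - if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != "main"
      when: manual
```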
If you already have a decent understanding of Docker you could implement this without learning anything more about Docker. GitHub secrets, actions, `ssh`, and a simple VPS (like an EC2 instance) would work here, no?
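A hedged sketch of what that could look like as a single workflow step, using plain `ssh` (the host, user, path and secret name are all assumptions):

```yaml
- name: Deploy to VPS
  env:
    SSH_KEY: ${{ secrets.DEPLOY_SSH_KEY }}
  run: |
    mkdir -p ~/.ssh
    echo "$SSH_KEY" > ~/.ssh/id_ed25519
    chmod 600 ~/.ssh/id_ed25519
    ssh-keyscan vps.example.com >> ~/.ssh/known_hosts
    ssh -i ~/.ssh/id_ed25519 deploy@vps.example.com \
      "cd /srv/myapp && docker compose pull && docker compose up -d"
```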
Sounds like your challenge is more about integrating the containers into a useful pipeline to get ephemeral (preview) environments: deploy a new env for each PR and update each env when its branch gets a new commit.

An easy way to get ephemeral envs starting from your docker-compose definition is Bunnyshell.com. It uses Kubernetes behind the scenes, but it's all pretty much abstracted away from the user. There is a free tier so you can experiment.

Disclosure: I'm part of the Bunnyshell team.
<a href="https://docs.docker.com/" rel="nofollow">https://docs.docker.com/</a><p>I use it severel times a week. Buildx, Dockerfile, etc.
I would say you need to learn about:
- docker compose: because you can describe your entire project’s environment in it
- docker compose's environment files and projects (a rough sketch follows below)
- ci/cd system of choice
- probably something like ansible to be the glue between ci/cd and docker

Good luck!
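A rough sketch of the compose environment-file and project ideas mentioned above (the variable and service names are made up):

```yaml
# compose.yml
services:
  app:
    image: myapp:${TAG:-latest}   # ${...} values come from a .env file next to this file
    env_file: .env                # env_file injects variables into the container itself
    ports:
      - "${APP_PORT:-8080}:3000"
```

With a `.env` containing something like `TAG=feature-login` and `APP_PORT=8081`, the same file can be started several times side by side as separate projects, e.g. `docker compose -p feature-login up -d` and `docker compose -p feature-signup up -d`, which is often the first step towards per-branch environments.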
There is https://devopswithdocker.com/ but I don't know if it's the best source.
I wish I could learn more about the internals. Caching trips me up sometimes (but maybe I'm doing stupid things), and the image graph is also important, to know who uses what.
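A few commands that help with both points, if it's useful (the image name is a placeholder):

```sh
docker history myapp:latest            # which Dockerfile instruction produced each layer
docker build --progress=plain .        # print cache hits/misses per build step (BuildKit)
docker system df -v                    # disk usage broken down by image, container, volume
docker ps -a --filter ancestor=myapp:latest   # containers started from this image
```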
ChatGPT. I'm not even joking, it's now my first step in learning new tech (but not too new; GPT-3 was trained on a corpus ending in 2021, I believe).

Ask it something like "Explain how to get started with Docker" and it will give you a bunch of steps in a reasonable order. Then ask it for details for each step, like:

"How do I install Docker on macOS?"

"Write a commented Dockerfile for an application written in $WHATEVER"

"Now write a commented Docker Compose file for this application and a Postgres database"

etc.
If you just want to learn it for fun and home use, just install Portainer and run Pi-hole; you'll understand a lot more. Then learn to do it via the command line. Otherwise, just go for Kubernetes, man. With the dockershim removed from Kubernetes, no one is using Docker professionally.
It's not that complicated, just do it? The best advice I can give is to shove every bit of logic you can into scripts (bash, Python, whatever) and call them from Bitbucket/Bamboo. Be as agnostic as possible of Atlassian. That way you can run things locally, and experiment and learn much more easily.
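A hedged sketch of that "thin pipeline, fat script" idea on Bitbucket (the image, script path and step layout are assumptions):

```yaml
# bitbucket-pipelines.yml
image: atlassian/default-image:4
pipelines:
  default:
    - step:
        name: Build and test
        services:
          - docker          # enables Docker commands inside the step
        script:
          - ./ci/build_and_test.sh
```

Everything interesting lives in `./ci/build_and_test.sh`, which you can also run on your laptop, so the pipeline file stays a thin wrapper you rarely have to touch.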