Containers are great for shipping code to Prod, but my friends and I find them frustratingly painful for local dev: I have to wait on an image build to do anything, it's easy to accidentally invalidate the Docker layer build cache, I don't get my language's build cache unless I jump through extra hoops to mount it into the build image, I sometimes need to deal with file perm mismatches when mounting, attaching a debugger becomes a remote debugger incantation, and sometimes the language itself just seems to make containerization painful (looking at you, Rust).<p>Am I missing a tool or something? Shouldn't I be able to run my server in my IDE and proxy it into a Compose network or Kubernetes namespace, so I get my IDE tools for free? Or at least have my Docker container run in "watch" mode, where a change to one of the files the container is based on restarts the process with the new files?
You prepackage an image with everything you need to run your app, then you mount your local source directory as a volume over the path your Dockerfile COPYs it to (the volume mount overrides that filesystem path). Now you can run your container, edit your code, and watch live-reload do its thing (if you have that). When it's time to deploy, simply don't mount your local directory and let the COPY do its thing.<p>Also, any 3rd-party services (a database, for example) can be handled with a docker-compose.local.yml that omits your published app image and instead builds it from the local directory.
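A sketch of that layout, with illustrative service and path names:

```yaml
# docker-compose.local.yml: overlay this on the base compose file so the
# bind mount shadows the path your Dockerfile COPYs the code into.
services:
  app:
    build: .
    volumes:
      - ./:/app   # live edits override the baked-in copy; omit for deploys
```

Run it with `docker compose -f docker-compose.yml -f docker-compose.local.yml up`; for a deploy build, use only the base file and the COPY'd code ships.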
It's a lot to learn, but Nix solves your issues precisely.<p>You can build lean, layered Docker images with it easily, and deploy those to container services like any other.<p>But you don't have to use those containers for development. You use Nix to set up your dev env (a lot will come for free once you have your code packaged for the container).<p>Nixpkgs has support for most mainstream languages nowadays, with varying levels of polish; the more popular ones have smoother Nix integration.<p>Now, if you _do_ want to use the container locally, you can do that too. And it will benefit from non-fragile caching thanks to Nix.<p>But tbh, if you need to replicate prod precisely to do local dev, you should probably consider figuring out how to build and test your components with confidence in isolation. Local simulation of prod can be useful sometimes, but if it's your default, you can do better.
You don't necessarily need to run the application in a freshly built container image every time. A container can be a REPL (shell) that you use like a regular terminal: start it, then run your app inside. Building a container image for every change you make in your editor doesn't sound optimal at all. Alternatively, you could run containers only for the other things your app uses (db, caching server, etc.) and run the app itself in a regular terminal, with the container stuff bound to local ports your app can talk to.<p>Sounds like you need to reassess how you're using and thinking about containers when doing dev locally.<p>Look into running a container with the stuff your app needs installed, but running a shell instead of your app directly. Then look into mounting your source directory into the container using Docker's bind mounts (or whatever your container tool provides). Things like auto-reloading (if your app supports it) should then work via inotify.<p>And by "app" I'm referring to whatever you're developing, most likely some kind of backend server?
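A hedged sketch of that shape in compose terms (images, ports, and paths are illustrative):

```yaml
# The app's toolchain image runs as an interactive shell with the source
# bind-mounted; backing services run as ordinary containers beside it.
services:
  dev:
    image: python:3.12      # whatever toolchain your app needs
    working_dir: /src
    volumes:
      - ./:/src
    stdin_open: true
    tty: true
    command: bash           # a shell, not the app itself
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"
```

`docker compose run --service-ports dev` drops you into the shell, where you can start and restart the app as often as you like without an image build.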
Some IDEs support development in a container. The IDE becomes a thin UI client and sends commands over a socket to the container where the files are and any builds/commands execute.<p>I've only used the VS Code version, but it appears the JetBrains IDEs support the concept as well.<p>VS Code injects a binary into your normal development container definition to create the "bridge". Local development files can be mounted into the container environment as well if you want the container to remain ephemeral.<p><a href="https://code.visualstudio.com/docs/devcontainers/containers" rel="nofollow">https://code.visualstudio.com/docs/devcontainers/containers</a><p><a href="https://www.jetbrains.com/help/idea/connect-to-devcontainer.html" rel="nofollow">https://www.jetbrains.com/help/idea/connect-to-devcontainer....</a><p><a href="https://www.jetbrains.com/help/idea/remote-development-overview.html" rel="nofollow">https://www.jetbrains.com/help/idea/remote-development-overv...</a>
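For reference, a minimal devcontainer.json sketch (image, port, and command are illustrative, not from the comment above):

```json
// .devcontainer/devcontainer.json
{
  "name": "my-app",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "forwardPorts": [8000],
  "postCreateCommand": "make deps"
}
```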
Very good questions.<p>Here are a few pointers:<p>Podman runs rootless and avoids (some of) the permission problems this way.<p>It's possible to mount the source directory (when running a container) over the place you copy it to (when building it), so you can start a container once and rebuild and test inside it while you edit outside of it.<p>I think containers are a good reason to make a technical distinction between unit tests and integration tests. The former should work outside the container to facilitate quick development, whereas the latter can rely on the environment the container provides. That setup saves a lot of headache in configuring paths and dependencies.<p>Finally, I find it very important that building the software and executing the unit tests remain possible outside the container. This way you can always use your local setup, maybe after some tweaking. This tweaking is the (small) price everyone has to pay every now and then, and it keeps the build environment from going stale. Imagine developing software with a tool stack frozen into a container ten years ago. Because that's what happens when everyone just uses the image.
VS Code takes an opinionated view of this with "dev containers". Other IDEs (including JetBrains) have support as well. It's probably worth looking into a little, whether you decide to use them or not, to understand why they made some of the trade-offs they chose. I wrote a little blog post as an intro a while back: <a href="https://www.mikekasberg.com/blog/2021/11/06/what-are-dev-containers.html" rel="nofollow">https://www.mikekasberg.com/blog/2021/11/06/what-are-dev-con...</a>
On routing, make sure any endpoints used between containers are (1) configurable, and (2) using the Docker internal network naming conventions when working locally.<p>For example, I have a compose with 10+ containers in it. Each container that needs to talk to another has some kind of environment property to tell it the name of that other container. So the "api" container might have a property called DB_HOST="db", "db" being the name of the db container.<p>Now, when developing, say, the "api" image locally, your local dev server should be configured in the same way, providing the DB_HOST property to your local dev server environment. By doing this, you can completely stop the "api" container, allowing the local dev server to take its place, configured to talk to your other containers running in the Docker network.<p>This way you are maintaining the local dev server setup that we've been using for ages, not developing directly on a Docker image or depending on its build cycle, etc.
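A compose sketch of the pattern (names illustrative): "api" resolves "db" by service name on the internal network, and the db port is also published so a local dev server outside the network can take the api container's place.

```yaml
services:
  api:
    build: ./api
    environment:
      DB_HOST: db        # resolved by Docker's internal DNS
  db:
    image: postgres:16
    ports:
      - "5432:5432"      # published for the local dev server's benefit
```

Then `docker compose stop api` and start your local dev server with DB_HOST pointed at the published port (e.g. localhost), and it slots into the same wiring.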
The idea would be to build a base image that has all the dependencies for your app and then treat it like a VM. Code gets mounted via a shared volume into that container, so as your code changes, it changes in the container and does not require a rebuild.<p>I.e., instead of building a fresh container on every code change, you only build a fresh container when your Python version changes. You start a container and then, from within it, install your Python packages. Or take it a step further: bake the dependencies into the container and only rebuild when the dependencies change. The production container would inherit or be downstream from this, so that all the prod builds contain everything and are artifacts.<p>Replace Python with Rust, Go, etc. Doesn't matter.<p>The key is that you will need to abstract a base image, and then fork that into the dev image and the prod/stage/deployable images.
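One way to sketch that fork in a single multi-stage Dockerfile (Python stands in for any language; names are illustrative):

```dockerfile
# "base" holds the toolchain and dependencies; rebuild only when deps change.
FROM python:3.12-slim AS base
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

# "prod" inherits from base and bakes the code in, so deploy builds are artifacts.
FROM base AS prod
COPY . .
CMD ["python", "app.py"]
```

For dev, build just the base stage (`docker build --target base -t myapp-dev .`) and run it with your source mounted over /app; code changes then never trigger a rebuild.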
Containerless dev is just better, because containers are a ton of extra work just to get things working as well as they do natively. Any chance you can <i>not</i> use containers for local dev?
I gave up mostly.<p>Nowadays I use containers for the services, like redis, postgres, etc. But the app runs locally for dev. Works fine for standard web stuff.
Like others have said you don't need to have the application itself be a container locally. As long as it builds properly to an image it's fine. The only local container I use is one for a DB.
>I don't get my language's build cache unless I jump through extra hoops to mount it into the build image<p>This is what you should be doing, and you should not be building your artifact with docker build during development. If you can help it, you don't want to compile your application inside of a container at all. Build it outside and COPY it when you're ready to ship, or use a volume during development (docker run -v)<p>If you cannot rebuild outside of the container, you should be able to build your build environment as an image once, then exec into the running container to rebuild there, but you should NOT be rebuilding your docker images for each compile loop. It sounds like that's where you're encountering the pain.<p>If you are rebuilding your docker image every time you recompile your application, you're doing it wrong.
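For the ship-time case, the Dockerfile can shrink to a COPY of the host-built artifact (a Go-style example; base image and binary name are illustrative):

```dockerfile
# Build on the host first, e.g.: go build -o server .
# docker build then just packages the result; no compile happens in the image.
FROM gcr.io/distroless/static-debian12
COPY server /server
ENTRYPOINT ["/server"]
```

During development, skip `docker build` entirely and mount the freshly built binary into a running container with `-v` instead.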
My pain ended after doing the following:<p>- Install my own Git server using Gitea<p>- Run my own image registry instead of using Docker Hub<p>- Install Portainer<p>- Configure Gitea to use workers + actions<p>- Write the needed YAML to build the image and upload it to the local registry<p>- Configure a hook on Portainer to recreate the stack if the image was updated<p>Of course there is a slight delay while the image is building, but I don't have to touch anything at all: just code, commit, and a couple of minutes later the image is up and running.
I’m not even sure they are so great for shipping code to production.<p>Slow build times, slower execution times, annoying keeping them updated, especially with k8s.
Skaffold does much of what you are looking for.<p><a href="https://skaffold.dev/docs/" rel="nofollow">https://skaffold.dev/docs/</a><p>K8s manifest autoloading works, and IDE support is somewhat there. Not sure about build caches, should be possible I think.<p>Only problem is the Kustomize overlay syntax is a bit hard to grok. You can also use Helm or raw kubectl deploy commands.
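A minimal skaffold.yaml, sketched from memory, so treat the exact fields as approximate:

```yaml
apiVersion: skaffold/v4beta6
kind: Config
build:
  artifacts:
    - image: my-app
      sync:
        infer: ["src/**"]   # changed files are copied in without a full rebuild
manifests:
  rawYaml:
    - k8s/*.yaml
```

`skaffold dev` then watches the source, syncs or rebuilds as needed, and redeploys the manifests.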
My ideal is a starter that offers a nice blend of microservices and configures them for me just enough to get them working in an easy-to-manage, organized way. Most importantly, they are all optional and easily removable.<p>I do this with npm scripts for "compose", "start", "stop", and "reset" for every service and tie it all together with dotenv for environment vars. Currently, I have dockerized Traefik (partially), Webpack (dev server only so far), Pocketbase, PostgreSQL, PostgREST, Swagger UI, PgTyped, and MongoDB under this and will soon also dockerize the Express-based RESTish API feature.<p><a href="https://github.com/dietrich-stein/typescript-pgtyped-starter">https://github.com/dietrich-stein/typescript-pgtyped-starter</a>
Tilt is pretty good for that, it will sync files into containers automatically (no rebuild) and can rebuild the image if some other files change (configured by you).<p><a href="https://tilt.dev/" rel="nofollow">https://tilt.dev/</a> (no affiliation, just a happy user)
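A Tiltfile sketch of that behavior (Starlark; image names and paths are illustrative):

```python
# Changed source files are synced straight into the running container;
# a dependency-file change triggers the heavier update step instead.
docker_build(
    'my-app',
    '.',
    live_update=[
        sync('./src', '/app/src'),
        run('pip install -r requirements.txt',
            trigger=['requirements.txt']),
    ],
)
k8s_yaml('k8s/deployment.yaml')
k8s_resource('my-app', port_forwards=8000)
```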
I am far from a containers expert.<p>I have noticed however that systemd containers (nspawn) don't have layered images but seem to simply run against a root file system hierarchy that you put on the disk.<p>This seems to me much simpler than dealing with diffed layers or whatever other container solutions do.
I use docker (compose) a lot for my daily dev (on Linux) to create and maintain web applications. Mostly Go backend, Svelte frontend, MySQL or SQLite db, Traefik or Caddy proxy, ...<p>I avoided a lot of your troubles by coding/running/debugging the main program (app server) outside of a container and leaving "only" the infrastructure parts inside (db, mail, ...)<p>Only at release time do I embed the server part in a container.
1. With Docker you can create derived containers by a kind of inheritance. You can add your own packages to the container. For instance, I add Vim and whatnot. Use the customized image for your local development; your CI builds will use the stock one.<p>2. You can step into Docker containers so that you can work inside, iterating on builds and such. If you have a scripted workflow that launches a Docker image to do a build, crack it open and develop a more interactive alternative.
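Point 1 can be as small as a two-line Dockerfile (image name illustrative):

```dockerfile
# Personal dev image derived from the stock one CI uses.
FROM my-org/my-app:latest
RUN apt-get update && apt-get install -y vim less strace \
 && rm -rf /var/lib/apt/lists/*
```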
Have a look at <a href="https://www.bunnyshell.com/#cde" rel="nofollow">https://www.bunnyshell.com/#cde</a><p>It allows you to use your local IDE to edit the code, but the actual container runs in the cloud.
It allows the user to define and create thin or full environments (any number of services) running in the cloud, so there's no load on your local machine.
Full support for debugging.<p>Disclosure: I work at bunnyshell.
You should check out Devbox (<a href="https://jetpack.io/devbox" rel="nofollow">https://jetpack.io/devbox</a>) if you want local dev without the container overhead.<p>It provides a nice interface for creating native, local dev environments using the Nix package manager, which is especially helpful if you or your friends struggle with the Nix language. It also lets you use your local tools with your dev environment.
My gripe with Docker / Podman is that they're unlike a VM - no init services, no SSH.<p>Incus (and LXD) make containers work in pretty much the same way as a VM, just without the emulation overhead. You get prebuilt images with a rich standard toolkit, systemd and services, SSH, and networking that's configurable from within in familiar ways.
I haven't personally found any advantage to using containers in a local dev environment. I probably never worked out how to do it right, but my experience is that using them just adds complexity, inconvenience, and additional points of potential failure without giving any noticeable benefit.
I install an OS similar to prod and do my tasks on it; that helps cut down research time. Only when something doesn't work on QA do I run a container to see what the difference is. The answer is to just love your prod OS, not to stuff it into a dev container out of fear.
Don't use them.<p>Use systemd in prod to contain your apps automatically on launch. chroot the app and mount only the paths it needs with nearly everything as read only.
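A hedged sketch of such a unit using systemd's sandboxing directives (paths are illustrative; RootDirectory= is the chroot piece):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My app, confined by systemd

[Service]
ExecStart=/opt/myapp/bin/server
# RootDirectory=/srv/myapp      # optionally chroot the app entirely
DynamicUser=yes                 # throwaway UID, no login shell
ProtectSystem=strict            # filesystem read-only...
ReadWritePaths=/var/lib/myapp   # ...except the paths it actually needs
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes

[Install]
WantedBy=multi-user.target
```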
do the tools you use to ship containers to prod and other stages not work locally?<p>IME a monorepo is nice here. all app code and infra code live side by side, and while running the containers locally is not an ideal dev experience, it's at least accessible and enables consistency across environments.
The most correct answer is that you need to build a base image, as many others already told you.<p>Another thing that I would question is why would you be running containers locally so much it becomes a problem?<p>As you said, containers are great for shipping code; use them for it. Locally, run your code in the current environment.<p>You should only run a container locally if you need to debug an error in production that you suspect is related to the environment.