LXC via Proxmox is great for stateful deployments on bare-metal servers. It's very easy to back up entire containers with their state (SQLite, Postgres data dir) to e.g. a NAS (and with TrueNAS, from there to S3/B2). Best used with ZFS RAID; with quotas and lazy space allocation, backups stay small or capped.<p>Nothing stops one from running Docker inside LXC. For development I usually just make a dedicated privileged LXC container with nesting enabled, to avoid some known issues and painful config. LXC containers can sit on a private network, with a reverse proxy on the host mapping only the required ports, so you never have to wonder which ports Docker (or you) might have accidentally made public.
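A rough sketch of the nested-Docker setup described above, using Proxmox's `pct` tool (the container ID, template name, bridge, and storage pool are all assumptions; adjust for your host):

```shell
# Create a privileged container with nesting enabled (run on the Proxmox host).
# CTID, template, bridge and storage are placeholders for illustration.
CTID=200
TEMPLATE="local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst"

pct create "$CTID" "$TEMPLATE" \
  --hostname docker-dev \
  --unprivileged 0 \
  --features nesting=1,keyctl=1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --rootfs local-zfs:8        # 8 GiB quota on the ZFS pool
pct start "$CTID"
```

With `nesting=1` (and `keyctl=1` for some Docker features), Docker installs and runs inside the container without the usual apparmor/cgroup friction.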
Apples to oranges.<p>LXC can be directly compared with only a small, and quite insignificant, part of Docker: the container runtime. Docker did not become popular because it can run containers; many tools before Docker could do that (LXC included).<p>Docker became popular because it lets one build, publish and then consume containers.
LXC has been so stable and great to work with for many years. I have had services in production on LXC containers and it has been a joy. I cannot say the same about things I have tried to maintain in production with Docker, in which I had similar experiences to [0], albeit around that time and therefore arguably not recently.<p>For a fantastic way to work with LXC containers I recommend the free and open, Debian-based hypervisor distribution Proxmox [1].<p>[0], <a href="https://thehftguy.com/2016/11/01/docker-in-production-an-history-of-failure/" rel="nofollow">https://thehftguy.com/2016/11/01/docker-in-production-an-his...</a><p>[1], <a href="https://www.proxmox.com/en/proxmox-ve" rel="nofollow">https://www.proxmox.com/en/proxmox-ve</a>
LXD (Canonical's daemon/API front end to LXC containers) is great -- as long as you aren't using the god awful snap package they insist on. The snap is probably fine for single dev machines, but it has zero place in anything production. This is because Canonical insists on auto-updating and refreshing the snap at random intervals, even when you pin to a specific version channel. Three times I had to manually recover a cluster of LXD systems that broke during a snap refresh, because the cluster couldn't cope with all the snaps refreshing at once.<p>Going forward we built and installed LXD from source.
My home server runs NixOS, which is an amazing server operating system: every service is configured in code and fully versioned. I also use this server for development (via SSH), but while NixOS can be used for development, its relationship with VS Code, its plugins, and many native build tools (Go, Rust) is very complicated, and I prefer not to do everything the Nix way, which is usually convoluted and poorly documented.<p>LXD is my perfect fit in this scenario: trivial to install on top of NixOS, and once running, it allows launching minimal development instances of whatever distro flavor of the day in a few seconds. Persistent like a small VM, but booting up within seconds, and much more efficient on resources (memory in particular). Unlike Docker, you get the full power of systemd and all. Add Tailscale and sshd to the mix for easy, secure and direct remote access to the virtualized system.
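The throwaway-instance workflow above looks roughly like this (the image alias and instance name are assumptions; on NixOS, `virtualisation.lxd.enable = true;` in `configuration.nix` is the usual way to get the daemon):

```shell
# Launch a fresh Debian 12 dev instance from the public image server,
# install sshd inside it, and get a shell. Names are placeholders.
lxc launch images:debian/12 dev
lxc exec dev -- apt-get update
lxc exec dev -- apt-get install -y openssh-server
lxc exec dev -- bash       # interactive shell inside the instance
```

When you're done: `lxc stop dev && lxc delete dev`, and nothing lingers on the host.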
I like the docker way of one thing, one process, per container. LXC seems a bit different.<p>However, an exciting thing to me is the Cambrian explosion of alternatives to docker: podman, nerdctl, even lima for creating a linux vm and using containerd on macos looks interesting.
The perfect pair<p><i>Containerfile</i> vs <i>Dockerfile</i> - Infra as code<p><i>podman</i> vs <i>docker</i> - <a href="https://podman.io" rel="nofollow">https://podman.io</a><p><i>podman desktop companion</i> (author here) vs <i>docker desktop ui</i> - <a href="https://iongion.github.io/podman-desktop-companion" rel="nofollow">https://iongion.github.io/podman-desktop-companion</a><p><i>podman-compose</i> vs <i>docker-compose</i> = there should be no vs here; <i>docker-compose</i> itself can use the podman socket for its connection out of the box, as the APIs are compatible, but it is an alternative worth exploring nevertheless.<p>Things are improving at a very fast pace, and the aim is to go way beyond parity. Give it a chance, you might enjoy it.
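A minimal sketch of pointing an unmodified `docker-compose` at the podman socket, as mentioned above (rootless setup; the socket path is the usual systemd user default):

```shell
# Start the rootless podman API socket (a systemd user unit shipped with podman).
systemctl --user enable --now podman.socket

# Tell docker-compose (and the docker CLI) to talk to podman instead.
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock"

docker-compose up -d
```

Because podman exposes a Docker-compatible API on that socket, no changes to the compose file itself are needed for most projects.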
There is continuous active work that is enabling real choice and choice is always good, pushing everyone up.
I use LXC containers as my development environments.<p>When I switched from expensive MacBooks to an expensive workstation with a cheap laptop as a front end for working remotely, this was the best configuration I found.<p>It took me a few hours to get everything running, but I love it now. A new project means creating a new container and adding a rule to iptables, and I have it ready in a few seconds.
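The new-project routine above can be sketched like this (container name, distro, and port numbers are assumptions for illustration):

```shell
# Create and start an LXC container for a new project.
lxc-create -n myproject -t download -- -d debian -r bookworm -a amd64
lxc-start -n myproject

# Grab its IP and forward one host port to it.
CT_IP=$(lxc-info -n myproject -iH)
iptables -t nat -A PREROUTING -p tcp --dport 8080 \
  -j DNAT --to-destination "$CT_IP:80"
```

One container, one NAT rule, and the project's dev server is reachable from the laptop.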
One major limitation of LXC is that there is no way to easily self-host images. The official images for many distributions are often buggy. For example, the official Ubuntu images seem to come with a raft of known issues.<p>Based on my limited interactions with it, I'd recommend staying away from LXC unless absolutely necessary.
I’ve been using LXC as a lightweight “virtualization” platform for over 5 years now, with great success. It allows me to take existing installations of entire operating systems and put them in containers. Awesome stuff. On my home server, I have a VNC terminal server LXC container that is separate from the host system.<p>Combined with ipvlan I can flexibly assign my dedicated server’s IP addresses to containers as required (MAC addresses were locked for a long time). Like, the real IP addresses. No 1:1 NAT. Super useful also for deploying Jitsi and the like.<p>I still use Docker for things that come packaged as Docker images.
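A hedged sketch of what the ipvlan setup above might look like in an LXC container config (all values are placeholders; `l3s` is one common mode choice):

```ini
# /var/lib/lxc/<name>/config — give the container one of the host's
# real public IPs via ipvlan, no NAT involved. Values are examples only.
lxc.net.0.type = ipvlan
lxc.net.0.ipvlan.mode = l3s
lxc.net.0.link = eth0
lxc.net.0.ipv4.address = 203.0.113.10/32
lxc.net.0.flags = up
```

This sidesteps the locked-MAC-address problem, since ipvlan reuses the host interface's MAC for all container traffic.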
I never hear systemd-nspawn mentioned in these discussions. It ships with and integrates into systemd, and it has a decent interface with machinectl. Does anyone use it?
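For anyone curious, the basic systemd-nspawn/machinectl workflow looks roughly like this (the machine name and Debian release are placeholders):

```shell
# Bootstrap a minimal Debian tree into the directory machinectl looks in.
debootstrap bookworm /var/lib/machines/deb12

# One-off interactive boot of the container:
systemd-nspawn -D /var/lib/machines/deb12 -b

# Or manage it like a service via machinectl:
machinectl start deb12
machinectl shell deb12
```

Because it lives in `/var/lib/machines`, `machinectl list`, journald integration, and `systemctl enable systemd-nspawn@deb12` all work out of the box.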
Is it accurate to say LXC is to Docker as git is to GitHub, or vim/emacs vs. Visual Studio Code?<p>I haven't seen many examples demonstrating the tooling used to manage LXC containers, but I haven't looked for it either. Docker is everywhere.
LXC and Docker comparisons vastly differ depending on the use case and problem segment. I use LXC as a tiny, C-only library to abstract namespaces and cgroups for embedded usage [1]<p>LXC is a fantastic userland library to easily consume kernel features for containerization without all the noise around it… but the push for the LXD scaffolding around it missed the mark. It should’ve just been a great library and that’s how we use it when running containers on embedded Linux equipment<p>[1] <a href="https://pantacor.com/blog/lxc-vs-docker-what-do-you-need-for-iot/" rel="nofollow">https://pantacor.com/blog/lxc-vs-docker-what-do-you-need-for...</a>
A while ago, I spent some time making LXC run inside a Docker container. The idea is to have a stateful system managed by LXC running in a Docker environment, so that K8s facilities (e.g. volumes, ingress and load balancers) can be used for the LXC containers. I still run a few desktops, accessible via x2go, with it on my Kubernetes instances.<p><a href="https://github.com/micw/docker-lxc" rel="nofollow">https://github.com/micw/docker-lxc</a>
I know very little about both, but I'm at the mercy of lxc every day on my Chromebook when running Crostini (it's like a VM in a VM in a VM in a...) :) - it works great though, at some perf cost, and with less GPU support.<p>And I still have trouble running most of the Docker images out there (either this or that won't be supported). I guess it makes sense; after all, there is always the choice of going with a full, real Linux reinstall, or some other hacky way.<p>But one thing I was not aware of was this: "Docker containers are made to run a single process per container."
Interesting read, though I'm not sure why you compared only these two.<p>There are plenty of other solutions, and Docker is actually many things. You can use Docker to run containers with Kata, for example, which is a runtime providing full HW virtualisation.<p>I wrote something similar, much less detailed on Docker and LXC and more of a bird's-eye overview to clarify terminology, here: <a href="https://sarusso.github.io/blog_container_engines_runtimes_orchestrators.html" rel="nofollow">https://sarusso.github.io/blog_container_engines_runtimes_or...</a>
In the end the two are different... so why compare them in the first place?<p>“ LXC, is a serious contender to virtual machines. So, if you are developing a Linux application or working with servers, and need a real Linux environment, LXC should be your go-to.<p>Docker is a complete solution to distribute applications and is particularly loved by developers. Docker solved the local developer configuration tantrum and became a key component in the CI/CD pipeline because it provides isolation between the workload and reproducible environment.”
LXC is quite different from Docker. Docker is used most of the time as a containerized package format for servers, and as such is comparable to snap or flatpak on the desktop. You don't have to know Linux administration to use Docker; that is why it is so successful.<p>LXC, on the other hand, is lightweight virtualization, and one would have a hard time using it without basic knowledge of administering Linux.
> Saying that LXC shares the kernel of its host does not convey the whole picture. In fact, LXC containers are using Linux kernel features to create isolated processes and file systems.<p>So what is Docker doing then??
I've been running my SaaS on LXC for years. I love that the container is a folder that can simply be copied. Combined with git to push changes to my app, all is golden.<p>I tried Docker but stuck with LXC.
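The container-is-a-folder point above in practice (container name is a placeholder; `/var/lib/lxc` is the default path for classic LXC):

```shell
# A stopped LXC container is just a directory tree: stop it, copy it, done.
lxc-stop -n mysaas
rsync -a /var/lib/lxc/mysaas/ /backup/lxc/mysaas/

# Restoring on another host is the reverse copy, then lxc-start.
```

That makes backup, migration, and cloning as simple as any other directory on the filesystem.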
I think Docker initially grew out of LXC (to make LXC easier to use). These days, LXC is lightweight but not portable, while Docker can run on all OSes. I think that's the key difference: cross-platform apps. LXC remains a Linux-only thing.