>docker swarm init<p>And just like that you have a cluster to run containers on. I really like the simplicity of Docker Swarm. I've been using it for at least five years and it has just worked.<p>During the COVID lockdown I got tired of having to open a UI (at the time I was using CapRover[0]) to edit any of the services I run, so I decided to make my own PaaS with a nice CLI. Connecting to the Docker socket is easy and the API is simple enough. It's been working without problems for the last two years.<p>The only complaint I have is that I can't see the user's IP for HTTP requests[1], but there is some hope in the form of Proxy Protocol[2]. I have no idea how complex the code for Docker Swarm ingress is, but I may spend a weekend in the near future scouting the code to get an idea. The current workaround is to put a load balancer in front of the cluster that either sets the X-Forwarded-For header (or one of the equivalent ones) or speaks Proxy Protocol, but I will avoid that for now.<p>I recommend Docker Swarm to anyone starting out who doesn't want to spend hours and hours configuring a production environment. Even if it is just one node, you get services, replication, healthchecks, restart policies, secrets... And it all starts with that simple command, no further config needed.<p>[0]: <a href="https://caprover.com/" rel="nofollow">https://caprover.com/</a>
[1]: <a href="https://github.com/moby/moby/issues/25526">https://github.com/moby/moby/issues/25526</a>
[2]: <a href="https://github.com/moby/moby/issues/39465">https://github.com/moby/moby/issues/39465</a>
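To illustrate how little configuration the above takes: after `docker swarm init`, a single Compose-style stack file gives you the replication, health checks, and restart policies mentioned. This is just a sketch (the service name, image, and port mapping are illustrative, not from the parent comment's setup):

```yaml
# stack.yml -- hypothetical single-service stack for illustration
version: "3.8"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3              # Swarm keeps three copies running
      restart_policy:
        condition: on-failure  # restart a replica if it crashes
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost/"]
      interval: 30s
    ports:
      - "8080:80"              # published on every node via the ingress mesh
```

Deployed with `docker stack deploy -c stack.yml web`; Swarm schedules the replicas across whatever nodes have joined the cluster.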
Trying to figure out what's unique here, but it seems like Proxmox is being used to create VMs that then run Docker. Docker on these VMs is then used to spin up a bunch of containers. So really, it's just Proxmox -> VM -> Docker -> Containers. So it's dedicated Docker VMs coordinating containers...<p>I was expecting Proxmox's LXC capabilities to be used to scale up to 13000, but this is just VMs + Docker allowing that. Couldn't the same thing be done with any KVM hypervisor and VMs? Can someone correct me if I'm missing something?
If you keep in mind that containers are just namespaces for filesystem, network, and process resources, this is not too different from running 13000 processes on the host without containers.<p>Comparable to building something with make -j64.
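You can see this directly on any Linux box: every process, containerized or not, just holds handles to a set of namespace objects under /proc. A quick check (Linux only):

```shell
# Every Linux process holds handles to its namespaces under /proc;
# a "container" is just a process whose handles point at different ones.
ls /proc/self/ns/
# A plain host process reports the shared host namespace IDs, i.e. no isolation:
readlink /proc/self/ns/net
```

Run the same `readlink` inside a container and you get a different ID; that difference is essentially all the "container" is.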
I strongly suggest any sysadmin look at the relatively new Proxmox Backup Server <a href="https://www.proxmox.com/en/proxmox-backup-server" rel="nofollow">https://www.proxmox.com/en/proxmox-backup-server</a>, which makes full incremental backups remarkably light thanks to well-engineered deduplication
Oh hey, you don't see a lot of Docker Swarm nowadays, though in my experience it's still a wonderful solution for getting started with container orchestration, one that will take a lot of small/medium scale projects pretty far before you need to look at something else (e.g. Nomad or Kubernetes). There's a lot of benefit in being able to hit the ground running, even when you're self-hosting your clusters and administering them yourself.<p>It comes with any install of Docker, is easy to set up and operate, has great optional UI solutions like Portainer (an analogue to Rancher for Kubernetes), has one of the lowest resource usages for the orchestrator itself, and supports the Docker Compose specification, which in my opinion is far more usable than raw Kubernetes manifests (though less powerful than Helm charts) and far more common than Nomad's HCL.<p>For my Master's Degree, I explored a comparison where I ran the same workloads across a Docker Swarm cluster and a K3s cluster (a great Kubernetes distro that's also low on resource usage) and even then Swarm used less memory (~2x less than Kubernetes for the leader node, both under load and when idle) and a bit less CPU (~30% less for the leader nodes under load). That said, K3s still performed admirably, at least in comparison to RKE, which wouldn't even run in a stable fashion on the limited hardware that I had at the time.<p>Maybe one of these days I should run Proxmox in my homelab as well, instead of just something like Debian or Ubuntu directly on the hardware. 
Also, while Podman is great, Docker still seems like a dependable option just because of how common it is and how much more stable it has gotten over time (despite the arguable architectural disadvantages).<p>I think the only actual issues I've had when using Docker Swarm have been a network that ran out of addresses to assign to the containers (probably some default), some Oracle Linux bug where kswapd would max out the CPU when swap got full, and some Debian bug years ago on an old version of Docker that caused networking to fail, where the cluster needed to be re-created to fix it.
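On the address-exhaustion point: if I recall correctly, that usually comes down to Docker's default address pools being too small. For local networks this can be widened in daemon.json; for Swarm overlay networks there's the `--default-addr-pool` flag on `docker swarm init`. The subnets below are just examples, not a recommendation:

```json
{
  "default-address-pools": [
    { "base": "10.30.0.0/16", "size": 24 }
  ]
}
```

With this, each newly created local network gets a /24 carved out of 10.30.0.0/16 instead of the stock defaults; the Swarm equivalent would be something like `docker swarm init --default-addr-pool 10.20.0.0/16 --default-addr-pool-mask-length 24`.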
"Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should." Dr. Ian Malcolm, Jurassic Park<p>Not knocking this achievement though, it's awesome they were able to pack that many containers in one host.
This is all well & good, but I'm not sure what useful images you could actually run with 10 megabytes of memory each. The nginx containers provisioned would not have been able to do much, if anything.
Actually, I'm confused. They talk about running it in LXC containers, but where exactly they install Portainer is still a mystery to me. Furthermore, the screenshots show a lot of VM icons, not CT icons. So where is this person using LXC?<p>edit: Wait, did they install docker/portainer on Proxmox bare metal? They say to access Portainer through the Proxmox host IP, but any CT or VM created on Proxmox would probably have its own IP on a Proxmox bridge. So the IP <i>should</i> be the IP of the VM/CT hosting the docker install, not the Proxmox host.
While there is merit to this post, my main criticism is that it's running 13000 containers with zero load - essentially all the nginx processes are doing nothing after launch: zero I/O, etc. It's a bit more interesting to see N containers running something synthetic that mimics a workload.<p>That said, containers are very lean (or can be with the right setup) given there is no kernel, drivers, etc. to load.
It's not the same, but reminded me of this post [1] I wrote some time ago, about a 63-nodes EKS cluster running on VMs with Firecracker on a single instance.<p>[1]: <a href="https://www.ongres.com/blog/63-node-eks-cluster-running-on-a-single-instance-with-firecracker/" rel="nofollow">https://www.ongres.com/blog/63-node-eks-cluster-running-on-a...</a>
Tl;dr - there is nothing really special in Proxmox that lets you run and manage 13000 containers. The author created 10 VMs on a Proxmox host and ran Docker on them. You don't really need Proxmox for what's described.<p>When I first saw Proxmox I also wanted to see if I could use it to manage Docker containers, but it doesn't support that directly. For working with containers you need other tools, e.g. Kubernetes.
If you wanna watch a beefy Windows machine fall apart, run this under WSL. Even relatively modest Docker workloads completely destroy vmmem.exe (or vmmemWSL.exe, depending on your WSL version).
as a technology risk and compliance manager who is embroiled in a big disaster recovery/business continuity/ISO 22301 project at the moment, reading this headline made parts of me turn to dust and drop off.