This is part of the bigger Macpine project, which to me is much more interesting than LXD: https://github.com/beringresearch/macpine

"""
The goal of this project is to enable macOS users to:

- Easily spin up and manage lightweight Alpine Linux environments.
- Use tiny VMs to take advantage of containerisation technologies, including LXD and Docker.
- Build and test software on x86_64 and aarch64 systems.
"""
So is there some canonical guide to running a docker compose style app on Mac M1 machines that has good filesystem performance? It seems like there are many ways to approach the topic now, so it's hard to tell which one is "winning".

I'd love to containerize all of my local development efforts (scripts and Rails apps), but the slow-ass filesystem always ruined it in the past.
This is cool and a worthwhile thing, but how is this different from the many (b/x)hyve clones and other QEMU-based tools that use macOS's virtualization framework to run a minimal Linux for containers? What's the differentiator that makes this better (hopefully?) than what's come before?
Keep in mind that LXD can manage two types of "containers" these days: the traditional cgroup-based kind, which runs as its own set of processes on top of the host's kernel with isolation, and traditional QEMU-backed virtual machines. The user experience is homogeneous, but the backing engines are different, as noted here.
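For anyone who hasn't driven it from code, here is a minimal sketch of launching both kinds through the LXD Go client (assuming a recent client library at github.com/canonical/lxd; the instance names and image alias are placeholders, and the alias is assumed to exist locally):

    package main

    import (
        lxd "github.com/canonical/lxd/client"
        "github.com/canonical/lxd/shared/api"
    )

    func main() {
        // Connect over the default local unix socket.
        c, err := lxd.ConnectLXDUnix("", nil)
        if err != nil {
            panic(err)
        }
        for _, req := range []api.InstancesPost{
            {
                // cgroup-based container: a process tree on the host kernel.
                Name:   "alpine-ct",
                Type:   api.InstanceTypeContainer,
                Source: api.InstanceSource{Type: "image", Alias: "alpine/3.18"},
            },
            {
                // QEMU-backed VM: its own kernel, managed with the same UX.
                Name:   "alpine-vm",
                Type:   api.InstanceTypeVM,
                Source: api.InstanceSource{Type: "image", Alias: "alpine/3.18"},
            },
        } {
            op, err := c.CreateInstance(req)
            if err != nil {
                panic(err)
            }
            if err := op.Wait(); err != nil {
                panic(err)
            }
        }
    }

The only difference between the two requests is the Type field; everything else about managing them looks the same.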
I like the progress that is being made for running containerized workloads on macOS. In my case I like some of the benefits of running the workload on a remote machine, such as no fan noise, less heat, and lower power consumption (especially on laptops). However, the downsides can also be quite annoying, such as file sync times or IDE input lag.

My current setup is to have both data and workload on a remote machine that I connect to via SSH. I can either run Neovim inside or use the remote development extension for VSCode. But as mentioned, the input lag can be very annoying. I'm wondering if there's another setup where I can retain some of the upsides of running the workloads remotely while still having a decent user experience (reduced lag).
At first, I thought this was based on a syscall compatibility layer like Solaris' Linux-branded zones or WSL1 (RIP), or the Linux support in FreeBSD and NetBSD.

If you've ever tried to spin up a whole bunch of Docker containers in WSL2 and watched `vmmem` memory and CPU usage explode, you know that 'near-native speed' in VMs comes with lots of asterisks.

Does macOS have usable native *macOS* containers yet?
This is good news.

I come from a development background, and the number one use case for containers on macOS is development environments, as it is on Windows too. For this use case, filesystem IO has always been the bottleneck, not CPU. I do not know if there is some silver bullet on the horizon that could make this faster.
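If you want to measure this rather than take it on faith, here is a rough micro-benchmark sketch (my own illustration, not from any of the linked projects) that exercises the many-small-files pattern typical of development trees. Run it once against the native filesystem and once against a shared-folder mount and compare:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    func main() {
        if len(os.Args) < 2 {
            fmt.Println("usage: fsbench <dir>")
            return
        }
        dir := os.Args[1] // e.g. a path inside the shared mount vs /tmp
        payload := make([]byte, 4096)
        start := time.Now()
        for i := 0; i < 2000; i++ {
            p := filepath.Join(dir, fmt.Sprintf("f%04d", i))
            // Each write+read pair is metadata-heavy: cheap natively,
            // expensive over a network-style mount.
            if err := os.WriteFile(p, payload, 0644); err != nil {
                panic(err)
            }
            if _, err := os.ReadFile(p); err != nil {
                panic(err)
            }
        }
        fmt.Printf("2000 write+read cycles in %v\n", time.Since(start))
    }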
It uses almost the same mounting tech as colima (9p).

Macpine: https://github.com/beringresearch/macpine/blob/71788e9c3c09cd11383885e4f8dd836ca34f0f8a/qemu/ops.go#L247

colima: https://github.com/abiosoft/colima/blob/7ebcf14a69158afa43b23c4a5fd7c0b39122c1a2/embedded/defaults/colima.yaml#L97

So it seems to have the same mount performance as colima as well.

As for IO performance, see this colima issue: https://github.com/abiosoft/colima/issues/146#issuecomment-1025392658
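Both linked spots boil down to handing QEMU a -virtfs flag and mounting the tag inside the guest. A sketch of that wiring (paths, tags, and the elided flags are illustrative, not copied from either project):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        args := []string{
            // ...machine, CPU, and disk flags elided...
            "-virtfs",
            // security_model=mapped-xattr stores guest ownership in
            // host xattrs, so the share works without running as root.
            "local,path=/Users/me/project,mount_tag=host0,security_model=mapped-xattr,id=host0",
        }
        cmd := exec.Command("qemu-system-aarch64", args...)
        fmt.Println(cmd.String())
        // Inside the guest the tag is then mounted with something like:
        //   mount -t 9p -o trans=virtio,version=9p2000.L host0 /mnt/host
        // Every stat/read/write becomes a 9p round trip, which is where
        // the small-file IO penalty in that colima issue comes from.
    }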
I experimented with LXD containers on Linux recently, but I found the technology it builds on (cgroups) too hard to wrap my head around, and the tutorials left me in the dark.

E.g. here is page 2 of one tutorial: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/sec-relationships_between_subsystems_hierarchies_control_groups_and_tasks

All these rules made no sense to me, and while I suppose they become clear at some point, I like my tutorials to be clear from the start.
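For what it's worth, that RHEL 6 tutorial describes the old cgroup v1 hierarchy rules; the v2 interface that current distros ship is much easier to reason about, because a cgroup is just a directory of control files. A minimal sketch (the group name and limit are arbitrary; needs root on a cgroup-v2 host):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Creating a cgroup is creating a directory.
        cg := "/sys/fs/cgroup/demo"
        if err := os.Mkdir(cg, 0755); err != nil && !os.IsExist(err) {
            panic(err)
        }
        // Setting a limit is writing to a control file: cap memory
        // for everything in this group at 64 MiB.
        if err := os.WriteFile(cg+"/memory.max", []byte("67108864"), 0644); err != nil {
            panic(err)
        }
        // Joining the group is writing a PID into cgroup.procs.
        pid := []byte(fmt.Sprint(os.Getpid()))
        if err := os.WriteFile(cg+"/cgroup.procs", pid, 0644); err != nil {
            panic(err)
        }
        fmt.Println("this process is now memory-limited under", cg)
    }

Tools like LXD essentially automate these directory and file writes, one subtree per container.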
Too bad almost all containers rely on bridged networking and different ports. Why not just bind to 127.0.x.y, where x is the project number and y is the machine number? That way, you can just use the default ports.
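A sketch of that scheme (the numbering and addresses are illustrative): on Linux the whole 127.0.0.0/8 block answers out of the box, while macOS needs each extra address aliased first with something like `ifconfig lo0 alias 127.0.2.1`.

    package main

    import (
        "fmt"
        "net"
        "net/http"
    )

    func main() {
        // Project 2, machine 1, on HTTP's default port. Ports below
        // 1024 need root (or a capability) to bind.
        addr := "127.0.2.1:80"
        ln, err := net.Listen("tcp", addr)
        if err != nil {
            panic(err)
        }
        fmt.Println("serving on", addr)
        panic(http.Serve(ln, http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprintln(w, "hello from", addr)
            })))
    }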
This is so... exciting! However, please recall that you, the user, are now using hardware that is in most cases remotely managed by the OS vendor (and who knows what else), with opaque code executing at multiple layers.