It depends on the context. I don't know about corporate persons with profit incentives, but if we're talking human persons then containers don't solve anything. They're just a symptom of the disease that is future shock. The underlying libraries we depend on change too fast now, and no devs care about forward compatibility, so we end up with every OS/distro having libs that stop working within about a year (or more like 3 months with Rust/JS/etc).<p>The solution has to come either in the form of static compilation or, even less feasibly, getting devs to actually care whether their software runs on platforms more than a year old. Containers just make everything worse in every case beyond the contrived "it just worked and I never need to change anything".
This looks more like an advertisement than a useful blog post.<p>Also:<p>> Consider also that Docker relies on Linux kernel-specific features to implement containers, so users of macOS, Windows, FreeBSD, and other operating systems still need a virtualization layer.<p>First, FreeBSD has its own native form of containers (jails) and Windows has its own native implementation. Docker != containers.<p>I really don't see Docker (or containers as we mostly know them) relying on kernel features from an open source operating system in order to run Linux OS images as something to even complain about, and there is nothing preventing Apple from implementing its own form of containers on macOS.
I think the next step(s) will be something closer to what the combination of Cloudflare Workers + KV + Durable Objects gives you... I think there also needs to be some implementation of PubSub added to the mix, as well as a more robust database store. Fastly has similar options growing, and more are being developed.<p>In the end, there are only a few missing pieces to a more robust solution. I do think that making it all WebAssembly will be the way to go, assuming the WASI model(s) get more fleshed out (sockets, fetch, etc). The multiplayer web Doom on Cloudflare[1] is absolutely impressive, to say the least.<p>I kind of wonder if Cloudflare could take what FaunaDB, CockroachDB, or similar offer and push it more broadly... at least a step beyond k/v, with database queries/indexes against multiple fields.<p>I've been thinking about how I could use the existing Cloudflare system for something like a forum, or for live chat targeting/queries... I think Durable Objects <i>might</i> be able to handle this, but it could get very ugly.<p>1. <a href="https://blog.cloudflare.com/doom-multiplayer-workers/" rel="nofollow">https://blog.cloudflare.com/doom-multiplayer-workers/</a>
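To make the Durable-Objects-for-chat idea concrete, here's a toy, runtime-agnostic sketch of a one-object-per-room design. The class and method names here are invented for illustration, not Cloudflare's API; in Workers you'd get one object instance per `idFromName(room)`, and that single instance is what makes message ordering trivial:

```python
# Toy model of the Durable Object idea: one object instance per chat room,
# so every write for that room is serialized through a single point.
class ChatRoom:
    def __init__(self):
        self.log = []   # ordered message history
        self.seq = 0    # monotonically increasing sequence number

    def post(self, user, text):
        msg = {"user": user, "text": text, "seq": self.seq}
        self.seq += 1
        self.log.append(msg)
        return msg

    def since(self, seq):
        """Replay messages at or after `seq` -- crude PubSub catch-up."""
        return [m for m in self.log if m["seq"] >= seq]

# Rooms addressed by name, analogous to idFromName() in Durable Objects.
rooms = {}
def get_room(name):
    return rooms.setdefault(name, ChatRoom())

general = get_room("general")
general.post("alice", "hi")
general.post("bob", "hello")
print(len(general.since(1)))  # 1 -- only bob's message
```

The "very ugly" part the comment anticipates is real: replay from a sequence number works for catch-up, but live fan-out, eviction of old log entries, and cross-room queries are exactly the missing PubSub/database pieces mentioned above.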
Yes, containers don't solve the mess of third-party SaaS that every company is built around.<p>But that's why, anytime you integrate with one of these tools, you should be aware that there is an ongoing cost to maintaining that integration.
I spent 6+ years fighting this exact battle. It's hard. It's resource-intensive. And timing is everything. It requires either one company fronting all the development cost and bringing it to the world after validating it, or an ecosystem emerging through shared pain and understanding. We're not there yet.<p>My efforts => <a href="https://micro.mu" rel="nofollow">https://micro.mu</a><p>Oh, and prior efforts: <a href="https://github.com/asim/go-micro" rel="nofollow">https://github.com/asim/go-micro</a>
Author here. I have been developing Docker applications for years now, and while the experience is better than it used to be, it's still not great. I work for Deref, which is working on developer tooling that is more amenable to modern development workflows. We'd love to hear what pains you have with the current state of development environments.
The one thing containers addressed was their use as a countermeasure to rising costs from greedy VPS providers, and as an agile way to quickly evacuate a toxic provider (cost, politics, performance, etc.).<p>Providers in turn responded by shilling their 'in-house' containerization products and things like Lambda for lock-in.
Virtual machines gained popularity as a kludge to get around the remarkably horrible state of operating systems. The inability to reliably save and restore the state of a computer grew so costly that it became worthwhile to pay the performance penalty of a layer of emulation/virtualization to route around it.<p>Containers were the next logical step: as each virtual machine vendor tried to lock in its users, containers allowed routing around them.<p>Both of these steps could be eliminated if a well-behaved operating system similar to those on mainframes could be deployed, so that each application sat in its own runtime, had its own resources, and had no other access by default.<p>There's a market opportunity here; it just needs to be found.
Since the author mentioned it, is the 12-factor app still a best practice? Was it ever a best practice? I've seen the website a few times and all of it makes sense to me, but I haven't seen much discussion about it.
Containers don't solve anything more than virtual machines do. Containers are 'better' than virtual machines because they have less overhead and are 100% open source.<p>Containers and VMs let you divide and solve problems in isolation in a convenient manner. You still have the same problems inside each container.<p>First, Docker & k8s made using containers easy. Minimal distros like Alpine simplify containers to a set of one or more executables. You could implement the same thing with a system of systemd services & namespaces.<p>But now that everything is a container, you need a way to manage what containers are running, where they run, and how they communicate with each other.<p>It looks like 90% of what the different container tools and gadgets try to solve is the issues they themselves created. You can no longer install a LAMP stack via 'apt install mysql apache php7.4'; instead you need a tool that sets up 3 containers with the necessary network & filesystem connections. It's certainly better because it is all <i>declaratively</i> defined, but it is still the same <i>problem</i>.<p>This is why I mostly stayed out of containers until recently. The complexity of containers really only helps if you need to replicate a certain server/application. You will still need to template all of your configuration files even if you use Docker, etc.<p>What is changing everything IMO is NixOS, because it solves the same issues without jumping all the way to Docker or k8s. Dependencies are isolated like containers, but the system itself, whether it is a standalone host or a container, can be defined in the same manner. This means that going from n=1 to n>1 is super easy, and migrating from a multi-application server (i.e. a 'pet' server) to a containerized environment (i.e. a 'cattle' server/container) is straightforward. It's still more complex and a bit rough compared to Docker & k8s, but using the same configuration system everywhere makes it worthwhile.
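To make the NixOS point concrete, here's a minimal declarative sketch (hostless config fragment; the option names follow NixOS module conventions, and the commented container line is illustrative) of roughly what the 'apt install' one-liner would give you, expressible identically for a host or a declarative container:

```nix
# configuration.nix -- declarative LAMP-ish stack sketch
{ config, pkgs, ... }:
{
  services.httpd.enable    = true;
  services.httpd.enablePHP = true;
  services.mysql.enable    = true;
  services.mysql.package   = pkgs.mariadb;

  # The same options could instead live inside a declarative container,
  # e.g. containers.web.config = { ... }; -- which is the n=1 -> n>1 step.
}
```

The point is that the unit of reuse is the module, not the image: the same option set builds a pet server, a container, or a fleet.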
The one problem containers solved for me, better than anything I ever used in previous UNIX/Linux setups, is hierarchical resource tracking. I work with many codes that fork from their main binary and do their work in subprocesses. If your resource manager isn't scraping /proc to invert the process tree, it needs a way to assign resources to process trees such that the sum across the entire tree cannot exceed the resource limit.
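For contrast, this is roughly what the /proc-scraping alternative looks like: a minimal Linux-only sketch (field positions per the proc(5) stat format) that inverts the process table into a parent-to-children map, which is the bookkeeping a container's cgroup hierarchy gives you for free:

```python
import os

def process_tree():
    """Build a parent-pid -> [child-pids] map by scanning /proc (Linux only)."""
    children = {}
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/stat") as f:
                stat = f.read()
        except OSError:
            continue  # process exited between listdir() and open()
        # The comm field (2nd) may contain spaces or parens, so split only
        # after the last ')'; the ppid is then the 2nd remaining field.
        fields = stat.rpartition(")")[2].split()
        ppid = int(fields[1])
        children.setdefault(ppid, []).append(int(entry))
    return children

if __name__ == "__main__":
    tree = process_tree()
    # Summing a per-process metric over a subtree means walking this map;
    # our own pid shows up as a child of our parent:
    print(os.getpid() in tree.get(os.getppid(), []))  # True
```

Note the race inherent in this approach: processes can exit (or fork) between the scan and the accounting, which is exactly why kernel-side tracking of whole trees is the better primitive.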