> A “container” is just a term people use to describe a combination of Linux namespaces and cgroups. Linux namespaces and cgroups ARE first class objects. NOT containers.<p>Amen.<p>Somewhat tangential note: most developers I have met do not understand what a 'container' is. There's an aura of magic and mystique around them, and a heavy emphasis on Docker.<p>A sizable fraction will be concerned about 'container overhead' (and "scalability issues") when asked to move workloads to containers. They are usually not able to explain what the overhead would be, or what could potentially be causing it. No mention of storage, or of how networking would be impacted, just CPU. And that's usually said without measuring actual performance first.<p>When I press further, what I most commonly get is the sense that they believe containers are "like VMs, but lighter" (I've literally been told that a few times, especially when interviewing candidates). To this day, I've heard cgroups mentioned only once.<p>I wonder if I'm stuck in the wrong bubble, or if this is widespread.
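To make that concrete, here's a minimal sketch (my own illustration, not from the article): clone(2) a child into fresh PID and UTS namespaces, and it sees itself as PID 1 with its own hostname. Needs root or CAP_SYS_ADMIN. The cgroup half (resource limits) would be a separate write into cgroupfs, omitted here.

    /* build: gcc -o ns ns.c ; run as root */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char stack[1024 * 1024];            /* child stack; clone() takes its top */

    static int child(void *arg) {
        (void)arg;
        sethostname("not-a-vm", 8);            /* visible only in the new UTS namespace */
        printf("inside:  pid=%d\n", getpid()); /* prints 1: fresh PID namespace */
        return 0;
    }

    int main(void) {
        pid_t pid = clone(child, stack + sizeof(stack),
                          CLONE_NEWPID | CLONE_NEWUTS | SIGCHLD, NULL);
        if (pid == -1) { perror("clone"); return 1; }
        printf("outside: pid=%d\n", pid);      /* the same process, as the host sees it */
        waitpid(pid, NULL, 0);
        return 0;
    }

No hypervisor, no guest kernel: just an ordinary process that the kernel agrees to show a different view of the system.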
I'm a bit disappointed it didn't go into detail on how jails differ from zones. VMs I understand, but it seemed like the main point of the post was to distinguish containers from the other three.
Note this is from 2017. Previous discussion: <a href="https://news.ycombinator.com/item?id=13982620" rel="nofollow">https://news.ycombinator.com/item?id=13982620</a>
For my workload I've struggled to see the advantage containers would give me. Maybe someone here can convince me, rather than the current justification of 'docker all the things'.<p>We have servers that handle a lot of traffic. Ours is the only thing running on those machines, and it takes over all of their resources: it needs all the RAM, and all 16 vCPUs sit at ~90%.<p>It's running on GCP. To roll out, we have a Jenkins job that builds a tag, creates a package (dpkg), and builds an image.
There's another Jenkins job that deploys the new image to all regions and starts the update process, autoscaling and all that.<p>Can containers help me here?
So... are any or all of these what you would call a process "sandbox"? Do operating systems make it easy to sandbox an application so it can't cause harm to the system? What more could be done to make that a natural, first-class feature?<p>Like, let's say you found some binary and you don't know what it does, and you don't want it to mess anything up. Is there an easy way to run it securely? If not, why not? And how about giving it specific, opt-in permissions, like limited network or filesystem access?
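The closest primitive I know of is seccomp's strict mode, which is all-or-nothing rather than opt-in (seccomp-BPF filters and Landlock get closer to real opt-in permissions, at the cost of a lot more ceremony). A minimal Linux-only sketch:

    #define _GNU_SOURCE
    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        /* After this call only read(2), write(2), exit(2) and sigreturn(2)
         * are allowed; any other syscall kills the process with SIGKILL. */
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
            return 1;
        write(1, "still alive\n", 12);
        /* open("/etc/passwd", O_RDONLY);  <- would be killed on the spot */
        syscall(SYS_exit, 0);   /* raw exit(2): glibc's _exit() uses
                                   exit_group(2), which strict mode blocks */
    }

That it takes raw syscalls to even exit cleanly is a decent argument that this is not yet a "natural, first-class feature".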
I do not understand Docker on Windows.<p>If I understand correctly, when I run a Docker image on Linux, the dockerized process's syscalls are all executed by the host kernel (since, again if I understand correctly, the dockerized process executes more or less like a normal process, just in isolated process and filesystem namespaces).<p>Is this correct?<p>But how does Docker on Windows work?
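For the Linux case, this is easy to check: build the snippet below and run it both on the host and inside a container; it prints the same kernel version in both places, because one shared kernel answers every syscall.

    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void) {
        struct utsname u;
        if (uname(&u) != 0) { perror("uname"); return 1; }
        /* Inside a Linux container this still reports the *host's* kernel:
         * uname(2) is serviced by the one shared kernel. */
        printf("%s %s\n", u.sysname, u.release);
        return 0;
    }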