Disclaimer: I work for Docker<p>For the security enthusiasts out there, Docker 1.10 comes with some really cool security-focused additions. In particular:<p>- Seccomp filtering: you can now use BPF to filter exactly which system calls the processes inside your containers can use.<p>- Default seccomp profile: using the newly added seccomp filtering capabilities, we added a default profile that helps reduce the attack surface exposed by your kernel. For example, last month's use-after-free vuln in join_session_keyring was blocked by our current default profile.<p>- User namespaces: root inside the container isn't root outside the container (opt-in, for now).<p>- Authorization plugins: you can now write plugins for allowing or denying API requests to the daemon. For example, you could block anyone from using --privileged.<p>- Content-addressed images: the new manifest format in Docker 1.10 is a full Merkle DAG, and all downloaded content is finally content-addressable.<p>- Support for TUF delegations: Docker now supports read/write TUF delegations, and as soon as Notary 0.2 comes out, you will be able to use delegations to provide signing capabilities to a team of developers with no shared keys.<p>These are just a few of the things we've been working on, and we think they're super cool.<p>Check out more details here: <a href="http://blog.docker.com/2016/02/docker-engine-1-10-security/" rel="nofollow">http://blog.docker.com/2016/02/docker-engine-1-10-security/</a> or let me know if you have any questions.
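To make the seccomp part concrete, here's a minimal sketch of overriding the default profile with a custom one. The profile file name and syscall choice are my own examples, and the exact JSON schema and flag separator have varied across versions, so treat this as illustrative rather than canonical:

```shell
# Hypothetical profile: allow everything except keyctl, the syscall
# family involved in the join_session_keyring use-after-free vuln.
cat > no-keyctl.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    { "name": "keyctl", "action": "SCMP_ACT_ERRNO" }
  ]
}
EOF

# Run a container with the custom profile instead of the default:
docker run --rm -it --security-opt seccomp:no-keyctl.json ubuntu bash

# Or disable seccomp filtering entirely (not recommended):
docker run --rm -it --security-opt seccomp:unconfined ubuntu bash
```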
> Docker 1.10 uses a new content-addressable storage for images and layers.<p>This is <i>really</i> interesting.<p>Sounds like the up/download manager has improved too. I did some early work adding parallel stuff to that (which was then very helpfully refactored into actually decent go code :), thanks docker team) and it's great to see it improved. I remember some people looking at adding torrenting for shunting around layers, I guess this should help along that path too.
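One place the content-addressability is already visible from the CLI is digests; the digest below is a placeholder you'd copy from the listing, not a real one:

```shell
# Show the content digests of local images:
docker images --digests

# Pull a specific image by its digest rather than by a mutable tag:
docker pull ubuntu@sha256:<digest-from-the-listing>
```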
Network-scoped aliases are really handy when dealing with a multi-container setup, so I'm really happy that they implemented this!<p>In previous versions, only the name of a container would be aliased to its IP address, which can make it hard to deploy a setup with multiple containers in a given network group that should address each other using their names (e.g. "api" host connects to "postgres") and then have multiple instances of those groups on the same server (as container names need to be unique).
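A sketch of what that looks like with the new 1.10 flags (all names here are made up for illustration):

```shell
# Each deployment gets its own user-defined network:
docker network create group1
docker network create group2

# Container names must be globally unique, but network-scoped
# aliases only need to be unique within their own network:
docker run -d --name pg1 --net group1 --net-alias postgres postgres
docker run -d --name pg2 --net group2 --net-alias postgres postgres

# Inside either group, "postgres" resolves to that group's instance:
docker run --rm --net group1 ubuntu getent hosts postgres
```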
For those interested in the user namespace support, the best post I found was <a href="https://integratedcode.us/2015/10/13/user-namespaces-have-arrived-in-docker/" rel="nofollow">https://integratedcode.us/2015/10/13/user-namespaces-have-ar...</a> (there are also some docs here <a href="https://github.com/HewlettPackard/docker-machine-oneview/blob/master/Godeps/_workspace/src/github.com/docker/docker/experimental/userns.md" rel="nofollow">https://github.com/HewlettPackard/docker-machine-oneview/blo...</a>)
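The short version, as a sketch (the daemon flag is the 1.10 one; file paths may differ by distro):

```shell
# Start the daemon with user namespace remapping enabled;
# "default" creates/uses a "dockremap" user for the mapping:
docker daemon --userns-remap=default

# The subordinate ID ranges used for the remapping live here:
cat /etc/subuid /etc/subgid

# A process that is uid 0 inside the container now shows up on
# the host as an unprivileged uid from the dockremap range.
docker run --rm ubuntu id -u
```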
Wow, user namespaces! That was quick!<p>EDIT: And a default seccomp profile! Did I miss the memo about containerisation suddenly becoming a competitive industry?
Items of particular interest to monitoring and diagnostics:<p>1. docker stats --all<p>A built-in alternative to 'docker ps -q | xargs docker stats' that takes care of containers being added dynamically.<p>For consistency, it would be nice to have a similar option in the API stats call to fetch statistics for all running containers.<p>2. The 'docker update' command, although I would have preferred 'docker limit'.<p>The ability to change container limits at runtime:<p>- CPUQuota
- CpusetCpus
- CpusetMems
- Memory
- MemorySwap
- MemoryReservation
- KernelMemory<p>With this feature in place, there is no reason to run containers without limits, at least memory limits.<p>3. Logging driver for Splunk<p>A better approach would be to make the generic drivers flexible enough to send logs to any logging consumer.
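For example, tightening the limits of an already-running container might look like this (container name and values are illustrative):

```shell
# Start a container without limits, then add memory limits at runtime:
docker run -d --name web nginx
docker update --memory 512m --memory-swap 1g web

# Adjust its CPU allocation without a restart:
docker update --cpu-quota 50000 --cpuset-cpus 0,1 web
```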
I love the ability to specify IPs, but I just want to give static IPs to my containers from my private network, and attaching to my already existing bridge does not work. I started the daemon as follows, but no help:<p>> ./docker-1.10.0 daemon -b br0 --default-gateway 172.16.0.1<p>> ./docker-1.10.0 run --ip 172.16.0.130 -ti ubuntu bash
docker: Error response from daemon: User specified IP address is supported on user defined networks only.<p>But my KVM VMs work fine with that bridged network. I know I could just port forward, but I don't want to. Yes, it seems I am treating my containers as VMs, but this worked fine in plain LXC, where we could even use an Open vSwitch bridge for advanced topologies.
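For what it's worth, the error message points at the supported route: `--ip` only works on a user-defined network. Something like the following should get static IPs going, with the caveat that it creates a new Docker-managed bridge rather than attaching to the existing br0 (subnet values copied from the commands above):

```shell
# Create a user-defined bridge network covering the desired range:
docker network create -d bridge \
    --subnet 172.16.0.0/24 --gateway 172.16.0.1 mynet

# Static IP assignment now works on that network:
docker run --net mynet --ip 172.16.0.130 -ti ubuntu bash
```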
Sadly, <a href="https://github.com/docker/docker/issues/3043" rel="nofollow">https://github.com/docker/docker/issues/3043</a> is still open, so no multicast support since 1.6...
For an overview of what's new in this release, check out the blog post: <a href="https://blog.docker.com/2016/02/docker-1-10/" rel="nofollow">https://blog.docker.com/2016/02/docker-1-10/</a><p>The highlights are networks/volumes in Compose files, a bunch of security updates, and lots of new networking features.
It's the danger of running against "latest" all the time... But it's been a day of chasing my own tail when creating a new cluster (Mesos, but that really isn't the issue) and using some tools built against the prior version (volume manager plugin, etc.) that break with updates to Docker.<p>It seems like if one piece gets an upgrade, every moving component relying on its APIs may need to be looked at as well.<p>Did a PR on one issue.<p>Currently chasing my tail to see if a third-party lib is out of whack with the new version or it's something I did.<p>The whole area is evolving, and the cross-pollination of frameworks and solutions (Weave, etc.) makes for a complicated ecosystem. Most people don't stay "Docker only". I'm curious to see the warts that pop up.
The --tmpfs flag is a huge win for applications that use containers as unit-of-work processors.<p>In these use cases, I want to start a container, have it process a unit of work, clear any state, and start over again. Previously, you could orchestrate this by (as one example, there are other ways) mounting a tmpfs file system into whichever directories the runtime needs, starting the container, stopping it once the work is done, cleaning up the tmpfs, and then starting the container again.<p>Now, you can create everything once with the --tmpfs flag and simply use "restart" to clear any state. Super simple. Awesome!
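A sketch of that pattern (the image name, mount point, and size are arbitrary examples):

```shell
# All scratch state lives on a tmpfs that exists only while
# the container is running:
docker run -d --name worker --tmpfs /scratch:rw,size=64m my-worker-image

# Restarting tears down and recreates the tmpfs, so the
# container comes back up with a clean slate:
docker restart worker
```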
I'd really, really need DNS for non-running containers, somehow. Nginx can't start if an upstream container is down, as its name won't resolve.
Nice to see building from stdin working again.<p><a href="https://github.com/docker/docker/issues/15785" rel="nofollow">https://github.com/docker/docker/issues/15785</a>
Apparently no one else has been paying an ounce of attention... And you get downvoted for it. The HN way!
<a href="https://github.com/docker/docker/issues/19474" rel="nofollow">https://github.com/docker/docker/issues/19474</a>
Not least, you're forced to go through their DNS server, which doesn't support TCP.
Boy, this is absolutely going to fuck people. Because I bet a bunch of people are going to run Go containers in 1.10 engine. And guess what happens when you send a Go app a DNS response, in UDP format, that is larger than 4096 bytes?
You get a panic and crash! Woohoo! And yes, there are DNS servers that incorrectly throw out UDP DNS responses larger than 4096 bytes.
Can't wait for my containers to fail because of fucking Docker putting a DNS service in Engine. Unacceptable. Docker should've realized they needed to think about this stuff, all the while shykes was too busy picking fights with people as Kubernetes encroached on what he saw as "his" territory.
There's a reason that everyone is very excited about the rkt announcement today. Particularly amongst some Kubernetes users...
(In the interest of not tainting the waters, I do NOT work for Google)