This is something near and dear to my heart! The great thing about container images is that software distribution is based on static assets. This enables scanners to give teams actionable data without being on every host. This is a net new capability, and I think it enables better security in organizations that adopt containers. And unlike "VM sprawl," container systems are generally introspectable via a cluster-level API like Kubernetes, so scanning doesn't require active agents on every node. Two things that have happened recently in this space:<p>- Quay.io[0] offers scanning as a standard feature on all accounts, including free open source accounts. This also includes notifications to external services like Slack. This is what it looks like when you ignore an image[1].<p>- The Kubernetes community has started automating scans of all of the containers maintained by that community to ensure they are patched and bumped to the latest versions. A recent example[2].<p>The cool thing is that both of these systems use the open source Clair project[3] as a way of aggregating data sources from the various distribution projects. This all leads to the reason we feel automated updates of distributed systems are so critical, and why CoreOS continues to push forward these concepts in CoreOS Tectonic[4].<p>[0] <a href="https://blog.quay.io/quay-secscanner-clair1/" rel="nofollow">https://blog.quay.io/quay-secscanner-clair1/</a><p>[1] <a href="https://quay.io/repository/philips/host-info?tag=latest&tab=tags" rel="nofollow">https://quay.io/repository/philips/host-info?tag=latest&tab=...</a><p>[2] <a href="https://github.com/kubernetes/kubernetes/pull/42933" rel="nofollow">https://github.com/kubernetes/kubernetes/pull/42933</a><p>[3] <a href="https://github.com/coreos/clair" rel="nofollow">https://github.com/coreos/clair</a><p>[4] <a href="https://coreos.com/tectonic" rel="nofollow">https://coreos.com/tectonic</a>
This is great research, but I think an important point is missed. It may come across that these images are vulnerable because of some intrinsic property of using Docker, but this is not the case. It's also worth pointing out that by adopting Docker, this kind of analysis actually becomes easier to do across an organization, and mitigation becomes easier as well.<p>Another aspect that is missed: just because you use a vulnerable image doesn't necessarily mean you are at risk of being compromised; that depends on what other security layers you employ. This gets to the practical scenarios of security operations.
Note that an image containing <i>vulnerable binaries</i> is not the same thing as an <i>exploitable container</i>. A container derived from a full OS like Ubuntu will have many binaries to provide a standard environment, but most of them will never be touched by the running program. That year-old image might have a vulnerable Perl version, but nothing in the container even runs Perl, so it's a non-issue.<p>This is why many people can get away with a minimal base image like Alpine: a tiny busybox shell provides enough features to run the application while still supporting some manual debugging with docker exec. It also avoids false positives like these, letting you more quickly find precisely what you need to upgrade when a new OpenSSL vulnerability is announced.<p>(Disclaimer: I work on Google Container Engine / Kubernetes.)
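To make that distinction concrete, one rough way to see what a container actually uses is to list the shared libraries its processes currently have mapped, rather than everything installed on disk. A minimal sketch (my own, not an official tool), assuming a Linux /proc filesystem and meant to be run inside the container:

```python
# Sketch: list shared libraries actually mapped by running processes,
# to cross-check scanner findings against what is really in use.
# Assumes a Linux /proc filesystem (run this inside the container).
import glob
import re

def loaded_libraries():
    libs = set()
    for maps_path in glob.glob('/proc/[0-9]*/maps'):
        try:
            with open(maps_path) as f:
                for line in f:
                    # Each maps line may end with the path of a mapped file;
                    # keep only shared objects (e.g. /lib/.../libssl.so.1.0.0).
                    m = re.search(r'(\S+\.so\S*)$', line)
                    if m:
                        libs.add(m.group(1))
        except OSError:
            continue  # process exited (or access denied) while reading
    return sorted(libs)

if __name__ == '__main__':
    for lib in loaded_libraries():
        print(lib)
```

If a scanner flags libperl but nothing in this list (or in any process's history) ever maps it, that finding is much lower priority than one against a library that is actually loaded.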
One thing that does get a mention, but only right at the bottom of the post, is using smaller base images (e.g. Alpine).<p>If you can, I'd recommend this as a good practice to reduce these kinds of problems. The fundamental fact is that if you don't have a library installed, you can't be affected by a vulnerability in it. So the smaller your image, the fewer possible avenues you have for shipping vulnerable libs, and the less time you'll spend re-building images with updated packages.
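That advice can be sketched as a Dockerfile; `myapp` here is a hypothetical statically linked binary built outside the image:

```dockerfile
# Minimal sketch: an Alpine base ships far fewer packages than a full
# distro image, so there is less to scan and less to patch.
FROM alpine:3.5
# Only add what the app actually needs (here: TLS root certificates).
RUN apk add --no-cache ca-certificates
# "myapp" is a hypothetical statically linked binary (e.g. a Go build).
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

A vulnerability scan of an image like this has only a handful of packages to report on, versus hundreds in a full Ubuntu or Debian base.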
I'm looking for a base image choice, and this article helped me a lot. It seems a Debian base image is a good choice so far. Alpine is quite popular lately, but I'm afraid the musl library may cause some headaches in the future. Is Debian the way to go for production use? What about other alternatives like CentOS?
Not for shaming purposes, but to see if there are any patterns: will you release a list of the Docker images reviewed, and which of them have vulnerabilities?<p>Do those without vulnerabilities use a CI/CD process that results in the container being auto-updated whenever there are new releases?
Is there a way for teams with production Docker deployments to easily experiment with this kind of scanning on their own infra to understand their own situation? Maybe worth writing up a quick description of how operators can do something like that.
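One way operators can try this on their own images is to stand up Clair locally. A sketch based on the community clair-scanner project (image names and flags are that project's, so verify them against its README before relying on this):

```yaml
# docker-compose sketch for a local Clair instance.
# arminc/clair-db is a Postgres image pre-seeded with vulnerability data,
# so you don't wait hours for the initial fetch.
version: '2'
services:
  postgres:
    image: arminc/clair-db:latest
  clair:
    image: arminc/clair-local-scan:latest
    ports:
      - "6060:6060"   # Clair API
    depends_on:
      - postgres
```

With that running, invoking the scanner against a local image (something like `clair-scanner --clair=http://localhost:6060 --ip <host-ip> yourorg/yourimage:latest`; check the scanner's docs for exact flags) reports known CVEs for the image's layers.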
How do people currently scan their infrastructure to look for vulnerabilities? Do you have a dedicated team that handles this, or is security "everyone's job"?