I'm really looking forward to seeing the scientific community adopt Docker as a way to distribute reproducible research and coursework.<p>MIT 6.S094 has a Dockerfile[^1] that contains all the software required for taking part in the class. This is a huge boon for getting stuck into the class and its coursework.<p>[^1]: <a href="http://selfdrivingcars.mit.edu/files/Dockerfile" rel="nofollow">http://selfdrivingcars.mit.edu/files/Dockerfile</a>
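For anyone curious, getting going with it is roughly a two-command affair. A rough sketch (the image tag and port mapping below are my own placeholders, not anything specified by the course):

    # Fetch the course Dockerfile and build a local image from it
    curl -O http://selfdrivingcars.mit.edu/files/Dockerfile
    docker build -t 6.s094 -f Dockerfile .

    # Start a throwaway container with the course environment
    # (the tag name and published port here are just illustrative)
    docker run -it --rm -p 8888:8888 6.s094

Everyone in the class ends up in the same environment, regardless of what's installed on their laptop.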
I'm just a guy who wants to deploy web apps. Is Docker overkill for me? Basically, I want to be able to test something on my local machine under the same conditions it will run under on my server. Containerisation seems like the only way to do this that doesn't involve keeping packages and system configuration in sync across two or more systems.
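Concretely, what I'm picturing is something like the sketch below (the image name and registry are placeholders): build and test the image locally, push it, then run the identical image on the server.

    # Build the app image locally and test it under the same conditions
    docker build -t myapp:latest .
    docker run --rm -p 8080:8080 myapp:latest

    # Ship the exact same image to the server via a registry
    docker tag myapp:latest registry.example.com/myapp:latest
    docker push registry.example.com/myapp:latest

    # On the server: pull and run the identical image
    docker pull registry.example.com/myapp:latest
    docker run -d -p 80:8080 registry.example.com/myapp:latest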
As much as I welcome the CLI cleanup, I can't help thinking that the 'docker ps' -> 'docker container ls' change makes no sense to anyone with experience of BSD/Unix/Linux systems. Seriously, why?
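For reference, my understanding is that the old top-level commands keep working as aliases alongside the new management-command form, roughly:

    docker ps        # new form: docker container ls
    docker rm  ...   # new form: docker container rm ...
    docker images    # new form: docker image ls
    docker rmi ...   # new form: docker image rm ...

So muscle memory survives, at least for now.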
Looks like there's a mistake about image pruning:<p>"Add -f to get rid of all unused images (ones with no containers running them)."<p>But the option is actually `-a` -- `-f` simply skips the confirmation prompt.
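As I read the 1.13 flags, the split is roughly:

    # Remove only dangling (untagged) images, with a confirmation prompt
    docker image prune

    # Remove all images not used by any container -- the `-a` behaviour
    docker image prune -a

    # `-f` only skips the confirmation prompt; combine it as needed
    docker image prune -a -f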
Prune doesn't seem that well thought out to me. Don't get me wrong, I do find it useful, but many people use containers as environments. Think about how many people are going to run prune only to find that their work has gone missing.<p>If you are gonna add a nuclear button, do it with a big red alert and give the option to whitelist some containers.
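Until then, the defensive habit I'd suggest is to review first and keep work containers either running or labelled. A sketch (the image name and label are just examples, and the label filter on prune only arrived in Docker releases newer than 1.13):

    # See every container, including stopped ones, before pruning
    docker container ls -a

    # Prune only removes *stopped* containers; a running one is safe
    docker container prune

    # In newer Docker releases, prune accepts label filters, so work
    # containers can be excluded by labelling them, e.g.:
    docker run -d --label keep=true my-dev-env
    docker container prune --filter "label!=keep=true"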
Curious what methods others use for handling secrets at build time (using docker-compose). I'm currently installing (private) dependencies at runtime by mounting my secrets as a volume. I couldn't find a method that didn't seem to have some risk of inadvertently exposing them.
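For what it's worth, the runtime-mount approach I'm describing looks roughly like this (the paths and image name are illustrative); it keeps secrets out of the image layers, but they have to be present on every host that runs the container:

    # Mount the secrets directory read-only at runtime instead of
    # baking it into the image at build time
    docker run --rm \
      -v "$(pwd)/secrets:/run/secrets:ro" \
      myapp:latest

    # The docker-compose equivalent is a read-only bind mount:
    #   volumes:
    #     - ./secrets:/run/secrets:ro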
Why not a single 'prune' command with 'containers', 'images', ... as an argument/subcommand?<p>That would have seemed more intuitive to me.
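For comparison, what 1.13 actually ships is the inverse shape, with prune hanging off each object type plus a catch-all:

    docker container prune   # remove stopped containers
    docker image prune       # remove dangling images
    docker volume prune      # remove unused volumes
    docker network prune     # remove unused networks
    docker system prune      # sweep across object types in one command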