I decided to try nanobox last Friday, after some trouble firing up a Vagrant box on my laptop. One instance worked, another one didn't, nothing new...<p>Unfortunately I realized that to download nanobox I have to register and log in, and I really don't understand why. I expected to be able to download a binary, write a configuration file and build my service, which I'll never run on somebody else's cloud.<p>So this is not equivalent to Vagrant or Docker, which are unregistered downloads or even apt-gets. It's more like running a part of AWS locally in development, but I don't want any lock-in for this project.<p>I went back to Vagrant. It turned out that a halt of the failed box followed by an up fixed the problem. I still don't feel Vagrant is completely reliable or reproducible, but I'll write my docker-compose and Dockerfiles if I want to use something else.<p>I'd love to hear from nanobox about the reasons for the required registration. Not having to support people like me who won't buy their service would be perfectly fine. I wonder if there is some technical reason that applies even to the basic scenario of firing up a service locally.
It might be worth investigating Kubernetes network policies [0] and the CIS benchmark [1] for a similar solution.<p>0 - <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow">https://kubernetes.io/docs/concepts/services-networking/netw...</a><p>1 - <a href="https://www.cisecurity.org/benchmark/kubernetes/" rel="nofollow">https://www.cisecurity.org/benchmark/kubernetes/</a>
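For anyone unfamiliar with them, the usual starting point (and what the CIS benchmark recommends as a baseline) is a default-deny policy per namespace. A minimal sketch, with an illustrative namespace name:

```yaml
# Deny all ingress and egress for every pod in this namespace;
# traffic must then be re-enabled by more specific policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-app        # illustrative namespace
spec:
  podSelector: {}          # empty selector matches all pods
  policyTypes:
    - Ingress
    - Egress               # listed with no rules => nothing is allowed
```

Note this only takes effect if the cluster's CNI plugin actually enforces NetworkPolicy (Calico, Cilium, etc.).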
From watching the videos on their homepage, it looks to me like the nanobox CLI is a bunch of wrappers around Docker.<p>The cloud product sounds like a custom Docker container orchestrator. Worker nodes run on your cloud provider but management is tied to a control panel on their website. They recommend using nanobox over a PaaS in their video, but I fail to see how this is anything other than a PaaS.
Brings to mind Joyent Triton (OSS), which takes the Docker API abstraction to the availability-zone (DC) level, built on (Solaris) Zones, which also benefit from a Linux kernel API (SmartOS LX Brand).
It feels like the de facto network policy design methodology, which I have yet to see implemented in open source, is one in which CI/CD test processes observe network utilization in test environments and automatically generate restrictions for deployed instances.<p>For example, an ingress-only static content webserver would not require any outbound internet access.<p>The same approach could and should be used for other observable and manageable layers (filesystem access, syscalls, language interpreter-specific function call whitelisting, etc.).<p>I am waiting for a security-focused CI/CD tool to own this space. Even a light-touch implementation would surely improve greatly on the status quo.
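To make the webserver example concrete, the policy such a tool might emit for an ingress-only static server could look roughly like this (a sketch in Kubernetes NetworkPolicy terms; the pod label is an assumption):

```yaml
# Allow inbound HTTP to the webserver pods; declaring Egress with no
# egress rules denies all outbound traffic, matching the observation
# that the server never initiated connections in test. (Note: this
# also blocks DNS, which is fine for a server that never dials out.)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: static-web-ingress-only
spec:
  podSelector:
    matchLabels:
      app: static-web      # assumed pod label
  policyTypes:
    - Ingress
    - Egress               # no egress rules => deny all egress
  ingress:
    - ports:
        - protocol: TCP
          port: 80
```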