Hi everyone, Docker maintainer here. Here's my list of Docker hosting services. Please correct me if I forgot one! I expect this list to get much, much longer in the next couple of months.

* http://baremetal.io

* http://digitalocean.com (not Docker-specific, but they have a great Docker image)

* http://orchardup.com

* http://rackspace.com (not Docker-specific, but they have a great Docker image)

* http://stackdock.com

EDIT: sorted alphabetically to keep everyone happy :)
Sounds like you took the best parts of DigitalOcean and are trying to push it as a platform with Docker baked in. I like it. It seems like you're also trying to simplify using Docker. I like that even more.
I love the fact that you keep trying to define your own vocabulary ("Deck", etc.) but always have to explain it. Best to stick with the more easily understood term rather than invent your own, I think.

Unless you're going to try and trademark them all.
I like the idea. Really cool. I've been researching Docker a lot lately, and did most of my recent development on CoreOS. I do have a question that wasn't immediately obvious. Docker maintains that one should make a container out of every application, so that instead of installing apache + mysql + memcached in one Ubuntu environment, you'd create three Docker containers (apache, mysql, memcached), run them together, and define the share settings, etc. Now here's my question: it seems as if on StackDock, every container would be a separate (at least) $5 instance? So if I want to run apache + mysql + memcached, would I need to cram them all into one Docker container in order to have them on one machine? Or is it possible to use a $5 StackDock system and run multiple containers on it, like on CoreOS?

Thanks!
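For what it's worth, if they give you a whole Docker host rather than a single container, running several containers is just several `docker run` calls. A sketch, assuming placeholder image names you've built or pulled yourself (flag spelling, e.g. `-link` vs `--link`, varies across Docker versions):

    # Three single-purpose containers sharing one host
    docker run -d --name mysql my-mysql-image
    docker run -d --name memcached my-memcached-image
    # Link the web container to the other two so it can find them
    docker run -d --name web -p 80:80 --link mysql:db --link memcached:cache my-apache-image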
"Docker-as-a-Service", simple, easy-to-understand pricing. Love it.<p>This is my favourite Docker offer so far. I've been looking for something to replace dotCloud's deprecated sandbox tier for just playing around, and it looks like this fits the bill.
This is truly awesome, nice work!

I configured and launched a machine with redis and node in less than 5 minutes. Very cool.

How will you isolate instances from each other? My instance appears to have 24 GB of RAM and 12 cores, and it looks like I can use all of it in my instance.
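Docker itself does expose per-container cgroup limits, so in principle they could cap each instance even on a big shared box. A sketch (the image name is a placeholder, and whether StackDock actually applies these is the open question):

    # Cap memory at 512 MB and give the container a relative CPU share
    docker run -d -m 512m -c 512 my-app-image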
One thing that confuses me with Docker is how you configure your containers to communicate with each other.

So say I have a fancy Django image and a fancy Postgres image.

How do I then have the Django one learn the Postgres one's IP, authenticate (somehow), and then communicate separately?

Also, the recommended advice for "production" is to mount host directories for the PostgreSQL data directory. Doesn't this rather defeat the point of a container (in that it's self-contained), and how does that even work with a DaaS like this? I'm pretty confused. Is there an idiomatic way to do this?

Do service registration/discovery tools for Docker already exist?
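Not a full service-discovery answer, but Docker's link feature covers the "learn the IP" part today. A sketch, assuming images named mypostgres and mydjango, and that the Postgres image EXPOSEs 5432 (flag spelling varies by Docker version):

    # Start Postgres with its data directory mounted from the host
    docker run -d --name db -v /srv/pgdata:/var/lib/postgresql/data mypostgres
    # Link the app container to it; Docker injects env vars such as
    # DB_PORT_5432_TCP_ADDR and DB_PORT_5432_TCP_PORT for the app to read
    docker run -d --name web --link db:db mydjango

On the host-mounted data directory: the container stays disposable, the data doesn't. That's the point of the mount, not a defeat of it.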
We're doing a similar thing called Orchard:<p><a href="https://orchardup.com" rel="nofollow">https://orchardup.com</a><p>We give you a standard Docker instance in the cloud - all the tools work exactly the same as they do locally. You can even instantly open a remote bash shell, like the now-famous Docker demo!
The big point of Docker for me is that I can build the container on my machine, run automated tests on it, play with it, and then ship it to the production machines once I'm confident it is working.

If you build the container on a service like this, testing it is hard or in some cases even impossible, for example acceptance tests with Selenium.

Gemfile.lock and similar version-pinning tools help, but prebuilt containers bring deployment stability to a whole new level, and they are the reason I'm excited about Docker and containers in general.

Do they support prebuilt containers?
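That build-test-ship workflow is exactly what the registry model supports; whether StackDock accepts pushed images is the open question. A sketch with placeholder names:

    # Build and test locally
    docker build -t myapp .
    docker run myapp ./run_tests.sh              # e.g. drive Selenium acceptance tests
    # Ship the identical image via a registry (the URL here is made up)
    docker tag myapp registry.example.com/myapp
    docker push registry.example.com/myapp
    # ...then `docker pull registry.example.com/myapp` on the production host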
What would be even better is to decouple the idea of a drop from the containers running on it. What I like about container approaches is having "machines" I can run them on. So let's say I make a "www" drop, or several. I should then be able to fire up my containers onto particular types of drops and have them started there without having to think about the specifics. The benefit of this is that I only care about my container running and having some basic resource requirements, not about the specific machine instance it is running on. I could even co-mingle different containers on types of "machines". Also, separating out disk resources from CPU and RAM would be good. Maybe you do this already, but it wasn't clear to me.
Great initiative! One thing to be aware of is that Docker uses LXC for containers, and LXC relies on kernel isolation and cgroup limits. The concern is kernel vulnerabilities.

It is comforting that Heroku also uses LXC for dynos. It would be interesting to know how many in-house adjustments to the kernel and LXC have been made to ensure hardening.
Just curious: how are people building Docker images these days? Doesn't it only run on 64-bit Linux? I have a 32-bit Linux desktop and a Mac and haven't gotten around to installing Docker. At work I have a 64-bit Linux desktop, but it seemed to be extremely picky about the kernel version, so I gave up.

Are people running Linux VMs on their Macs to build containers?

I like the idea of this service. But both the client side and the server side have to be easy. Unless I'm missing something, it seems like they made the server side really easy, but the client side is still annoying.
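A 64-bit Linux VM is indeed the usual workaround on a Mac. A sketch using Vagrant (the box name and install-script URL are the commonly cited ones, but check the Docker docs for whatever version you're on):

    # On the Mac: bring up a 64-bit Ubuntu VM
    vagrant init precise64 http://files.vagrantup.com/precise64.box
    vagrant up && vagrant ssh
    # Inside the VM: install Docker via the official script
    curl -sL https://get.docker.io/ | sudo sh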
I love this idea and want to try it, but I have no experience with Docker (it's on the todo list).

I wanted to spin up an instance of Sphinx Search but had no idea how to go about doing it.

Maybe creating a set of tutorials would help with this. I can think of two advantages. First, customers like myself will love it. Second, similar to Linode and their tutorials, it will drive a lot of traffic and establish your reputation as Docker experts. It will probably build a lot of back-links too, as people link to your tutorials.
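For illustration, a minimal Dockerfile for Sphinx might look something like this (the package name, config path, and port are my best guesses, untested):

    # Hypothetical Dockerfile for Sphinx Search (untested sketch)
    FROM ubuntu:12.04
    RUN apt-get update && apt-get install -y sphinxsearch
    # Provide your own index/search configuration
    ADD sphinx.conf /etc/sphinxsearch/sphinx.conf
    EXPOSE 9312
    # Run searchd in the foreground so the container stays alive
    CMD ["searchd", "--nodetach"]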
How is private networking handled between Docker containers?

UPDATE: I'd also be interested to hear about DigitalOcean-style "shared" (but non-private) networking; basically, any network adaptor with a non-Internet-routable IP address. ;)
Not being familiar with the subject, basically it seems that:

Docker is a simple description of an internet server, including the various services required (mysql, httpd, sshd, etc.; the bundle is called a *deck*).

It seems you can then create a server elsewhere (e.g. on your localhost), generate the Docker description of it, and use that description to fire up a server (either a VM or dedicated) using the service in the OP.

Am I close?

Could I use this to do general web hosting?

Edit: and looking at digitalocean.com, it appears I can activate and deactivate the "server" at will, so I can have it online for an hour of testing and pay < 1¢?
This looks awesome! I currently have an AWS box for the same purpose, running a few of my Docker containers. Will this support the ADD directive, or the ability to add custom files (config files) into containers?
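In stock Docker, ADD in a Dockerfile is how you'd bake config files into an image; whether StackDock builds from arbitrary Dockerfiles is the real question. A sketch with made-up file names:

    FROM ubuntu:12.04
    # Copy local files into the image at build time
    ADD nginx.conf /etc/nginx/nginx.conf
    ADD ./app /app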
Wonder if they have an idle/spin-up time. Only their one-instance plan is $5, but I know I have to buy more than one dyno on Heroku to avoid idle/spin-up time, or use hacks like constant pingers, etc. This is important when I'm doing experiments, UI tests, alpha tests, or submitting apps for review before they have any consistent traffic, but I don't want them to occasionally get stuck on 15-second spin-up times on requests.
Looks cool. Here's what I'd love to see: built-in git deployment (ie. take a Dockerfile, build an image from it, and then after a push add the latest source code to /app and start new instances), and some kind of orchestration so you could run a number of app containers behind a load balancer container.
Hmm, StackDock.com is hosted on a server at Hetzner in Germany.

I don't know for certain whether the containers themselves are hosted by Hetzner or not, but Hetzner is more of a budget provider than something you host production sites on.

I've heard many mixed reviews about their network and, mostly, their support, which isn't up to scratch. We'll see what happens, but from what I've seen, if someone decides to abuse the service, Hetzner might just take down the whole server without warning, just like OVH does.

http://www.hetzner.de/en/hosting/produkte_rootserver/px120ssd (I'm guessing they are using something similar to this.) It's a pretty powerful and cheap server, but if you search hard enough you can find something equivalent in the States for around the same price.
I love the idea! Really. I just don't like all of the UX yet. Some things feel... off. It might be something personal, I'm not sure, but I guess it's interesting to discuss. "Drops are distilled Decks": the words feel semantically mismatched for some reason. If I think "Deck", I don't think "config". If I think "Drop", I don't think "deployable stuff", and I don't see how a "distilled Deck" is a "Drop". It also feels odd that I can create a "New deck" in the "Instances" section.

Adding "cards" to a "deck" does sound intuitive, though.

I'm trying to come up with better terminology. Something with ships and containers...
IMO, labels/tooltips should be added to the icons for the cards. Some of them, including the leaf (Node.js?) and the tree (no idea what that is), aren't especially obvious.

Otherwise, cool!
You should do some A/B tests to confirm, but I bet the pricing table at the bottom was a little confusing because the price was not highlighted in any way, and the call to action was round when it is typically a rectangle.
Looks awesome! Anyone know if there are bandwidth / throughput / transfer charges?

Also, forgive my ignorance, but what would it take to be able to "add containers" in the same way that you can add dynos on Heroku?
The issue with Linux containers is (or at least it used to be) that it's possible for a malicious user to "break out" of the container. Has this problem been solved?