This is nice, but it's characteristic of most other Docker tutorials that say "just run this command" and don't bother to go into how or why it works. As a result, all the reader can do is copy and paste (assuming the command even works in the first place and the tutorial isn't outdated). What I think most people need to know is how to modify the app to work for their purposes instead of just "hello world".
The title should be corrected to read "How to set up and deploy a 1000 node botnet". There's no mention of securing the hosts or the swarm whatsoever.
This is clearly a doc based on the hypothetical assumption that he can deploy 1000 nodes. If that's the case, why not go for a cool million?

Just a suggestion to the OP: it's not hard to set up and share a 5 node vagrant cluster on your laptop (see the sketch at the end of this comment for one local alternative). Give concrete examples that people can run locally and test your assertions themselves. Once that foundation is laid, you can extrapolate to 10 nodes, 100 nodes, 1000 nodes.

Anyone that has deployed a cluster of that size knows that the article is missing a bunch of items, not limited to the following:
- Overhead instances (manager, service discovery, logging, etc.)
- Configuration Management
- Security Implications
- Monitoring
- Failure mitigation (it's going to happen at that scale)
- Update strategy at this scale

For those that are interested, one official doc and a good place to start when learning how to deploy a large Docker 1.12 cluster is this guide by Docker: https://docs.docker.com/swarm/plan-for-production/
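As a rough illustration of the local-cluster suggestion above, here is a minimal sketch using docker-machine with the VirtualBox driver instead of Vagrant; the node names and count are arbitrary, and exact flags may vary by Docker version:

    # Create three local VMs to act as swarm nodes (names are arbitrary)
    for i in 1 2 3; do
      docker-machine create --driver virtualbox node$i
    done

    # Initialize the swarm on the first node, advertising its VM IP
    docker-machine ssh node1 \
      "docker swarm init --advertise-addr $(docker-machine ip node1)"

    # Grab the worker join token from the manager
    TOKEN=$(docker-machine ssh node1 "docker swarm join-token -q worker")

    # Join the remaining nodes as workers
    for i in 2 3; do
      docker-machine ssh node$i \
        "docker swarm join --token $TOKEN $(docker-machine ip node1):2377"
    done

Once this works for 3 nodes on a laptop, the same loop structure is what you'd extrapolate to larger counts.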
"replacing 3 in wherever you see 1000 in this post is probably a good idea"<p>...which really means "I have no clue where this will break when scaling".<p>Cute, but not terribly insightful, and possibly risky in an age where following recipes off the Internet is too often the first step towards production :)
Is the definition of bare-metal changing? It seems clear from the context that virtual machines are being used, but is that a distinction that those further up the stack don't worry about now?
Basically this is how the howto handles the hardest part:

> Basically you will run docker swarm init on the first node and then docker swarm join on all the other nodes. There are a few other arguments that you'll need to add to those commands but if you follow the docs you'll have no problems at all.

The worst part of the setup is how to build the cluster's node store in a way which is redundant and reliable, since provisioning it for HA is largely undocumented and left as an exercise for the reader.
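For what it's worth, here is a hedged sketch of the commands the quoted passage glosses over. In 1.12 swarm mode the cluster state lives in a Raft store on the manager nodes, so making it redundant means running an odd number of managers; the IP below is a placeholder:

    # On the first manager (10.0.0.1 is a placeholder address)
    docker swarm init --advertise-addr 10.0.0.1

    # Print the join tokens for the two node roles
    docker swarm join-token manager
    docker swarm join-token worker

    # On two more machines, join as managers so the Raft store
    # survives the loss of any single node (3-manager quorum)
    docker swarm join --token <manager-token> 10.0.0.1:2377

    # On every remaining machine, join as a worker
    docker swarm join --token <worker-token> 10.0.0.1:2377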
Meh, the article doesn't really bring anything new to the table.

If you are doing something like this, please keep in mind that this kind of DNS failover is, at best, unreliable. You have no control over how DNS is cached on the client side, or over whether the client will switch to the next IP in the rotation if the previous one is unavailable.
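To make the caching point concrete, a quick way to inspect what clients actually see (app.example.com is a placeholder for whatever round-robin record the post sets up):

    # Query the A records; round-robin DNS returns several IPs,
    # but most clients simply take the first one they get
    dig +noall +answer app.example.com A

    # The TTL column shows how long resolvers are asked to cache
    # the answer; intermediate resolvers and client libraries are
    # free to cache longer, so a dead IP can keep being handed out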
The proper way to do HA would be to use some kind of VIP + load balancer combination (e.g., keepalived + HAProxy), which lets you fail over the IP itself instead of relying on the hostname. However, if you also have a database backend to think about, you will most likely need something like Pacemaker to ensure you don't end up with data inconsistency (a split-brain scenario).
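For the VIP half of that combination, a minimal keepalived sketch; the interface name, router ID, and addresses are all placeholders, and the standby host runs the same config with state BACKUP and a lower priority:

    # /etc/keepalived/keepalived.conf (placeholder values throughout)
    vrrp_instance VI_1 {
        state MASTER            # BACKUP on the standby node
        interface eth0          # NIC carrying the VIP
        virtual_router_id 51    # must match on both nodes
        priority 100            # standby uses a lower value, e.g. 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass changeme
        }
        virtual_ipaddress {
            10.0.0.100/24       # the floating IP HAProxy binds to
        }
    }

If the master dies, VRRP moves 10.0.0.100 to the standby, so clients keep hitting the same address with no dependence on DNS behavior.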
How ironic that this article is now at the top of HN: https://circleci.com/blog/its-the-future :)