If you are just starting, you should have the simplest setup - everything on one server - and scale it only when it becomes necessary. Premature scaling adds complexity and slows down your iterations.<p>My setups usually consist of nginx serving static content and proxying application requests (handling gzip, etc.). The data tier is initially collapsed into the application, as described in <a href="http://www.underengineering.com/2014/05/22/DIY-NoSql/" rel="nofollow">http://www.underengineering.com/2014/05/22/DIY-NoSql/</a>. This architecture allows very fast iteration while providing enough performance headroom; it can serve 10k simple (CRUD) HTTP requests per second on a single core.
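As a toy illustration of the "collapsed data tier" idea (this is my own sketch, not the code from the linked article): keep the data in an in-process dict and periodically snapshot it to disk.

```python
import json
import os
import threading

class InProcessStore:
    """Toy collapsed data tier: a plain dict guarded by a lock,
    snapshotted to disk as JSON (no durability guarantees)."""

    def __init__(self, path):
        self.path = path
        self.lock = threading.Lock()
        try:
            with open(path) as f:
                self.data = json.load(f)
        except FileNotFoundError:
            self.data = {}

    def put(self, key, value):
        with self.lock:
            self.data[key] = value

    def get(self, key, default=None):
        with self.lock:
            return self.data.get(key, default)

    def snapshot(self):
        # Dump to a temp file, then rename, so readers never see a partial file.
        with self.lock:
            tmp = self.path + ".tmp"
            with open(tmp, "w") as f:
                json.dump(self.data, f)
            os.replace(tmp, self.path)
```

Since there's no network hop or serialization per request, reads and writes are just dict operations, which is where the performance headroom comes from.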
The one thing I really want from Digital Ocean is a guide that carefully explains how to set up the "private network" piece of the equation.<p>The "orange box" that represents the private network in each of the examples is taken for granted, but for someone coming from an application development background that piece isn't trivial to build. EC2 security groups make that sort of box incredibly easy, but DO doesn't have anything comparable.
I really enjoy the community-driven articles/tutorials that DigitalOcean provides. They have documentation for a lot of processes that are not readily documented or still emerging.
I am hosting all of my stuff on a single VPS instance in Docker/LXC containers. It is reasonably easy to migrate things out if I need larger hardware, but it's also very cheap.<p>Regarding scaling: a couple of years ago I ran a database on a single CPU core (because of licensing issues). It stored 50M rows a day and also executed various queries quite quickly. So I seriously doubt that most of us are going to need large clusters.
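For a sense of scale, 50M rows a day works out to a fairly modest average insert rate:

```python
# 50M rows/day averaged over the 86,400 seconds in a day:
rows_per_day = 50_000_000
seconds_per_day = 24 * 60 * 60
avg_rate = rows_per_day / seconds_per_day
print(f"{avg_rate:.0f} inserts/second on average")  # ~579
```

Peak load is of course spikier than the average, but even a few thousand inserts per second is well within reach of a single well-tuned database node.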
Here's my current setup on DO; I would like some input.<p>The website is hosted on one droplet, and one additional droplet per customer is deployed through the Stripe and DO APIs.<p>DO lets you save a snapshot and load it onto a droplet. I have a snapshot that is basically a copy of my 'software': a LAMP stack with an init script that loads the webapp from a git repo.<p>Customers log in at username.mywebapp.com<p>The beauty of this is that I never have to worry about things breaking or becoming a bottleneck. If one customer outgrows their droplet, they won't affect anyone else's resources. It scales linearly: new customer, new droplet. I don't need to worry about writing crazy deployment scripts, although I use paramiko to SSH into each server when I need to get dirty.<p>The main website is mostly static content. I could even host it on Amazon S3, but I'm currently using Cloudflare.<p>Updating the product code requires me to restart the droplet instance. However, I test things out on another staging droplet first. Once things work there, I use the DO api to iterate through all the customer droplets and do a restart.
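The restart-every-droplet step can be scripted in a few lines. A rough sketch against the DO v2 API using only the standard library; the token handling and the assumption that you want to reboot every droplet on the account are mine, so check the API docs before relying on this:

```python
import json
import os
import urllib.request

API = "https://api.digitalocean.com/v2"

def reboot_request(droplet_id):
    """URL and JSON payload for a reboot action (POST /v2/droplets/{id}/actions)."""
    return f"{API}/droplets/{droplet_id}/actions", {"type": "reboot"}

def api_call(token, url, payload=None):
    # GET when payload is None, POST otherwise.
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(url, data=data, headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def reboot_all(token):
    # List droplets, then issue a reboot action for each one.
    for d in api_call(token, f"{API}/droplets")["droplets"]:
        api_call(token, *reboot_request(d["id"]))

# reboot_all(os.environ["DO_TOKEN"])  # requires a real API token
```

Note the droplet list is paginated by default, so past ~20 customers you'd need to follow the `links.pages.next` URLs as well.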
This is awesome! Great content for DigitalOcean to be pushing out; I'm probably the exact audience they had in mind when they published this. For example, I've never gone beyond a shared hosting setup, but I've been curious to try my luck at learning more of the stack by using the DO platform.
The effort D.O. puts into their community education is one of my favorite things about them. The few times I've had problems with a droplet configuration, inevitably someone had already posted a solution in the help section.
Wouldn't it be much better to teach the concept of horizontal scalability applied to the application stack? Your server is a stack of interfaces: a frontend cache, a static content server, a dynamic content server, and a database. You can horizontally scale each layer of the stack. Much simpler, and applicable to different scenarios.<p>However, this approach won't give you a viral article title like "eight server setups for your app" (replace eight with 2^n, where n is the layer count).
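The 2^n count falls out of each layer independently being either left on the shared box or scaled out onto its own tier. A quick enumeration with the four layers named above (three layers would give the article's eight):

```python
from itertools import product

layers = ["frontend cache", "static server", "app server", "database"]

# Each layer is either on the shared box or scaled out on its own tier,
# so n layers yield 2**n possible "setups".
setups = list(product(["single", "scaled"], repeat=len(layers)))
print(len(setups))  # 2**4 = 16
```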
Excellent writeup! Next I'd like to see an article on deployment. What if I want my development team to be able to push code changes regularly to an app cluster via a git-based workflow, and have these deploys all occur with zero downtime? I think an article demonstrating how to use modern deployment tools such as Ansible or Docker to achieve those goals in a commonly used programming environment such as Ruby would lure quite a few developers away from PaaS towards something like Digital Ocean.<p>For now though, those tasks are still "hard", which means that for many developers Digital Ocean is still hard to use relative to other emerging platforms such as Red Hat's OpenShift or Heroku. I know there are many shops that would love to jump ship from PaaS to a less expensive platform, but they feel the cost of rolling their own zero-downtime clustered deployment infrastructure is not worth the $ savings.<p>I suspect that if IaaS providers dedicated resources to producing more educational material for developers, demonstrating how to achieve these deployment objectives on all the popular platforms using modern open source tools, then loads of PaaS developers would jump ship.<p>For example: How can I use Ansible to instantiate 5 new droplets and automatically install a load-balancing server on one of them, while setting up the Ruby on Rails platform and Ganglia on the remaining ones? How can I run a load test suite against the newly created cluster, interpret the results, and then tear the whole thing back down again, all with a few keystrokes? How could this same script allow me to add additional nodes, and how does the resulting system allow for the deployment of fresh application code?
How can it be improved to handle logging and backup?<p>I know that it's possible to create a deployment system that answers the above questions in less than a few hundred lines of Ansible + Ruby, so I imagine it could be explained in a short series of blog posts, but you would probably need to hire a well-paid dev-ops guru to produce such documentation. I bet if you ask around on HN...<p>p.s. keep an eye on these:<p><a href="http://deis.io/" rel="nofollow">http://deis.io/</a>
<a href="https://flynn.io/" rel="nofollow">https://flynn.io/</a><p>^ If either of these become production quality software it could be a game changer for Digital Ocean.
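In the meantime, the zero-downtime piece of a git-based deploy can be approximated with the classic timestamped-release-plus-symlink-swap pattern (Capistrano-style). A sketch only; the repo URL, paths, and reload command are placeholders for whatever your app actually uses:

```python
import os
import subprocess
import time

# Placeholder values; substitute your own repo and install base.
REPO = "git@example.com:me/app.git"
BASE = "/srv/app"

def swap_symlink(target, link):
    """Atomically repoint `link` at `target`: build aside, then rename."""
    tmp = link + ".tmp"
    os.symlink(target, tmp)
    os.replace(tmp, link)

def deploy():
    # Clone the new release into its own timestamped directory...
    release = os.path.join(BASE, "releases", time.strftime("%Y%m%d%H%M%S"))
    subprocess.run(["git", "clone", "--depth", "1", REPO, release], check=True)
    # ...then flip the "current" symlink; in-flight requests keep running
    # against the old release until the app server reloads gracefully.
    swap_symlink(release, os.path.join(BASE, "current"))
    # App-specific graceful reload goes here (e.g. a signal to the app server).
    subprocess.run(["systemctl", "reload", "app"], check=False)

# deploy()  # run on each app server, behind the load balancer
```

Draining one node at a time behind the load balancer while doing this is what gets you to actual zero downtime across a cluster.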
Thanks for the write up. It's the perfect time for me to be reminded about starting simple and changing the architecture as needed. I have prematurely optimized on one project in the past. It was painful. And after all that pain the mythical millions of unique visits never arrived.
Virtually no mention of how the different server setups affect availability - this is very unfortunate. Availability and disaster recovery are two things I consider significantly more important than scaling, and your choice of server setup affects both.
As the "Startup Standards" begin to take shape, these guides prove to be extremely useful for the newcomers out there. Sure in 6-12 months it may become a bit dated (depending on the guide) but if kept up-to-date, they can be a powerful tool for a new company.
It would be very helpful if DigitalOcean sold a load balancer too, as Linode does, because the bandwidth limits apply per droplet, which otherwise makes DigitalOcean awkward to use. Of course, we can use Cloudflare or something similar, but it's still a real need.
Does anyone know what a bare-minimum monitoring setup looks like for a single server running nginx, Postgres, and Rails? I'm far too intimidated by Nagios to do anything significant.
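Before reaching for Nagios, one bare-minimum option is a cron-driven script that just probes the relevant ports and an HTTP endpoint and alerts on failure. A sketch; the specific ports and URL are my assumptions about a typical nginx + Postgres + Rails box:

```python
import socket
import urllib.request

def check_tcp(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_http(url, timeout=5):
    """Return True if the URL answers with a 2xx/3xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False

if __name__ == "__main__":
    # Assumed endpoints: nginx on 80, Postgres on 5432, Rails app on 3000.
    checks = {
        "nginx": check_http("http://127.0.0.1/"),
        "postgres": check_tcp("127.0.0.1", 5432),
        "rails": check_tcp("127.0.0.1", 3000),
    }
    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        print("FAILED:", ", ".join(failed))  # wire this up to cron + mail
```

Run it from cron every minute (ideally from a second machine, so you notice when the whole box is down), and mail yourself the output on failure; that covers a surprising fraction of what a single-server setup needs.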
I propose an alteration to the typical LAMP stack: Replace Apache with Nginx and MySQL with MongoDB. Personally, the reduced resource use of Nginx is nice since I can run on a smaller "box". MongoDB is just a choice depending on the data set, but it does allow for sharding out horizontally without too much effort.