I think if you are spending more than $100 a month on VMs you should seriously consider co-locating, if you have the skills to support it.

For my side projects, personal websites, and general-purpose "whatever" I'm using an inexpensive colo provider (Colo@). For $50 I get 10 Mbit/s at the 95th percentile (basically, burstable to 100 Mbps for up to 5% of the month). That's about 3 TB of data transfer, which alone would cost hundreds of dollars at EC2. Of course, it is also way more data than most people would use.

The server I bought used on eBay for $365. It's a dual Xeon L5420 (8 hardware cores) with 24 GB of RAM. I currently run seven or eight VMs under KVM on it. These images are pretty portable, and a couple of them I back up regularly to S3; I could recover to an EC2 instance if I lost the box.

I monitor this with an EC2 micro instance and have not had any network outages in six months there. If I wanted to run a production site there I would need at least a second machine for redundancy; that would be another $30-40 a month. I'd probably also replicate in real time to a small EC2 instance, so that would cost a little (though incoming bandwidth to EC2 is free). I don't do that now, as I don't have real "production" data.

Not everyone should do this, but if you like servers you should consider it. Another advantage here is that I own the server. If I get into a billing dispute or other issue with my provider, they can take me off the network but they cannot hold my server hostage. Also, they cannot log in to the box, so any attempt at social hacking is pretty well doomed.

On the other hand, on the two occasions I've needed remote hands and the one time I needed a KVM, they responded in less than fifteen minutes. It is mind-blowing the level of support you can get with the right provider.
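For anyone who wants to sanity-check the 3 TB figure, here is the back-of-envelope arithmetic; the $0.12/GB EC2 egress rate is an assumed ballpark for illustration, not a quote:

```python
# Rough check of the parent's numbers: 10 Mbit/s sustained for a month,
# and what that much transfer would cost as EC2 egress.
# The $0.12/GB rate is an assumed ballpark, not actual AWS pricing.

SECONDS_PER_MONTH = 30 * 24 * 3600        # ~2.59 million seconds
rate_bits_per_sec = 10_000_000            # 10 Mbit/s committed at the 95th percentile

bytes_per_month = rate_bits_per_sec / 8 * SECONDS_PER_MONTH
tb_per_month = bytes_per_month / 1e12     # decimal TB, as bandwidth is usually quoted

assumed_ec2_egress_per_gb = 0.12          # USD/GB -- assumption for illustration
ec2_cost = bytes_per_month / 1e9 * assumed_ec2_egress_per_gb

print(f"~{tb_per_month:.1f} TB/month")          # ~3.2 TB/month
print(f"~${ec2_cost:.0f} at EC2 egress rates")  # ~$389 -- "hundreds of dollars"
```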
I don't intend to sound harsh, but comparisons like these are absolutely useless. It's simply incorrect to make blanket statements about the pros and cons of each service without some context. The benefits and drawbacks change depending on the characteristics, purpose, and needs of the application. This post makes a "one-size-fits-most" generalization, which strips it of almost all value.

What kind of application are we trying to deploy? What is your budget? What is the traffic level? Is performance a top priority? How many sysadmins do you have at your disposal, and how many are you willing to add? What kind of sensitive data are we storing and transmitting?

The answers to these questions drive the selection process and alter the weight of each pro and con the author mentioned. Depending on your application, some pros and cons are eliminated and new ones appear.

Please please PLEASE, for the love of all things good, don't use an article like this as the sole basis for selecting a provider. Think about what you need, ask questions, and tailor your search to your purpose. Don't pick method X just because other people say it's great (for their purposes).
This issue is near the top of my list at the moment.

I currently spend $100/month on 4 Linodes (3 x 512 MB, 1 x 1 GB). I love Linode -- efficient support, and their London datacentre has been utterly rock-solid for me for several years -- but I'm beginning to think that, for me, it's the worst of both worlds.

On the one hand, I could move all 4 servers to a dedicated Hetzner box (EX6 or EX6S) running Xen, for a small setup fee and a similar monthly cost, and get 4 or 8 GB of ECC memory *on each one*. This has a slightly higher sysadmin burden (5 servers to administer instead of 4, and a slightly higher risk of disk failure), but not that much. And the move is relatively painless, because I can transfer the disk images directly with dd over SSH.

On the other, I could move the services to Heroku, probably pay a bit more, and essentially stop doing any sysadmin. This is superficially attractive... but moving a load of old things to Heroku isn't straightforward, and that probably rules this option out.
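For anyone curious, the dd-over-SSH move can be scripted in a few lines; a minimal sketch (the hostname, device path, and image filename below are placeholders):

```python
# Minimal sketch of the "dd over SSH" transfer the parent mentions:
# stream a guest's block device from the old host into a local image file.
# Hostname, device path, and filename are placeholders, not real values.
import subprocess

SOURCE = "root@old-host.example.com"   # hypothetical source server
DEVICE = "/dev/xvda"                   # hypothetical guest disk device

with open("linode-root.img", "wb") as img:
    # Read the raw device remotely; gzip on the wire cuts transfer
    # time considerably for mostly-empty disks.
    remote = subprocess.Popen(
        ["ssh", SOURCE, f"dd if={DEVICE} bs=4M | gzip -c"],
        stdout=subprocess.PIPE,
    )
    subprocess.run(["gunzip", "-c"], stdin=remote.stdout, stdout=img, check=True)
    remote.wait()
```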
As I'm looking at setting up a blog, a website, and a company, my inner nerd keeps nagging me: "You could build it and host it all yourself." But I know I don't need to.

I nearly majored in economics, and I've worked in a datacenter, so I know it's simply more efficient to depend on hosted services. Yet I still want to set up the whole stack. For me, it's a question of letting go and trusting the services that others host and others use. And it means forgoing the pride of "doing it all myself."

There simply isn't enough time to build *everything* from scratch -- if you build your own servers, you're sourcing HDDs and motherboards and power supplies and other components. If you make motherboards, you're sourcing copper and other raw materials. No single human is so tall as to pull copper ore from the ground, pull silicon from sand, and be vertically integrated enough to single-handedly produce a tablet or PC. Currently that takes several thousand humans.
Don't forget hybrid solutions. I've done things in the past with:

a) co-location for the main DB servers (it allows you to be very specific about hardware choices: for RAID cards and SSDs, not just the preferred manufacturer but the exact model) and for backup machines (which needed higher-density HDDs than the hosting provider's choice of dedicated servers could supply)

b) some unmanaged dedicated servers for the core servers that don't rely on specific hardware (HTTP servers, memcached, Varnish). It's also easier to ramp up the number of these slowly, month on month.

c) virtual boxes spun up when required to handle spikes in load and then canned when it goes quiet again (see the sketch below)

Even better if your hosting provider offers all three and can arrange a private VPN between the sets of hosts, so you don't get billed for your 'internal' bandwidth.
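A minimal sketch of item (c) using boto3 against EC2; the AMI ID and instance type are placeholders, and other clouds have equivalent calls:

```python
# Sketch of item (c): spin up a cloud instance for a load spike, then
# terminate it when traffic quiets down. Assumes AWS credentials are
# configured; the AMI ID and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one extra web node during a spike.
resp = ec2.run_instances(
    ImageId="ami-00000000",      # placeholder AMI baked with your HTTP stack
    InstanceType="t3.medium",    # placeholder size
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

# ... add the node to the load balancer, serve the spike ...

# Can the instance once load drops, so you stop paying for it.
ec2.terminate_instances(InstanceIds=[instance_id])
```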
...and these kinds of issues, which I've faced myself many times, are why I'm building Uptano. The "cloud" vs. "dedicated" vs. "co-located" debate is an issue created by the artificial separation of a few good ideas.

There's no reason you shouldn't be able to have dedicated hardware performance, instant deployability, and on-demand usage-based billing, at costs close to, or better than, co-locating it yourself. That's what I'm working to prove with Uptano (https://uptano.com).

I really think server hosting is going to look very different in a few years. We haven't come very far in the past five years.
In my experience, Linode is the best roll-your-own, you-are-on-your-own cloud provider. Obviously they are aimed at the savvy, but it's reliable, cheap, and easy to estimate costs for; it's simple to configure and expand; the documentation is pretty good; and it doesn't have the learning curve or linguistic peculiarities of Amazon.

Regarding Rackspace, I've had good experiences with them when working at mid-size and larger companies. Unfortunately I've had the opposite experience when working as a freelancer, working with startups, or as an entrepreneur myself. Rackspace didn't even respond to sales inquiries. Initially I figured this was a strangely repeated fluke, but other small companies and entrepreneurs I've spoken to have reported exactly the same thing: they send an inquiry to Rackspace or ask to speak with a sales engineer, and they get no response. Nothing, zip, nada. I find that very strange, and I speculate that Rackspace no longer wants to deal with the growing pains and frequent support requests of startups, but it certainly makes the decision to stick with Linode or EC2 much easier.

I don't have much experience with dedicated hosting anymore, but I have repeatedly heard good things about ServInt and SingleHop. I've also heard good things about FireHost as a managed cloud provider. I would love to hear others' opinions and experiences with any of the aforementioned companies, though.
Good comparison between the four. Rackspace has come a long way since we evaluated them a few years ago (they wanted something like 24 hours to bring up a new instance/server for us back then, so we ended up going with AWS).

Generally speaking, our biggest challenges with AWS have been storage (making TBs of web content securely available to various autoscaling clusters) and network I/O (especially across VPC/public internet boundaries).

We've actually found that AWS's pricing beats the cost of hosting internally, especially once you look beyond raw server cost and factor in power, cooling, manual labor, datacenter space, etc. And there are lots of options for monitoring your usage to avoid surprises (we're looking into programmatic usage reports and New Relic for that, though we've been there a couple of years now, so we have a good idea what our bills are going to run each month).

As far as CDNs go, we get way better pricing from Level 3 and Akamai than we could from CloudFront or Rackspace, but our traffic patterns are more 95th-percentile-friendly than most.
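For readers who haven't run into it, 95th-percentile billing samples your transfer rate (typically every 5 minutes), discards the top 5% of samples, and bills the highest rate that remains, which is why bursty-but-mostly-quiet traffic is cheap on such plans. A minimal sketch:

```python
# Minimal sketch of 95th-percentile bandwidth billing: sample the
# transfer rate every 5 minutes, throw away the top 5% of samples,
# and bill the highest rate that remains.
def billable_rate_mbps(samples_mbps):
    """samples_mbps: list of 5-minute average rates over the billing period."""
    ordered = sorted(samples_mbps)
    # Index of the 95th-percentile sample (the top 5% are "free" bursts).
    cutoff = int(len(ordered) * 0.95) - 1
    return ordered[cutoff]

# Example: a mostly-idle link that bursts hard 4% of the time.
samples = [5.0] * 960 + [95.0] * 40   # 1000 samples: 96% low, 4% burst
print(billable_rate_mbps(samples))    # 5.0 -- the bursts cost nothing
```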
The issue with these comparisons is that they tend to cover VMs and storage only. A modern application requires a lot of moving pieces. Setting up and managing, say, a queue service has costs associated with it, which is where something like SQS becomes a serious value-add.
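To make the value-add concrete, here's roughly what a durable work queue looks like with SQS via boto3; the queue name and message body are placeholders, and it assumes AWS credentials are configured:

```python
# Sketch of what SQS gives you out of the box: a durable work queue
# with no broker to install, patch, or monitor yourself.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

queue_url = sqs.create_queue(QueueName="thumbnail-jobs")["QueueUrl"]

# Producer: enqueue a job.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"image_id": 42}')

# Consumer: pull a job, process it, then delete it from the queue.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    print("processing", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```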
I totally dislike comparisons of dedicated hosting versus cloud, especially when they don't factor in any of the costs of the support contract, hardware replacements, etc. involved in supporting physical hardware.

He also mentions that there is no way to see what your next bill will be in AWS. They offer an 'Account Activity' link that shows your charges so far in the current month. That can be helpful when testing things.

I hope people who are new to setting up and supporting infrastructure do not use comparisons like this to make the decision for them. There are far too many variables not discussed in this article for it to be very valuable to anyone.
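Beyond the Account Activity page, charges can also be pulled programmatically these days; a sketch using boto3's Cost Explorer API (a newer facility than the page described above, and it assumes credentials with Cost Explorer access):

```python
# Sketch: pull the current month's AWS charges programmatically via
# the Cost Explorer API. Assumes AWS credentials with ce:GetCostAndUsage
# permission are configured.
import boto3
from datetime import date

ce = boto3.client("ce")

today = date.today()
resp = ce.get_cost_and_usage(
    TimePeriod={
        "Start": today.replace(day=1).isoformat(),  # first of the month
        "End": today.isoformat(),                   # month-to-date
    },
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)
for period in resp["ResultsByTime"]:
    print(period["TimePeriod"], period["Total"]["UnblendedCost"]["Amount"])
```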
A good balance I've found is to have a dedicated server with a standby AMI in the cloud, and to switch over using DNS.

What you pay for in the cloud is convenience, not performance.
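A minimal sketch of that DNS switchover using Route 53 via boto3; the zone ID, hostname, and IP below are placeholders, and any DNS provider with an API works the same way:

```python
# Sketch of the DNS failover: repoint an A record at the standby
# instance in the cloud. Zone ID, hostname, and IP are placeholders.
import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",        # placeholder hosted zone
    ChangeBatch={
        "Comment": "fail over from dedicated box to cloud standby",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "TTL": 60,                # keep TTL low so failover is fast
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)
```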
I've got a con for the Rackspace list that in some ways conflicts with one of its pros. Pricing is simple because of the small number of choices for instance performance, but I would love more choice beyond memory-based tiers. I'd kill for a c1.medium analogue on Rackspace.

With that said, I'm a loyal Rackspace customer and love their cloud offering.
There's a very simple formula for figuring out whether self-hosting or cloud hosting makes more sense.

Add up a month's worth of colocation fees, capital depreciation, and associated labor costs. If the total is less than your monthly cloud hosting bill, then it's time to self-host.

And if you run your own firm and haven't figured out how to calculate capital depreciation yet, it's time to learn. :)
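A worked example of that arithmetic, with straight-line depreciation; every dollar figure below is a made-up placeholder:

```python
# Worked example of the parent's formula. All dollar figures are
# placeholders; plug in your own.
server_cost = 3600.0          # capital outlay for the box (USD)
useful_life_months = 36       # straight-line depreciation over 3 years
depreciation = server_cost / useful_life_months   # $100/month

colo_fee = 150.0              # rack space, power, bandwidth per month
labor = 200.0                 # sysadmin hours attributable to this box

self_host_monthly = depreciation + colo_fee + labor   # $450
cloud_monthly = 600.0         # your current cloud bill (placeholder)

if self_host_monthly < cloud_monthly:
    print(f"self-host wins: ${self_host_monthly:.0f} vs ${cloud_monthly:.0f}")
else:
    print(f"stay in the cloud: ${cloud_monthly:.0f} vs ${self_host_monthly:.0f}")
```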
How much cheaper is Rackspace vs. Amazon CloudFront? In our experience, Amazon also has more nodes that its CDN pushes files to, and our CDN cost with 100k+ views a month is still under $3/month with either solution.
>Of course there would be the point where I would need help from people who are specialized in database design / sharding / partitioning, etc - likely earlier than going the cloud hosting route

Where does this misconception come from? It is the exact opposite of reality. With the "cloud" route, you are limited to absurdly inadequate servers, which is a large part of what drove the "NoSQL" fad: you need to shard if you are on EC2 because they offered nothing with reasonable I/O. Even now they have an SSD option, but it is a single crappy SSD with barely any RAM. With the dedicated route, you can get a server with 512 GB of RAM and a 24-SSD array, and not have to worry about sharding until you are in the top 50 sites on the web.