Every time I see an HN story about "how I made my site handle lots of traffic" it's people re-learning the same lessons. We really need a basics-of-building-big-webapps FAQ.

Step 1: Make your site as static as possible, or at least rely as little as possible on server-side processing. Pretend your app is written in Java and thus assume it will hog memory, run slowly, and crash every other day, so plan to handle those kinds of 'reliability' issues.

Step 2: Get a CDN or a bunch of cloud-hosted servers (like, 100). You may need 5-10 of them to serve static content using a caching web server à la Varnish (or just Apache proxying/caching; yes, it works fine), and 50-100 for application and database processing.

Step 3: Make sure as little of your site as humanly possible makes database calls, and for god's sake try not to write too much.

Step 4: Use a lightweight (I SAID LIGHTWEIGHT!) key/value memory store as a cache for database rows and other items you might normally [erroneously] fetch from disk, network, or database.

Step 5: Don't rely solely on 'cloud' resources. Eventually you will get bitten, because they're not designed to scale infinitely and their operators probably don't care about you (especially if you pay little or nothing).

Step 6: (optional) Re-create this setup with a different hoster, in a different datacenter, on the opposite coast of your country. Not only will you end up with a DR site, you'll see load and latency improvements for many users. How to effectively and cheaply replicate content between datacenters cross-country is left as a lifetime career goal for the reader.
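For what Step 4 looks like in practice, here's a minimal cache-aside sketch in Python. The `MemoryCache` class is a hypothetical in-process stand-in for whatever memcached-style store you actually deploy, and `db_lookup` is a placeholder for your expensive query; the pattern (check cache, fall back to the database on a miss, write the result back with a TTL) is the point, not the names.

```python
import time


class MemoryCache:
    """Tiny in-process stand-in for a memcached-style key/value store."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:
            # Entry expired; drop it and report a miss.
            del self._store[key]
            return None
        return value

    def set(self, key, value, ttl=60):
        self._store[key] = (value, time.time() + ttl)


def get_user(cache, db_lookup, user_id):
    """Cache-aside read: check the cache first, hit the database only on a miss."""
    key = f"user:{user_id}"
    user = cache.get(key)
    if user is None:
        user = db_lookup(user_id)      # the expensive call you're trying to avoid
        cache.set(key, user, ttl=300)  # keep it for 5 minutes
    return user
```

With this shape, repeated reads of a hot row cost one database query per TTL window instead of one per request, which is the whole reason the Step 4 advice exists.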