Your site apparently got ~2250 visits per day (so, less than two per minute) at the height of your "surge", and seems to consist of three pages (/, /about, and /open-source). Most people are only going to look at /, so let's say 3000 pageviews. The day after was still seeing a good amount of traffic, so it wasn't some kind of momentary "all 3000 hit the site in the same minute" situation: it seems like a fairly benign decay. How could you possibly have been dealing with 287 concurrent users?

My website (saurik.com) is seriously written in JavaScript. I was doing this long before node was popular, and so it is designed "horribly sub-optimally": it is using Rhino, which is not known for speed. I use XSL/T as a templating engine to build the page layouts, which is also not known for speed. Every request is synchronously logged to a database. I get over 50k HTML pageviews a day, most for one recent article I posted a few weeks ago: when I posted it, I was getting well over 3k pageviews per hour.

I do not do any caching: I generate each page dynamically every time it is accessed. I seriously generate the CSS dynamically on every request (there are some variables). Even with 3k HTML pageviews per hour, that's less than one complex request per second. How does one even build a website that can't handle that load? That is what I'd seriously be interested in seeing: not "how do I handle being #1 on Hacker News", but "why is it that so many websites are unable to handle two requests per minute?"
While the sentiment is clear, I had an uncached WordPress site on shared hosting withstand #2 or #3 (I forget exactly where it peaked). HN isn't all that huge a traffic deliverer. It's just about the quality of that traffic.
Without wanting to seem unnecessarily rude, it's not that hard to survive #1 on HN. I had a blog post submitted by someone else hit #1 for two days. It was worth about 25k visits. I've had other stuff do 100k visits in a day on the same system during a natural disaster; that little network is basically idling at what used to be vanity numbers (millions of views per month, guys, time to list on NASDAQ!).

If you're on WordPress, install WP Super Cache. That's 80% of the solution, right there. Install equivalent whole-page caching for any other framework or system and tell your HTTP server how to pick it up; that should leave you prepared for hundreds of RPS.

We're at the stage where people are posting the equivalent of "how I survived skipping lunch". It's not 1997 any more; tens of thousands of visits is a link from a moderately popular Twitter account or a medium-size metropolitan newspaper.

I'm sorry to seem so uncharitable. I'm just not sure what value these posts add.
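To make the "tell your HTTP server how to pick it up" step concrete, here is a minimal sketch for nginx sitting in front of WordPress with WP Super Cache, assuming the plugin's default cache directory and a typical PHP-FPM socket; the exact paths and cookie names will vary with your install.

    # Serve WP Super Cache's pre-rendered HTML straight from disk when it
    # exists, and only fall through to PHP when it doesn't.
    # Paths and socket are assumptions; adjust for your setup.
    server {
        listen 80;
        server_name example.com;
        root /var/www/example.com;
        index index.php;

        set $cache_uri $request_uri;

        # Bypass the cache for logged-in users and recent commenters.
        if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") {
            set $cache_uri 'nocache';
        }
        # POSTs and URLs with query strings are never served from cache.
        if ($request_method = POST) {
            set $cache_uri 'nocache';
        }
        if ($query_string != "") {
            set $cache_uri 'nocache';
        }

        location / {
            try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html
                      $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/run/php/php-fpm.sock;  # adjust to your PHP-FPM socket
        }
    }

With this in place, anonymous traffic never touches PHP at all once a page has been cached, which is what makes "hundreds of RPS" a non-event on modest hardware.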
You have no dynamic content and your website crashes at ~200 concurrent users?

You're doing everything wrong, then. I had a website that sometimes handled 6,000 concurrent users and was hosted on a very cheap shared server! I didn't realize so many people had no idea about simple caching techniques.
A lot of people commented on the original submission in which he asked for feedback on his site and ... nothing's improved, not even the grammatical errors!

The only thing that's changed is the site's migration from Linode to S3, and the addition of CloudFront!
Actually, you can combine Middleman and dynamic pages to get fast static pages and still keep a few dynamic endpoints. We did this on our website http://hull.io for email registration, and blogged about it here: http://blog.hull.io/post/45912703356/the-perfect-almost-static-site-setup - when 90% of your users only consume static content, you greatly benefit from this.
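The linked post describes the authors' actual setup; as a rough illustration of the general pattern (serve the generated static files directly, proxy only a handful of dynamic endpoints to a small app), an nginx sketch might look like the following. The /api/ path, build directory, and backend port are placeholders, not their real configuration.

    # Static-first setup: everything comes from the directory the static
    # site generator (Middleman, Jekyll, ...) built, and only a couple of
    # dynamic endpoints are proxied to a small app server.
    server {
        listen 80;
        server_name example.com;

        root /var/www/example.com/build;   # generator's build output
        index index.html;

        # The few dynamic endpoints (e.g. email registration) go to the app.
        location /api/ {
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # Everything else is a pre-built static file.
        location / {
            try_files $uri $uri/index.html =404;
        }
    }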
If you're not super keen on spending time on this yourself and don't want to give up any convenience of a fully dynamic site, that's part of what we built Webpop (http://www.webpop.com) for...

Of course tweaking web servers and playing with your stack can be fun, but if you just want to build your site and let someone else handle the back-end performance and scaling issues, then there are solutions for that.
Another method, if you decide against a statically generated site, is microcaching [1] with nginx. Your backend only needs to render the page once each second, and subsequent requests see the cached version. You should be able to easily handle 2,000 req/s using this method.

[1] http://fennb.com/microcaching-speed-your-app-up-250x-with-no-n
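A minimal sketch of that microcaching idea, in the spirit of the linked article: nginx keeps each rendered page for one second, so the backend handles roughly one request per URL per second no matter how many clients arrive. The upstream address and cache path here are placeholders, not taken from the article.

    # Microcaching: cache successful responses for 1 second in front of a
    # dynamic app server. Upstream address and cache path are assumptions.
    proxy_cache_path /var/cache/nginx/micro levels=1:2 keys_zone=micro:10m
                     max_size=100m inactive=10m;

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_cache micro;
            proxy_cache_valid 200 1s;          # each page is re-rendered at most once per second
            proxy_cache_use_stale updating;    # serve the stale copy while one request refreshes it
            proxy_cache_lock on;               # collapse concurrent misses into a single backend hit
            proxy_set_header Host $host;
            proxy_pass http://127.0.0.1:8080;  # your app server
        }
    }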
Yep, static is the way to go if that's all you need. I didn't even hit #1 on HN but got 5,000+ page views in a few hours. All on a 128MB RAM VPS: http://linuxterm.com/static-sites-for-fun-and-savings.html
I don't understand what the problem is with using a slow backend even for static content (just because it is easier to program). Just use Varnish to cache the pages and you're good, right? Or am I missing something?
If only there were a way for young adults to learn how to be respectful and understand "time and place" for various behaviors. Oh yeah, there is: parenting.