I don't really see the point of these micro-benchmarking articles at all.

So what if nginx can serve a theoretically higher number of static files per second than something else? Are you actually serving that much traffic with no headroom in terms of extra servers and load balancers? Do microseconds of computational time per request really matter when your outbound packets can get delayed by milliseconds in the network, or dropped completely?

There are plenty of reasons to like one server over another, but is .0000000001 seconds/request of overhead really one of them? HTTP servers can have wildly different behaviors regarding HTTP streaming, worker models, extensions, etc. And how about the fact that Varnish is a caching proxy that doesn't really replace something like nginx, lighttpd, or Apache?

He's also backing Varnish with a ramdisk that takes 25% of his memory (for a 100-byte file, no less!) when comparing it to the others. Probably not the best-designed test out there.

> Again, keep in mind that this benchmark compares only the servers locally (no networking is involved), and therefore the results might be misleading.

I don't know why anyone would publish "misleading" benchmarks.

I know it's less fun and there are no numbers involved, but what about a real rundown of some of the subtle differences between the servers and some of their more unique features (besides async vs. threaded)? That's something I would find useful reading, but I guess it's not as easy as firing up ab.
Old post. As a side note, I performed some tests with nginx and this configuration 1-2 weeks ago on Linode, and the results on the smallest Linode were roughly 10-15% lower than what the author reports in his post (quite good, IMO).

If someone with a less optimized configuration is wondering what in his test setup allows him to obtain those results, here is a brief recap (see the sketch after this list):

1. Tests performed with ab, with keepalive enabled on both the client and the server.

2. open_file_cache or similar options: this enables file caching, so the server is effectively no longer I/O bound.

3. Furthermore, enabling tcp_nodelay (which disables Nagle's algorithm, useful when the TCP responses are small) and disabling access logging can help a bit. (The logging part depends on how logging is implemented; if it's non-blocking and runs on a separate thread rather than in a worker, disabling it doesn't improve the results.)

Being a CPU-bound test, having the client on a separate machine would likely have increased the results, but I doubt it would have changed the performance ratio among the servers; after all, every test had the same client with the same overhead.
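For anyone wanting to reproduce this kind of tuning, here is a minimal sketch of the relevant nginx directives. The directive names are real; the specific cache sizes and timeout values are illustrative assumptions on my part, not the author's exact configuration:

    # Illustrative nginx tuning for a small-static-file benchmark.
    # Values below are assumptions for the sketch, not the author's.
    http {
        # (1) Server-side keep-alive; the client must enable it too.
        keepalive_timeout  65;
        keepalive_requests 10000;

        # (2) Cache open file descriptors and metadata so repeated
        #     requests for the same small file skip open()/stat().
        open_file_cache          max=10000 inactive=60s;
        open_file_cache_valid    120s;
        open_file_cache_min_uses 1;

        # (3) Disable Nagle's algorithm for small keep-alive responses,
        #     and turn off access logging.
        tcp_nodelay on;
        access_log  off;
    }

On the client side, keep-alive in ab is just the -k flag, e.g. something like `ab -k -n 100000 -c 100 http://127.0.0.1/100b.html` (the URL and request counts here are made up for illustration).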
Previously: http://news.ycombinator.com/item?id=2629631

Also, "The client as well as the web server tested are hosted on the same computer", which is pretty poor design, to be honest.
I think benchmarks like this are very harmful. How many small static files you can serve per second is just one (not very important) criterion when choosing one of these servers.

I think more important criteria are:

1. Stability. How often are you woken up in the middle of the night because your web server is shitting the bed?

2. Configuration. Can you configure it to do all the things you will need it to do? Have others who came before you been happy with it throughout the entire life of their product, or have they outgrown it?

3. Simplicity. Can you set it up to run efficiently without weeks of study on how the server is properly deployed? Is it easy to mess up the configuration and take your site down when making a change?

4. Generality. Are you going to need something else to sit in front of your dynamic pages, if you require them? This is also a factor in stability: a two-server solution, all else held constant, is twice as likely to break down or get broken during a configuration change as a single one. Actually, it is much more than twice as likely, since you are spreading your effort to learn the ins and outs of two pieces of software, so you are less competent with each than you would be with just one server to worry about.

So, given all this, my advice to anyone trying to make an initial decision on which web server to use is: (Apache|nginx) (pick one only) should be your default until you believe you have a compelling *reason* to use something else. Both are capable of doing more or less everything you need, have lots of extensions, are widely used, and have comprehensible configuration. Once you have mastered whichever one you use, you will be able to tune it, debug performance problems, and spend the minimum possible amount of time on server configuration and testing, and the maximum on implementing features and supporting customers.
I've tested and admired the performance of G-WAN, but the closed-source nature of the project may be a bit of a showstopper for some. Development appears to be limited to Debian derivatives, making successful installation of the binary on other Linux/UNIX platforms challenging. It would be nice to be able to inspect and modify the source in order to optimize and compile it for the desired platform.
These results are valid only if your static content all fits in memory. I would expect interesting divergence in performance if a certain proportion of requests had to hit the file system.

Another interesting source of noise that's missing is slow clients holding on to connections. If you're serving multi-megabyte files, I would guess this could become a major factor.
Off topic, but WordPress has made blogs unreadable for iPad users. I can't even scroll through this article without the screen jumping erratically past many pages of content.