What is the purpose of this article if you can find the same information in the documentation at nginx.org/en/docs/?<p>And, by the way, you are giving bad advice. You are wrong here:
"By default, nginx sets our keep-alive timeout to 75s (in this config, we drop it down to 10s), which means, without changing the default, we can handle ~14 connections per second. Our config will allow us to handle ~102 users per second."<p>No, keepalive connections don't limit nginx in any way. Nginx simply closes keepalive connections when it reaches its connection limit.<p>"gzip_comp_level sets the compression level on our data. These levels can be anywhere from 1-9, 9 being the slowest but most compressed. We’ll set it to 6, which is a good middle ground."<p>No, it's not a "middle ground". It kills your server's performance. With 6 you will get 5-10% better compression, but at roughly half the speed.<p>"use epoll;"<p>What's the purpose of this? The docs say: "There is normally no need to specify it explicitly, because nginx will by default use the most efficient method."<p>"multi_accept tells nginx to accept as many connections as possible after getting a notification about a new connection. If worker_connections is set too low, you may end up flooding your worker connections."<p>No, you have completely misunderstood this directive. It isn't related to worker_connections at all.
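A minimal config consistent with these corrections might look like the sketch below; the directive values are illustrative assumptions on my part, not taken from the article:

```nginx
events {
    # worker_connections is the real per-worker cap; keepalives don't reduce it,
    # nginx just closes idle keepalive connections under connection pressure.
    worker_connections 1024;

    # No "use epoll;" needed: nginx already picks the most efficient method.
    # multi_accept left at its default (off); it is unrelated to worker_connections.
}

http {
    keepalive_timeout 10;

    gzip on;
    # Levels above ~4 buy only a few percent smaller output for much more CPU.
    gzip_comp_level 2;
}
```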
For anyone who is interested in nginx tuning, please follow the H5BP nginx repo: <a href="https://github.com/h5bp/server-configs-nginx" rel="nofollow">https://github.com/h5bp/server-configs-nginx</a>, which is already well documented and still actively maintained.
Good introduction to nginx. However, the guide states: "Keep in mind that the maximum number of clients is also limited by the number of socket connections available on your system (~64k)".<p>This is incorrect. A TCP connection is identified by its [src ip, src port, dst ip, dst port] tuple, so the ~64k limit applies per [src ip, dst ip] pair, not to the system as a whole. For a webserver listening on just one port, that means you can accept ~64k connections per remote IP, which is why some people can write about handling a million connections on a single server.
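The arithmetic behind the "million connections" claim can be sketched like this (back-of-the-envelope numbers, not a measurement):

```shell
# A TCP connection is identified by (src ip, src port, dst ip, dst port).
# ~64k comes from the 16-bit source-port field, so the limit is per remote IP.
per_client=$((1 << 16))   # 65536 possible source ports per client IP

# Distinct client IPs needed to exceed a million connections to one server port:
clients=$(( (1000000 + per_client - 1) / per_client ))
echo "$clients"           # prints 16
```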
Also useful for nginx: adding the pagespeed module <a href="https://github.com/pagespeed/ngx_pagespeed" rel="nofollow">https://github.com/pagespeed/ngx_pagespeed</a><p>"ngx_pagespeed speeds up your site and reduces page load time by automatically applying web performance best practices to pages and associated assets (CSS, JavaScript, images) without requiring you to modify your existing content or workflow."
I'd like to add that using [gzip_static][1] might also be a good idea since nginx doesn't have to gzip your files over and over again and you can gzip the files yourself with the highest compression possible (reducing file size).<p>[1]: <a href="http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html" rel="nofollow">http://nginx.org/en/docs/http/ngx_http_gzip_static_module.ht...</a>
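A sketch of how this could be wired up; the location path and file patterns are assumptions, not from the linked docs:

```nginx
location /static/ {
    # Serve a pre-built foo.css.gz for requests to foo.css when it exists,
    # instead of gzipping the same file on the fly on every request.
    gzip_static on;
}

# Pre-compress at build/deploy time with the highest compression level, e.g.:
#   gzip -k -9 /var/www/static/*.css /var/www/static/*.js
```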
"Chances are your OS and nginx can handle more than “ulimit -a” will report, so we’ll set this high so nginx will never have an issue with “too many open files”"<p>If the limit is a hard limit it doesn't really matter what nginx decides to do, does it? I had to increase the limit by hand, outside of nginx.
I would love to see some before and after in the wild stats using this configuration. Whilst it would be an apples versus oranges comparison, it would at least show that this config works compared to the default. Maybe a Blitz.io rush test?
If you set an application to use more file descriptors than ulimit -n returns, then either the application will be smart and fix its configuration by using MIN(configured limit, ulimit -n), or it will start dropping requests because it assumes it's allowed to open more file descriptors than it actually is.<p>Increasing an application's maximum file descriptors past ulimit -n is bad advice. The proper way is to increase the limit in /etc/security/limits.conf (note that assigning a limit to * applies it to every user but root, so if you really want to assign a limit to every user, you must assign it to both * and root) and then increase the application's max file descriptors. Restarting the application is usually required, although on newer versions of Linux, changing limits for running processes is possible.
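Concretely, the procedure described above might look like this on Linux; the 65536 value is an illustrative assumption:

```
# /etc/security/limits.conf -- '*' does not cover root, so set both:
*     soft  nofile  65536
*     hard  nofile  65536
root  soft  nofile  65536
root  hard  nofile  65536

# Then raise the application's own setting to match, e.g. in nginx.conf:
#   worker_rlimit_nofile 65536;
# and restart the application (log in again for the shell limit to apply).
```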
You can also use "sudo service nginx reload" instead of restarting. Helps if it's in use and you don't want to drop any active users.
My favorite comment from this whole blog:
"(warning, a neckbeard and an operating systems course might be needed to understand everything)"<p>That's actually true of a fair amount of what people fiddle around with. I see a lot of tuning advice based on what I can only assume is guessing. I guess this is as good a "caveat emptor" as anything.
I would love to see optimization guides with actual benchmarking.<p>It's like saying `for(var i=..` is faster than `.forEach` without giving any numbers.<p>Always test for performance; don't blindly follow guides or copy-paste configuration files into your web server.
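For a web-server config like the one under discussion, a before/after measurement could be as simple as the sketch below; the tool, URL, and request counts are my assumptions, not from the post:

```shell
# Benchmark the DEFAULT config first, then the tuned one, and compare.
# ab (ApacheBench) and localhost are illustrative; wrk or siege work the same way.
#   ab -n 10000 -c 100 http://localhost/ > before.txt   # stock nginx.conf
#   ... apply the tuning changes, reload nginx ...
#   ab -n 10000 -c 100 http://localhost/ > after.txt    # tuned nginx.conf
#   grep "Requests per second" before.txt after.txt
# Keep only the changes that measurably improve YOUR workload.
```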
I wish I could find a guide like this for Apache as well. Computing max clients and other options seems like pure guesswork and constant failure =/