
Battle-ready Nginx – an optimization guide

221 points by funkenstein over 11 years ago

16 comments

BobVerg over 11 years ago

What is the purpose of the article if you can find the same in the documentation at nginx.org/en/docs/?

And, by the way, you are giving bad advice. You are wrong here: "By default, nginx sets our keep-alive timeout to 75s (in this config, we drop it down to 10s), which means, without changing the default, we can handle ~14 connections per second. Our config will allow us to handle ~102 users per second."

No, keepalive connections don't limit nginx in any way. Nginx closes keepalive connections when it reaches its connection limit.

"gzip_comp_level sets the compression level on our data. These levels can be anywhere from 1-9, 9 being the slowest but most compressed. We'll set it to 6, which is a good middle ground."

No, it's not a "middle ground". It kills the performance of your server. With 6 you will get 5-10% better compression, but twice the slowdown.

"use epoll;"

What's the purpose of this? The docs say: "There is normally no need to specify it explicitly, because nginx will by default use the most efficient method."

"multi_accept tells nginx to accept as many connections as possible after getting a notification about a new connection. If worker_connections is set too low, you may end up flooding your worker connections."

No, you have completely misunderstood this directive. It isn't related to worker_connections at all.
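For context, the directives being debated look roughly like this in an nginx config. The values shown are the guide's own (as quoted in the comment), reproduced here for illustration, not as recommendations:

```nginx
# Illustrative only — these are the guide's settings under discussion,
# not recommended values.
events {
    worker_connections 1024;   # per-worker connection cap; nginx closes
                               # idle keepalives when this limit is reached
    multi_accept on;           # accept all pending connections per event
    # use epoll;               # usually redundant: nginx already selects
                               # the best event method for the platform
}

http {
    keepalive_timeout 10;      # default is 75s
    gzip on;
    gzip_comp_level 6;         # 1-9; higher levels cost substantially more
                               # CPU for a few percent smaller output
}
```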
tszming over 11 years ago

For anyone interested in nginx tuning, please follow the H5BP nginx repo: https://github.com/h5bp/server-configs-nginx, which is already very well documented and still being maintained.
l_perrin over 11 years ago

Good introduction to nginx. However, the guide states: "Keep in mind that the maximum number of clients is also limited by the number of socket connections available on your system (~64k)".

This is incorrect. The system can open ~64k connections per [src ip, dst ip] pair. In the case of a webserver listening on just one port, that means you can open 64k connections per remote IP, which is why some people can write about how they handle a million connections on a single server.
Volscio over 11 years ago

Also useful for nginx: adding the pagespeed module, https://github.com/pagespeed/ngx_pagespeed

"ngx_pagespeed speeds up your site and reduces page load time by automatically applying web performance best practices to pages and associated assets (CSS, JavaScript, images) without requiring you to modify your existing content or workflow."
killercup over 11 years ago

I'd like to add that using [gzip_static][1] might also be a good idea, since nginx doesn't have to gzip your files over and over again and you can gzip the files yourself with the highest compression possible (reducing file size).

[1]: http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html
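A minimal sketch of that setup (assuming nginx was built with the gzip_static module):

```nginx
http {
    gzip_static on;   # serve a pre-built foo.css.gz for foo.css when the
                      # client accepts gzip, instead of compressing per request
}
```

The `.gz` files have to be produced ahead of time, e.g. with GNU gzip: `gzip -k -9 style.css` writes `style.css.gz` at maximum compression while `-k` keeps the original file in place.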
ElongatedTowel over 11 years ago

"Chances are your OS and nginx can handle more than 'ulimit -a' will report, so we'll set this high so nginx will never have an issue with 'too many open files'"

If the limit is a hard limit, it doesn't really matter what nginx decides to do, does it? I had to increase the limit by hand, outside of nginx.
rb2e over 11 years ago

I would love to see some before-and-after in-the-wild stats using this configuration. Whilst it would be an apples-versus-oranges comparison, it would at least show that this config works compared to the default. Maybe a Blitz.io rush test?
kbuck over 11 years ago

If you set an application to use more file descriptors than "ulimit -n" returns, then either the application will be smart and fix its configuration by using min(configured limit, ulimit -n), or it'll start dropping requests because it assumes it's allowed to open more file descriptors.

Increasing an application's maximum file descriptors past "ulimit -n" is bad advice. The proper way is to increase the limit in /etc/security/limits.conf (note that assigning a limit to * applies it to every user but root, so if you really want to assign a limit to every user, you must assign it to both * and root) and then increase the application's max file descriptors. Restarting the application is usually required, although on newer versions of Linux, changing limits for running processes is possible.
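A sketch of that approach (the 65536 value is illustrative):

```
# /etc/security/limits.conf
# '*' does not cover root, so list root explicitly.
*     soft  nofile  65536
*     hard  nofile  65536
root  soft  nofile  65536
root  hard  nofile  65536
```

On the nginx side, `worker_rlimit_nofile` can then raise the descriptor limit for worker processes from within the nginx config, within whatever the system allows.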
adwf over 11 years ago

You can also use "sudo service nginx reload" instead of restarting. Helps if it's in use and you don't want to drop any active users.
bifrost over 11 years ago

My favorite comment from this whole blog: "(warning, a neckbeard and an operating systems course might be needed to understand everything)"

That's actually true of a fair amount of what people fiddle around with. I see a lot of tuning advice based on what I can only assume is guessing. I guess this is as good a "caveat emptor" as anything.
vvoyer over 11 years ago

I would love to see optimization guides with actual benchmarking.

It's like saying `for(var i=..` is faster than `.forEach` without giving any numbers.

Always test for performance; do not blindly follow guides or copy-paste configuration files into your web server.
ericclemmons over 11 years ago

I wish I could find a guide like this for Apache as well. Computing max clients and other options seems like pure guesswork and constant failure =/
sergiotapia over 11 years ago

Thank you for this write-up. Out of sheer curiosity, since I love benchmark numbers: how many concurrent users do you think this config can handle?
noqqe over 11 years ago

I wish I had read this post before my devnull-as-a-Service was on HN.
sigzero over 11 years ago

You explain the "what" but not the "why".
calgaryeng over 11 years ago

breif --> brief