Btw there are new flags in the kernel to help avoid using the accept shared-memory mutex -- EPOLLEXCLUSIVE and EPOLLROUNDROBIN.<p>These should round-robin accept() in the kernel, instead of waking up all the epoll listeners.<p><a href="https://lwn.net/Articles/632590/" rel="nofollow">https://lwn.net/Articles/632590/</a>
This is a lovely article, but:<p>> The fundamental basis of any Unix application is the thread or process. (From the Linux OS perspective, threads and processes are mostly identical; the major difference is the degree to which they share memory.)<p>It's better to be specific in performance discussions than to use 'thread' and 'process' interchangeably.<p>Beyond the memory-sharing difference the article mentions, threads (which are called Lightweight Processes, or LWPs, in Linux 'ps') are the granular unit of scheduling.<p><pre><code> ps -eLf
</code></pre>
NLWP in the command above is 'number of lightweight processes', i.e. the number of threads.<p>Processes are not granular: they're one or many threads. IIRC it can be beneficial to assign threads of the same process to the same physical core or the same die for cache affinity. There's all kinds of performance stuff where 'threads' and 'processes' do not mean the same thing. Being specific is rad.
> You can reload configuration multiple times per second (and many NGINX users do exactly that)<p>I thought this was an interesting remark. Can anyone clue me in to what these "many users" might be doing, that requires them to reload configuration so frequently?
"NGINX’s binary upgrade process achieves the holy grail of high-availability; you can upgrade the software on the fly, without any dropped connections, downtime or interruption in service."<p>Is this really true? I remember seeing an article[1] recently on using an iptables hack to prevent dropping connections when reloading haproxy. Does nginx actually provide zero-downtime configuration reloads?<p>[1] <a href="https://medium.com/@Drew_Stokes/actual-zero-downtime-with-haproxy-18318578fde6" rel="nofollow">https://medium.com/@Drew_Stokes/actual-zero-downtime-with-ha...</a>
One thread per CPU and non-blocking I/O: that sounds like the usual way to approach the problem. I'm surprised it uses state machines to handle the non-blocking I/O, because modern software engineering provides much more pleasant approaches, such as coroutines.
I did as they said at the bottom and gave them my e-mail and other personal details so I could download the eBook they were giving free preview copies of, "Building Microservices". Unfortunately, they sent a link to a PDF only, so it's not usable for me. Just a heads up to others so you can save yourself the time of discovering that. (I'll just wait until the book is finished and then buy it so I get an ePub. I like O'Reilly and have bought many books from them before.)
I had a feeling I'd read about nginx before: <a href="http://aosabook.org/en/nginx.html" rel="nofollow">http://aosabook.org/en/nginx.html</a><p>The whole book is worth a read, although I found some sections painfully boring (perhaps my limited attention span is to blame).
Interesting overview. I wish they had included some comparative data to show the significance and efficiency of this approach versus other/older approaches.
They forgot to mention the pool-allocated buffers, zero-copy strings, and very clean, layered codebase: every syscall was counted.<p>The original nginx is a rare example of the best in software engineering: deep understanding of principles and an almost obsessive attention to detail. Its success is well deserved.