Fast Servers

146 points by mr_tyzic over 9 years ago

18 comments

someone13 over 9 years ago
Okay, this is a really cool post, but I have a small bit of criticism - the code samples are *really hard* to read. I'd recommend, at minimum, adding a bit more whitespace so you don't end up with lines like this:

    if(e[i].events&(EPOLLRDHUP|EPOLLHUP))close(e[i].data.fd);

Despite that minor criticism - pretty cool stuff!
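For what it's worth, a sketch of how that same check might be spaced out inside a conventional epoll loop; the function name and the surrounding loop are assumed, not taken from the article.

    #include <sys/epoll.h>
    #include <unistd.h>

    /* A spaced-out version of the same hang-up check, inside a minimal
       epoll_wait() loop; `epfd` is an epoll instance set up elsewhere. */
    void event_loop(int epfd) {
        struct epoll_event events[64];
        for (;;) {
            int n = epoll_wait(epfd, events, 64, -1);
            for (int i = 0; i < n; i++) {
                if (events[i].events & (EPOLLRDHUP | EPOLLHUP)) {
                    close(events[i].data.fd);   /* peer hung up: drop it */
                    continue;
                }
                /* handle readable/writable events here */
            }
        }
    }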
Comment #10874770 not loaded
Comment #10873454 not loaded
Comment #10874627 not loaded
Matthias247 over 9 years ago
Can't completely understand the main message of what is proposed here.

Using multiple threads where one thread accepts connections and others process the connections is already quite standard (e.g. look at how Netty for Java works with a boss thread and worker threads).

However the pattern won't work with blocking IO like it's suggested in the referenced page if your worker thread should handle multiple connections. Even if poll tells you the connection is readable there might not be enough data for a complete request - so you need a state machine for reading again. Or you block until you have read a complete request and thereby block other connections that should be served by the same thread (pool). And if you block on writing responses then one slow connection will block the processing of the others.

What also should be considered is that by far not all network protocols follow a pure request -> response model. If the server may send responses asynchronously (out of order) or if there is multicast/broadcast support in the protocol the requirements on software architecture look different.
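To make the "state machine for reading" concrete, here is a minimal sketch; the buffer size, struct layout, and the request_complete()/handle_request() helpers are assumptions for illustration, not anything from the post or from Netty.

    #include <unistd.h>
    #include <errno.h>
    #include <stddef.h>

    struct conn {
        int    fd;
        char   buf[4096];
        size_t used;               /* bytes buffered so far */
    };

    int  request_complete(const char *buf, size_t len);  /* protocol-specific, assumed */
    void handle_request(struct conn *c);                 /* application logic, assumed */

    /* Called each time poll/epoll reports the fd readable. Returns -1 to close. */
    int on_readable(struct conn *c) {
        for (;;) {
            ssize_t n = read(c->fd, c->buf + c->used, sizeof(c->buf) - c->used);
            if (n > 0) {
                c->used += (size_t)n;
                if (request_complete(c->buf, c->used)) {
                    handle_request(c);       /* must not block the thread */
                    c->used = 0;
                }
            } else if (n == 0) {
                return -1;                   /* peer closed the connection */
            } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
                return 0;                    /* no complete request yet; wait for next event */
            } else {
                return -1;                   /* real error */
            }
        }
    }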
markpapadakis over 9 years ago
There is nothing fundamentally new described in the blog post, although pinning threads to cores is almost never the default operation mode (though many popular servers expose an option for turning this on).

As someone else stated in another post, SO_REUSEPORT and accept4() is, all things considered, the best way to accept connections across multiple threads. Again, most modern servers support this by default, if supported by the underlying operating system (e.g. nginx).

By the way, Aerospike accepts new connections in a single thread and then adds the socket FD to one of the I/O threads (round-robin selection scheme) directly, using e.g. epoll_ctl(fd, EPOLL_CTL_ADD, ..).

See http://blog.tsunanet.net/2010/11/how-long-does-it-take-to-make-context.html for the costs of context switching, and the performance improvement when pinning threads to cores (it's quite impressive). Also, note that according to the author, on average, a context switch is 2.5-3x more expensive when using virtualization.

You may also want to read https://medium.com/software-development-2/high-performance-services-using-coroutines-ac8e9f54d727#.ho0s7q28b -- it's been a long time since I wrote this, but it describes how one can deal with asynchronous I/O and other operations that may block a thread.
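A sketch of the hand-off the comment attributes to Aerospike (accept in one thread, round-robin the fd into a per-worker epoll instance), plus optional thread pinning; everything except the system calls is an assumed name.

    #define _GNU_SOURCE
    #include <sys/epoll.h>
    #include <pthread.h>
    #include <sched.h>

    #define NWORKERS 4
    static int worker_epfd[NWORKERS];   /* one epoll instance per I/O worker */

    /* Accept thread: hand each new connection to a worker, round-robin. */
    void dispatch(int client_fd) {
        static unsigned next;
        struct epoll_event ev = { .events = EPOLLIN, .data = { .fd = client_fd } };
        epoll_ctl(worker_epfd[next++ % NWORKERS], EPOLL_CTL_ADD, client_fd, &ev);
    }

    /* Optionally pin a worker thread to a core to keep its caches warm. */
    void pin_to_core(pthread_t t, int core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_setaffinity_np(t, sizeof(set), &set);
    }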
Comment #10874654 not loaded
halayli over 9 years ago
This is not an ideal design; I'm surprised it has this many upvotes.

Transferring the accepted connections this way involves an extra system call to epoll/kqueue to add the event into the particular thread, and the accepting thread can become a bottleneck under high load.

A better design would be to share the listening socket across threads and have each thread accept at its own pace. At least this avoids the additional kqueue/epoll system call needed to add the new fd into the thread's poller, though it does cause lock contention in the OS, which is still less expensive than a system call. What's even better: if you're on a newer Linux version, or BSD, consider using SO_REUSEPORT, which allows each thread to bind/accept on the same port and avoids the lock contention issue.

Also you should consider using accept4() to set the non-blocking flag during accept instead of making an additional system call to set it to non-blocking.
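A sketch of the SO_REUSEPORT-per-thread setup plus accept4() the commenter recommends; error handling is omitted and the function names are illustrative.

    #define _GNU_SOURCE          /* for accept4() */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <stdint.h>

    /* Each worker thread creates its own listener bound to the same port;
       the kernel load-balances incoming connections across them. */
    int make_listener(uint16_t port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(port);
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(fd, SOMAXCONN);
        return fd;
    }

    /* accept4() sets the non-blocking flag in the same system call. */
    int accept_connection(int listen_fd) {
        return accept4(listen_fd, NULL, NULL, SOCK_NONBLOCK | SOCK_CLOEXEC);
    }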
Comment #10874616 not loaded
wittrock over 9 years ago
Backpressure between threads and utilization could be hard here. Balance between the speed of the accept, request, and worker threads is something I'm curious about. In theory, you could create pools of each if you find that one set bottlenecks the other. Also, workload isolation is important--I'm curious how the author deals with (or avoids) transferring ownership of large amounts of memory and cache between cores without incurring significant transfer overhead.
Comment #10872642 not loaded
RyanZAG over 9 years ago
This is pretty old, right? These days 'Fast Servers' refers to bare-metal network interface access without a kernel, which can achieve far higher throughput than passing it through a kernel/epoll.
Comment #10872856 not loaded
nbevans over 9 years ago
This pattern is called "I/O completion ports" or more generally "overlapped I/O" in Windows and has been there since the very first version of Windows NT.
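For comparison, a minimal sketch of that Windows pattern: several worker threads blocked on one completion port. The worker count and the dispatch step are assumptions, not anything from the thread.

    #include <windows.h>

    static DWORD WINAPI worker_loop(LPVOID arg) {
        HANDLE iocp = (HANDLE)arg;
        DWORD bytes; ULONG_PTR key; OVERLAPPED *ov;
        for (;;) {
            /* The kernel wakes one waiting thread per completed overlapped operation. */
            BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE);
            if (!ok && ov == NULL)
                break;               /* port closed or wait failed */
            /* dispatch on `key`/`ov` to advance the per-connection state machine */
        }
        return 0;
    }

    int main(void) {
        /* One completion port shared by all workers; sockets are associated later
           via CreateIoCompletionPort((HANDLE)sock, iocp, key, 0), and I/O is issued
           with WSARecv/WSASend passing OVERLAPPED structures. */
        HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
        for (int i = 0; i < 4; i++)
            CreateThread(NULL, 0, worker_loop, iocp, 0, NULL);
        Sleep(INFINITE);
        return 0;
    }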
Comment #10873750 not loaded
vog over 9 years ago
Very good pattern description, which demonstrates nicely that when basic system assumptions change, new patterns need to be embraced.

I'd love to see more patterns like that.
Comment #10872518 not loaded
notacoward over 9 years ago
"One worker per core" is too simplistic. Yes, as I wrote in a pretty well-known article a dozen years ago, it's a good starting point. Yes, it can avoid some context switching. On the other hand, when you have to account for threads doing background stuff it can be too many, leaving you with context thrashing between oversubscribed cores. When you account for threads blocking for reasons beyond your control (e.g. in libraries or due to page faults) it might be too *few*. The right number of worker threads is usually somewhere around the number of cores, but depends on your exact situation and can even vary over time.

Those who do not know the lessons of history...
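A trivial sketch of that starting point; everything beyond the core count is workload-specific tuning.

    #include <unistd.h>

    /* Start from the online core count; shrink it if background threads compete
       for cores, grow it if workers can block in libraries or on page faults. */
    long default_worker_count(void) {
        long cores = sysconf(_SC_NPROCESSORS_ONLN);
        return cores > 0 ? cores : 1;
    }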
chillaxtian over 9 years ago
good job of explaining their paradigm, but i did not see any explanation as to why it is better than traditional epoll.

what advantage do you gain by separating accepting connections from handling request / response?
Comment #10872542 not loaded
Comment #10872538 not loaded
amelius over 9 years ago
> Fast Servers

Fast in what sense? Throughput, or latency?

> One thread per core

Oh, I guess the answer to the previous question is "throughput".
Comment #10872878 not loaded
derefr over 9 years ago
This is all about achieving a dataflow architecture by exploiting CPU cache hierarchy, right?

We're basically talking about going *from* a design where 10k little "tasklet" state machines are each scheduled onto "scheduler" threads; where during its turn of the loop, each tasklet might run whatever possibly-"cold" logic it likes...

...and turning it *into* a collection of cores that each have a thread for evaluating a particular state-machine state pinned, where each "tasklet" that gets scheduled onto a given core will always be doing the same logic, so that logic can stay "hot" in that core's cache—in other words, so that each core can function closer to a SIMD model.

Effectively, this is the same thing done in game programming when you lift everything that's about to do a certain calculation (i.e. is in a certain FSM state) up into a VRAM matrix and run a GPGPU kernel over it†. This kernel is your separate "core" processing everything in the same state.

Either way, it adds up to *dataflow architecture*: the method of eliminating the overhead of context-switching and scheduling on general-purpose CPUs, by having specific components (like "I/O coprocessors" on mainframes, or (de)muxers in backbone switches) for each step of a pipeline, where that component can "stay hot" by doing exactly and only what it does in a synchronous manner.

The difference here is that, instead of throwing your own microcontrollers or ASICs at the problem, you're getting 80% of the same benefit from just using a regular CPU core but making it avoid executing any non-local jumps: which is to say, not just eliminating OS-level scheduling, but eliminating any sort of top-level per-event-loop switch that might jump to an arbitrary point in your program.

This is way more of a win for CPU programming than just what you'd expect by subtracting the nominal time an OS context-switch takes. Rewriting your logic to run as a collection of these "CPU kernels"—effectively, restricting your code the same way GPGPU kernels are restricted, and then just throwing it onto a CPU core—keeps any of the kernel's cache-lines from being evicted, and builds up (and never throws away) an excellent stream of branch-prediction metadata for the CPU to use.

The *interesting* thing, to me, is that a compiler, or an interpreter JIT, could (theoretically) do this "kernel hoisting" for you. As long as there was a facility in your language to make it clear to the compiler that a particular function *is* an FSM state transition-function, then you can code regular event-loop/actor-modelled code, and the compiler can transform it into a collection of pinned-core kernels like this as an *optimization*.

The compiler can even take a hybrid approach, where you have some cores doing classical scheduling for all the "miscellaneous, IO-heavy" tasklets, and the rest of the cores being special schedulers that will only be passed a tasklet when it's ready to run in that scheduler's preferred state. With an advanced scheduler system (e.g. the Erlang VM's), the profiling JIT could even notice when your runtime workload has changed to now have 10k of the same transition-function running all the time, and generate and start up a CPU-kernel-thread for it (temporarily displacing one of its misc-work classical scheduler threads), descheduling it again if the workload shifts so that it's no longer a win.

Personally, I've been considering this approach with GPGPU kernels as the "special" schedulers, rather than CPU cores, but they're effectively equivalent in architecture, and perhaps in performance as well: while the GPU is faster because it gets to run your specified kernel in true SIMD parallel, your (non-NUMA) CPU cores get to pass your tasklets' state around "for free", which often balances out—ramming data into and out of VRAM is expensive, and the fact that you're buffering tasklets to run on the GPGPU as a group potentially introduces a high-latency sync-point for your tasklets. Soft-realtime guarantees might be more important than throughput.

---

† A fun tangent for a third model: if your GPGPU kernel outputs a separate dataset for each new FSM state each of the data members was found to transition to, and you have *other* GPGPU kernels for each of the other state-transition functions of your FSM waiting to take those datasets and run them, then effectively you can make your whole FSM live entirely on the GPU as a collection of kernel "cores" passing tasklets back and forth, the same way we're talking about CPU cores above.

While this architecture probably wins over both the entirely-CPU and CPU-passing-to-GPGPU models for pure-computational workloads (which is, after all, what GPGPUs are supposed to be for), I imagine it would fall over pretty fast if you wanted to do much IO.

Does anyone know if GPUs marketed specifically as GPGPUs, like the Tesla cards, have a means for low-latency access to regular virtual memory from within the GPGPU kernel? If they did, staying entirely on the GPGPU would definitely be the dominant strategy. At that point, it might even make sense to have, for example, a GPGPU Erlang VM, or entirely-in-GPGPU console emulators (imagine MAME's approach to emulated chips, but with each chip as a GPGPU kernel.)

If you can get that, then effectively what you've got at that point is less a GPU, and more a meta-FPGA co-processor with "elastic allocation" of gate-arrays to your VHDL files. System architecture would likely change *a lot* if we ever got *that* in regular desktop PCs.
Comment #10874393 not loaded
jbarzyc over 9 years ago
check out http://dpdk.org. I just delivered a session on DPDK and SR-IOV. There's a whole set of libraries, classifications, and frameworks to tune/tweak linux systems on x86 in user space.
deathanatos over 9 years ago
It's hard to say from the article, but is the first example single threaded? If so, then yes, adding more threads is of course going to speed it up.

However, they don't each need their own queue; epoll supports being polled by >1 thread. In such a setup _any_ thread available can handle any request; in the author's setup, you're going to need to make sure any particular thread doesn't get too bogged down if the requests are not equal. (That pick function is important.) I'd be more curious how those two compared. (The author's is certainly slightly easier to write, I think.)
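A sketch of that shared-queue alternative: every worker calls epoll_wait() on the same epoll fd, so whichever thread is free picks up the next event (EPOLLEXCLUSIVE, on kernels that have it, limits thundering-herd wakeups). handle_event() is an assumed application callback.

    #include <sys/epoll.h>

    static int shared_epfd;                     /* one epoll instance for all workers */
    void handle_event(int fd);                  /* application handler, assumed */

    /* Register a connection once; any worker may end up servicing it. */
    void watch(int fd) {
        struct epoll_event ev = { .events = EPOLLIN | EPOLLEXCLUSIVE,
                                  .data   = { .fd = fd } };
        epoll_ctl(shared_epfd, EPOLL_CTL_ADD, fd, &ev);
    }

    /* Every worker thread runs the same loop on the same epoll fd. */
    void *worker(void *arg) {
        (void)arg;
        struct epoll_event events[64];
        for (;;) {
            int n = epoll_wait(shared_epfd, events, 64, -1);
            for (int i = 0; i < n; i++)
                handle_event(events[i].data.fd);
        }
        return NULL;
    }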
Comment #10874660 not loaded
JoshTriplett over 9 years ago
I'd love to see benchmark numbers comparing this approach to others.
Comment #10872702 not loaded
Comment #10872689 not loaded
listic over 9 years ago
Why aren't servers using the proposed pattern already?
Comment #10872736 not loaded
Comment #10872842 not loaded
cbsmith over 9 years ago
Has everyone forgotten the c10k site, which covers all this?
Comment #10872737 not loaded
jbarzyc over 9 years ago
check out dpdk.org