I'm skeptical of the performance numbers. First, like others here, I don't believe nginx's performance will be a bottleneck for HTTP/2. Beyond that, I suspect there are cases in which this code is much worse than nginx.

Here's one. Look at the example request loop in the README at https://github.com/h2o/picohttpparser/ (roughly sketched below). It reads from a socket, appending to an initially empty buffer, then tries to parse the buffer contents as an HTTP request. If the request is incomplete, the loop repeats. (h2o's lib/http1.c:handle_incoming_request appears to do the same thing.)

In particular, phr_parse_request doesn't retain any parser state between attempts; each call goes through the whole buffer again. In the degenerate case where a client sends a large (n-byte) request one byte at a time, that's n parse attempts over buffers of 1, 2, ..., n bytes, roughly n^2/2 bytes scanned in total, i.e. O(n^2) CPU for parsing. That extreme should be rare when clients aren't malicious, but the benchmark is probably testing the other extreme, where every request arrives in a single read. Typical conditions are probably somewhere in between.
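
To make the loop shape concrete, here's a minimal sketch of what I mean, adapted from my reading of the README example (the function name, buffer size, and error handling are mine, not the library's):

    #include <unistd.h>
    #include "picohttpparser.h"

    /* Sketch of the README-style read/parse loop. Returns the number of
     * bytes consumed by the request on success, -1 on error. */
    static int read_request(int sock)
    {
        char buf[4096];
        const char *method, *path;
        struct phr_header headers[32];
        size_t buflen = 0, prevbuflen = 0, method_len, path_len, num_headers;
        int minor_version;

        while (1) {
            /* append whatever the socket has to the end of the buffer */
            ssize_t rret = read(sock, buf + buflen, sizeof(buf) - buflen);
            if (rret <= 0)
                return -1;
            prevbuflen = buflen;
            buflen += rret;

            /* try to parse the accumulated buffer as a complete request;
             * no parse results are carried over from the previous attempt */
            num_headers = sizeof(headers) / sizeof(headers[0]);
            int pret = phr_parse_request(buf, buflen, &method, &method_len,
                                         &path, &path_len, &minor_version,
                                         headers, &num_headers, prevbuflen);
            if (pret > 0)
                return pret;   /* parsed successfully */
            if (pret == -1)
                return -1;     /* malformed request */
            /* pret == -2: incomplete -- read more and try again */
            if (buflen == sizeof(buf))
                return -1;     /* request larger than the buffer */
        }
    }

If I'm reading it right, each iteration hands the entire accumulated buffer back to phr_parse_request, which is where the worst-case quadratic behavior for a byte-at-a-time client comes from.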