
Bjoern: A screamingly fast Python WSGI server written in C.

52 points, by jgalvez, over 14 years ago

8 comments

FooBarWidget, over 14 years ago
I think all emphasis on server software speed is overrated. In pretty much everything but hello world, the vast majority of the time is spent inside the web app, not in the server.

Yes, lowering the web server overhead is a good thing. But I think in most cases it's already so low that further reducing this overhead results in no noticeable impact in real-life scenarios. At some point you'll just be benchmarking how fast the kernel is at doing connect(), read() and write() - in other words, how fast your computer can do nothing.

For example, let's consider this thought experiment:

Someone here mentioned Mongrel2 getting 4000 req/sec. Let's replace the name "Mongrel2" with "Server A", because this thought experiment is not limited to Mongrel2 but applies to all servers. I assume he's benchmarking a hello world app on his laptop. Suppose that a hypothetical Server B gets "only" 2000 req/sec. One might now (mistakenly) conclude that:

- Server B is *a lot* slower.

- One should use Server A instead of Server B in high-traffic production environments.

Now put Server A behind HAProxy. HAProxy is known as a high-performance HTTP proxy server with minimal overhead. Benchmark this setup, and watch req/sec drop to about 2000-3000 (when benchmarked on a typical dual core laptop).

What just happened? Server B *appears* to be very slow. But the reality is that both Server A and Server B are so fast that doing even a minimum amount of extra work has a significant effect on the req/sec number. In this case, the overhead of an extra context switch and a read()/write() call to the kernel is already enough to make the req/sec number drop by half. Any reasonably complex web app logic will make the number drop so much that the performance difference between the servers becomes negligible.
Comment #2037178 not loaded
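(For concreteness, the "hello world app" in benchmarks like the one above is usually no more than a WSGI callable returning a fixed string. The sketch below is illustrative only, not taken from this thread; the bjoern.run(app, host, port) call follows bjoern's documented usage, and the address and port are placeholders.)

    # Illustrative hello-world WSGI app of the kind such req/sec numbers measure.
    import bjoern

    def app(environ, start_response):
        # Do almost no work, so the benchmark mostly measures server overhead.
        body = b"Hello, World!"
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]

    if __name__ == "__main__":
        bjoern.run(app, "127.0.0.1", 8000)

Putting HAProxy, or any real application logic, in front of something like this is exactly what makes the raw numbers converge, as the comment argues.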
daeken, over 14 years ago
This looks like a really cool project, but the comparisons to other webservers are, uh, a bit on the negative side. No code is perfect, even if it's better than the rest. While I find myself falling into the same thinking, it's rarely a good idea, and it certainly doesn't help your project's image.
Comment #2036731 not loaded
Comment #2036737 not loaded
Comment #2036829 not loaded
wildmXranat, over 14 years ago
So he started off a bit aggressively, so what. I looked at the code and it's very spartan and light. It's quite the opposite of the competition and it works for him. Good job on what looks to be an advanced coder's Hello World: a web server.
jcw, over 14 years ago
I've been doing some research on small web servers, trying to understand didiwiki's code (a web server and wiki engine in ~2k lines of C):

http://c2.com/cgi/wiki?DidiWiki

This is a nice introduction to writing a minimal web server:

http://www.ibm.com/developerworks/systems/library/es-nweb/index.html
Comment #2037776 not loaded
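(For readers following the links above, the heart of such a minimal server is just an accept/read/write loop. The sketch below is an illustrative Python rendering of that idea, not didiwiki's or the IBM article's actual C code.)

    # Illustrative only: the bare accept/read/write loop that minimal web servers
    # are built around. A real server would parse the request and handle errors.
    import socket

    def serve(host="127.0.0.1", port=8080):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen(16)
            while True:
                conn, _ = srv.accept()
                with conn:
                    conn.recv(4096)  # read (and here ignore) the request
                    body = b"hello\n"
                    conn.sendall(b"HTTP/1.0 200 OK\r\n"
                                 b"Content-Type: text/plain\r\n"
                                 b"Content-Length: " + str(len(body)).encode() +
                                 b"\r\n\r\n" + body)

    if __name__ == "__main__":
        serve()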
kennu, over 14 years ago
Any experiences with Bjoern's Django compatibility?
Comment #2037207 not loaded
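(Not an answer from the thread, but since bjoern only expects a WSGI callable, wiring Django to it would in principle look like the sketch below. "mysite.settings" is a placeholder, and get_wsgi_application is the entry point that later Django versions standardized on.)

    # Untested sketch: hand Django's WSGI application straight to bjoern.
    import os
    import bjoern

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

    from django.core.wsgi import get_wsgi_application

    bjoern.run(get_wsgi_application(), "127.0.0.1", 8000)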
yashh, over 14 years ago
Interesting. I think I am gonna benchmark this against gunicorn with a simple Flask app.
Comment #2036815 not loaded
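(Such a comparison would boil down to something like the sketch below: one minimal Flask app, served once by bjoern and once by gunicorn, then hit with the same load generator. The module name, address, and port are placeholders.)

    # Illustrative benchmark target; save as app.py.
    import bjoern
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "hello"

    if __name__ == "__main__":
        # A Flask app is a plain WSGI callable, so bjoern can serve it directly.
        bjoern.run(app, "127.0.0.1", 8000)

The gunicorn side would then be started with something like gunicorn -w 1 -b 127.0.0.1:8000 app:app, and both measured with the same ab or wrk run (for example, ab -n 10000 -c 100 http://127.0.0.1:8000/).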
jrockway, over 14 years ago
Is the name a reference to the main character of Peggle?

(Also, how does this compare speed-wise to Mongrel2? It easily handles the 4000 requests/second that my PSGI handler can do. I am sure it could do more if the backend was faster.)
Comment #2037453 not loaded
verysimple, over 14 years ago
Thanks for this. One of my goals for 2011 is to get into network programming. At less than 1000 lines of code, I think I may just have to go through the source.