
Balancer Battle – Load testing HAproxy, Nginx and HTTP-Proxy's WebSocket support

81 points by V1, about 12 years ago

10 comments

gyepi · about 12 years ago
> nginx and haproxy were really close, it's almost not significant enough to say that one is faster or better than the other. But if you look at it from an operations standpoint, it's easier to deploy and manage a single nginx server instead of stud and haproxy

From an operations standpoint, haproxy has other features (failover, CLI management, clustering) that actually make it a much better load balancer. I usually install all three (haproxy, stud, and nginx) because each is very good in its specific niche. As for simplicity of installation, that can be handled with a configuration manager.
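As a rough illustration of the operational features mentioned above (failover, CLI management), here is a minimal haproxy.cfg sketch; all names, addresses, and ports are hypothetical:

```
global
    # Admin socket enables runtime CLI management (show stat, disable server, etc.)
    stats socket /var/run/haproxy.sock mode 600 level admin

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend app

backend app
    server app1 10.0.0.11:8080 check
    # "backup" gives failover: app2 only receives traffic if app1 is down
    server app2 10.0.0.12:8080 check backup
```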
ominous_prime · about 12 years ago
As a simple reverse proxy for small setups, there is almost no difference between the two, especially when running on a VM. You do miss many of the advanced balancing features in haproxy, but again, this config was a basic reverse proxy, not really load-balancing anything.

I haven't worked on these in a couple of years, but on real hardware, haproxy could push *much* more bandwidth. We could saturate 10Gb Ethernet fairly easily at the time, which wasn't possible at all with nginx.
jsmeaton · about 12 years ago
It would be interesting to see the difference with HAProxy if this line was removed: https://github.com/observing/balancerbattle/blob/master/haproxy.cfg#L13

What the option does is close the connection between the proxy and the backend, so that HAProxy will analyse further requests instead of just forwarding them to the already established connection.

To be fair, I don't know what nginx does (whether connections are kept open or shut down), so I'm not sure it'd be a fair comparison.

Also interesting are the HAProxy built-in SSL times. I'm surprised they're so slow. Perhaps the cipher is also the culprit. The cipher can also be specified in HAProxy:

```
bind *:8080 ssl crt /root/balancerbattle/ssl/combined.pem ciphers RC4-SHA:AES128-SHA:AES:!ADH:!aNULL:!DH:!EDH:!eNULL
```
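For context, a sketch of what removing such a line could look like; this assumes the line in question is HAProxy's `option http-server-close` (the behaviour described in the comment matches it), which is an assumption, not confirmed from the repository:

```
defaults
    mode http
    # Commenting this out makes HAProxy keep the backend connection
    # open and tunnel subsequent requests, instead of closing the
    # server-side connection after each request:
    # option http-server-close
```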
otterley · about 12 years ago
How many requests are made per connection? To better gauge performance we need a 3-axis plot, where the response rate is measured against various requests-per-connection values and connection rates.
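The measurement matrix behind such a 3-axis plot can be sketched in a few lines of Python; the parameter values below are arbitrary examples, and the measured response rate at each point would become the third axis:

```python
from itertools import product

def benchmark_grid(requests_per_conn, conn_rates):
    """Build the list of (requests-per-connection, connection-rate)
    points at which the response rate should be measured."""
    return [{"requests_per_conn": r, "conn_rate": c}
            for r, c in product(requests_per_conn, conn_rates)]

# Example sweep: 3 x 3 = 9 measurement points
grid = benchmark_grid([1, 10, 100], [100, 1000, 10000])
print(len(grid))  # → 9
```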
nodesocket · about 12 years ago
> I had the wrong ciphers configured. After some quick tweaking and a confirmation using openssl s_client -connect server:ip

Is this in the nginx config? Can anybody elaborate a bit further? Here is what I am currently using in my nginx config for SSL:

```
ssl_session_cache shared:SSL_CACHE:8m;
ssl_session_timeout 5m;

# Mitigate BEAST attacks
ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
```
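One quick way to see what a cipher string like the one above actually allows, without touching a server, is to expand it locally (output will vary with your OpenSSL build; modern builds may list no RC4 suites at all):

```shell
# Expand the nginx cipher string into the concrete cipher suites
# it permits, in preference order
openssl ciphers -v 'RC4:HIGH:!aNULL:!MD5'
```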
thrownaway2424 · about 12 years ago
Is it just me, or are all these latency numbers terrible? For a local echo server I would expect mean latency at or below 1ms.
hoop · about 12 years ago
I'd like to see Hipache tested against these as well: https://github.com/dotcloud/hipache
breser · about 12 years ago
Even though RC4 is fast, you really shouldn't be using it: http://www.isg.rhul.ac.uk/tls/
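For readers following this advice, a minimal sketch of an RC4-free nginx cipher configuration (the exact suites a string like this selects depend on your OpenSSL build, so verify with `openssl ciphers`):

```
# Hypothetical replacement for the RC4-first string quoted earlier:
# exclude RC4 explicitly, keep strong suites, forbid anonymous/MD5
ssl_ciphers HIGH:!RC4:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
```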
devicenull · about 12 years ago
Why do people always benchmark on virtual machines running on someone else's server, and expect meaningful results?
thebuccaneer · about 12 years ago
Where is Varnish in this mix?