
How Rust Lets Us Monitor 30k API calls/min

97 points, by cfabianski, almost 5 years ago

12 comments

meritt, almost 5 years ago
Sorry, I must be missing something in this blog post because the requirements here sound incredibly minimal. You just needed an HTTP service (sitting behind an Envoy proxy) to process a mere 500 requests/second (up to 1MB payload) and pipe them to Kinesis? How much data preparation is happening in Rust? It sounds like all the permission/rate-limiting/etc happens between Envoy/Redis before it ever reaches Rust?

I know this comes across as snarky but it really worries me that contemporary engineers think this is a feat worthy of a blog post. For example, take this book from 2003 [1] talking about Apache + mod_perl. Page 325 [2] shows a benchmark: "As you can see, the server was able to respond on average to 856 requests per second... and 10 milliseconds to process each request".

And just to show this isn't a NodeJS vs Rust thing, check out these web framework benchmarks using various JS frameworks [3]. The worst performer on there still does >500 rps while the best does 500,000.

It's 2020, the bar needs to be *much* higher.

[1] https://www.amazon.com/Practical-mod_perl-Stas-Bekman/dp/0596002270

[2] https://books.google.com/books?id=i3Ww_7a2Ff4C&pg=PT356&lpg=PT356

[3] https://www.techempower.com/benchmarks/#section=data-r19&hw=ph&test=db&l=zik0sf-1r
akoutmos, almost 5 years ago
Great article and thanks for sharing! There are a couple of things that stand out to me as possible architecture smells (hopefully this comes across as positive, constructive criticism :)).

As someone who has been developing on the BEAM for a long time now, it sticks out like a sore thumb any time I see Elixir/Erlang paired with Redis. Not that there is anything wrong with Redis, but most of the time you can save yourself the additional Ops dependency and application network hop by bringing that state into your application (BEAM languages excel at writing stateful applications).

In the article you write that you were using Redis for rate limit checks. You could very easily have bundled that validation into the Elixir application and had, for example, a single GenServer running per customer that performs the rate limiting validation (I actually wrote a blog post on this using the leaky bucket and token bucket algorithms: https://akoutmos.com/post/rate-limiting-with-genservers/). Pair this with hot code deployments and you would not lose rate limit values across application deployments.

I would be curious to see how much more mileage you could have gotten with that, given that the Node application would not have to make network calls to the Elixir service and Redis.

Just wanted to share that little tidbit as it is something I see quite often with people new to the BEAM :). Thanks again for sharing!
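To make the algorithm this comment refers to concrete, here is a minimal, language-neutral sketch of a token bucket in Rust. This is not the Elixir GenServer from the linked post; the struct, limits, and names are invented for illustration. In the GenServer version the comment describes, the same state (token count, last refill time) would live in one process per customer instead of a struct.

```rust
use std::time::Instant;

// Minimal token bucket: holds at most `capacity` tokens, refilled at
// `refill_per_sec` tokens per second. Each allowed request consumes one token.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self {
            capacity,
            tokens: capacity,
            refill_per_sec,
            last_refill: Instant::now(),
        }
    }

    fn try_acquire(&mut self) -> bool {
        // Refill based on elapsed time, capped at capacity.
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill = now;

        if self.tokens >= 1.0 {
            self.tokens -= 1.0; // spend one token for this request
            true
        } else {
            false // bucket empty: rate limit exceeded
        }
    }
}

fn main() {
    // Hypothetical per-customer limit: bursts of 100, sustained 500 requests/second.
    let mut bucket = TokenBucket::new(100.0, 500.0);
    println!("first request allowed: {}", bucket.try_acquire());
}
```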
didroe, almost 5 years ago
I'm one of the engineers that worked on this. It was the first Rust production app code I've written, so it was a really fun project.
foxknox, almost 5 years ago
500 requests a second.
cybervasi, almost 5 years ago
GC at 500 requests/s could not possibly have caused a performance issue. Most likely the problem was due to JS code holding on to the 1 MB requests for the duration of the asynchronous Kinesis request, or a bug in the Kinesis JS library itself. With a timeout of 2 minutes, you may end up with up to 30K/min x 2 min x 1 MB = 60 GB of RAM in use. GC would appear to run hot during this time, but only because it has to scrape together more memory somewhere while up to 60 GB is in use.
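If this diagnosis is right, the usual mitigation is to cap how many payloads can be in flight at once, so buffered bodies cannot pile up for the length of an upstream timeout. Below is a rough, purely illustrative Tokio sketch; the constants and the Kinesis stub are hypothetical, not taken from the article.

```rust
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Semaphore;

// With at most MAX_IN_FLIGHT ~1 MB payloads held at once, worst-case buffer
// memory is roughly MAX_IN_FLIGHT megabytes, instead of "every request that
// arrives during a 2-minute upstream timeout".
const MAX_IN_FLIGHT: usize = 512;

// Stand-in for the real Kinesis put; hypothetical.
async fn send_to_kinesis(_payload: Vec<u8>) {
    tokio::time::sleep(Duration::from_millis(10)).await;
}

#[tokio::main]
async fn main() {
    let limiter = Arc::new(Semaphore::new(MAX_IN_FLIGHT));
    for _ in 0..10_000 {
        // Acquire a slot *before* buffering the payload, so no more than
        // MAX_IN_FLIGHT request bodies exist at any moment.
        let permit = Arc::clone(&limiter).acquire_owned().await.unwrap();
        let payload = vec![0u8; 1024 * 1024]; // stand-in for a ~1 MB request body
        tokio::spawn(async move {
            send_to_kinesis(payload).await;
            drop(permit); // free the slot once the upstream call completes
        });
    }
    // (A real service would keep running and await its tasks; omitted here.)
}
```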
eggsnbacon1, almost 5 years ago
They didn't mention Java as a possible solution, even though its GCs are far better than anything else out there. I have nothing against Rust, but if I was at a startup I would save my innovation points for where they're mandatory.
DevKoala, almost 5 years ago
There are a couple of things in this post that I wouldn't do at all, and I maintain a couple of services with orders of magnitude higher QPS. I feel that replacing Node.js with any compiled language would have had the same positive effect.
newobj, almost 5 years ago
500 qps. I think the more interesting story here is which language/framework COULDN'T do this, rather than which one could.
trimbo, almost 5 years ago
> After some more research, we appeared to be another victim of a memory leak in the AWS Javascript SDK.

Did you try using the Kinesis REST API directly: https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html
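For reference, the request shape behind that API is small. Here is a rough Rust sketch of a direct PutRecord call; the stream name, partition key, and region are placeholders, and the required AWS SigV4 `Authorization` header is omitted, so this will not authenticate as written.

```rust
use base64::Engine;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let payload = b"example event".to_vec();

    // PutRecord takes a JSON body with the record data base64-encoded.
    let body = json!({
        "StreamName": "example-stream",  // placeholder
        "PartitionKey": "example-key",   // placeholder
        "Data": base64::engine::general_purpose::STANDARD.encode(&payload),
    });

    let resp = reqwest::Client::new()
        .post("https://kinesis.us-east-1.amazonaws.com/") // region is a placeholder
        .header("Content-Type", "application/x-amz-json-1.1")
        .header("X-Amz-Target", "Kinesis_20131202.PutRecord")
        // A real call must also carry an AWS SigV4 `Authorization` header
        // (generated with an AWS signing library); omitted in this sketch.
        .body(body.to_string())
        .send()
        .await?;

    println!("status: {}", resp.status());
    Ok(())
}
```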
qrczeno, almost 5 years ago
That was a real issue we were struggling to solve. Feels like Rust was the right tool for the right job.
hobbescotch, almost 5 years ago
Having never dealt with issues relating to garbage collection before, how do you go about diagnosing GC issues in a language where that’s all handled for you?
zerubeus, almost 5 years ago
Feels like an HN post being upvoted just because it contains Rust in the title (after reading the article)...