How to build a distributed throttling system with Nginx, Lua, and Redis

141 points by dreampeppers99 about 6 years ago

6 comments

cobbzilla about 6 years ago

For those working in a Java JAX-RS environment and looking for an additional rate filter on the app server itself, here is a similar Redis+Lua rate limiter implemented as a Jersey/JAX-RS filter [1].

It supports multiple limits, for example max 100 requests/minute and 10,000/day. The Lua magic is here [2].

[1] https://github.com/cobbzilla/cobbzilla-wizard/blob/master/wizard-server/src/main/java/org/cobbzilla/wizard/filters/RateLimitFilter.java

[2] https://github.com/cobbzilla/cobbzilla-wizard/blob/master/wizard-server/src/main/resources/org/cobbzilla/wizard/filters/api_limiter_redis.lua
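A minimal sketch of the same multi-limit idea, written here with redis-py rather than the linked JAX-RS filter (key names, limits, and windows are made up for illustration, not taken from that code):

    import redis

    # One Lua script checks every (key, limit, window) pair atomically:
    # bump each counter, set its TTL on first use, and report "limited"
    # if any window is over its limit.
    MULTI_LIMIT_LUA = """
    for i = 1, #KEYS do
        local limit  = tonumber(ARGV[2 * i - 1])
        local window = tonumber(ARGV[2 * i])
        local count  = redis.call('INCR', KEYS[i])
        if count == 1 then
            redis.call('EXPIRE', KEYS[i], window)
        end
        if count > limit then
            return 0
        end
    end
    return 1
    """

    r = redis.Redis()
    check = r.register_script(MULTI_LIMIT_LUA)

    def allowed(api_key: str) -> bool:
        # 100 requests/minute AND 10,000 requests/day, evaluated in one round trip.
        keys = [f"rl:{api_key}:minute", f"rl:{api_key}:day"]
        args = [100, 60, 10000, 86400]
        return bool(check(keys=keys, args=args))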
rogerdonut about 6 years ago

Something very similar can be achieved in HAProxy using a powerful feature called stick tables. [1] [2] [3]

[1] https://www.haproxy.com/blog/introduction-to-haproxy-stick-tables/

[2] https://www.haproxy.com/blog/bot-protection-with-haproxy/

[3] https://www.haproxy.com/blog/using-haproxy-as-an-api-gateway-part-1/
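For a sense of what that looks like, here is a minimal stick-table sketch (frontend/backend names, table size, and thresholds are placeholders, not taken from the linked posts):

    # Track per-source-IP request rate and reject clients above a threshold.
    frontend fe_web
        bind :80
        stick-table type ip size 100k expire 10m store http_req_rate(10s)
        http-request track-sc0 src
        http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
        default_backend be_app

    backend be_app
        server app1 127.0.0.1:8080 check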
bratao about 6 years ago

Awesome post, from a fellow Brazilian!

We did a very similar implementation (although not distributed) for a similar problem, using Redis and Laravel.

We had MANY people crawling our website, and we would prefer that they use our API for that. Using Redis, we block IPs that access our website more than X times while not logged in (200 URLs right now).

We also had the requirement that all good bots (Bing, Baidu, Google) should pass through without blocks or any slowdown. Another requirement was that those good bots should be verified (reverse & forward DNS lookup) before entering our good-bot list.

It is working great for our high-traffic website (~2 million hits/day). You can check our work here: https://github.com/Potelo/laravel-block-bots
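The reverse-and-forward DNS check mentioned above can be sketched in a few lines of Python using only the standard library (the domain suffixes below are illustrative, not an exhaustive or authoritative list):

    import socket

    GOOD_BOT_SUFFIXES = (".googlebot.com", ".google.com", ".search.msn.com", ".crawl.baidu.com")

    def is_verified_good_bot(ip: str) -> bool:
        try:
            # Reverse lookup: what hostname does this IP claim to be?
            hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
            if not hostname.endswith(GOOD_BOT_SUFFIXES):
                return False
            # Forward lookup: the claimed hostname must resolve back to the
            # same IP, otherwise the reverse record could be spoofed.
            return ip in socket.gethostbyname_ex(hostname)[2]
        except (socket.herror, socket.gaierror):
            return False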
adontz about 6 years ago

Not to say they did anything wrong, great work! But facing the same problem for an in-house solution, I'd consider using auth_request in the first place.

https://nginx.org/en/docs/http/ngx_http_auth_request_module.html

To me the advantage is architectural: I would not have to specify which parameters of the request are considered or how they are processed. The disadvantage is semantic: it returns 403 instead of 429. But the original article returns 403 anyway.

Also, regarding rate limiting by IP: I think it should be done at 10x-100x the single-user limit, just as a first line of defense. Nginx rate limiting also has a notion of burst, which helps filter out "smart" crawlers that, unlike users, send requests for hours.
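A rough sketch of that wiring in nginx (config fragments only; the upstream name and the limiter service's address are placeholders):

    # Coarse per-IP limit as a first line of defense, with a burst allowance
    # so short spikes from real users are not rejected outright.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=50r/s;

    server {
        location /api/ {
            limit_req zone=perip burst=100 nodelay;
            auth_request /ratelimit;      # subrequest decides: 2xx = allow, 401/403 = deny
            proxy_pass http://app_backend;
        }

        location = /ratelimit {
            internal;
            proxy_pass http://127.0.0.1:8081/check;   # your rate-limiter service
            proxy_pass_request_body off;              # the limiter only needs headers
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }
    }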
ddorian43 about 6 years ago

A more efficient way (but with no histogram) would be a native Redis module written in Rust: https://github.com/brandur/redis-cell
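If that module is loaded into the Redis server, calling it from an application might look roughly like this (redis-py shown; the key name and numbers are examples, and the argument/reply layout should be checked against the redis-cell README):

    import redis

    r = redis.Redis()

    def allowed(user_id: str) -> bool:
        # CL.THROTTLE <key> <max_burst> <count per period> <period> [<quantity>]
        # e.g. roughly 30 requests per 60 seconds with extra burst capacity of 15.
        limited, _limit, _remaining, _retry_after, _reset_after = r.execute_command(
            "CL.THROTTLE", f"user:{user_id}", 15, 30, 60, 1
        )
        return limited == 0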
chmod775 about 6 years ago

Probably better to use a Redis hash ("map") instead of multiple keys. Redis will store these very efficiently too if you only have a few keys within it.
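A minimal sketch of that layout with redis-py (key and field names are made up): one hash per client with a field per endpoint and time bucket, instead of a separate Redis key for every counter. Note that the TTL applies to the whole hash rather than to individual fields, which is one trade-off versus separate keys.

    import time
    import redis

    r = redis.Redis()

    def hit(client_id: str, endpoint: str, window_seconds: int = 60) -> int:
        # The field name includes the current time bucket so each window starts fresh.
        bucket = int(time.time()) // window_seconds
        key = f"rl:{client_id}"
        field = f"{endpoint}:{bucket}"
        pipe = r.pipeline()
        pipe.hincrby(key, field, 1)
        pipe.expire(key, window_seconds * 2)   # TTL covers the whole hash
        count, _ = pipe.execute()
        return int(count)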