The article mentions they took some inspiration from a Stripe blogpost/gist; for convenience, here's the direct link to the relevant Lua code (it helps to compare what is interesting/unique about GitHub's approach):

https://gist.github.com/ptarjan/e38f45f2dfe601419ca3af937fff574d#file-request_rate_limiter-lua

(Disclaimer: I worked on the rate limiter at Stripe a bit, but can't remember how similar the 2019-era code was to what you see there; I think it's broadly similar.)
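For readers who don't want to click through: the gist implements a token-bucket limiter as a Redis Lua script. Here is a minimal sketch of that general shape, not the gist's actual code; the key layout and ARGV order below are my own assumptions:

```lua
-- Hypothetical token-bucket sketch, not the Stripe gist verbatim.
-- KEYS[1] = token count, KEYS[2] = last-refill timestamp
-- ARGV[1] = refill rate (tokens/sec), ARGV[2] = capacity,
-- ARGV[3] = current unix time, ARGV[4] = tokens requested
local rate      = tonumber(ARGV[1])
local capacity  = tonumber(ARGV[2])
local now       = tonumber(ARGV[3])
local requested = tonumber(ARGV[4]) or 1

local tokens = tonumber(redis.call('GET', KEYS[1])) or capacity
local last   = tonumber(redis.call('GET', KEYS[2])) or now

-- Refill in proportion to elapsed time, capped at capacity.
tokens = math.min(capacity, tokens + math.max(0, now - last) * rate)

local allowed = tokens >= requested
if allowed then
  tokens = tokens - requested
end

-- Let idle buckets expire once they'd be full again anyway.
local ttl = math.ceil(capacity / rate) * 2
redis.call('SET', KEYS[1], tokens, 'EX', ttl)
redis.call('SET', KEYS[2], now, 'EX', ttl)

return allowed and 1 or 0
```

The interesting part to compare against GitHub's approach is that everything above runs atomically server-side against keys declared up front in KEYS.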
This is strange to me. Did GitHub do client-based sharding because they were trying to get around the requirement to enumerate keys up front in Lua scripts? Why didn't they use the cluster's ability to route requests to the appropriate shard?

As-is, they could have simply passed `rate_limit_key+':exp'` as a second KEYS entry, which would have declared that key up front for the operation. They were deriving keys from a priori information, so they could just as easily have foregone the client-side complexity and run the Redis cluster in a sharded configuration.

I wonder what sort of performance impact this had (the page doesn't mention it). Client-side sharding almost certainly increased the codebase's complexity, and it doesn't seem like they measured any real impact from doing it this way (or maybe they just chose not to report it).
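To make that concrete, here's an illustrative fixed-window variant where both the counter and its expiry bookkeeping key are declared in KEYS, so a cluster-aware client can route the whole script server-side. The key names, hash-tag scheme, and ARGV layout are my own assumptions, not GitHub's actual code; in cluster mode you'd hash-tag the keys (e.g. `{rl:user:123}` and `{rl:user:123}:exp`) so they map to the same slot:

```lua
-- Illustrative sketch only; not GitHub's script. Assumes:
-- KEYS[1] = counter key, e.g. "{rl:user:123}"
-- KEYS[2] = expiry bookkeeping key, e.g. "{rl:user:123}:exp"
-- ARGV[1] = limit, ARGV[2] = window seconds, ARGV[3] = current unix time
local limit  = tonumber(ARGV[1])
local window = tonumber(ARGV[2])
local now    = tonumber(ARGV[3])

local count  = tonumber(redis.call('GET', KEYS[1])) or 0
local resets = tonumber(redis.call('GET', KEYS[2])) or (now + window)

-- Window rolled over: start a fresh one.
if now >= resets then
  count  = 0
  resets = now + window
end

if count >= limit then
  return {0, resets - now}          -- rejected; seconds until reset
end

count = count + 1
redis.call('SET', KEYS[1], count, 'EX', resets - now)
redis.call('SET', KEYS[2], resets, 'EX', resets - now)
return {1, limit - count}           -- allowed; remaining quota
```

Because both keys are passed in KEYS and share a hash tag, EVAL satisfies the cluster's same-slot rule and the sharding decision stays on the server rather than in application code.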
We had a saying at my old job: if something’s broken, it’s never Redis. Redis is such a tank in my experience. We set it up. Secured it. And then forgot about it.
I originally thought this article was going to be about John Berryman's proposed Redis rate limiter [0].

[0] http://blog.jnbrymn.com/2021/03/18/estimated-average-recent-request-rate-limiter.html