A distributed in-process cache can make things very fast. See Vimeo's video metadata caching with groupcache: https://medium.com/vimeo-engineering-blog/video-metadata-caching-at-vimeo-a54b25f0b304

Or in-process distributed rate limiting: https://github.com/mailgun/gubernator

From https://github.com/golang/groupcache:

groupcache is in production use by dl.google.com (its original user), parts of Blogger, parts of Google Code, parts of Google Fiber, parts of Google production monitoring systems, etc.

There was also a paper by Google that I can't find right now.

Edit: Found it: https://blog.acolyer.org/2019/06/24/fast-key-value-stores/
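Roughly, the usage looks like this. A minimal sketch against the github.com/golang/groupcache API (recent versions of the package take a context.Context; older ones used groupcache.Context) -- the peer addresses, group name, and slowDB loader are made-up placeholders:

    package main

    import (
        "context"
        "fmt"
        "log"
        "net/http"

        "github.com/golang/groupcache"
    )

    // slowDB stands in for the real backing store you'd hit on a cache miss.
    func slowDB(key string) string { return "value-for-" + key }

    func main() {
        self := "http://localhost:8080" // this node's address (placeholder)

        // Every process embeds the cache; the HTTP pool shards keys across
        // peers so each key is loaded once and then served from its owner.
        pool := groupcache.NewHTTPPool(self)
        pool.Set(self, "http://localhost:8081", "http://localhost:8082")

        // 64 MB per-process cache; the getter runs only on a miss, and
        // concurrent misses for the same key are collapsed into one load.
        videos := groupcache.NewGroup("videos", 64<<20, groupcache.GetterFunc(
            func(_ context.Context, key string, dest groupcache.Sink) error {
                return dest.SetString(slowDB(key))
            }))

        // HTTPPool implements http.Handler and serves peer cache requests.
        go func() { log.Fatal(http.ListenAndServe(":8080", pool)) }()

        var val string
        err := videos.Get(context.Background(), "video:123", groupcache.StringSink(&val))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(val)
    }

The win over a remote cache like Redis is that reads for keys your node owns (or has cached hot) never leave the process, and concurrent misses for the same key are de-duplicated into a single load of the backing store.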