Yes. Running multiple multi-billion row Postgres DBs on AWS Aurora.<p>The number of rows isn't that consequential, honestly. B-trees are very fast at navigating deep hierarchies. It's the volume and complexity of traffic that matters.<p>On AWS Aurora we can run 10 readers to handle our peak midday traffic (10M+ daily active users).<p>I wouldn't shy away from running billion-row DBs on-prem if that's what the finances dictate. Postgres can handle it. But the new wave of scalable Postgres offerings (AlloyDB, Aurora, Neon) makes it even easier.
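To make that concrete: B-tree depth grows with the logarithm of the row count, so even a multi-billion-row table's primary key index is typically only 4 or 5 levels deep, i.e. a handful of page reads per lookup. If you want to check it yourself, here's a minimal sketch using the pageinspect extension (the index name is hypothetical, and pageinspect may not be available on managed services like Aurora):<p><pre><code>-- Install the inspection extension (superuser only) and read the
-- B-tree metapage; the "level" column is the height of the tree,
-- i.e. the number of pages visited descending from root to a leaf.
CREATE EXTENSION IF NOT EXISTS pageinspect;
SELECT * FROM bt_metap('events_pkey');
</code></pre>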
Yes, openstreetmap.org has over 9 billion rows in a table and documents how they do it. See <a href="https://wiki.openstreetmap.org/wiki/Stats" rel="nofollow">https://wiki.openstreetmap.org/wiki/Stats</a>
We had a Postgres database in the 20 TB ballpark (so I'd guess many tens or even hundreds of billions of rows) at a place I worked around 2015, hosted on-prem. I don't recall it causing them much hassle. The main thing I remember is that the server had a very large amount of RAM (512 GB, which was loads and loads back then) and lots of cores for the time, something like 16, but was otherwise a fairly standard piece of HP kit in the ~£50k ballpark.