TechEcho

Ask HN: Has anyone scaled Postgres to over a billion rows?

2 points | by anindha | 8 months ago
What server did you use? How did you do it? I am trying to avoid using RDS or something similar since the costs will be very high.

3 comments

phamilton | 8 months ago

Yes. Running multiple multi-billion-row Postgres DBs on AWS Aurora.

The number of rows isn't that consequential, honestly. B-trees are very fast at navigating deep hierarchies. It's the volume and complexity of traffic that matters.

On AWS Aurora we can run 10 readers to handle our peak midday traffic (10M+ daily active users).

I wouldn't shy away from doing billion-row DBs on-prem if that's what the finances dictate. Postgres can handle it. But the new wave of scalable Postgres (AlloyDB, Aurora, Neon) will make it easy.
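The claim above that row count barely matters follows from how shallow B-trees stay as they grow: height scales with the logarithm of the row count, so a billion-row index is only about one level deeper than a million-row one. A back-of-envelope sketch (the fanout of 300 is an assumed figure for Postgres's 8 KB pages with small keys; real fanout depends on key size):

```python
import math

def btree_height(n_rows: int, fanout: int = 300) -> int:
    """Estimate the number of levels in a B-tree index.

    fanout=300 is a rough assumption for 8 KB Postgres pages
    holding small integer keys; wider keys mean lower fanout.
    """
    if n_rows <= 1:
        return 1
    # Each level multiplies reachable entries by the fanout,
    # so height grows logarithmically with row count.
    return math.ceil(math.log(n_rows, fanout))

for rows in (10**6, 10**9, 10**12):
    print(f"{rows:>16,} rows -> ~{btree_height(rows)} levels")
```

Under these assumptions an index lookup on a billion rows touches only about four pages, which is why traffic volume and query complexity, not table size, tend to be the real bottleneck.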
faebi | 8 months ago

Yes, openstreetmap.org has over 9 billion rows in a table and documents how they do it. See https://wiki.openstreetmap.org/wiki/Stats
t90fan | 8 months ago

We had a Postgres database in the 20 TB ballpark (so, I would guess, many tens or even hundreds of billions of rows) at a place I worked around 2015, hosted on-prem. I don't recall it causing too much hassle. The main thing I remember is that the server had a very large amount of RAM (512 GB, which was loads and loads back then) and lots of cores for the time (something like 16), but was otherwise a fairly standard piece of HP kit in the ~£50k ballpark.
Comment #41550121 not loaded.