How to get the most out of Postgres memory settings

272 points by samaysharma 11 months ago

8 comments

sgarland 11 months ago
Aside from these (most of which have reasonably sane defaults with RDS / Aurora), the biggest thing you can do to help the existing RAM you have is to stop making stupid indexing choices.

You probably don’t need (or want) an index on every column.

You shouldn’t index a UUIDv4 column if you can ever help it (and by extension, you shouldn’t use them as a PK).

Finally, you almost certainly need to REINDEX what you have periodically. I’ve seen indices shrink by 400%, giving back GB of RAM. You want a quick win for speed with zero cost? Gain back enough RAM to have your entire working set cached.
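
As a rough sketch of the kind of check this describes (index and table names here are hypothetical, and REINDEX ... CONCURRENTLY needs PostgreSQL 12+):

  -- Find the largest indexes and how often they are actually scanned
  SELECT indexrelid::regclass AS index_name,
         pg_size_pretty(pg_relation_size(indexrelid::regclass)) AS index_size,
         idx_scan
  FROM pg_stat_user_indexes
  ORDER BY pg_relation_size(indexrelid::regclass) DESC
  LIMIT 20;

  -- Rebuild a bloated index without blocking writes (hypothetical index name)
  REINDEX INDEX CONCURRENTLY orders_created_at_idx;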
rtp4me 11 months ago
For our production PGSQL databases, we use a combination of PGTuner[0] to help estimate RAM requirements and PGHero[1] to get a live view of the running DB. Furthermore, we use ZFS with the built-in compression to save disk space. Together, these three utilities help keep our DBs running very well.

[0] https://pgtune.leopard.in.ua

[1] https://github.com/ankane/pghero
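
PGHero surfaces metrics like the buffer cache hit rate; for illustration, a roughly equivalent query against the standard statistics views (heap reads on user tables only, so it only approximates what the tool reports):

  -- Share of table block reads served from shared_buffers
  SELECT sum(heap_blks_hit)::numeric
         / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS cache_hit_ratio
  FROM pg_statio_user_tables;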
limaoscarjuliet 11 months ago
Do not forget huge pages configuration in the kernel - Postgres will use them and it will save a lot of RAM for large databases, esp. with many connections.

One way to configure it is to keep adding nr_hugepages to the kernel, restarting PG and running your workload until you see HugePages_Free growing.

E.g. sysctl.d/10-hugepages.conf:

  vm.nr_hugepages=20000

See how these are used:

  [root@potato ~]# cat /proc/meminfo | grep -i huge
  AnonHugePages:   4411392 kB
  ShmemHugePages:        0 kB
  FileHugePages:         0 kB
  HugePages_Total:   20000
  HugePages_Free:     3103
  HugePages_Rsvd:      143
  HugePages_Surp:        0
  Hugepagesize:       2048 kB
  Hugetlb:        40960000 kB
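
On the PostgreSQL side, the relevant settings can be checked from psql; a small sketch (SHOW shared_memory_size_in_huge_pages needs PostgreSQL 15+, and ALTER SYSTEM assumes superuser plus a restart):

  -- Is this instance configured to use huge pages? ('try' is the default)
  SHOW huge_pages;
  -- How many huge pages the server's shared memory would need (PostgreSQL 15+)
  SHOW shared_memory_size_in_huge_pages;
  -- Fail at startup instead of silently falling back if huge pages are missing
  ALTER SYSTEM SET huge_pages = 'on';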
tiffanyh 11 months ago
I've always found that the config generator below gets you 90% of the optimizations you'll likely need.

https://pgtune.leopard.in.ua
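
For a sense of the settings it targets, illustrative values for a hypothetical 16 GB dedicated box (not a recommendation; the generator tailors these to workload type and connection count):

  -- Values like these can be applied via ALTER SYSTEM; shared_buffers needs a restart
  ALTER SYSTEM SET shared_buffers = '4GB';          -- roughly 25% of RAM
  ALTER SYSTEM SET effective_cache_size = '12GB';   -- planner hint, not an allocation
  ALTER SYSTEM SET work_mem = '16MB';               -- per sort/hash node, per connection
  ALTER SYSTEM SET maintenance_work_mem = '1GB';    -- VACUUM, CREATE INDEX, etc.
  SELECT pg_reload_conf();                          -- picks up the reloadable ones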
lazyant 11 months ago
I find the "pg connections take a lot of memory" claim to be mostly a myth. The number floating around is 10-20 MB per connection, but it looks like it's more like 2 MB per connection: https://blog.anarazel.de/2020/10/07/measuring-the-memory-overhead-of-a-postgres-connection/
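
The linked post measures the real per-process overhead from the OS side; from within the database, PostgreSQL 14+ can at least show what the current backend has allocated (which understates the full footprint):

  -- Memory allocated by the backend running this query (PostgreSQL 14+)
  SELECT pg_size_pretty(sum(total_bytes)) AS allocated,
         pg_size_pretty(sum(used_bytes))  AS used
  FROM pg_backend_memory_contexts;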
nextaccountic 11 months ago
Why doesn't Postgres autotune those settings? Determining this experimentally seems to be very fragile.
mannyv 11 months ago
One other sneaky way to do this is to get the parameters from an AWS RDS instance of Postgres that matches your server and use those. It's general-purpose, but it's a good starting point... and probably much better than the default Postgres settings.
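
One way to pull those out of a running RDS instance for comparison is to list every setting that isn't at its compiled-in default; a sketch (RDS also applies some values through its parameter groups):

  -- Settings changed from the built-in defaults, with where they came from
  SELECT name, setting, unit, source
  FROM pg_settings
  WHERE source <> 'default'
  ORDER BY name;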
rc_mob 11 months ago
Well... that was very detailed. Will bookmark for when I'm ready to optimize.