How to get the most out of Postgres memory settings

272 points | by samaysharma | 12 months ago

8 comments

sgarland 12 months ago
Aside from these (most of which have reasonably sane defaults with RDS / Aurora), the biggest thing you can do to help the existing RAM you have is to stop making stupid indexing choices.

You probably don't need (or want) an index on every column.

You shouldn't index a UUIDv4 column if you can ever help it (and by extension, you shouldn't use them as a PK).

Finally, you almost certainly need to REINDEX what you have periodically. I've seen indices shrink by 4x, giving back GB of RAM. You want a quick win for speed with zero cost? Gain back enough RAM to have your entire working set cached.
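A minimal sketch of the cleanup sgarland describes, assuming psql access to the database; the database name mydb and index name my_bloated_idx are hypothetical, and REINDEX CONCURRENTLY requires PostgreSQL 12+:

    # Find indexes that are never scanned (candidates for dropping).
    psql -d mydb -c "
      SELECT indexrelname,
             pg_size_pretty(pg_relation_size(indexrelid)) AS size
      FROM pg_stat_user_indexes
      WHERE idx_scan = 0
      ORDER BY pg_relation_size(indexrelid) DESC;"

    # Rebuild a bloated index without blocking writes.
    psql -d mydb -c "REINDEX INDEX CONCURRENTLY my_bloated_idx;"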
rtp4me 12 months ago
For our production PGSQL databases, we use a combination of PGTuner [0] to help estimate RAM requirements and PGHero [1] to get a live view of the running DB. Furthermore, we use ZFS with the built-in compression to save disk space. Together, these three utilities help keep our DBs running very well.

[0] https://pgtune.leopard.in.ua

[1] https://github.com/ankane/pghero
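For the ZFS part, a short sketch using the standard zfs commands; the dataset name tank/pgdata is hypothetical:

    # Enable transparent lz4 compression on the dataset holding the data dir.
    zfs set compression=lz4 tank/pgdata

    # Check how much space compression is actually saving.
    zfs get compressratio tank/pgdata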
limaoscarjuliet 12 months ago
Do not forget huge pages configuration in the kernel. Postgres will use them, and they will save a lot of RAM for large databases, especially with many connections.

One way to configure them is to keep adding nr_hugepages to the kernel, restarting PG, and running your workload until you see HugePages_Free growing.

E.g., in sysctl.d/10-hugepages.conf:

    vm.nr_hugepages=20000

See how these are used:

    [root@potato ~]# cat /proc/meminfo | grep -i huge
    AnonHugePages:   4411392 kB
    ShmemHugePages:        0 kB
    FileHugePages:         0 kB
    HugePages_Total:   20000
    HugePages_Free:     3103
    HugePages_Rsvd:      143
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    Hugetlb:        40960000 kB
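On PostgreSQL 15+ there is a shortcut for the sizing loop described above: the server can report how many huge pages its shared memory segment needs, so vm.nr_hugepages can be set in one step. A sketch, assuming the server is stopped and $PGDATA points at the data directory:

    # Ask Postgres how many 2 MB huge pages it would need (PG 15+).
    postgres -D "$PGDATA" -C shared_memory_size_in_huge_pages

    # Reserve that many pages, then set huge_pages = on in postgresql.conf
    # so startup fails loudly if the reservation is ever too small.
    sysctl -w vm.nr_hugepages=<value reported above>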
tiffanyh 12 months ago
I've always found that the config generator below gets you 90% of the optimizations you'll likely need.

https://pgtune.leopard.in.ua
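Whatever numbers the generator produces can be applied with ALTER SYSTEM so they persist in postgresql.auto.conf. The values below are hypothetical placeholders, not pgtune's actual output for any machine:

    # Apply generated values (illustrative numbers only).
    psql -c "ALTER SYSTEM SET shared_buffers = '4GB';"        # needs a restart
    psql -c "ALTER SYSTEM SET effective_cache_size = '12GB';" # reloadable
    psql -c "ALTER SYSTEM SET work_mem = '32MB';"             # reloadable
    psql -c "SELECT pg_reload_conf();"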
lazyant 12 months ago
I find the "pg connections take a lot of memory" claim to be mostly a myth. The number floating around is 10-20 MB per connection, but it looks like it's more like 2 MB per connection: https://blog.anarazel.de/2020/10/07/measuring-the-memory-overhead-of-a-postgres-connection/
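One reason the popular figure is inflated is that top's RSS double-counts pages shared with shared_buffers; the kernel's proportional set size (PSS) divides shared pages among the processes touching them. A rough way to check on Linux, with <backend_pid> standing in for a real backend PID:

    # In psql, find the PID of the current backend:
    #   SELECT pg_backend_pid();
    # Then read that process's PSS, which is far lower than its RSS:
    grep '^Pss:' /proc/<backend_pid>/smaps_rollup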
nextaccountic 12 months ago
Why doesn't Postgres autotune those settings? Determining this experimentally seems to be very fragile.
mannyv 12 months ago
One other sneaky way to do this is to get the parameters from an AWS RDS instance of Postgres that matches your server and use those. It's general-purpose, but it's a good starting point...and probably much better than the default Postgres settings.
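A hedged sketch of pulling those values with the AWS CLI; the parameter group name is hypothetical and depends on your engine version, and many values come back as formulas like {DBInstanceClassMemory/32768} rather than literals:

    # Dump the parameters RDS applies to a stock Postgres 16 instance.
    aws rds describe-db-parameters \
        --db-parameter-group-name default.postgres16 \
        --query 'Parameters[].[ParameterName,ParameterValue]' \
        --output table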
rc_mob 12 months ago
Well.. that was very detailed. Will bookmark for when I'm ready to optimize.