Memory Size Matters to PostgreSQL

60 points by PaulHoule about 2 months ago

5 comments

rtp4me · about 2 months ago
Interesting read, but I find it very lacking. PGSQL memory consists of lots of parts: shared_buffers, work_mem, maintenance_work_mem, OS cache size, etc. For those who are interested in a deeper PostgreSQL tuning tutorial, I suggest this [1], as it highlights a number of important tuning options, specifically for PGSQL 17. Once your DB has been running for a while, run the postgresql-tuner app [2] to see if you have correctly applied RAM settings. The repo has been around for a long time, but I find the recommendations (especially the estimated RAM usage) really good.

[1] https://www.pgedge.com/blog/postgresql-performance-tuning

[2] https://github.com/jfcoz/postgresqltuner
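For readers unfamiliar with the knobs rtp4me lists: they are all plain settings in postgresql.conf. A minimal sketch, assuming a dedicated 64 GB server (values are illustrative, not recommendations):

    # postgresql.conf: memory-related settings (illustrative values only)
    shared_buffers = 16GB          # PostgreSQL's own shared page cache
    work_mem = 64MB                # per sort/hash node, per backend; multiplies under concurrency
    maintenance_work_mem = 2GB     # used by VACUUM, CREATE INDEX, and similar maintenance work
    effective_cache_size = 48GB    # planner hint: shared_buffers plus expected OS cache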
wmf · about 2 months ago
I don't fully understand this article, but this point stuck out as probably fractally wrong:

"Modern DDR4 memories have a theoretical throughput of 25-30 GB/s. This is more realistically ranging between 5-10 GB/s. With a 100 GB full packed shared buffer the time required to perform one single full scan ranges between 3 and 20 seconds."

Obviously DDR5 now exists, and servers have multiple memory channels, giving total memory bandwidth more like 200-500 GB/s. An old rule of thumb is that a computer should be able to read its entire memory in one second, although these days it may be more like 1-4 seconds.

The clock replacement algorithm only needs to read metadata, so a full sweep of the metadata for 100 GB of buffers should be milliseconds, not seconds. (If they're talking about a table scan instead, then obviously reading from buffers is going to be faster than disk.)
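A quick back-of-envelope check on those numbers, using the article's 100 GB buffer pool:

    100 GB /  25 GB/s  ≈ 4 s      (article's theoretical single-channel DDR4 figure)
    100 GB /   5 GB/s  ≈ 20 s     (article's pessimistic "realistic" figure)
    100 GB / 400 GB/s  ≈ 0.25 s   (midpoint of the 200-500 GB/s multi-channel estimate above)

And the clock sweep reads only the buffer descriptors, not the pages themselves; assuming roughly 64 bytes of header per 8 kB page, that is on the order of 1 GB of metadata for 100 GB of buffers, which fits the milliseconds claim.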
billvsme · about 2 months ago
I often use pgtune to set the memory size; just select the correct information for your machine.

https://pgtune.leopard.in.ua/
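pgtune's output is just a handful of settings to paste into postgresql.conf or apply with ALTER SYSTEM; an illustrative sketch of what it might suggest for a 64 GB web-application box (actual output depends on the inputs you select):

    ALTER SYSTEM SET shared_buffers = '16GB';        -- takes effect only after a server restart
    ALTER SYSTEM SET effective_cache_size = '48GB';
    ALTER SYSTEM SET maintenance_work_mem = '2GB';
    ALTER SYSTEM SET work_mem = '20MB';
    SELECT pg_reload_conf();                         -- applies the reloadable settings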
hyperman1 · about 2 months ago
I love working with Postgres, but this is one area where it needs a lot more love. If I deploy MongoDB, it takes the whole machine and does its thing. With Postgres, you have extremely minimalistic defaults, and then you have to turn a zillion knobs before it does what it should.

For beginners, it is a huge footgun that makes people assume bad performance while evaluating. For the experienced PG admin, it is an annoyance and a time waster. Oh, the VM just gained 64 GB of RAM? PG will sit there and stare at it.

Apart from that, basically everyone starts with the PG guidelines or a generated template (25% for this, 25% divided by number of sessions for that). Then you keep wondering how much performance you left on the table.
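The template arithmetic hyperman1 alludes to, written out with assumed numbers (64 GB of RAM, 200 sessions):

    shared_buffers = 0.25 × 64 GB        = 16 GB   ("25% for this")
    work_mem       = 0.25 × 64 GB / 200  = 80 MB   ("25% divided by number of sessions")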
itsthecourier · about 2 months ago
for the rest of us:

https://pgtune.leopard.in.ua/