Aside from these (most of which have reasonably sane defaults with RDS / Aurora), the biggest thing you can do to make the most of the RAM you already have is to stop making poor indexing choices.

You probably don't need (or want) an index on every column.

You shouldn't index a UUIDv4 column if you can ever help it (and by extension, you shouldn't use UUIDv4s as primary keys): the random key order makes for bloated, cache-unfriendly B-trees.

Finally, you almost certainly need to REINDEX what you have periodically. I've seen indices shrink to a quarter of their former size, giving back gigabytes of RAM. Want a quick, zero-cost speed win? Gain back enough RAM to keep your entire working set cached (a sketch of the relevant commands is below).
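For the REINDEX part, a minimal sketch; the database and index names here are placeholders, and REINDEX ... CONCURRENTLY needs PostgreSQL 12+:

    # Rebuild one bloated index without blocking writes (PostgreSQL 12+).
    # "mydb" and "idx_orders_created_at" are placeholder names.
    psql -d mydb -c "REINDEX INDEX CONCURRENTLY idx_orders_created_at;"

    # List the largest indexes and how often they are actually scanned,
    # to spot candidates for rebuilding or dropping outright.
    psql -d mydb -c "
      SELECT schemaname, relname, indexrelname,
             pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
             idx_scan
      FROM pg_stat_user_indexes
      ORDER BY pg_relation_size(indexrelid) DESC
      LIMIT 20;"

Indexes that are huge but show idx_scan = 0 are prime candidates for dropping entirely, which frees both RAM and write overhead.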
For our production PostgreSQL databases, we use a combination of PGTune[0] to help estimate RAM requirements and PGHero[1] to get a live view of the running DB. We also run on ZFS with its built-in compression to save disk space (a sketch of the relevant settings is below). Together, these three tools keep our DBs running very well.

[0] https://pgtune.leopard.in.ua

[1] https://github.com/ankane/pghero
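A sketch of the ZFS side, assuming a hypothetical dataset named tank/pgdata holding the Postgres data directory:

    # Enable lz4 compression on the dataset holding PGDATA and check the
    # achieved ratio afterwards ("tank/pgdata" is a placeholder name).
    zfs set compression=lz4 tank/pgdata
    zfs get compressratio tank/pgdata

    # Often paired with a smaller recordsize closer to Postgres' 8 kB block
    # size; opinions differ (8K vs 16K vs larger), so benchmark your workload.
    zfs set recordsize=16K tank/pgdata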
Do not forget the kernel's huge pages configuration - Postgres will use them, and they save a lot of RAM for large databases, especially with many connections (the savings come from much smaller per-backend page tables).

One way to configure this is to keep increasing nr_hugepages in the kernel, restarting Postgres and running your workload, until you see HugePages_Free growing (i.e. more pages are reserved than Postgres needs). E.g.:

    # /etc/sysctl.d/10-hugepages.conf
    vm.nr_hugepages=20000

See how these are used:

    [root@potato ~]# cat /proc/meminfo | grep -i huge
    AnonHugePages:   4411392 kB
    ShmemHugePages:        0 kB
    FileHugePages:         0 kB
    HugePages_Total:   20000
    HugePages_Free:     3103
    HugePages_Rsvd:      143
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    Hugetlb:        40960000 kB
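On newer versions you can skip some of the trial and error: PostgreSQL 15+ can report how many huge pages its shared memory needs. A sketch, assuming an example data directory path and that the server is stopped when you run it:

    # Ask Postgres how many huge pages it would need (PostgreSQL 15+,
    # run with the server stopped; the data directory path is an example).
    postgres -D /var/lib/pgsql/data -C shared_memory_size_in_huge_pages

    # Reserve that many (plus headroom), reload sysctl settings, and make
    # Postgres refuse to start if the reservation is too small:
    echo "vm.nr_hugepages=20000" > /etc/sysctl.d/10-hugepages.conf
    sysctl --system
    # postgresql.conf: huge_pages = on   (the default "try" silently falls back)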
I've always found that the config generator below gets you 90% of the optimizations you'll likely need (a rough idea of what it emits is sketched below).

https://pgtune.leopard.in.ua
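For a sense of what it produces: an illustrative postgresql.conf fragment for a hypothetical 64 GB, 16-core dedicated SSD box with an OLTP-ish workload, following the usual rules of thumb (roughly 25% of RAM for shared_buffers, ~75% for effective_cache_size). The exact numbers the generator gives will differ with your inputs.

    # Illustrative values only; generate your own for your hardware/workload.
    shared_buffers = 16GB             # ~25% of RAM
    effective_cache_size = 48GB       # ~75% of RAM (planner hint, not an allocation)
    maintenance_work_mem = 2GB
    work_mem = 16MB                   # per sort/hash node, per connection
    random_page_cost = 1.1            # SSD storage assumed
    effective_io_concurrency = 200    # SSD storage assumed
    max_worker_processes = 16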
I find the "pg connections take a lot of memory" to be mostly a myth. The number floating around is 10 - 20MB per connection but it looks it's more like 2MB per connection <a href="https://blog.anarazel.de/2020/10/07/measuring-the-memory-overhead-of-a-postgres-connection/" rel="nofollow">https://blog.anarazel.de/2020/10/07/measuring-the-memory-ove...</a>
One other sneaky way to do this is to pull the parameters from an AWS RDS Postgres instance that matches your server's size and use those. They're general-purpose, but they're a good starting point, and probably much better than the stock Postgres defaults.
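For example, you can dump a stock RDS parameter group with the AWS CLI; the group name below is just an example, and note that several values come back as formulas in terms of DBInstanceClassMemory rather than literal numbers:

    # Dump the parameters RDS applies for a given engine version.
    # "default.postgres16" is an example parameter group name.
    aws rds describe-db-parameters \
        --db-parameter-group-name default.postgres16 \
        --query 'Parameters[*].[ParameterName,ParameterValue]' \
        --output table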