> Here are recommended naming practices:<p>Recommended by whom, based on what?<p>Everything in this article seems to be blogspam, with zero links, sources, or rationales for the decisions provided.
Regarding streaming replication, these days you might also want to consider a “disaggregated storage-compute” setup, like AWS Aurora or Neon (which I believe is open source).<p>Re partial indexes - those are great, though you have to be careful to keep them updated as your queries evolve. E.g. if you have a partial index over a set of enum values, you should check it, and potentially rebuild it, whenever you add a new enum variant.
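To illustrate the partial-index pitfall concretely (table and enum names here are made up for the example):<p><pre><code>  create type order_status as enum ('pending', 'shipped', 'delivered');
  create table orders (id bigint primary key, status order_status not null);

  -- Partial index covering only the "active" statuses:
  create index orders_active_idx on orders (id)
    where status in ('pending', 'shipped');

  -- Later you add another "active" variant...
  alter type order_status add value 'backordered';
  -- ...but queries filtering on 'backordered' will NOT be served by
  -- orders_active_idx until you recreate it with the new value included.
</code></pre>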
The access control portion is not clear to me: where is `current_user_organization_id()` reading from? We tried using `set_config` <a href="https://www.postgresql.org/docs/current/functions-admin.html" rel="nofollow">https://www.postgresql.org/docs/current/functions-admin.html</a> and it was incredibly slow; it was not designed for how we were using it, which involved connection pooling with each query running set_config for a different "current_user_id".
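For reference, the pattern in question presumably looks something like the sketch below (the function name comes from the article; the table and setting names are guesses):<p><pre><code>  -- Per request, the app sets a custom variable; the third argument
  -- (is_local = true) scopes it to the current transaction only:
  select set_config('app.current_user_id', '42', true);

  -- A helper like the article's can then read it back:
  create function current_user_organization_id() returns bigint as $$
    select organization_id from users
    where id = current_setting('app.current_user_id')::bigint
  $$ language sql stable;
</code></pre><p>Under transaction-level pooling the is_local flag matters a lot: a session-level setting can leak across clients sharing a pooled connection, while a transaction-local one has to be re-issued in every transaction, which adds a round trip per query.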
You will find major performance improvements by using `include` with your indexes. For instance, on something trivial like counting votes on a blog post:<p><pre><code> create table post_vote (
post_id bigint not null references posts (id),
user_id bigint not null references users (id),
amount smallint not null,
primary key (post_id, user_id) include (amount)
);
select sum(amount) from post_vote where post_id = :'post_id';
</code></pre>
Upvotes are 1, downvotes are -1. This will be a nearly instantaneous index-only scan summing the amount values.
I'm amazed that this has reached the almost-top of HN.<p>It's a very confused article that (IMO) is AI slop.<p>1. Naming conventions are a weird topic to start with, but okay. For the most part you don't need to worry about this for PKs, FKs, indices, etc. — PG will generate names for those automatically with the correct syntax.<p>2. Performance optimization: yes, indices are great. But don't name them manually; let PG name them for you. Also, where possible _always_ create the index concurrently, which avoids locking the table against writes. Important if you have any kind of scale.<p>3. Security is a bit of a weird jump, as you've gone from an app-tier concern to PG management concerns. But I would say RLS isn't worth it. The best security is to tightly control what can read and write, and to point your reads at a read-only replica.<p>4. pg_dump was never meant to be a backup method; see <a href="https://www.postgresql.org/message-id/flat/70b48475-7706-4268-990d-fd522b038d96%40eisentraut.org" rel="nofollow">https://www.postgresql.org/message-id/flat/70b48475-7706-426...</a><p>5. You don't need to schedule VACUUM or ANALYZE. PG has the AUTO version of both.<p>6. Jesus, never add a DEFAULT value on a large table in a way that causes a table rewrite — that will cause you downtime. And don't use varchar(n): <a href="https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use_char.28n.29" rel="nofollow">https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use...</a><p>This is an awful article.
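On point 2, for anyone unfamiliar, the difference is just one keyword (table and column names here are invented):<p><pre><code>  -- Plain CREATE INDEX takes a lock that blocks writes for the
  -- entire duration of the build:
  create index on events (created_at);

  -- The concurrent form lets reads and writes continue, at the cost
  -- of a slower build. Note it cannot run inside a transaction block,
  -- and a failed build leaves an INVALID index behind to clean up.
  create index concurrently on events (created_at);
</code></pre>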