These changes look fantastic.<p>If I may hijack the thread with some more general complaints though, I wish the Postgres team would someday prioritize migrations. Like make it easier to make all kinds of DB changes on a live DB, make it easier to upgrade between Postgres versions with zero (or low) downtime, etc etc.<p>Warnings when the migration you're about to do is likely to take ages because for some reason it's going to lock the entire table, instant column aliases to make renames easier, instant column aliases with runtime typecasts to make type migrations easier, etc etc etc. All this stuff is currently extremely painful for, afaict, no good reason (other than "nobody coded it", which is of course a great reason in OSS land).<p>I feel like there's a certain level of Stockholm syndrome in the sense that to PG experts, these things aren't that painful anymore because they know all the pitfalls and gotchas and it's part of why they're such valued engineers.
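For concreteness, here's the kind of footgun being described, sketched against a hypothetical <code>users</code> table (the table, column, and constraint names are made up for illustration):<p><pre><code>-- This takes an ACCESS EXCLUSIVE lock, but is fast since PG 11
-- (the default no longer forces a table rewrite):
ALTER TABLE users ADD COLUMN flags integer NOT NULL DEFAULT 0;

-- This rewrites the whole table while holding the lock -- the
-- "going to take ages" case with no warning up front:
ALTER TABLE users ALTER COLUMN flags TYPE bigint;

-- The safer patterns exist, but you have to know them:
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);

ALTER TABLE users ADD CONSTRAINT users_flags_nonneg
    CHECK (flags >= 0) NOT VALID;          -- brief lock only
ALTER TABLE users VALIDATE CONSTRAINT users_flags_nonneg;  -- weaker lock
</code></pre>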
PostgreSQL is one of the most powerful and reliable pieces of software I've seen run at large scale, major kudos to all the maintainers for the improvements that keep being added.<p>> PostgreSQL 14 extends its performance gains to the vacuuming system, including optimizations for reducing overhead from B-Trees. This release also adds a vacuum "emergency mode" that is designed to prevent transaction ID wraparound<p>Dealing with transaction ID wraparounds in Postgres was one of the most daunting but fun experiences for me as a young SRE. Each time a transaction modifies rows in a PG database, it consumes a transaction ID from a counter. This counter is stored as a 32-bit integer and it's critical to the MVCC transaction semantics - changes made by a transaction with a higher ID should not be visible to a transaction with a lower ID. If the value hits 2 billion and wraps around, disaster strikes as past transactions now appear to be in the future. If PG detects it is reaching that point, it complains loudly and eventually stops further writes to the database to prevent data loss.<p>Postgres avoids getting anywhere close to this situation in almost all deployments by performing routine "auto-vacuums" which mark old row versions as "frozen" so they are no longer using up transaction ID slots. However, there are a couple of situations where vacuum will not be able to clean up enough row versions. In our case, this was due to long-running transactions that held the cleanup horizon back and prevented freezing. Also, it is possible but highly inadvisable to disable auto-vacuums.
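If you want to keep an eye on this yourself, a couple of standard catalog queries cover it (a sketch; thresholds and alerting are up to you):<p><pre><code>-- How close each database is to wraparound (danger at ~2 billion):
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;

-- Long-running transactions that hold back freezing:
SELECT pid, now() - xact_start AS duration, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;
</code></pre>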
Here is a postmortem from Sentry who had to deal with this leading to downtime: <a href="https://blog.sentry.io/2015/07/23/transaction-id-wraparound-in-postgres" rel="nofollow">https://blog.sentry.io/2015/07/23/transaction-id-wraparound-...</a><p>It looks like the new vacuum "emergency mode" functionality starts vacuuming more aggressively when getting closer to the wraparound event, and as with every PG feature highly granular settings are exposed to tweak this behaviour (<a href="https://www.postgresql.org/about/featurematrix/detail/360/" rel="nofollow">https://www.postgresql.org/about/featurematrix/detail/360/</a>)
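The knob behind the new behaviour is <code>vacuum_failsafe_age</code> (plus a multixact sibling); this is a sketch of how you'd inspect and lower it to trigger the failsafe earlier:<p><pre><code>-- Default is 1.6 billion, comfortably before the 2 billion cliff:
SHOW vacuum_failsafe_age;

-- Trigger the aggressive failsafe earlier (value is illustrative):
ALTER SYSTEM SET vacuum_failsafe_age = '1200000000';
SELECT pg_reload_conf();
</code></pre>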
Once again, thanks to all the contributors that provided these awesome new features, translations and documentation.<p>It's amazing what improvements we can get through public collaboration.
If you want to test the new features on a Mac, we've just uploaded a new release of Postgres.app: <a href="https://postgresapp.com/downloads.html" rel="nofollow">https://postgresapp.com/downloads.html</a>
I know this isn't even a big enough deal to mention in the news release, but I am massively excited about the new multirange data types. I work with spectrum licensing and range data types are a godsend (for representing spectrum ranges that spectrum licenses grant). However, there are so many scenarios where you want to treat multiple ranges like a single entity (say, for example, an uplink channel and a downlink channel in an FDD band). And there are certain operations like range differences (e.g. '[10,100)' - '[50,60)'), that aren't possible without multirange support. For this, I am incredibly grateful.<p>Also great is the parallel query support for materialized views, connection scalability, query pipelining, and jsonb accessor syntax.
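A quick sketch of what this looks like in practice (the FDD frequencies are made-up illustrative values, in kHz):<p><pre><code>-- An uplink/downlink channel pair modeled as one multirange value:
SELECT int4multirange(int4range(700000, 710000),
                      int4range(730000, 740000)) AS fdd_pair;

-- Range difference, previously impossible with plain ranges because
-- the result can have a hole in the middle:
SELECT '{[10,100)}'::int4multirange - '{[50,60)}'::int4multirange;
-- → {[10,50),[60,100)}
</code></pre>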
I converted from MySQL (before the whole MariaDB fork), and I've been happier with every new version. My biggest moment of joy was JSONB and it keeps getting better. Can we please make the connections lighter so that I don't have to use stuff like pgbouncer in the middle? I would love to see that in future versions.
PostgreSQL is one of those tools I know I can always rely on for a new use-case. There are very few cases where it can't do exactly what I need (large scale vector search/retrieval).<p>Congrats on the 14.0 release.<p>The pace of open source has me wondering what we'll be seeing 50 years from now.
Fantastic piece of software.
The only major missing feature that I can think of is
Automatic Incremental Materialized View Updates.
I'm hoping that this good work in progress makes it to v15 -
<a href="https://yugonagata-pgsql.blogspot.com/2021/06/implementing-incremental-view.html" rel="nofollow">https://yugonagata-pgsql.blogspot.com/2021/06/implementing-i...</a>
This looks like an amazing release! Here are my favorite features in order:<p>• Up to 2x speed up when using many DB connections
• ANALYZE runs significantly faster. This should make PG version upgrades much easier.
• Reduced index bloat. This has been improving in each of the last few major releases.
• JSON subscript syntax, like column['key']
• date_bin function to group timestamps to an interval, like every 15 minutes.
• VACUUM "emergency mode" to better prevent transaction ID wraparound
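The subscript and date_bin features in the list above look like this (sketched against a hypothetical <code>events</code> table with a jsonb <code>payload</code> column):<p><pre><code>-- JSON subscripting, new in 14 -- reads and writes:
SELECT payload['user']['id'] FROM events;
UPDATE events SET payload['seen'] = 'true';

-- date_bin(stride, source, origin): bucket timestamps
-- into 15-minute intervals aligned to the given origin:
SELECT date_bin('15 minutes', created_at, TIMESTAMP '2021-01-01')
FROM events;
</code></pre>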
Somewhat related, but does anybody have suggestions for a quality PostgreSQL desktop GUI tool, akin to pgAdmin3? Not pgAdmin 4, whose usability is vastly inferior.<p>DBeaver is adequate, but not really built with Postgres in mind.
If you’d like to try out PostgreSQL in a nice friendly hosted fashion then I highly recommend supabase.io<p>I came from MySQL and so I’m still just excited about the basic stuff like authentication and policies, but I really like how they’ve also integrated storage with the same permissions and auth too.<p>It’s also open source so if you want to just host it yourself you still can.<p>And did I mention they’ll do your auth for you?
Any suggestions to learn and go deep in PostgreSQL for someone who worked mostly on NoSQL (MongoDB)?<p>From the few days I have explored it, it is absolutely incredible, so congratulations for the work done and good luck on keeping the quality so high!
I'm trying to understand whether, with v14, I will be able to connect Debezium to a "slave" node instead of the "master" in order to read the WAL, but I can't figure it out.
Can someone help me with this?
Postgres is my bread and butter for pretty much every project. Congratulations to the team, you work on and continue to improve one of the most amazing pieces of software ever created.
Congratulations and thanks to all involved! Do I understand correctly that, at this time, while PG has data sharding and partitioning capabilities, it does not offer some related features found in Citus Open Source (shard rebalancer, distributed SQL engine and transactions) and in Citus on Azure aka Hyperscale (HA and streaming replication, tenant isolation - I'm especially interested in the latter one)? Are there any plans for PG to move toward this direction?
Disappointed by the release. No big changes. Still using processes instead of threads for connections. No built-in sharding/high availability (like SQL Server Always On Availability Groups). No good way to pass session variables to triggers (like username). No scheduled tasks like in MySQL. Temporal tables are still not supported 10 years after the spec was finalized.
Can someone who uses Babelfish for PostgreSQL compatibility with SQL Server commands please describe their experience, success, hurdles, etc. We would move to PostgreSQL if made easier by such a tool. Thanks!
The query parallelism for foreign data wrappers bring PostgreSQL one step closer to being the one system that can tie all your different data sources together into one source.<p>Really exciting stuff.
Here we are, at a fantastic version 14, and still no sign of a MySQL AB-like company able to provide support and extensions to a great piece of open source software. There are a few small ones, yes, but nothing at the billion dollar size.<p>I am still unable to understand why.