The author talks a bit about the architecture of PostgreSQL transactions, touching on lazy transaction ID consumption and vacuuming. Notably, writes consume IDs but reads do not, so the analysis is really about write-heavy workloads.<p>If you want the tl;dr that answers the headline: these IDs will last so long it’s almost not worth quantifying. The calculation is trivial even if you assume ostentatious performance requirements three orders of magnitude beyond the author’s:<p><pre><code> 2^64 / (86,400 * 1,000,000,000) = 213,503.9 days
</code></pre>
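A quick sanity check of that arithmetic in Python (the billion-writes-per-second rate is the same deliberately extreme assumption as above):

```python
# How long until 2^64 transaction IDs run out at an assumed
# one billion ID-consuming writes per second?
ID_SPACE = 2 ** 64
WRITES_PER_SECOND = 1_000_000_000  # deliberately extreme assumption
SECONDS_PER_DAY = 86_400

days = ID_SPACE / (SECONDS_PER_DAY * WRITES_PER_SECOND)
years = days / 365.2425

print(f"{days:,.1f} days (~{years:,.0f} years)")
```

This works out to roughly 585 years, i.e. the better part of a millennium even at that rate.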
The author uses 1,000,000 writes/second; I prefer 1,000,000,000 since it’s more ridiculous. There are 86,400 seconds in a day. It will take you the better part of a millennium (roughly 585 years) to exhaust those IDs, assuming you consume an average of one billion every single second.<p>The author didn’t talk about collisions, but those are worth mentioning because they rule out assigning these IDs randomly instead of incrementally. By the birthday bound, a collision is expected after roughly 2^(n/2) = 2^32 transactions, which at a billion per second is only a few seconds’ worth; sequential assignment sidesteps this entirely.<p>Of course, using 64-bit IDs comes with a nontrivial space increase: each transaction ID stored in a tuple header doubles in size, from 4 bytes to 8.<p>EDIT: My original collision estimate (almost 300 years) was wrong, see corrections. I took (2^n)/2 = 2^(n-1) as the birthday bound instead of 2^(n/2).
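For the collision math, the standard birthday estimate says the expected number of uniform random draws from a space of size N before the first collision is about sqrt(pi/2 * N). A quick numerical sketch, assuming uniformly random 64-bit IDs and the same billion-per-second rate:

```python
import math

# Expected draws before the first collision among uniform random
# 64-bit IDs: sqrt(pi/2 * N), where N = 2^64. This is ~1.25 * 2^32.
N = 2 ** 64
expected_draws = math.sqrt(math.pi / 2 * N)

WRITES_PER_SECOND = 1_000_000_000  # same extreme assumption as above
seconds_to_collision = expected_draws / WRITES_PER_SECOND

print(f"expected collision after ~{expected_draws:.2e} IDs "
      f"(~{seconds_to_collision:.1f} s at 10^9/s)")
```

That comes out to about 5.4 billion IDs, a handful of seconds at this rate, which is why random assignment is a non-starter while sequential assignment lasts centuries.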