Is Postgres ever going to allow automatic partitioning?<p>Something like:<p><pre><code> CREATE TABLE logs (
   time TIMESTAMPTZ,
   data JSONB
) PARTITION ON time IN INTERVAL '1 day'
</code></pre>
And it just creates and manages these partitions for every day?<p>If you can already manage partitions like this manually, it feels like the next step is to have it be automatic. Then there's less need to switch to Timescale or ClickHouse or whatever other database as the amount of data you're storing and querying grows. (Yeah, that's a hand-wavy suggestion, but at least you could stick with Postgres for longer.)
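<p>For reference, the manual version already looks pretty close to that with declarative partitioning (Postgres 10+); the table and dates below are just illustrative, and extensions like pg_partman exist to automate the per-day partition creation:<p><pre><code> -- parent table, partitioned by range on the timestamp
 CREATE TABLE logs (
   time TIMESTAMPTZ NOT NULL,
   data JSONB
 ) PARTITION BY RANGE (time);

 -- one partition per day, created by you (or a cron job / pg_partman)
 CREATE TABLE logs_2024_01_01 PARTITION OF logs
   FOR VALUES FROM ('2024-01-01') TO ('2024-01-02');
 CREATE TABLE logs_2024_01_02 PARTITION OF logs
   FOR VALUES FROM ('2024-01-02') TO ('2024-01-03');
</code></pre>So the machinery is all there; what's missing is Postgres creating the daily child tables on its own.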
I am building my own database engine using some data objects (key-value stores) I invented to form columnar-store tables. It delivers some really fast query speeds and analytic features (e.g. pivot tables) that test favorably against Postgres and other RDBMS offerings. (<a href="https://www.youtube.com/watch?v=OVICKCkWMZE" rel="nofollow">https://www.youtube.com/watch?v=OVICKCkWMZE</a>)<p>Partitioning a big table is definitely on my TODO list. How big does a typical table need to grow before partitioning is seen as a 'necessity'? What are some ways current partitioning strategies have made things too difficult?
We use Postgres partitioning quite successfully to handle our customer-based data. Everything is partitioned by customer, and some customers are partitioned further.<p>One gotcha to be careful with: if you run a query that spans multiple partitions, Postgres will scan them all at once, and if your database server isn't sized for it, that will bring it to its knees.<p>Outside of that, really no issues. We also use Timescale quite heavily, which works fantastically as well.
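<p>Concretely, the difference is whether the planner can prune: a filter on the partition key touches only the matching partition, while anything else fans out to all of them. A sketch with a hypothetical table partitioned by customer_id:<p><pre><code> -- pruned to one partition: filters on the partition key
 SELECT * FROM orders
 WHERE customer_id = 42
   AND created_at > now() - interval '7 days';

 -- no filter on the partition key: scans every partition
 SELECT * FROM orders
 WHERE created_at > now() - interval '7 days';
</code></pre>Running EXPLAIN on both makes it obvious; the pruned plan lists only the matching partition instead of one scan node per child table.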