In my experience tuning postgres, the JIT has rarely been a net win. It is very reliant on postgres' cost estimate, which becomes a problem: the more complex a query is, the more likely the JIT is to help, but also, in general, the worse the cost estimate gets. This often leads to cases where the cost estimate triggers a full JIT'ing of the query, and the time spent JIT'ing and optimizing is longer than the entire runtime of the query with the JIT turned off.

I thought maybe this was because the default operation costs haven't been updated in a long time (they date from before SSDs were a thing), so I tried to optimize them. I wrote a script that used a scipy global optimizer (https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html) to run a set of real-life queries against a copy of my database, with the goal of finding a postgres cost configuration that minimized the total query time. (A rough sketch of that loop is below the numbers.)

This optimization took over a day to run, and it would have taken much longer except that I wrote a function to diff query plans and cache the runtime of each query for identical plans.

Here is what it ended up with:

    cpu_index_tuple_cost = 0.05935355125176216
    cpu_operator_cost = 0.012593074414763324
    cpu_tuple_cost = 0.15786073958355235
    random_page_cost = 0.4529142183444561
    seq_page_cost = 1.163448763641469
    parallel_tuple_cost = 0.18328552089499073
    parallel_setup_cost = 30630.595453975257
    join_collapse_limit = 29.62367337441951
    from_collapse_limit = 18.30069861231032
    geqo_threshold = 20.35752330027858
    geqo_effort = 1.9431285958606317
    min_parallel_table_scan_size = 42.64686041069217
    min_parallel_index_scan_size = 897.8594030984586
    jit_above_cost = 1060139.219983833
    jit_inline_above_cost = 5674002.74709
    jit_optimize_above_cost = 4974481.895365894
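
For flavor, here is roughly what that optimization loop looked like. This is a from-memory sketch rather than the actual script: it assumes psycopg2, a placeholder DSN pointing at a copy of the database, and a hand-collected list of representative queries, and it leaves out the plan-diffing/runtime cache mentioned above. The parameter list and bounds are illustrative, not exhaustive.

    import time

    import psycopg2
    from scipy.optimize import differential_evolution

    DSN = "dbname=mydb_copy"  # placeholder: a copy of the production database
    QUERIES = [
        # SQL text of representative real-life queries goes here, e.g.
        # "SELECT ... FROM orders JOIN customers ON ... WHERE ...",
    ]

    # (GUC name, lower bound, upper bound). The bounds are exactly the kind of
    # artificial absolute guesses discussed below.
    PARAMS = [
        ("seq_page_cost",        0.01,  10.0),
        ("random_page_cost",     0.01,  10.0),
        ("cpu_tuple_cost",       0.001, 1.0),
        ("cpu_index_tuple_cost", 0.001, 1.0),
        ("cpu_operator_cost",    0.001, 1.0),
        ("jit_above_cost",       1e4,   1e7),
    ]

    def total_runtime(x):
        """Objective: total wall-clock time of the query set under cost vector x."""
        conn = psycopg2.connect(DSN)
        try:
            with conn.cursor() as cur:
                # Apply the candidate settings for this session only.
                for (name, _, _), value in zip(PARAMS, x):
                    cur.execute("SELECT set_config(%s, %s, false)", (name, str(value)))
                start = time.perf_counter()
                for q in QUERIES:
                    cur.execute(q)
                    cur.fetchall()
                return time.perf_counter() - start
        finally:
            conn.close()

    result = differential_evolution(
        total_runtime,
        bounds=[(lo, hi) for _, lo, hi in PARAMS],
        maxiter=50,     # still slow: every evaluation runs every query
        polish=False,   # skip the gradient-based polish; the objective is noisy
    )
    print(dict(zip([name for name, _, _ in PARAMS], result.x)))

Even with a modest maxiter, differential_evolution evaluates the objective many times per generation, which is why diffing EXPLAIN output and caching runtimes for plans it had already seen saved so much time.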
Doing this global optimization was made more difficult by the fact that most, but not all, of the costs are only meaningful relative to each other. For example, setting random_page_cost to 3x seq_page_cost only expresses a ratio: if you scale all of these costs by some constant, you get the same query plans. However, the introduction of the *_size parameters has the effect of anchoring all the other parameters, so the search space goes from an N-dimensional unit sphere to an entire N-dimensional space. While the unit sphere is only one dimension less, it is a much smaller space in terms of boundaries: each variable can be limited to the range (0, pi) without imposing artificial absolute numerical guesses (which could easily exclude the optimal configuration from the searched space) just to make the search small enough for a global optimization to be tractable. (There's a toy sketch of what I mean at the bottom of this comment.)

(Things like join_collapse_limit aren't scale-free, but they also refer only to the query and not to the data, so they are independent; besides, they have a very easy-to-guess range of acceptable values.)

Anyway, that is a long way of saying that the above did have a measurable effect on performance for my use case, but only on average: some queries still ended up slower than with the default settings. Looking into these, they were for the most part cases where the planner's row-count estimates were wrong by orders of magnitude, and that incorrect row count, composed with the incorrect default cost configuration, had caused the planner to pick a good query plan by luck.

To sum up, after analyzing the performance of hundreds and hundreds of queries, I have yet to see the JIT make one significantly faster, and I very often see much, much worse performance. When I looked into what the JIT was doing, it appeared to be compiling hundreds and hundreds of functions for somewhat straightforward queries, which looked like a bug, but I didn't have time to dig further. I ended up just cranking the JIT thresholds up to very, very large numbers, under the theory that I want the JIT basically turned off, but if a query's cost estimate is astronomical, the time taken to JIT it is probably negligible compared to the total runtime, so it probably won't hurt.

Meanwhile, if you have a database server with spare CPU (and, if your database doesn't fit in memory, IO) capacity during peak times, I would look at tuning the parallel-query parameters to encourage more parallelization. So far in my testing this can give you very close to 1/K query time for a query that uses K workers. It is fantastic.
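
For the parallel-query tuning, the nice thing is that experimenting is per-session and cheap. A sketch of the kind of thing I mean, again with psycopg2 -- the values are illustrative starting points rather than recommendations, the DSN and table name are stand-ins, and you should confirm with EXPLAIN (ANALYZE) that workers actually get launched:

    import psycopg2

    # Settings that make the planner more willing to choose parallel plans.
    SETTINGS = {
        "max_parallel_workers_per_gather": "4",  # default 2: allow more workers per Gather
        "parallel_setup_cost": "100",            # default 1000: treat worker startup as cheaper
        "parallel_tuple_cost": "0.01",           # default 0.1: cheapen worker-to-leader transfer
        "min_parallel_table_scan_size": "1MB",   # default 8MB: consider smaller tables too
    }

    with psycopg2.connect("dbname=mydb_copy") as conn:   # placeholder DSN
        with conn.cursor() as cur:
            for name, value in SETTINGS.items():
                cur.execute("SELECT set_config(%s, %s, false)", (name, value))
            # Verify the plan before rolling anything out more widely.
            cur.execute("EXPLAIN (ANALYZE) SELECT count(*) FROM big_table")
            print("\n".join(row[0] for row in cur.fetchall()))

If "Workers Launched" in the output is lower than "Workers Planned", you are being capped by max_worker_processes / max_parallel_workers rather than by the planner.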
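
Finally, to make the unit-sphere point above concrete: if every cost parameter really were purely relative, you could hand the optimizer N-1 angles instead of N magnitudes and rebuild a cost vector from them. A toy numpy sketch (angles kept in (0, pi/2) so every cost stays positive):

    import numpy as np

    def angles_to_costs(angles, scale=1.0):
        """Map N-1 angles to a point on the positive part of the unit N-sphere,
        i.e. a cost vector that is only defined up to an overall scale."""
        costs = []
        remaining = scale
        for a in angles:
            costs.append(remaining * np.cos(a))
            remaining *= np.sin(a)
        costs.append(remaining)
        return np.array(costs)

    # Three angles -> four relative costs (e.g. seq_page, random_page,
    # cpu_tuple, cpu_operator); scaling them all by 10x yields the same plans.
    print(angles_to_costs(np.array([0.8, 1.1, 0.6])))

The parameters with absolute meaning (the *_size settings and the various cost thresholds) are what break this, which is why the search above ended up running over the full space with guessed bounds.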