The actual title of the article is "Postgres Autovacuum is Not the Enemy". The word "Postgres" is a critical element here and should not be left off.
The main problem here is that the autovacuum threshold is something like c + m * n_rows, and in a large installation you can have all sorts of table sizes.

How much change is a lot? 1% of the table + 50 rows (for small tables)? I would argue that it is sometimes better to use a fixed threshold, e.g. c = 1000, m = 0 (sketch below).

All these approaches are hit or miss and differ from one setup to the next. What I found useful is to choose the best parameters you can think of without forcing autovacuum to run all the time, and have an external job run VACUUM manually to clean up whatever got missed... eventually you figure out the right configuration.
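A minimal sketch of that fixed-threshold approach, assuming a hypothetical table called events; the settings are the standard per-table autovacuum storage parameters:

    -- Trigger autovacuum after a fixed number of dead rows, regardless of
    -- table size (c = 1000, m = 0 in the formula above).
    ALTER TABLE events SET (
        autovacuum_vacuum_threshold = 1000,
        autovacuum_vacuum_scale_factor = 0
    );

    -- The external cleanup job can then run a plain manual vacuum on
    -- whatever the daemon missed, e.g. nightly from cron via psql:
    VACUUM (ANALYZE, VERBOSE) events;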
I find the recommendation to leave the cost limit alone strange. The problem is that this is a global limit (autovacuum_vacuum_cost_limit, which by default falls back to vacuum_cost_limit = 200), shared by all autovacuum workers. The default means all workers combined should not do more than roughly 8 MB/s of reads or 4 MB/s of writes, which are rather low limits on current hardware. Increasing the number of workers is good, but the total autovacuum throughput does not change - there will be more workers, but each will go slower.

Also a note regarding the delay - the resolution really depends on hardware. Some timers only have 10 ms resolution, for example.
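As a rough back-of-envelope (assuming the classic defaults of a 20 ms cost delay and a page-miss cost of 10; newer Postgres releases have changed some of these defaults), plus a sketch of scaling the budget along with the workers - the values are illustrative, not a recommendation:

    -- 200 cost points per 20 ms sleep => 10,000 points/s for all workers combined
    --   10,000 / vacuum_cost_page_miss (10)  = 1,000 pages/s read    ~ 8 MB/s at 8 kB pages
    --   10,000 / vacuum_cost_page_dirty (20) =   500 pages/s dirtied ~ 4 MB/s at 8 kB pages

    -- If you add workers, consider raising the shared budget as well:
    ALTER SYSTEM SET autovacuum_max_workers = 6;           -- needs a server restart
    ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 1000;  -- split among active workers
    SELECT pg_reload_conf();                               -- enough for the cost limit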
What I've found is that with the default settings, if you're running over 150 million inserts/updates/deletes a day, your database is going to halt because of transaction wraparound errors - autovacuum simply can't catch up.

The solution for me was to batch inserts and updates into one transaction; since every writing transaction consumes an XID, batching slows down how fast you burn through the XID space.
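A quick way to keep an eye on this is the standard catalog query for XID age per database (shown here as a sketch); autovacuum switches to aggressive anti-wraparound vacuums once the age passes autovacuum_freeze_max_age (200 million by default), well before the ~2 billion hard limit:

    -- Databases closest to transaction ID wraparound first.
    SELECT datname,
           age(datfrozenxid) AS xid_age
    FROM   pg_database
    ORDER  BY xid_age DESC;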