Definitely have been bitten by the query statistics issue before. I worked with a colleague once who was adamant that we build our backend on MongoDB, but I was able to convince him to build on Postgres because of its JSONB support. I don't get the appeal of going schema-less just to avoid migrations: schema updates are generally very cheap in databases like Postgres (adding a column without a default, or dropping a column, is basically just a metadata change), yet some developers believe it's worth the headache. In a sense, that suggestion kind of bit me in the ass when we started having some painfully slow report generation queries that should have been using indexes, but were doing table scans because of the lack of table statistics on the JSONB columns. In a much larger sense, I'm still thankful we never used MongoDB.<p>Protip: Use the planner config settings[1] (one of which is mentioned in this article) with SET LOCAL in a transaction if you're really sure the query planner is giving you guff. On more structured data that Postgres can calculate statistics on, let it do its magic.<p>[1]: <a href="https://www.postgresql.org/docs/current/static/runtime-config-query.html" rel="nofollow">https://www.postgresql.org/docs/current/static/runtime-confi...</a>
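<p>For anyone curious what the SET LOCAL trick looks like in practice, here's a minimal sketch. The table and predicate are hypothetical placeholders; the point is that SET LOCAL scopes the planner override to the enclosing transaction, so it reverts automatically on COMMIT or ROLLBACK:

<pre>
BEGIN;
-- Discourage sequential scans for this transaction only.
-- enable_seqscan is a real planner setting; turning it "off" doesn't
-- forbid seq scans, it just makes the planner cost them very highly.
SET LOCAL enable_seqscan = off;

-- Hypothetical report query that should be hitting an index:
SELECT account_id, sum(amount)
FROM report_events
WHERE created_at >= now() - interval '30 days'
GROUP BY account_id;

COMMIT;  -- the setting reverts here; no other sessions were affected
</pre>

Run the query under EXPLAIN ANALYZE first with and without the SET LOCAL to confirm the planner is actually the problem before reaching for this.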