Some of these complaints seem fair to me, some not as much. tl;dr -- Snowflake requires a fair bit of knowledge/effort to use optimally.<p>I spent a number of months last year focused on lowering Snowflake spend. In the process I learned a ton about Snowflake and gained a fair amount of respect for the product. Respect as in "this is really great" as well as respect as in "I need to be on guard here or I'm going to get hurt."<p>I think my biggest misconception at the outset was thinking of Snowflake like it's a relational database. It's not. Or rather, it is, with a large number of caveats. Snowflake doesn't have b-tree indexes -- rather, it has "clustering keys," which are sort of like coarse-grained indexes that colocate data in micropartitions, allowing queries to do micropartition pruning. If you have a well-clustered table and you're filtering on your clustering keys, things will be great. But if not -- or if, for example, you have to do multi-table joins on non-clustered columns -- you'll suffer. So unless you have search optimization enabled (which costs more!), you have to retrain yourself away from the "oh, just add an index here or there to make things fast" type of thinking you may have had working with Postgres or whatnot.<p>Regarding the author's complaints about lack of observability, I generally found it pretty easy to analyze what was going on via the query_history table. And the built-in query analyzer is quite helpful. We did add tags to our dbt runs, which was pretty easy, and I wrote a handful of queries to find the most expensive dbt models. It wasn't really that hard.<p>That said, dbt in particular provides a number of foot guns wrt Snowflake. Subqueries, as the author mentions, are one.
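To give a flavor of the tag-based cost analysis: a rough sketch of the kind of query I mean, against Snowflake's ACCOUNT_USAGE QUERY_HISTORY view (the `dbt:` tag prefix is a made-up convention -- use whatever you set as query_tag in your dbt runs):

```sql
-- Roll up a week of query cost signals by dbt query tag.
-- total_elapsed_time is in milliseconds; partitions_scanned vs
-- partitions_total shows how well pruning is working.
select
    query_tag,
    count(*)                       as num_queries,
    sum(total_elapsed_time) / 1000 as total_seconds,
    sum(partitions_scanned)        as partitions_scanned,
    sum(partitions_total)          as partitions_total
from snowflake.account_usage.query_history
where start_time > dateadd('day', -7, current_timestamp())
  and query_tag ilike 'dbt:%'
group by query_tag
order by total_seconds desc
limit 25;
```

Sorting by elapsed time is a proxy for spend, not an exact credit number, but it surfaces the expensive models quickly.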
We created some custom dbt macros to do things like: instead of `select * from foo where x in (select * from blah)` -- if blah was small -- do a query on blah first and write the query using a literal list, like `select * from foo where x in ('a', 'b', 'c', 'etc...')`.<p>Another issue we discovered is that in dbt it's trivial to create views. But we found that if views get too deeply nested, Snowflake can't adequately do predicate pushdown. So big stacks of views on views are suboptimal.<p>Another interesting one was tests. dbt makes it trivial to perform null or uniqueness checks against a column. We found we were spending a lot on tests that were simply doing something like `select * from blah where col is null`. On non-clustering-key columns or complex views, these were causing full table scans. We took a number of steps to mitigate those issues (combining queries; changing where we did those checks in the DAG). The way tests are scheduled is problematic as well. One "long pole" test will keep your warehouse up and consuming credits even after the other 99.9% of the tests have completed. After some analysis we separated the long-pole tests from the others and put them on different warehouses.<p>I could go on and on, actually, but I think that provides a taste of some of the complexities involved. Like almost any tool, you have to really understand it to use it effectively. But it's all too easy for, say, analysts, who may be blissfully unaware of the issues above, to write really poorly performing SQL on Snowflake.
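To make the "combining queries" point concrete, here's a minimal sketch (table and column names are made up). Each standalone dbt null test scans the table once; folding several checks into a single pass means one scan total:

```sql
-- Each generic test is its own full scan:
--   select * from blah where col_a is null;
--   select * from blah where col_b is null;
-- One combined pass instead (count_if is a Snowflake aggregate):
select
    count_if(col_a is null) as col_a_nulls,
    count_if(col_b is null) as col_b_nulls,
    -- rough uniqueness check folded in, assuming id itself is never null
    count(*) - count(distinct id) as duplicate_ids
from blah;
```

You lose the row-level failure samples that individual dbt tests give you, so we kept the per-column tests for debugging and used consolidated checks for the routine scheduled runs.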