I think the competition for the future is between DuckDB and Polars. Will we stick with the DataFrame model, made feasible by Polars's lazy execution, or will we go with in-process SQL a la DuckDB? Personally, I've been using DuckDB because I already know SQL (and DuckDB provides persistence if I need it) and don't want to learn a new DataFrame DSL, but I'd love to hear other people's experiences.
I am just using DuckDB on a 3TB dataset in a beefy EC2 instance, and am pleasantly surprised at its performance on such a large table. I had to do some sharding, to be sure, but I am able to match the performance of Snowflake or other cluster-based systems using this single-machine instance.<p>To clarify, ClickHouse will likely match this performance as well, but doing things on a single machine looks sexier to me than it ever did in past decades.
The test case of a simple aggregation is a good example of an important data science skill: knowing when and where to use a given tool, and recognizing that there is no one right answer for all cases. Although it's worth noting that DuckDB and Polars are comparable performance-wise for aggregation (DuckDB slightly faster: <a href="https://duckdblabs.github.io/db-benchmark/" rel="nofollow">https://duckdblabs.github.io/db-benchmark/</a> ).<p>For my cases with Polars and function piping, certain aspects of that workflow are hard to represent in SQL. It's also easier, when iterating/testing on a given aggregation, to add/remove a given function in the pipe, and to relate to existing tables (e.g. filter a table to only IDs present in a different table, which is more algorithmically efficient than a join-then-filter). To do the ETL I tend to do for my data science work in pandas/Polars in SQL/DuckDB would require chains of CTEs or other shenanigans, which eliminates simplicity and efficiency.
> Note that DuckDB automatically figured out how to parse the date column.<p>It kinda did and it kinda didn't. The author got lucky that Transaction.csv contained a date where the day was after the 12th in a given month, which disambiguates the format. Had there not been such a date, DuckDB could have gotten the dates wrong and read them as dd/mm/yyyy.<p>I think a warning from DuckDB would have been in order.
My biggest issue with DuckDB is that it isn't willing to implement writes to blob storage backends that support edits (e.g. Azure). Having a common object/blob store that multiple processes can interact with and operate on would make it much more amenable to many data-science-driven workflows.