If you're looking to give Iceberg a spin, here's how to get it running locally, on AWS [0] and on GCP [1]. The posts use DuckDB as the query engine, but you could swap in Trino (or even chDB / ClickHouse).

[0] https://www.definite.app/blog/cloud-iceberg-duckdb-aws

[1] https://www.definite.app/blog/cloud-iceberg-duckdb
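If you just want to poke at an existing table from Python, something like this works with DuckDB's iceberg extension (a minimal sketch; the bucket/table path is a placeholder and assumes S3 credentials are already configured):

    import duckdb

    con = duckdb.connect()
    con.sql("INSTALL iceberg; LOAD iceberg;")

    # Point iceberg_scan at the table's root folder (local path or s3:// URI);
    # DuckDB reads the Iceberg metadata and manifests and prunes data files from there.
    con.sql("""
        SELECT count(*)
        FROM iceberg_scan('s3://my-bucket/warehouse/events')
    """).show()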
I think Iceberg solves a lot of big-data problems around handling huge amounts of data on blob storage, including partitioning, compaction, and ACID semantics.

I also really like the way the catalog standard can decouple the underlying storage.

My biggest concern is how inaccessible the implementations are: Java/Spark has the only mature implementation right now. Even DuckDB doesn't support writing yet.

I built a tool that streams data to Iceberg using the Python Iceberg client:

https://www.linkedin.com/pulse/streaming-iceberg-using-sqlflow-turbolytics-d71pe/
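For anyone curious, the write path with the Python client looks roughly like this (a sketch; the REST catalog URI, warehouse location, and table name are placeholders):

    import pyarrow as pa
    from pyiceberg.catalog import load_catalog

    # Connect to a REST catalog (connection details are placeholders).
    catalog = load_catalog(
        "default",
        **{"uri": "http://localhost:8181", "warehouse": "s3://my-bucket/warehouse"},
    )
    table = catalog.load_table("analytics.events")

    # Each micro-batch from the stream becomes an Arrow table and is appended
    # as a new snapshot; lots of small commits like this are also why
    # compaction ends up mattering.
    batch = pa.table({"user_id": [1, 2], "event": ["click", "view"]})
    table.append(batch)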
Hidden partitioning is the most interesting Iceberg feature, because most very large datasets are time-series fact tables.

I don't remember seeing that in Delta Lake [1], probably because the industry-standard benchmarks use date as a column (TPC-H) or join date as a dimension table (TPC-DS) and filter on dates rather than timestamp ranges.

[1] https://github.com/delta-io/delta/issues/490
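For reference, hidden partitioning means you declare a transform on the timestamp column at table-creation time, and queries filter on the timestamp itself rather than a separate date column. A rough sketch with the Python client (catalog config and names are placeholders):

    from pyiceberg.catalog import load_catalog
    from pyiceberg.partitioning import PartitionField, PartitionSpec
    from pyiceberg.schema import Schema
    from pyiceberg.transforms import DayTransform
    from pyiceberg.types import NestedField, StringType, TimestampType

    schema = Schema(
        NestedField(field_id=1, name="event_ts", field_type=TimestampType(), required=True),
        NestedField(field_id=2, name="payload", field_type=StringType(), required=False),
    )

    # Partition by day(event_ts): writers lay out files by day automatically,
    # and a predicate like `event_ts BETWEEN x AND y` prunes partitions without
    # the query ever mentioning a partition column.
    spec = PartitionSpec(
        PartitionField(source_id=1, field_id=1000, transform=DayTransform(), name="event_day")
    )

    catalog = load_catalog("default")  # placeholder catalog config
    catalog.create_table("analytics.events", schema=schema, partition_spec=spec)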
Apache Iceberg is one of the emerging open table formats, alongside Delta Lake and Apache Hudi [1].

[1] Open Table Formats: https://www.starburst.io/data-glossary/open-table-formats/
ClickHouse has a solid Iceberg integration. It has an Iceberg table function [0] and an Iceberg table engine [1] for interacting with Iceberg data stored in S3, GCS, Azure, Hadoop, etc.

[0] https://clickhouse.com/docs/en/sql-reference/table-functions/iceberg

[1] https://clickhouse.com/docs/en/engines/table-engines/integrations/iceberg
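As a rough example of what that looks like from Python via clickhouse-connect (the connection, S3 URL, and credentials are placeholders; see the docs above for the exact table function arguments):

    import clickhouse_connect

    client = clickhouse_connect.get_client(host="localhost")  # placeholder connection

    # Ad-hoc read straight out of an Iceberg table on S3 via the table function;
    # for repeated use you would create a table with the Iceberg engine instead.
    result = client.query(
        "SELECT count() FROM iceberg("
        "'https://my-bucket.s3.amazonaws.com/warehouse/events/', "
        "'AWS_KEY', 'AWS_SECRET')"
    )
    print(result.result_rows)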
How do you query your Iceberg tables? We're looking into moving away from BigQuery, and StarRocks [1] looks like a good option.

[1] https://www.starrocks.io/
What I like about Iceberg is that table partitions are not tightly coupled to the subfolder structure of the storage layer. At the end of the day the partitions are still subfolders with files, but the metadata is not tied to that layout, so you can change a table's partitioning going forward and still query a mix of old and new partitions across time ranges.

On the other hand, since one of the use cases Netflix created it for was consuming directly from real-time systems, managing file creation when the data is updated is less trivial (the CoW vs. MoR problem and how to compact small files), which becomes important on multi-petabyte tables with lots of users and frequent updates. This is something I assume not a lot of companies pay much attention to (heck, not even Netflix), and it has big performance and cost implications.
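To make the first point concrete, partition evolution is just a metadata change; a sketch using the Iceberg Spark SQL extensions from PySpark (catalog and table names are placeholders, and the session is assumed to have the Iceberg runtime configured):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("partition-evolution").getOrCreate()

    # Switch from monthly to daily partitioning going forward. Existing data
    # files keep their old layout; only newly written data uses days(event_ts),
    # and query planning handles both specs transparently.
    spark.sql("ALTER TABLE my_catalog.db.events DROP PARTITION FIELD months(event_ts)")
    spark.sql("ALTER TABLE my_catalog.db.events ADD PARTITION FIELD days(event_ts)")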
I've been looking at Iceberg for a while, but in the end went with Delta Lake because it doesn't have a dependency on a catalog. It also has good support for reading and writing without needing Spark.

Does anyone know if Iceberg has plans to support similar use cases?
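For context, the no-Spark path I mean is the delta-rs Python bindings; a minimal sketch (the table path is a placeholder):

    import pandas as pd
    from deltalake import DeltaTable, write_deltalake

    # Write: this creates the _delta_log transaction log next to the Parquet
    # files, with no catalog service and no Spark involved.
    df = pd.DataFrame({"id": [1, 2, 3], "value": ["a", "b", "c"]})
    write_deltalake("s3://my-bucket/tables/demo", df, mode="append")

    # Read it back the same way.
    dt = DeltaTable("s3://my-bucket/tables/demo")
    print(dt.to_pandas())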
I'm a stockholder in Snowflake, and Iceberg's ascendance seems to coincide with SNOW's downfall.

Does the query engine's value-add justify Snowflake's valuation? Their data marketplace thing doesn't seem to have actually worked.
Are there robust non-JVM implementations of Iceberg currently? Sorry to say, but recommending JVM ecosystems for large data just feels like professional malpractice at this point. Whether it's deployment complexity, resource overhead, tool sprawl, or operational complexity, the ecosystem seems to attract people who solve only 50% of the problem and reach for another tool to solve the rest, which in turn only solves 50%, and so on ad infinitum. The popularity of solutions like Snowflake, ClickHouse, and DuckDB is not an accident, and that is the direction everything should go. I hear Snowflake will adopt this in the future; that is good news.
OneHouse also has a fantastic Iceberg implementation (they're the team behind Apache Hudi) and does a ton of great interop work:

https://www.onehouse.ai/blog/comprehensive-data-catalog-comparison

https://www.onehouse.ai/blog/open-data-foundations-with-apache-xtable-hudi-delta-and-iceberg-interoperability
To get good query performance from Iceberg, we have to run compaction frequently, and compaction turns out to be very expensive. Any tips for minimizing compaction while keeping queries fast?
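For context, this is roughly the kind of compaction job I mean: Iceberg's rewrite_data_files procedure run from Spark (a sketch; catalog, table name, and thresholds are placeholders):

    from pyspark.sql import SparkSession

    # Assumes the Iceberg Spark runtime and a catalog named `my_catalog` are configured.
    spark = SparkSession.builder.appName("compaction").getOrCreate()

    # Coalesce small files into ~512 MB files. A `where` argument (not shown)
    # can scope the rewrite to recent partitions so already-compacted
    # partitions aren't rewritten over and over.
    spark.sql("""
        CALL my_catalog.system.rewrite_data_files(
            table   => 'db.events',
            options => map('target-file-size-bytes', '536870912',
                           'min-input-files', '5')
        )
    """)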