
Lakehouse: New Open Platforms That Unify Data Warehousing and Advanced Analytics [pdf]

36 points by solidangle over 4 years ago

3 comments

georgewfraser over 4 years ago
What is not said in this article is that you can use modern data warehouses, like Snowflake and BigQuery, in exactly the same way: a single system that serves as both your data lake and your data warehouse. Databricks and the cloud data warehouses are rapidly converging. Databricks has enough SQL functionality that it can reasonably be called an RDBMS, and Snowflake has demonstrated that you can incorporate the benefits of a data lake into a data warehouse by separating compute from storage. At this point, the main difference is that with Databricks you can directly access the underlying Parquet files in S3. Does that matter? For some users, yes.
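
To illustrate what "directly access the underlying Parquet files" looks like in practice, here is a minimal PySpark sketch; the bucket, path, and the country column are hypothetical, not anything from the article:

    # Minimal sketch: any Spark session can scan the Parquet files that back
    # a Databricks table directly, with no warehouse service in the loop.
    # The S3 path and the "country" column are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("direct-parquet").getOrCreate()

    df = spark.read.parquet("s3a://my-bucket/warehouse/events/")
    df.createOrReplaceTempView("events")
    spark.sql("SELECT COUNT(*) FROM events WHERE country = 'US'").show()

The same files remain readable by any other Parquet-aware engine, which is the portability argument for keeping data in an open file format rather than a proprietary warehouse store.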
MrPowers over 4 years ago
Some additional context (a short PySpark sketch of the partitioning and join mechanics follows after this list):

* Companies are querying thousands / tens of thousands of Parquet files stored in the cloud via Spark

* Parquet lakes can be partitioned, which works well for queries that filter on the partition key (and slows down queries that don't filter on the partition key)

* Parquet files contain min/max metadata for all columns. When possible, entire files are skipped, but this is relatively rare. This is called predicate pushdown filtering.

* Parquet files allow for the addition of custom metadata, but Spark doesn't let users use the custom metadata when filtering

* Spark is generally bad at joining two big tables (it's good at broadcast joins, generally when one of the tables is 2GB or less)

* Companies like Snowflake & MemSQL have Spark connectors that let certain parts of queries get pushed down.

There is a huge opportunity to build a massive company on data lakes optimized for Spark. The amount of wasted compute cycles filtering over files that don't have any data relevant to the query is staggering.
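
To make the partitioning, pruning, and join points above concrete, here is a minimal PySpark sketch; the S3 paths and column names (event_date, user_id) are hypothetical placeholders:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("lake-mechanics").getOrCreate()

    # Hypothetical events table; all paths and columns are illustrative only.
    events = spark.read.parquet("s3a://my-bucket/lake/events/")

    # 1. Partitioned write: one directory per event_date value, so queries
    #    filtering on event_date can skip whole directories.
    (events.write
        .partitionBy("event_date")
        .mode("overwrite")
        .parquet("s3a://my-bucket/lake/events_by_date/"))

    # 2. Partition pruning: this filter resolves against directory names;
    #    Spark never opens files outside the matching partition.
    pruned = (spark.read.parquet("s3a://my-bucket/lake/events_by_date/")
              .filter(F.col("event_date") == "2020-12-01"))

    # 3. Broadcast join: ship the small dimension table to every executor
    #    instead of shuffling both sides (suitable when one side is small).
    users = spark.read.parquet("s3a://my-bucket/lake/users/")
    joined = pruned.join(F.broadcast(users), on="user_id", how="left")
    joined.explain()  # plan should show PartitionFilters / BroadcastHashJoin

The min/max skipping described in the list happens inside the Parquet reader for non-partition columns and needs no code changes, which is why it only helps when a file's column statistics happen to exclude the filter value.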
Zaheer over 4 years ago
I thought this was a good article on building a Lakehouse on AWS: https://aws.amazon.com/blogs/big-data/harness-the-power-of-your-data-with-aws-analytics/

It's high level and focuses on some of the business needs that drive this sort of architecture.