Ask HN: Stream processing engine that joins to historical data in Snowflake?

3 points, by remolacha, about 1 year ago
Hey, I'm looking for advice on stream processing engines that allow me to do the following:

1. Required: Write a query that joins an event stream with a historical table in Snowflake
2. Required: Executes in near-real time (< 5s even if a query involves 300M rows)
3. Highly desired: Gives me a way of doing dbt-like DAGs, where I can execute a DAG of actions (including external api calls) based on results of the query
4. Highly desired: Allows me to write queries in standard SQL
5. Desired: True real time (big queries executing w/ subsecond latency)

What are the best options out there? It seems like Apache Flink enables this, but there also seem to be a number of other projects out there that may enable some or all of what I'm describing, including:

- kSQL
- Arroyo
- Proton
- Kafka Streams
- Snowflake's Snowpipe Streaming
- Benthos
- RisingWave
- Spark Streaming
- Apache Beam
- Timely Dataflow and derivatives (Materialize, Bytewax, etc.)

Any recommendations on the best tool for the job? Are there interesting alternatives that I haven't named?
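For the first two requirements, the shape most of the listed engines give you is a streaming SQL join against a cached copy of the warehouse table rather than a live query per event. As a rough illustration only, here is a minimal PyFlink sketch of a processing-time lookup join between a Kafka event stream and a dimension table reached over JDBC; the topic, schema, connection URL, cache settings, and especially the assumption that Flink's JDBC connector can talk to Snowflake at all are placeholders, not a tested configuration.

```python
# Hypothetical sketch: enrich a Kafka event stream with a dimension table
# pulled over JDBC. Whether Flink's JDBC connector speaks Snowflake's
# dialect is an assumption; in practice you may need a custom connector
# or to mirror the table into a supported store first.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Event stream from Kafka, with a processing-time attribute for the lookup join.
t_env.execute_sql("""
    CREATE TABLE events (
        user_id STRING,
        amount  DOUBLE,
        proc_time AS PROCTIME()
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'events',
        'properties.bootstrap.servers' = 'localhost:9092',
        'scan.startup.mode' = 'latest-offset',
        'format' = 'json'
    )
""")

# Historical/dimension table, cached on the Flink side so each event does
# not trigger a round trip to the warehouse. Cache options are the legacy
# JDBC connector keys; newer Flink versions name them differently.
t_env.execute_sql("""
    CREATE TABLE customers (
        user_id STRING,
        segment STRING
    ) WITH (
        'connector' = 'jdbc',
        'url' = 'jdbc:snowflake://<account>.snowflakecomputing.com/',
        'table-name' = 'ANALYTICS.PUBLIC.CUSTOMERS',
        'lookup.cache.max-rows' = '1000000',
        'lookup.cache.ttl' = '10min'
    )
""")

# Standard-SQL lookup join: each event is enriched with the customer segment.
result = t_env.execute_sql("""
    SELECT e.user_id, e.amount, c.segment
    FROM events AS e
    JOIN customers FOR SYSTEM_TIME AS OF e.proc_time AS c
        ON e.user_id = c.user_id
""")
result.print()
```

Requirement 3 (dbt-like DAGs that trigger external API calls off the query results) is not something Flink SQL covers by itself; that part would still need an orchestrator around the job.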

3 comments

chucklarrieu, about 1 year ago
That’s not how stream processing works? It’s absurdly costly to query Snowflake for every record that streams in. So then you try to cache the Snowflake dataset. But then the cache becomes stale. Honestly this sounds like an XY problem: the problem is stated as X, but you’re really trying to solve something else, Y.

What is the business requirement? What are the technical specifications required to meet the need? From there, we can start to consider architecture solutions.

Storage is relatively cheap compared to compute. Most stream processors require, or at least highly encourage, you to provide them direct access to the input data instead of calling an external system.
amath, about 1 year ago
Sounds like you are writing a pipeline to enrich a stream of data with historical data from Snowflake. This is a fairly common pattern. If the data in Snowflake is not changing often, you would want to cache it somewhere for use with the stream processor, to avoid that query overhead and speed things up.

Most of those listed can meet your first two requirements. Looking further down the list, your requirement of SQL plus a DAG-type representation will limit the list to only a few; I don't know if many of those listed provide both of those capabilities.

If you relax the SQL constraint, more of them are applicable, like Bytewax and Kafka Streams.
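To make the cache-and-enrich pattern that both commenters describe concrete, here is a minimal Python sketch under the assumption that the Snowflake table changes slowly enough to snapshot on an interval; the libraries (snowflake-connector-python, kafka-python), the table and column names, and the refresh interval are all illustrative choices, not a recommendation.

```python
# Illustrative sketch of the "cache the Snowflake data, enrich the stream
# locally" pattern. Names, credentials, and schema are placeholders.
import json
import time

import snowflake.connector          # pip install snowflake-connector-python
from kafka import KafkaConsumer     # pip install kafka-python

REFRESH_SECONDS = 600  # assumed acceptable staleness for the historical table


def load_snapshot() -> dict:
    """Pull the (slowly changing) dimension table into a local dict."""
    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="...",
        warehouse="COMPUTE_WH", database="ANALYTICS", schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        cur.execute("SELECT user_id, segment FROM customers")
        return {user_id: segment for user_id, segment in cur}
    finally:
        conn.close()


def main() -> None:
    snapshot = load_snapshot()
    last_refresh = time.time()

    consumer = KafkaConsumer("events", bootstrap_servers="localhost:9092")
    for msg in consumer:
        # Periodically refresh the cache instead of querying per record.
        if time.time() - last_refresh > REFRESH_SECONDS:
            snapshot = load_snapshot()
            last_refresh = time.time()

        event = json.loads(msg.value)
        event["segment"] = snapshot.get(event["user_id"])
        print(event)  # downstream: write out, call an API, etc.


if __name__ == "__main__":
    main()
```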
zX41ZdbW, about 1 year ago
Is it possible to pull the historical data out of Snowflake? If that is an option, you can place it in ClickHouse and use a materialized view to JOIN against the historical table as rows are inserted into the real-time table.
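A sketch of what that ClickHouse setup could look like, assuming the historical table has already been exported from Snowflake into ClickHouse; the schema, table names, and the use of the clickhouse-connect client are assumptions for illustration.

```python
# Sketch of the suggested ClickHouse setup: a materialized view that joins
# newly inserted real-time rows against an exported historical table.
# Schema and names are placeholders.
import clickhouse_connect  # pip install clickhouse-connect

client = clickhouse_connect.get_client(host="localhost")

statements = [
    # Real-time table that the stream writes into.
    """
    CREATE TABLE IF NOT EXISTS events_raw (
        user_id String, amount Float64, ts DateTime
    ) ENGINE = MergeTree ORDER BY ts
    """,
    # Historical data exported from Snowflake (one-off or periodic load).
    """
    CREATE TABLE IF NOT EXISTS customers (
        user_id String, segment String
    ) ENGINE = MergeTree ORDER BY user_id
    """,
    # Destination for enriched rows.
    """
    CREATE TABLE IF NOT EXISTS events_enriched (
        user_id String, amount Float64, segment String, ts DateTime
    ) ENGINE = MergeTree ORDER BY ts
    """,
    # The materialized view fires on every INSERT into events_raw and joins
    # the new block of rows against the historical table.
    """
    CREATE MATERIALIZED VIEW IF NOT EXISTS mv_enrich TO events_enriched AS
    SELECT e.user_id, e.amount, c.segment, e.ts
    FROM events_raw AS e
    LEFT JOIN customers AS c ON e.user_id = c.user_id
    """,
]

for ddl in statements:
    client.command(ddl)
```

One property of this design worth noting: the materialized view only runs the JOIN for blocks of rows as they are inserted into the real-time table, so later changes to the historical table do not rewrite rows that were already enriched.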