
Eventual Consistency isn’t for Streaming

165 points, by arjunnarayan, almost 5 years ago

13 comments

kqr, almost 5 years ago
I agree with the other commenter. Eventual consistency has always been roughly a synonym for "tactical lack of consistency." The reason this works is that inconsistency is, in many business domains, not as big a deal as we make it out to be. Most businesses are used to data lagging behind, documents being filed incorrectly, decisions being changed while half the documents still refer to the old decision, to mention just a few possibilities. As long as everything is dated and there are corroborating versions of all facts, this can be untangled by experts in the few cases where it really matters. Most of the time, it doesn't matter that much.

Eventual consistency embraces this philosophy of a lack of consistency for computer systems too, on the basis that maintaining actual consistency would be too expensive/complex/slow, which is frequently the case.

This, of course, can in principle lead to ever-degrading consistency, and since you can't assume everything is consistent, you also cannot really verify consistency in any way other than heuristically, as another commenter suggested.

Eventual consistency is a design driven by practical needs. It is never a path to complete data purity.

And this applies to streaming and batch tasks alike.
asdfasgasdgasdg, almost 5 years ago
This article isn't very convincing to me. I mean, I one hundred percent buy that eventually consistent stream processing systems can theoretically be subject to unbounded error. But eventual consistency isn't just a theoretical model. It's also a practical engineering decision, and so in order to evaluate its use for any given business purpose we have to see how it performs in practice. That is, what is the average/99.9%/max error? And we have to understand how business-critical the correct answer is. This article has some great examples of theoretical issues with eventually consistent stream processing computation, but it doesn't demonstrate that any real systems evince these problems under any given workload.
cs702, almost 5 years ago
For more concise and precise explanations of the rationale for these kinds of tools, see this paper: https://github.com/TimelyDataflow/differential-dataflow/raw/master/differentialdataflow.pdf -- here's the abstract:

> Existing computational models for processing continuously changing input data are unable to efficiently support iterative queries except in limited special cases. This makes it difficult to perform complex tasks, such as social-graph analysis on changing data at interactive timescales, which would greatly benefit those analyzing the behavior of services like Twitter. In this paper we introduce a new model called differential computation, which extends traditional incremental computation to allow arbitrarily nested iteration, and explain—with reference to a publicly available prototype system called Naiad—how differential computation can be efficiently implemented in the context of a declarative data-parallel dataflow language. The resulting system makes it easy to program previously intractable algorithms such as incrementally updated strongly connected components, and integrate them with data transformation operations to obtain practically relevant insights from real data streams.

See also this friendlier (and lengthier) online book: https://timelydataflow.github.io/differential-dataflow/
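The kernel of the incremental-computation idea the abstract builds on can be sketched very loosely in plain Python (this is illustrative only, with made-up data, and captures none of the paper's nested iteration or dataflow machinery): maintain a result from a stream of +1/-1 diffs instead of recomputing from scratch.

```python
from collections import defaultdict

class IncrementalCount:
    """Maintain per-key counts from a stream of (key, diff) updates,
    where diff is +1 for an insertion and -1 for a deletion."""

    def __init__(self):
        self.counts = defaultdict(int)

    def update(self, key, diff):
        # Apply the diff; drop keys whose multiplicity returns to zero.
        self.counts[key] += diff
        if self.counts[key] == 0:
            del self.counts[key]

    def snapshot(self):
        return dict(self.counts)

ic = IncrementalCount()
for key, diff in [("a", +1), ("b", +1), ("a", +1), ("b", -1)]:
    ic.update(key, diff)
# ic.snapshot() == {"a": 2}
```

Each update touches only the affected key, which is the efficiency argument for differential approaches over batch recomputation.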
alextheparrot, almost 5 years ago
I'm actually just fundamentally confused about what is being argued.

I'm familiar with streaming, as a concept, from the likes of Beam, Spark, Flink, Samza - they do computations over data, producing intermediate results consistent with the data seen so far. These results are, of course, not necessarily consistent with the larger world, because there could be unprocessed or late events in a stream, but they are consistent with the part of the world seen so far.

The advantage of streaming is the ability to compute and expose intermediate snapshots of the world that don't rely on the stream closing (as many streams found in reality are not bounded, intermediate results are the only realizable result set). These intermediate results can have value, but that depends on the problem statement.

To examine one of the examples, let's use example 2; it aligns with the idea that we actually don't have a traditional streaming problem. The question being asked is "What is the key which contains the maximum value?" There is a difference between asking "What is the maximum so far today?" and "What was the maximum result today?" -- the tense change is important, because in the former the user cares about the results as they exist in the present moment, whereas the latter cares about a view of the world in a time frame that is complete. It seems like the idea of "consistent" is being conflated with "complete", wherein "complete" is not a guaranteed feature of an input stream.

Could anyone clarify why the examples here aren't just a case of expecting bounded vs. unbounded streams?
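The tense distinction can be made concrete with a tiny sketch (hypothetical event data, plain Python rather than any of the frameworks named): a streaming job emits a running answer per event, while the "what was the maximum" question is only answerable once the stream is bounded and closed.

```python
def running_max(stream):
    """'What is the maximum so far?' -- emit an intermediate result per event."""
    current = None
    for key, value in stream:
        if current is None or value > current[1]:
            current = (key, value)
        yield current  # consistent with the events seen so far

def final_max(stream):
    """'What was the maximum?' -- requires the stream to be complete."""
    return max(stream, key=lambda kv: kv[1])

events = [("a", 3), ("b", 7), ("c", 5)]
list(running_max(events))  # [("a", 3), ("b", 7), ("b", 7)]
final_max(events)          # ("b", 7)
```

For an unbounded stream only the first function is realizable; whether its intermediate answers are useful depends on the problem statement, as the comment says.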
nikhilsimha, almost 5 years ago
In both examples 2 and 3, the author reads the same stream twice independently and assumes that a join is not synchronized between the transformed streams. This seems like a fundamental flaw in their offering.

Pushing a timestamp along with the max/variance change stream [1], and then using that timestamp to synchronize the join [2], would naturally produce a consistent output stream.

I cited Flink because they have the best docs around, but it should be possible in most streaming systems. Disclaimer: I used to work for the FB streaming group and have collaborated with the Flink team very briefly.

[1] https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/streaming/dynamic_tables.html#table-to-stream-conversion

[2] https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/streaming/joins.html#event-time-temporal-joins
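A rough sketch of the synchronization idea, in plain Python rather than Flink's API (the streams and timestamps below are invented for illustration): align the two derived change streams on event time and emit one joined row per timestamp, so a reader never observes the max stream ahead of the variance stream or vice versa.

```python
def synchronized_join(left, right):
    """Join two timestamped change streams [(ts, value), ...] by event time:
    all updates bearing the same timestamp are applied together, and one
    joined (ts, left_value, right_value) row is emitted per timestamp."""
    timestamps = sorted({ts for ts, _ in left} | {ts for ts, _ in right})
    l, r = dict(left), dict(right)  # one update per timestamp in this sketch
    lv = rv = None
    out = []
    for ts in timestamps:
        lv = l.get(ts, lv)  # carry the latest value forward
        rv = r.get(ts, rv)
        if lv is not None and rv is not None:
            out.append((ts, lv, rv))
    return out

max_stream = [(1, 10), (3, 42)]       # hypothetical max-change stream
var_stream = [(1, 0.0), (2, 5.5), (3, 7.1)]  # hypothetical variance-change stream
synchronized_join(max_stream, var_stream)
# [(1, 10, 0.0), (2, 10, 5.5), (3, 42, 7.1)]
```

Because both updates at ts=3 are applied before any output for ts=3 is emitted, downstream consumers never see the half-updated state the article's examples rely on.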
dekimir, almost 5 years ago
> you should be prepared for your results to be never-consistent

Isn't this a core feature of distributed systems? How can you be "consistent" if there's a network failure between some writer and the stream? How can you tell a network failure from a network delay? How can you tell a network delay from any other delay?

And finally, how can you even talk about "up-to-date" data if the reader doesn't provide their "date" (i.e., a logical timestamp)?
anonymousDan, almost 5 years ago
There's been plenty of work in the past on weaker correctness guarantees for stream processing systems (e.g. concepts like rollback and gap recovery from Aurora). I'm not sure it's an either/or between eventual consistency and strong consistency.
satyrnein, almost 5 years ago
Side question - has anyone tried using Materialize beyond toy workloads? Can I move billions of rows off a batch workflow on Snowflake onto Materialize and suddenly everything is near-realtime?
DevKoala, almost 5 years ago
I keep falling for these clickbait titles in the hope that I will find a fair argument. But the moment I realize the article is trying to sell me a product based around an argument, I lose faith in the perspective of the writer.

If the title were something more honest, such as "How product X solves for Y", I'd feel more compelled to trust that the analysis is objective.
tlarkworthy, almost 5 years ago
Firebase provides causal consistency. By subscribing to streams (listen), the client opts into which data sources it wants consistent snapshots of; then all distinct client streams are bundled up and delivered in order over the wire. It's a very elegant model that does not get in the way and has nice ergonomics.
andrekandre, almost 5 years ago
So, if I understand the article correctly: for purposes of realtime reporting/monitoring (streaming, as stated), an eventually consistent "store" is not appropriate to hook into, because you can't know when things have become consistent, and reliable streaming of (near?) realtime data requires some chance for that to occur.

Is that a correct interpretation?
erikerikson, almost 5 years ago
TL;DR: accessing materializations is necessarily a snapshot.

This article reads as though the author hadn't shifted mindset from "the database will solve it for me" to "I'm taking on the relevant subset of problems in my use case". This seems off given that they're trying to sell a streaming product. They claim their product avoids problems by offering "always correct" answers, which requires a footnote at the very least, but none was given.

Point of note: the consistency guarantee is that upon processing to the same offset in the log, given that you have taken no other non-constant input, you will have the same computational result as all other processes executing semantically equivalent code.

I take this sort of comment as abusive of the reader:

> What does a naive application of eventual consistency have to say about
>
>     -- count the records in `data`
>     select count(*) from data
>
> It's not really clear, is it?

A naive application of eventual consistency declares that, along some equivalent of a Lamport timestamp across the offsets of shards in the stream, the system will calculate a count of records in `data` as of that offset. Given the ongoing transmission of events that can alter the set `data`, that value will continue changing as appropriate and in a manner consistent with the data it processes. The new answers will be given when the query is run again, or it may even issue an ongoing stream of updates to that value.

Maybe it got better as the article went on...
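The offset-based guarantee described above can be illustrated with a toy sketch (the log contents are invented here): the count is a deterministic function of the log offset, so every reader that has processed to the same offset computes the same value, regardless of when it runs or how far behind it is.

```python
def count_at_offset(log, offset):
    """Replay a log of ('insert'|'delete', key) records up to `offset`.
    The result is a pure function of the offset: any two readers at the
    same offset agree, even if neither is 'up to date' with the log head."""
    count = 0
    for op, _key in log[:offset]:
        count += 1 if op == "insert" else -1
    return count

log = [("insert", "a"), ("insert", "b"), ("delete", "a"), ("insert", "c")]
count_at_offset(log, 3)  # 1
count_at_offset(log, 4)  # 2
```

The "eventual" part is only about how far each reader has progressed, not about what answer a given offset yields.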
ecopoesis, almost 5 years ago
Almost every distributed system (including "simple" client-server systems) is eventually consistent. And all systems are distributed.

It's great that your DB is ACID and anyone who queries it gets the latest and greatest, but in reality you also have out-of-date caches, ORM models that haven't been persisted, apps where users modify data that hasn't been pushed back to the server, and a million other examples.

I'm sure it's possible to create a consistent system, but I'm also sure it's not practical. No one does it.

Instead of constantly fighting eventual consistency, just learn to embrace it and its shortcomings. Design systems and write code that are resilient to splits in HEAD and provide easy methods to merge back to a single truth.