Delivering Billions of Messages Exactly Once

492 points · by fouadmatin · almost 8 years ago

35 comments

newobj · almost 8 years ago
I don't want to ever see the phrase "Exactly Once" without several asterisks behind it. It might be exactly once from an "overall" point of view, but the client effectively needs infinitely durable infinite memory to perform the "distributed transaction" of acting on the message and responding to the server.

Imagine:

- Server delivers message M
- Client processes event E entailed by message M
- Client tries to ack (A) the message on the server, but "packet loss"
- To make matters worse, let's say the client also immediately dies after this

How do you handle this situation? The client must transactionally/simultaneously commit both E and A/intent-to-A. Since the server never received an acknowledgment of M, it will either redeliver the message, in which case some record of E must be kept to deduplicate on, or it will wait for the client to resend A, or some mixture of both. Note: if you say "just make E idempotent", then you don't need exactly-once delivery in the first place...

I suppose you could go back to some kind of lock-step processing of messages to avoid needing to record all (E, A) that are in flight, but that would obviously kill the throughput of the message queue.

Exactly Once can only ever be At Least Once with some out-of-the-box idempotency that may not be as cheap as the natural idempotency of your system.

EDIT: Recommended reading: "Life Beyond Distributed Transactions", Pat Helland - http://queue.acm.org/detail.cfm?id=3025012
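
One way to make the "record of E to deduplicate on" concrete: a minimal Python sketch (hypothetical names, with SQLite standing in for whatever durable store the client already has) in which the effect E and the seen-marker for M commit in a single local transaction, so a redelivery after a lost ack is harmless. The comment's throughput objection still applies: the processed table grows without bound unless you window it, which is exactly the asterisk under debate.

    # Sketch only: commit the effect E and the "seen M" marker atomically,
    # so a redelivered M after a lost ack A is detected and dropped.
    import sqlite3

    conn = sqlite3.connect("consumer_state.db")
    conn.execute("CREATE TABLE IF NOT EXISTS processed (message_id TEXT PRIMARY KEY)")
    conn.execute("CREATE TABLE IF NOT EXISTS effects (message_id TEXT, payload TEXT)")

    def handle(message_id, payload, ack):
        """Apply E at most once per message_id; `ack` is a hypothetical callback."""
        try:
            with conn:  # one atomic transaction covers both inserts
                conn.execute("INSERT INTO processed VALUES (?)", (message_id,))
                conn.execute("INSERT INTO effects VALUES (?, ?)", (message_id, payload))
        except sqlite3.IntegrityError:
            pass  # duplicate delivery: E is already committed, just re-ack
        ack(message_id)  # this ack can still be lost; redelivery is now harmless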

alexandercrohde · almost 8 years ago
Here's a radical solution. Instead of becoming a Scala-pro, Akka-Streams, 200k engineer with a cluster of Kafka nodes that costs your company over $100,000 in engineering time, technical debt, opportunity cost, and server costs, just put it all in Bigtable, with deduping by id...

Enough of resume-driven engineering. Why does everyone need to reinvent the wheel?

bmsatierf · almost 8 years ago
In terms of connectivity, we deal with a similar problem here at CloudWalk to process payment transactions from POS terminals, where most of them rely on GPRS connections.

Our network issues are nearly 6 times higher (~3.5%) due to GPRS, and we solved the duplication problem with an approach involving both the client and server side.

Clients always ensure that all the information sent by the server was successfully received. If something goes wrong, instead of retrying (sending the payment again), the client sends just the transaction UUID to the server, and the server responds with either: A. the corresponding response for the transaction, or B. not found.

In scenario A, the POS terminal managed to properly send all the information to the server but failed to receive the response.

In scenario B, the POS terminal didn't even manage to properly send the information to the server, so the POS can safely retry.
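
A rough sketch of that recovery path, with made-up endpoint names and plain HTTP standing in for the real POS protocol:

    # Sketch: on failure the client re-sends only the transaction UUID and the
    # server answers with the stored response (scenario A) or "not found" (B).
    import uuid
    import requests

    SERVER = "https://payments.example.invalid"  # placeholder endpoint

    def submit_payment(payment: dict) -> dict:
        txn_id = str(uuid.uuid4())
        body = {"transaction_id": txn_id, **payment}
        try:
            return requests.post(f"{SERVER}/payments", json=body, timeout=10).json()
        except requests.RequestException:
            status = requests.get(f"{SERVER}/payments/{txn_id}", timeout=10)
            if status.status_code == 404:
                # Scenario B: the server never saw the payment, so resending is safe.
                return requests.post(f"{SERVER}/payments", json=body, timeout=10).json()
            # Scenario A: the payment went through; only the response was lost.
            return status.json()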

falcolas · almost 8 years ago
So, a combination of best-effort "at least once" messaging with deduplication near the receiving edge. Fairly standard, honestly.

There is still potential for problems in message delivery to the endpoints (malformed messages, Kafka errors, messages not being consumed fast enough and lost), or duplication at that level (restart a listener on the Kafka stream with the wrong message ID) as well.

This is based on my own pains with Kinesis and Lambda (which, I know, isn't Kafka).

In my experience, it's better to just allow raw "at least once" messaging and perform idempotent actions based off the messages. It's not always possible (and harder when it is possible), but its trade-offs mean you're less likely to lose messages.

travisjeffery · almost 8 years ago
Kafka 0.11 (recently released) has exactly-once semantics and transactional messages built in.

- Talk from Kafka Summit: https://www.confluent.io/kafka-summit-nyc17/resource/#exactly-once-semantics_slide

- Proposal: https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging

ju-st · almost 8 years ago
52 requests, 5.4 MB and 8.63 seconds to load a simple blog post. With a bonus XHR request every 5 seconds.

StreamBright · almost 8 years ago
"The single requirement of all data pipelines is that they cannot lose data."

Unless the business value of the data is derived only after applying some summary statistics -- in that case even sampling works, and you can lose events in an event stream without changing the insight gained. Originally Kafka was designed as a high-throughput data bus for an analytical pipeline where losing messages was acceptable. More recently they have been experimenting with exactly-once delivery.

skMed · almost 8 years ago
Having built something similar with RabbitMQ in a high-volume industry, there are a lot of benefits people in this thread seem to be glossing over and are instead debating semantics. Yes, this is not "exactly once" -- there really is no such thing in a distributed system. The best you can hope for is that your edge consumers are idempotent.

There is a lot of value derived from de-duping near ingress of a heavy stream such as this. You're saving downstream consumers time (money) and potential headaches. You may be in an industry where duplicates *can* be handled by a legacy system, but it takes 5-10 minutes of manual checks and corrections by support staff. That was my exact use case, and I can't count the number of times we were thankful our de-duping handled "most" cases.

sethammons · almost 8 years ago
"Exactly Once*"

* Over a window of time that changes depending on the amount of ingested events.

Basically, they read from a Kafka stream and have a deduplication layer in RocksDB that produces to another Kafka stream. They process about 2.2 billion events through it per day.

While this will reduce duplicates and get closer to Exactly Once (helping reduce the two generals problem on incoming requests, and potentially for work inside their data center), they still have to face the same problem again when they push data out to their partners. Some packet loss, and they will be sending out duplicates to the partner.

Not to downplay what they have done, as we are doing a similar thing near our exit nodes to do our best to prevent duplicate events making it out of our system.
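
A stripped-down version of that read-dedupe-write loop, with an in-memory dict standing in for RocksDB and hypothetical consume/produce callables standing in for the Kafka clients; bounding the window and surviving crashes are the hard parts this sketch leaves out:

    # Sketch of a dedupe worker: at-least-once in, "mostly exactly once" out.
    def run(consume, produce):
        seen = {}  # RocksDB in the real pipeline, keyed by messageId
        for message_id, payload in consume():   # input Kafka topic
            if message_id in seen:
                continue                         # duplicate inside the window: drop it
            produce(payload)                     # output Kafka topic
            seen[message_id] = True              # a crash between these two lines
                                                 # re-emits a duplicate on restart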

incan1275 · almost 8 years ago
To be fair, they are upfront in the beginning about not being able to adhere to an exactly-once model.

"In the past three months we've built an entirely new de-duplication system to get as close as possible to exactly-once delivery"

What's annoying is that they do not get precise and formal about what they want out of their new model. Also, their numbers only speak to performance, not correctness.

On the plus side, I think it's awesome to see bloom filters successfully used in production. That sort of thing is easy to implement, but not easy to get right for every use case.

openasocket · almost 8 years ago
So there's a lot of talk on here about the Two Generals Problem, so I thought I'd chime in with some misconceptions about how the Two Generals Problem relates to Exactly Once Messaging (EOM). WARNING: I'm going mostly on memory with this, I could be completely wrong.

EOM is NOT strictly speaking equivalent to the Two Generals Problem, or Distributed Consensus, in an unreliable network. In distributed consensus, at some given point in time, A has to know X, A has to know B knows X, A has to know B knows A knows X, ... It has to do with the fact that the message broker is in some sense the arbitrator of truth, so the consumer(s) don't need full consensus. In an unreliable network, you can have EOM. http://ilpubs.stanford.edu:8090/483/1/2000-7.pdf gives some examples of how that works.

HOWEVER, you can't have EOM when the consumers can fail. If a consumer fails there's no way, in general, to tell if the last message it was working on was completed.

There are a couple of edge cases where you can still have EOM. For instance, a system where you have a message broker A, and a bunch of consumers that read messages x from that queue, compute f(x), and insert f(x) onto message broker B, where f(x) may be computed multiple times for the same x (i.e. if f is a pure function or you don't care about the side effects). This system can implement EOM in the presence of an unreliable network and consumer failures (I think it can handle one or both of the message brokers failing too, not 100% sure) in the sense that x will never be in broker A at the same time as f(x) is in broker B, f(x) will never be in broker B more than once for the same x, and any y in B had some x that was in A such that y = f(x).

siliconc0w · almost 8 years ago
Was thinking a 'reverse bloom filter' could be cool to possibly avoid the RocksDB for situations like this -- turns out it already exists: https://github.com/jmhodges/opposite_of_a_bloom_filter

I love it when that happens.
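
The data structure in that repo is simple enough to sketch from its description: a fixed-size table of recently seen IDs, indexed by hash, that answers "definitely seen" with no false positives but may forget entries when slots collide (a paraphrase of the idea, not the repo's actual code):

    # Sketch of an "opposite of a bloom filter": false negatives allowed,
    # false positives impossible, bounded memory.
    import hashlib

    class OppositeOfABloomFilter:
        def __init__(self, size=1 << 20):
            self.slots = [None] * size

        def _index(self, item: bytes) -> int:
            digest = hashlib.blake2b(item, digest_size=8).digest()
            return int.from_bytes(digest, "big") % len(self.slots)

        def seen_and_add(self, item: bytes) -> bool:
            i = self._index(item)
            seen = self.slots[i] == item   # exact match, so a True answer is never wrong
            self.slots[i] = item           # overwrite whatever collided into this slot
            return seen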

pfarnsworth · almost 8 years ago
Sounds very cool. A couple of questions I had:

1) What happens if they lose their RocksDB with all of the messageIds?

2) Is their Kafka at-least-once delivery? How do they guarantee that Kafka doesn't reject their write? Also, assuming they have set up their Kafka for at-least-once delivery, doesn't that make the output topic susceptible to duplicates due to retries, etc.?

3) > Instead of searching a central database for whether we've seen a key amongst hundreds of billions of messages, we're able to narrow our search space by orders of magnitude simply by routing to the right partition.

Is "orders of magnitude" really correct? Aren't you really just narrowing the search space by the number of partitions in Kafka? I suppose if you have a hundred partitions that would be two orders of magnitude, but it makes it sound like it's much more than that.

squiguy7 · almost 8 years ago
I wonder how they partition by the "messageId" they use to ensure that the de-duplication happens on the same worker. I would imagine that this affects their ability to add more brokers in the future.

Perhaps they expect a 1:1 mapping of RocksDB, partition, and de-duplication worker.
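
The usual answer is deterministic routing on the key, roughly as below (illustrative only, not necessarily how the post's pipeline does it), which is also why adding partitions later is painful: the mapping shifts and the per-partition RocksDB state no longer lines up.

    # Sketch: pin each messageId to one partition, and thus one dedupe worker.
    import hashlib

    def partition_for(message_id: str, num_partitions: int) -> int:
        digest = hashlib.md5(message_id.encode("utf-8")).digest()
        return int.from_bytes(digest[:8], "big") % num_partitions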

philovivero · almost 8 years ago
tl;dr: Clickbait headline. Exactly-once delivery is not even close to implemented. Typical de-duping, as you've seen and read about hundreds of times already, is what they did.

ratherbefuddled · almost 8 years ago
"Almost Exactly Once" doesn't have quite the same ring to it, but it is actually accurate. We've already discovered better trade-offs, haven't we?

iampims · almost 8 years ago
If the OP doesn't mind expanding a little on this bit, I'd be grateful.

> If the dedupe worker crashes for any reason or encounters an error from Kafka, when it re-starts it will first consult the "source of truth" for whether an event was published: the output topic.

Does this mean that "on worker crash" the worker replays the entire output topic and compares it to the RocksDB dataset?

Also, how do you handle scaling up or down the number of workers/partitions?

qsymmachus · almost 8 years ago
It's funny, at my company we implemented deduplication almost exactly the same way for our push notification sender.

The scale is smaller (about 10k rpm), but the basic idea is the same (store a message ID in a key-value store after each successful send).

I like the idea of invalidating records by overall size, we hadn't thought of that. We just use a fixed 24-hour TTL.
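
The fixed-TTL variant can be close to a one-liner if the key-value store offers an atomic set-if-absent with expiry; sketched here with redis-py as an assumed store:

    # Sketch: returns True only the first time a message ID is seen in 24 hours.
    import redis

    r = redis.Redis()

    def first_time_seen(message_id: str, ttl_seconds: int = 24 * 3600) -> bool:
        # SET key 1 NX EX ttl succeeds only if the key does not already exist.
        return bool(r.set(f"dedupe:{message_id}", 1, nx=True, ex=ttl_seconds))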

wonderwonder · almost 8 years ago
Would something like AWS SQS not scale for something like this? We currently push about 25k daily transactions over SQS, obviously nowhere near the scale of this; just wondering about what limitations we will potentially bump into.

linkmotif · almost 8 years ago
It's worth noting that the next major Kafka release (0.11, out soon) will include exactly-once semantics! With basically no configuration and no code changes for the user. Perhaps even more noteworthy is that this feature is built on top of a new transactions feature [0]. With this release, you'll be able to atomically write to multiple topics.

[0] https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging
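
For a sense of what that looks like from the producer side, a sketch with the confluent-kafka Python client (assuming a broker and client version with transaction support; the Java API is analogous):

    # Sketch: write to two topics atomically. Either both records become
    # visible to read_committed consumers, or neither does.
    from confluent_kafka import Producer

    producer = Producer({
        "bootstrap.servers": "localhost:9092",   # placeholder broker
        "transactional.id": "dedupe-worker-1",   # stable id per producer instance
        "enable.idempotence": True,
    })

    producer.init_transactions()
    producer.begin_transaction()
    producer.produce("output-topic", key=b"message-id", value=b"payload")
    producer.produce("audit-topic", key=b"message-id", value=b"payload")
    producer.commit_transaction()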

ggcampinho · almost 8 years ago
Isn't the new feature of Kafka about this?

https://issues.apache.org/jira/browse/KAFKA-4815

robotresearcher · almost 8 years ago
> [I]t's pretty much impossible to have messages only ever be delivered once.

IIRC, it's provably impossible in a distributed system where processes might fail, i.e. all real systems.

jkestelyn · almost 8 years ago
Relevant to this topic: a description of the exactly-once implementation in Google Cloud Dataflow, plus what "exactly once" means in the context of streaming:

https://cloud.google.com/blog/big-data/2017/05/after-lambda-exactly-once-processing-in-google-cloud-dataflow-part-1

(Google Cloud employee speaking)

kortox · almost 8 years ago
With deduplication state on the worker nodes, how does scaling up, or provisioning new machines, or moving a partition between machines work?

vgt · almost 8 years ago
Qubit's strategy to do this via streaming, leveraging Google Cloud Dataflow:

https://cloud.google.com/blog/big-data/2017/06/how-qubit-deduplicates-streaming-data-at-scale-with-google-cloud-platform

majidazimi · almost 8 years ago
What is so exciting about this? There is still the possibility of duplicates. You still have to put in the engineering effort to deal with duplicates end-to-end. If the code is there to deal with duplicates end-to-end, then does it really matter whether there are 5 duplicates or 35? Or maybe they just did it to add some cool tech to their CV?

redmalang · almost 8 years ago
Another possible approach: https://cloud.google.com/blog/big-data/2017/06/how-qubit-deduplicates-streaming-data-at-scale-with-google-cloud-platform

gsmethells · almost 8 years ago
Why do I get the feeling this is repeating TCP features at the message level? There must be a protocol that can hide this exactly-once need away. TCP doesn't generally create downloads that are bad and fail their checksum test, hence the packets that make up the file are not duplicated.

luord · almost 8 years ago
This is interesting work, but I think I'll continue relying on at-least-once delivery and idempotency. Exactly once is impossible anyway.

> In Python (aka pseudo-pseudocode)

This annoyed me probably more than it should have.

spullara · almost 8 years ago
This isn't the solution I would architect. It is much easier to de-duplicate when processing your analytics workload later and you don't need to do so much work.

PinguTS · almost 8 years ago
That reminds me of the safety-related protocols we have used for years in embedded electronics, like railroad signaling, medical devices, and other areas.

stratosgear · almost 8 years ago
Site seems to be down. Any ideas how big these HN hugs of death usually are? How big of a traffic spike brings these servers down?

mooneater · almost 8 years ago
Awesome story. What I would like to hear more about is the people side: the teams and personalities involved in coming up with this new system and the transition.

mamon · almost 8 years ago
The "exactly once" model of messaging is theoretically impossible to achieve in a distributed environment with a nonzero possibility of failure. If you haven't received an acknowledgement from the other side of the communication within the specified amount of time, you can only do one of two things:

1) do nothing, risking message loss

2) retransmit, risking duplication

But of course that's only from the messaging system's point of view. Deduplication at the receiver end can help reduce the problem, but it can itself fail (there is no foolproof way of implementing that pseudocode's "has_seen(message.id)" method).
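
That has_seen caveat in sketch form: the dedupe state is itself just more distributed state, so it narrows the duplicate window rather than closing it (a hypothetical minimal version):

    # Sketch: receiver-side dedup. If this set is lost, trimmed, or lagging
    # after a restart, the guarantee quietly degrades to "at least once" again.
    seen_ids = set()   # in reality: bounded and persisted, e.g. RocksDB

    def deliver(message, process):
        if message.id in seen_ids:     # has_seen(message.id)
            return                     # drop the duplicate
        seen_ids.add(message.id)
        process(message)               # crash right here and the message is
                                       # marked seen but never fully handled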

throwaway67 · almost 8 years ago
... or they could have used BigQuery with a primary key on message ID.