
Scaling Kafka at Honeycomb

166 points by i0exception over 3 years ago

10 comments

rubiquity over 3 years ago
I've never used Kafka, but this post is yet another hard-earned lesson in log replication systems where storage tiering should be much higher on the hierarchy of needs than horizontal scaling of individual logs/topics/streams. In my experience, by the time you need storage tiering, something awful is already happening.

During network partitions or other scenarios where your disks are filling up quickly, it's much easier to reason about how to get your log healthy by aggressively offloading to tiered storage and trimming than it is to re-partition (read: reconfigure), which often requires writes to some consensus-backed metadata store, which is likely experiencing its own issues at that time.

Another great benefit of storage tiering is that you can externally communicate a shorter data retention period than you actually have in practice, while you really put your recovery and replay systems through their paces to get the confidence you need. Tiered storage can also be a great place to bootstrap new nodes from.
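To make the headroom argument concrete, here is a rough back-of-envelope sketch in Python; the disk size, write rate, and safety margin are made-up assumptions, not figures from the article:

```python
# How many hours of log can one broker keep locally before its disk
# crosses a safety threshold? If that number is shrinking fast, trimming
# (and offloading to tiered storage) is the lever that needs no partition
# reassignment and no writes to a consensus-backed metadata store.

def max_local_retention_hours(disk_bytes: float,
                              broker_write_bytes_per_sec: float,
                              safety_margin: float = 0.8) -> float:
    """Hours of data a broker can hold before hitting the safety margin."""
    usable = disk_bytes * safety_margin
    return usable / broker_write_bytes_per_sec / 3600.0

# Hypothetical inputs: 2 TB of NVMe per broker, 50 MB/s of replicated
# inbound traffic landing on this broker.
hours = max_local_retention_hours(disk_bytes=2e12,
                                  broker_write_bytes_per_sec=50e6)
print(f"~{hours:.1f} hours of local retention before the disk is 80% full")
# Anything older than that can live in tiered storage and be trimmed
# locally, without reconfiguring partitions.
```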
EdwardDiego over 3 years ago
> July 2019 we did a rolling restart to convert from self-packaged Kafka 0.10.0

Ouch, that's a lot of fixed bugs you weren't reaping the benefits of >_< What was the reason to stick on 0.10.0 for so long?

After we hit a few bad ones that finally convinced our sysop team to move past 0.11.x, life was far better - especially recovery speed after an unclean shutdown. Used to take two hours, dropped to like 10 minutes.

There was a particular bug I can't find for the life of me that we hit about four times in one year where the replicas would get confused about where the high watermark was, and refuse to fetch from the leader. Although to be fair to Kafka 0.10.x, I think that was a bug introduced in 0.11.0. Which is where I developed my personal philosophy of "never upgrade to a x.x.0 Kafka release if it can be avoided."

> The toil of handling reassigning partitions during broker replacement by hand every time one of the instances was terminated by AWS began to grate upon us

I see you like Cruise Control in the Confluent Platform, did you try it earlier?

> In October 2020, Confluent announced Confluent Platform 6.0 with Tiered Storage support

Tiered storage is slowly coming to FOSS Kafka, hopefully in 3.2.0, thanks to some very nice developers from AirBnB. Credit to the StreamNative team that FOSS Pulsar has tiered storage (and schema registry) built-in.
krnaveen14 over 3 years ago
Based on our experience with Apache Kafka and alternative streaming systems, Apache Pulsar natively addresses Honeycomb's needs:

- Decoupling of broker & storage layer

- Tiered storage (SSD, HDD, S3, ...)

We use both Kafka and Pulsar in our systems:

- Kafka is used for microservices communication and operational data sharing

- Pulsar is used for streaming large customer data in thousands of topics
juliansimioni over 3 years ago
I'm not a Kafka user, but it seems very similar to my experience running Elasticsearch clusters tuned for low latency response time.

There's a complicated mix of requirements for CPU, memory, disk, _and_ network speed, and meeting all of them cost effectively is a real challenge.

Similarly, it's easy to build a cluster that performs well until a single node fails. The increased load per node plus the CPU and network cost of replicating data to a replacement instance can really cause trouble.

Elasticsearch also runs on the JVM, so I'm hoping the new EC2 instance types will work for us too. They look to be really great.
rmb938 over 3 years ago
Maybe I missed it, but are you able to talk about how many messages a second, the partition count, and the average message size?

I run a few hundred Kafka clusters with message counts per second in the tens of millions for some clusters, a few thousand partitions, and message sizes around 7kb with gzip compression, and have never needed the amount of CPU and network/disk throughput mentioned, with node counts ranging between ~10-25. Most of my clusters reaching those speeds average at most around 7Gbps of disk throughput per broker.

I have recently started running Kafka in GCP with their balanced SSD disks capping out at 1.2Gbps, and I'm not seeing much of a performance impact. It requires a few more brokers to reach the same throughput, but I'm not having any of the performance and scaling issues mentioned in this post.

My brokers are sized a bit differently than mentioned in the post as well: a low amount of CPU (maximum 20ish cores) but much more memory, around 248GB for my larger clusters. So maybe that has to do with it? Maybe the broker sizes that were chosen are not ideal for the workload?

Maybe I've been lucky in my setups, but I would like to know a bit more. Having been running Kafka since the 0.10 days and now on 2.6 for all my clusters, this type of performance problem seems a bit puzzling.
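For what it's worth, a quick sanity check of how figures like these relate to per-broker disk throughput; the compression ratio and replication factor below are assumptions, not numbers from the comment:

```python
# Back-of-envelope: cluster message rate -> disk write throughput per broker.

def per_broker_gbps(msgs_per_sec: float, avg_msg_bytes: float,
                    compression_ratio: float, replication_factor: int,
                    brokers: int) -> float:
    """Approximate replicated disk write throughput per broker, in Gbit/s."""
    compressed = msgs_per_sec * avg_msg_bytes * compression_ratio
    cluster_bytes_per_sec = compressed * replication_factor
    return cluster_bytes_per_sec * 8 / brokers / 1e9

# Assumptions loosely based on the figures above: 10M msg/s, 7 KB messages
# before compression, gzip shrinking them ~10x, replication factor 3,
# spread over 25 brokers.
print(f"~{per_broker_gbps(10e6, 7_000, 0.1, 3, 25):.1f} Gbit/s per broker")
# Comes out in the same ballpark as the ~7 Gbps quoted above.
```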
StreamBright over 3 years ago
>> Historically, our business requirements have meant keeping a buffer of 24 to 48 hours of data to guard against the risk of a bug in retriever corrupting customer data.

I have used much larger buffers before. Some bugs can lurk around for a while before being noticed. For example, the lack of something is much harder to notice.
bashtoni over 3 years ago
This is an awesome write-up. I love reading these warts-and-all accounts - they're always way more useful than the typical "we switched to X and it saved us Y%!" marketing case studies.

One point that makes Intel not look quite so bad performance-wise - based on my own benchmarking, I'm pretty sure when this article talks about cores they actually mean vCPUs. In AWS on x86, 1 vCPU is 1 hyperthread, so it's kind of half a core. On Graviton 2, 1 vCPU is one full core; the CPUs don't have hyperthreading. This means that you need 10 Intel cores to do the same work as 16 Graviton cores, not 20. This of course doesn't change the cost savings from switching to arm64.
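The arithmetic, spelled out as a tiny sketch (the instance sizes are placeholders, not the article's actual shapes):

```python
# x86 in AWS: 1 vCPU = 1 hyperthread, so two vCPUs share one physical core.
intel_vcpus = 20
intel_physical_cores = intel_vcpus // 2   # -> 10 real cores

# Graviton 2: no SMT, so 1 vCPU = 1 physical core.
graviton2_cores = 16

print(f"{intel_physical_cores} Intel cores vs {graviton2_cores} Graviton 2 "
      f"cores for the same work - not {intel_vcpus} vs {graviton2_cores}")
```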
sealjam over 3 years ago
> …RedPanda, a scratch backend rewrite in Rust that is client API compatible

I thought RedPanda was mostly C++?
throwaway984393 over 3 years ago
I'm still just reading the first couple of sections, but I already want to give props for an excellent write-up. You're explaining in (mostly) plain language the purpose of Kafka, your specific application of it, and lots of great background detail and history of your implementation's evolution. It's also clear that whoever wrote this knows their Ops. Thank you!
mherdeg over 3 years ago
It's funny how my bugbears from interacting with distributed async messaging (Kafka) are like 90 degrees orthogonal from the things described here:

(1) Occasionally I have wanted to wonder *what the actual traffic is*. This takes extra software work (writing some kind of inspector tool to consume a sample message and produce a human-readable version of what's inside it).

(2) Sometimes I see problems which happen at the broker-partition or partition-consumer assignment level, and tools for visualizing this are really messy.

For example, you have 200 partitions and 198 consumer threads -- this means that because of the pigeonhole principle there are 2 threads which own 2 partitions. Randomly, 1% of your data processing will take twice as long, which can be very hard to visualize.

Or, for example, 10 of your 200 partitions are managed by broker B which, for some reason, is mishandling messages -- so 5% of messages are being handled poorly, which may not emerge in your metrics the way you expect. Viewing slowness by partition, by owning consumer, and by managing broker can be tricky to remember to do when operating the system.

(3) Provisioning capacity to have n-k availability (so that availability-zone-wide outages as well as deployments/upgrades don't hurt processing) can be tricky.

How many messages per second are arriving? What is the mean processing time per message? How many processors (partitions) do you need to keep up? How much *slack* do you have -- how much excess capacity is there above the typical message arrival rate, so that you can model how long it will take the cluster to process a backlog after an outage?

(4) Remembering how to scale up when the message arrival rate grows feels like a bit of a chore. You have to increase the number of partitions to be able to handle the new messages ... but then you also have to remember to scale up every consumer. You did remember that, right? And you know you can't ever reduce the partition count, right?

(5) I often end up wondering what the processing latency is. You can approximate this by dividing the total backlog of unprocessed messages for an entire consumer group (unit "messages") by the message arrival rate (unit "arriving messages per second"), which gets you something that has dimensionality of "seconds" and represents a quasi processing lag. But the lag is often different per-partition.

Better is to teach the application-level consumer library to emit a metric about how long processing took and how old the message it evaluated was - then, as long as processing is still happening, you can measure delays. Both are messy metrics that need you to get and remain hands-on with the data to understand them.

(6) There's a complicated relationship between "processing time per message" and effective capacity -- any application changes which make a Kafka consumer slower may not have immediate effects on end-to-end lag SLIs, but they may increase the amount of parallelism needed to handle peak traffic, and this can be tough to reason about.

(7) Planning only ex post facto for processing outages is always a pain. More than once I've heard teams say "this outage would be a lot shorter if we had built in a way to process newly arrived messages first", and I've even seen folks jury-rig LIFO by e.g. changing the topic name for newly arrived messages and using the previous queue as a backlog only.

I wonder if my clusters have just been too small? The stuff here ("how can we afford to operate this at scale?") is super interesting, just not the reliability stuff I've worried about day-to-day.
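Two of those points are easy to make concrete with a few lines of Python; the backlog and arrival-rate numbers below are invented for illustration, not from any real cluster:

```python
from collections import Counter

# (2) Pigeonhole effect: 200 partitions assigned round-robin to 198 consumer
# threads leaves exactly two threads owning two partitions each.
partitions, consumers = 200, 198
owners = Counter(p % consumers for p in range(partitions))
print(Counter(owners.values()))   # Counter({1: 196, 2: 2})

# (5) Quasi processing lag: total consumer-group backlog divided by the
# arrival rate has units of seconds, but it hides per-partition skew.
backlog_msgs = 1_200_000   # hypothetical unprocessed messages across the group
arrival_per_sec = 40_000   # hypothetical message arrival rate
print(f"~{backlog_msgs / arrival_per_sec:.0f} s of estimated lag")
```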