I've been using Kafka professionally for more than 10 years, since 0.8, when consumer groups didn't even exist yet. In my opinion this post exaggerates a lot of things to promote their product.
We don't have giant clusters, but we routinely do more than a million messages produced/s, so it's not a completely trivial load.<p>Configuration complexity: there are a couple of things we had to tune over the years, mainly around the log cleaner once we started leveraging compacted topics, but other than that it's pretty much the default config. Is it optimal? No, but it's fast enough. Hardware choice in my opinion is not really an issue: we started on HDDs and switched to SSDs later on, and the cluster kept working just fine with the same configuration.<p>Scaling, I'll grant, can be a pain. We had to scale our clusters mainly for two reasons: 1) more services want to use Kafka, so there are more topics and more data to serve. This is not that hard to scale: just add brokers for more capacity. 2) when you need more partitions for a topic; we had to do this a couple of times over the years, and it's annoying because the default tooling for data redistribution is bad. We ended up using a third-party tool (today Cruise Control does this nicely).<p>Maintenance: yes, you need to monitor your stuff, just like any other system you deploy on your own hardware. Thankfully monitoring Kafka is not _that_ hard; there are ready-made solutions to export the JMX monitoring data. We've used Prometheus (prometheus-jmx-exporter and node_exporter) almost since the beginning and it works fine. We're still using ZooKeeper, though thankfully that's no longer necessary; I'll just say our ZooKeeper clusters have been rock solid over the years.<p>Development overheads: I really can't agree with that. Yes, the "main" ecosystem is Java based, but it's not like librdkafka doesn't exist, and third-party libraries are not all "sub par"; that's just a mischaracterization. We've used Go with sarama since 2014 and recently switched to franz-go: both work great. You do need to properly evaluate your options, though (but that's part of your job).
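To give an idea of how little is involved in the JMX-export approach, here's a minimal sketch of a prometheus-jmx-exporter config for a broker; the specific metric pattern and names are illustrative, not our production rules:

```yaml
# Minimal prometheus-jmx-exporter config for a Kafka broker (illustrative sketch).
lowercaseOutputName: true
rules:
  # Expose the broker-wide incoming message rate as a gauge.
  - pattern: kafka.server<type=BrokerTopicMetrics, name=MessagesInPerSec><>OneMinuteRate
    name: kafka_server_messages_in_per_sec
    type: GAUGE
  # Catch-all: export remaining MBeans with default naming.
  - pattern: ".*"
```

You run the exporter as a Java agent on each broker and point Prometheus at it; node_exporter covers the host-level metrics (disk, network) separately.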
With that said, if I were starting from scratch I would absolutely suggest starting with Kafka Streams, even if your team doesn't have Java experience (learning Java isn't that hard), simply because it makes building a data pipeline straightforward and handles a lot of the complexities mentioned above.
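As a taste of how little code a basic pipeline takes, here's a hedged sketch of a Kafka Streams topology; the topic names and the transformation are made up for illustration:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;

public class PipelineSketch {
    // Builds a trivial read-transform-write topology.
    // "input-events" and "processed-events" are hypothetical topic names.
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("input-events");
        events
            .filter((key, value) -> value != null && !value.isEmpty())
            .mapValues(value -> value.toUpperCase())
            .to("processed-events");
        return builder.build();
    }

    public static void main(String[] args) {
        // Printing the topology description is enough to see the wiring;
        // actually running it requires a KafkaStreams instance and a broker.
        System.out.println(build().describe());
    }
}
```

The library handles consumer-group management, rebalancing, and state for you; you mostly write the per-record logic.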