Is it time to version observability?

84 points by RyeCombinator 10 months ago

14 comments

Veserv 10 months ago
They do not appear to understand the fundamental difference between logs, traces, and metrics. Sure, if you can log every event you want to record, then everything is just events (I will ignore the fact that they are still stuck on formatted text strings as an event format). The difference is what you do when you can not record everything you want, either at build time or at runtime.

Logs are independent. When you can not store every event, you can drop them randomly. You lose a perfect view of every logged event, but you still retain a statistical view. As we have already assumed you *can not* log everything, this is the best you can do anyway.

Traces are for correlated events where you want every correlated event (a trace) or none of them (or possibly the first N in a trace). Losing events within a trace makes the entire trace (or at least the latter portions) useless. When you can not store every event, you want to drop randomly at the whole-trace level.

Metrics are for situations where you know you can not log everything. You aggregate your data at log time, so instead of getting a statistically random sample you get aggregates that incorporate all of your data at the cost of precision.

Note that for the purposes of this post, I have ignored the reason why you can not store every event. That is an orthogonal discussion, and techniques that relieve that bottleneck allow more opportunities to stay on the happy path of "just events with post-processed analysis" that the author is advocating for.
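A minimal Go sketch of the three retention strategies this comment describes, under the assumption that you can not keep everything: random per-event sampling for independent logs, per-trace sampling keyed on the trace ID, and aggregation at record time for metrics. Names and rates are illustrative, not taken from any particular library.

    package obs

    import (
    	"hash/fnv"
    	"math/rand"
    	"sync/atomic"
    )

    // Independent logs: drop events uniformly at random, keeping a
    // statistical sample of everything that was logged.
    func keepLogEvent(sampleRate float64) bool {
    	return rand.Float64() < sampleRate
    }

    // Correlated traces: hash the trace ID so that every event in a
    // given trace is kept or dropped together, never partially.
    func keepTrace(traceID string, sampleRate float64) bool {
    	h := fnv.New32a()
    	h.Write([]byte(traceID))
    	return float64(h.Sum32())/float64(^uint32(0)) < sampleRate
    }

    // Metrics: aggregate at record time; no individual event is stored,
    // but every observation is reflected in the running total.
    type requestCounter struct{ total atomic.Int64 }

    func (c *requestCounter) observe() { c.total.Add(1) }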
archenemybuntu 10 months ago
I'm gonna strike a nerve and say most orgs overengineer observability. There's the whole topology of otel tools, Prometheus tools, and a bunch of long-term storage / querying solutions. Very complicated tracing setups. All of these are fine if you have a team dedicated to maintaining observability. But your average product-development org can sacrifice most of it and get by with proper logging with a request context, plus some important service-level metrics + Grafana + alarms.

The problem with all of the above tools is that they all seem like essential features to have, but once you have the whole topology of 50 half-baked CNCF containers set up in "production", shit starts to break in very mysterious ways, and these observability products also tend to cost a lot.
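As a rough sketch of the "proper logging with a request context" baseline the comment argues for, here is what that can look like with Go's standard log/slog package; the middleware and helper names are made up for illustration.

    package obs

    import (
    	"context"
    	"crypto/rand"
    	"encoding/hex"
    	"log/slog"
    	"net/http"
    	"os"
    )

    type ctxKey struct{}

    // WithRequestLogger attaches a logger carrying a per-request ID to the
    // context, so every log line written while handling the request can be
    // correlated with a single grep.
    func WithRequestLogger(next http.Handler) http.Handler {
    	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		id := make([]byte, 8)
    		rand.Read(id)
    		logger := slog.New(slog.NewJSONHandler(os.Stdout, nil)).
    			With("request_id", hex.EncodeToString(id), "path", r.URL.Path)
    		next.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), ctxKey{}, logger)))
    	})
    }

    // Logger pulls the request-scoped logger back out of the context.
    func Logger(ctx context.Context) *slog.Logger {
    	if l, ok := ctx.Value(ctxKey{}).(*slog.Logger); ok {
    		return l
    	}
    	return slog.Default()
    }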
rbetts 10 months ago
I feel like the focus on trace/log/metrics terminology is overshadowing Charity's comments on the presentation and navigation tier, which is really where the focus should be in my experience. Her point about making the curious more effective than the tenured is quite powerful.

Observability databases are quickly adopting columnar database technologies. This is well aligned with wide, sparse columns suitable to wide, structured logs. These systems map well to the query workloads, support the high-speed ingest rate, can tolerate some amount of buffering on the ingest path for efficiency, store a ton of data highly compressed, and now readily tier from local to cloud storage. Consolidating more of the fact table into this format makes a lot of sense - a lot more sense than running two or three separate database technologies specialized for metrics, logs, and traces. You can now end the cardinality miseries of legacy observability TSDBs.

But the magic sauce in observability platforms is making the rows in the fact table linkable and navigable - getting from a log message to a relevant trace; navigating from an error message in a span to a count of those errors filtered by region or deployment id... This is the complexity in building highly ergonomic observability platforms - all of the transformation, enrichment, and metadata management (and the UX to make it usable).
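A hypothetical example of the kind of pivot described above, expressed as a query over a single wide-event fact table; the table and column names are assumptions, not taken from any specific product.

    package obs

    // errorsByDeployment counts occurrences of one error message per
    // deployment over the last hour - the "error in a span to a count
    // filtered by deployment id" navigation, as a single query.
    const errorsByDeployment = `
    SELECT deployment_id, count(*) AS occurrences
    FROM events
    WHERE error_message = $1
      AND ts > now() - interval '1 hour'
    GROUP BY deployment_id
    ORDER BY occurrences DESC;`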
viraptor 10 months ago
This is quite frustrating to read. The whole set of assumed behaviours is wrong. I'm happily doing exactly what's described as 2.0 processes while using Datadog.

Charity's talk about costs is annoying too. Honeycomb is the most expensive solution I've seen so far. Until they put a "we'll match your logging+metrics contract cost for the same volume and features" guarantee on the pricing page, it's just empty talk.

Don't get me wrong, I love the Honeycomb service and what they're doing. I would love to use it. But this is just telling me "you're doing things wrong, you should do (things I'm already doing) using our system and save money (even though the pricing page disagrees)".
flockonus 10 months ago
> Y'all, Datadog and Prometheus are the last, best metrics-backed tools that will ever be built. You can't catch up to them or beat them at that; no one can. Do something different. Build for the next generation of software problems, not the last generation.

I heard a very similar thing from the Plenty Of Fish creator in 2012, and I unfortunately believed him: "the dating space was solved". Turns out it never was, and like every space, solutions will keep on changing.
zellyn 10 months ago
A few questions:

a) You're dismissing OTel, but if you _do_ want to do flame graphs, you need traces and spans, and standards (W3C Trace-Context, etc.) to propagate them.

b) What's the difference between an "Event" and a "Wide Log with Trace/Span attached"? Is it that you don't have to think of it only in the context of traces?

c) Periodically emitting wide events for metrics, once you had more than a few, would almost inevitably result in creating a common API for doing it, which would end up looking almost just like OTel metrics, no?

d) If you're clever, metrics histogram sketches can be combined usefully, unlike adding averages.

e) Aren't you just talking about storing a hell of a lot of data? Sure, it's easy not to worry and just throw anything into the Wide Log, as long as you don't have to care about the storage. But that's exactly what happens with every logging system I've used. Is sampling the answer? Like, you still have to send all the data, even from *very* high-QPS systems, so you can tail-sample later after the 24 microservice graph calls all complete?

Don't get me wrong, my years-long inability to adequately and clearly settle the simple theoretical question of "What's the difference between a normal old-school log, and a log attached to a trace/span, and which should I prefer?" has me biased towards your argument :-)
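Regarding point (c), here is a hedged sketch of what "periodically emitting wide events for metrics" tends to grow into: an in-process accumulator with a flush loop, which is already most of the shape of a metrics API. All names are illustrative.

    package obs

    import (
    	"context"
    	"log/slog"
    	"sync"
    	"time"
    )

    // wideMetrics accumulates counters in memory and flushes them once a
    // minute as a single structured "wide event".
    type wideMetrics struct {
    	mu       sync.Mutex
    	counters map[string]int64
    }

    func newWideMetrics() *wideMetrics {
    	return &wideMetrics{counters: make(map[string]int64)}
    }

    func (m *wideMetrics) inc(name string) {
    	m.mu.Lock()
    	m.counters[name]++
    	m.mu.Unlock()
    }

    func (m *wideMetrics) flushLoop(ctx context.Context) {
    	tick := time.NewTicker(time.Minute)
    	defer tick.Stop()
    	for {
    		select {
    		case <-ctx.Done():
    			return
    		case <-tick.C:
    			m.mu.Lock()
    			snapshot := m.counters
    			m.counters = make(map[string]int64)
    			m.mu.Unlock()
    			// One wide event per interval, carrying all counters.
    			slog.Info("metrics_flush", "counters", snapshot)
    		}
    	}
    }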
datadrivenangel 10 months ago
So the core idea is to move to arbitrarily wide logs?

Seems good in theory, except in practice it just defers the pain to later, like schema-on-read document databases.
firesteelrain 10 months ago
It took me a bit to really understand the versioning angle, but I think I understand it now.

The blog discusses the idea of evolving observability practices, suggesting a move from traditional methods (metrics, logs, traces) to a new approach where structured log events serve as a central, unified source of truth. The argument is that this shift represents a significant enough change to be considered a new version of observability, similar to how software is versioned when it undergoes major updates. This evolution would enable more precise and insightful software development and operations.

Unlike separate metrics, logs, and traces, structured log events combine these data types into a single, comprehensive source, simplifying analysis and troubleshooting.

Structured events capture more detailed context, making it easier to understand the "why" behind system behavior, not just the "what."
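To make the "single, comprehensive source" idea concrete, here is a speculative sketch of one wide structured event per unit of work; the field names are illustrative, not a schema from the post.

    package obs

    import (
    	"encoding/json"
    	"os"
    	"time"
    )

    // WideEvent mixes what would traditionally be a log line (message,
    // error), a trace (trace/span IDs), and metrics (duration, bytes)
    // into a single record.
    type WideEvent struct {
    	Timestamp  time.Time         `json:"ts"`
    	Service    string            `json:"service"`
    	TraceID    string            `json:"trace_id"`
    	SpanID     string            `json:"span_id"`
    	Route      string            `json:"route"`
    	Status     int               `json:"status"`
    	DurationMS float64           `json:"duration_ms"`
    	BytesOut   int64             `json:"bytes_out"`
    	Error      string            `json:"error,omitempty"`
    	Attrs      map[string]string `json:"attrs,omitempty"`
    }

    // Emit writes the event as one JSON line, ready to be shipped to
    // whatever backend does the aggregation and querying.
    func (e WideEvent) Emit() error {
    	return json.NewEncoder(os.Stdout).Encode(e)
    }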
tunesmith 10 months ago
Did I miss an elephant in the room?

Wide structured logging to log EVERYTHING? Isn't that just massively huge? I don't see how that would be cheaper.

Related Steven Wright joke: "I have a map of the United States... actual size. It says, 'Scale: 1 mile = 1 mile.' I spent last summer folding it. I hardly ever unroll it. People ask me where I live, and I say, 'E6.'"
xyzzy_plugh 10 months ago
I was excited by the title and thought that this was going to be about versioning the observability contracts of services, dashboards, alerts, etc., which are typically exceptionally brittle. Boy am I disappointed.

I get what Charity is shouting. And Honeycomb is incredible. But I think this framing overly simplifies things.

Let's step back and imagine everything emitted JSON only. No other form of telemetry is allowed. This is functionally equivalent to wide events, albeit inherently flawed and problematic, as I'll demonstrate.

Every time something happens somewhere you emit an Event object. You slurp these to a central place, and now you can count them, connect them as a graph, index and search, compress, transpose, etc. etc.

I agree, this works! Let's assume we build it and all the necessary query and aggregation tools, storage, dashboards, whatever. Hurray! But sooner or later you will have this problem: a developer comes to you and says "my service is falling over", and you'll look and see that for every 1 MiB of traffic it receives, it also sends roughly 1 MiB of traffic, but it produces 10 MiB of JSON Event objects. Possibly more. Look, this is a very complex service, or so they tell you.

You smile and tell them "not a problem! We'll simply pre-aggregate some of these events in the service and emit a periodic summary." Done and done.

Then you find out there's a certain request that causes problems, so you add more Events, but this also causes an unacceptable amount of Event traffic. Not to worry, we can add a special flag to only emit extra logs for certain requests, or we'll randomly add extra logging ~5% of the time. That should do it.

Great! It all works. That's the end of this story, but the result is that you've re-invented metrics and traces. Sure, logs (or "wide events", which are for the sake of this example the same thing) work well enough for almost everything, except of course for all the places they don't. And now where they don't, you have to reinvent all this *stuff*.

Metrics and traces solve these problems upfront in a way that's designed to accommodate scaling problems before you suffer an outage, without necessarily making your life significantly harder along the way. At least that's the intention, regardless of whether or not it's true in practice; it certainly isn't addressed by TFA.

What's more is that in practice metrics and traces *today* are in fact *wide events*. They're *metrics* events, or *tracing* events. It doesn't really matter if a metric ends up scraped from a Prometheus metrics page or emitted as a JSON log line. That's beside the point. The point is they are fit for purpose.

Observability 2.0 doesn't fix this, it just shifts the problem around. Remind me, how did we do things *before* Observability 1.0? Because as far as I can tell it's strikingly similar in appearance to Observability 2.0.

So forgive me if my interpretation of all of this is lipstick on the pig that is Observability 0.1.

And finally, I *get* that you *can* make it work. Google certainly gets that. But then they built Monarch anyway. Why? It's worth understanding, if you ask me. Perhaps we should start by educating the general audience on this matter, but then I'm guessing that would perhaps not aid in the sale of a solution that eschews those very learnings.
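The "special flag or ~5% random extra logging" workaround in this comment can be reinvented in a few lines, which is rather the commenter's point; the header name below is made up.

    package obs

    import (
    	"math/rand"
    	"net/http"
    )

    // verboseEnabled gates the extra, high-volume events: they are only
    // emitted when the caller sets a debug header or a ~5% random sample
    // hits, i.e. head sampling re-invented ad hoc.
    func verboseEnabled(r *http.Request) bool {
    	return r.Header.Get("X-Debug-Events") == "1" || rand.Float64() < 0.05
    }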
FridgeSeal 10 months ago
> My other hope is that people will stop building new observability startups built on metrics.

I mean, can you blame them?

Metrics alone are valuable and useful; the Prometheus text format and remote-write protocol are widely used, straightforward to implement, and a much, much, much smaller slice than "the entirety of the OpenTelemetry spec". Have you read those documents? Massive, sprawling, terminology for days, and confusingly written in places IMO. I know it's trying to cover a lot of bases all at once (logs, traces AND metrics) and design accordingly to handle all of them properly, so it's probably fine to deal with if you have a large enough team, but that's not everyone.

To say nothing of the full adoption of OpenTelemetry data. Prometheus is far from my favourite bit of tech, but setting up scraping and a Grafana dashboard involves way fewer shenanigans than setting up OpenTelemetry collection and validating that it's all correct and present, in my experience.

If someone prefers to tackle a slice like metrics only, and to do it better than the whole hog, more power to them IMO.
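For scale, a hand-rolled metrics endpoint in the Prometheus text exposition format really is only a few lines of Go; this is a sketch of that narrow slice, not a substitute for a proper client library.

    package obs

    import (
    	"fmt"
    	"net/http"
    	"sync/atomic"
    )

    var httpRequestsTotal atomic.Int64

    // metricsHandler serves a single counter in the Prometheus text
    // exposition format, ready to be scraped.
    func metricsHandler(w http.ResponseWriter, r *http.Request) {
    	fmt.Fprintln(w, "# HELP http_requests_total Total HTTP requests served.")
    	fmt.Fprintln(w, "# TYPE http_requests_total counter")
    	fmt.Fprintf(w, "http_requests_total %d\n", httpRequestsTotal.Load())
    }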
moomin 10 months ago
We came up with a buzzword to market our product. The industry made this buzzword meaningless. Now we’re coming up with a new one. We’re sure the same thing won’t happen again.
jrockway 10 months ago
I like the wide log model. At work, we write software that customers run for themselves. When it breaks, we can't exactly ssh in and mutate stuff until it works again, so we need some sort of information that they can upload to us. Logs are the easiest way to do that, and because logs are a key part of our product (a batch job runner for k8s), we already have infrastructure to store and retrieve logs. (What's built into k8s is sadly inadequate. The logs die when the pod dies.)

Anyway, from this we can get metrics and traces. For traces, we log the start and end of requests, and generate a unique ID at the start. Server logging contexts have the request's ID. Everything that happens for that request gets logged along with the request ID, so you can watch the request transit the system with "rg 453ca13b-aa96-4204-91df-316923f5f9ae" or whatever on an unpacked debug dump, which is rather efficient at moderate scale. For metrics, we just log stats when we know them; if we have some io.Writer that we're writing to, it can log "just wrote 1234 bytes", and then you can post-process that into useful statistics at whatever level of granularity you want ("how fast is the system as a whole sending data on the network?", "how fast is node X sending data on the network?", "how fast is request 453ca13b-aa96-4204-91df-316923f5f9ae sending data to the network?"). This doesn't scale quite as well, as a busy system with small writes is going to write a lot of logs. Our metrics package has per-context.Context aggregation, which cleans this up without requiring any locking across requests like Prometheus does: https://github.com/pachyderm/pachyderm/blob/master/src/internal/meters/meters.go

Finally, when I got tired of having 43 terminal windows open with a bunch of "less" sessions over the logs, I hacked something together to do a light JSON parse on each line and send the logs to Postgres: https://github.com/pachyderm/pachyderm/blob/master/src/internal/cmd/load-debug-dump-into-postgres/main.go. It is slow to load a big dump, but the queries are surprisingly fast. My favorite thing to do is "select * from logs where json->'x-request-id' = '453ca13b-aa96-4204-91df-316923f5f9ae' order by time asc" or whatever. Then I don't have 5 different log files open to watch a single request; it's all just there in my psql window.

As many people will say, this analysis method doesn't scale in the same way as something like Jaeger (which scales by deleting 99% of your data) or Prometheus (which scales by throwing away per-request information), but it does let you drill down as deep as necessary, which is important when you have one customer that had one bad request and you absolutely positively have to fix it.

My TL;DR is that if you're a 3-person team writing some software from scratch this afternoon, "print" is a pretty good observability stack. You can add complexity later. Just capture what you need to debug today, and this will last you a very long time. (I wrote the monitoring system for Google Fiber CPE devices... they just sent us their logs every minute and we did some very simple analysis to feed an alerting system; for everything else, a quick MapReduce or dremel invocation over the raw log lines was more than adequate for anything we needed to figure out.)
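A small sketch of the "log stats when we know them" io.Writer idea from this comment, assuming a request-scoped slog.Logger like the one described above; the type and field names are illustrative, not from the linked package.

    package obs

    import (
    	"io"
    	"log/slog"
    )

    // loggingWriter wraps any io.Writer and logs each write through the
    // request-scoped logger, so "just wrote 1234 bytes" lines can later be
    // post-processed into per-request, per-node, or whole-system throughput.
    type loggingWriter struct {
    	w   io.Writer
    	log *slog.Logger
    }

    func (lw loggingWriter) Write(p []byte) (int, error) {
    	n, err := lw.w.Write(p)
    	lw.log.Info("wrote bytes", "bytes", n)
    	return n, err
    }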
amelius 10 months ago
I can't even run valgrind on many libraries and Python modules because they weren't designed with valgrind in mind. Let's work on observability before we version it.