
The problem with OpenTelemetry

168 points by robgering 11 months ago

34 comments

codereflection 11 months ago
I understand what the author is saying, but vendor lock-in with closed-source observability platforms is a significant challenge, especially for large organizations. When you instrument hundreds or thousands of applications with a specific tool, like the Datadog Agent, disentangling from that tool becomes nearly impossible without a massive investment of engineering time. In the Platform Engineering professional services space, we see this problem frequently. Enterprises are growing tired of big observability platform lock-in, especially given how opaque Datadog makes your spend on their products, for example.

One of the promises of OTEL is that it allows organizations to replace vendor-specific agents with OTEL collectors, keeping the choice of end observability platform flexible. When used with an observability pipeline (such as EdgeDelta or Cribl), you can re-process collected telemetry data and send it to another platform, like Splunk, if needed. Consequently, switching from one observability platform to another becomes a bit less of a headache. Ironically, even Splunk recognizes this and has put substantial support behind the OTEL standard.

OTEL is far from perfect, and maybe some of these goals are a bit lofty, but I can say that many large organizations are adopting OTEL for these reasons.
doctorpangloss 11 months ago
I don't know what the Sentry guy is really saying - I mean, you can write whatever code you want, go for it, man.

But I do have to run “pip uninstall sentry-sdk” in my Dockerfile because it clashes with something I didn't author. And because OpenTelemetry is completely open source, its flaws for my particular use case took an hour to surmount, and vitally, I didn't have to pay the brain-damage cost most developers hate: a relationship with yet another vendor.

That said, I appreciate all the innovation in this space, from both Sentry and OpenTelemetry. The metrics will become the standard, and that's great.

The problem with *not* OpenTelemetry: eventually everyone is going to learn how to use Kubernetes, and the USP of many startup offerings will vanish. OpenTelemetry and its feature scope creep make perfect sense for people who know Kubernetes. Then it makes sense why you have a wire protocol, why abstraction for vendors is redundant or meaningless toil, and why PostHog and others stop supporting Kubernetes: it competes with their paid offering.
ankitnayan 11 months ago
I think all of us agree that OpenTelemetry's end goal of making observability vendor-neutral is futuristic and inevitable. We can complain about it being hard to get started with, bloated, etc., but the value it provides is clear, especially when you are paying $$$ to a vendor and stuck with it.

Open standards also open up a lot of use cases and startups. SigNoz, TraceTest, TraceLoop, Signadot - all are very interesting projects that OpenTelemetry enabled.

The majority of the problem seems to be that Sentry is not able to provide its Sentry-like features by adopting OTel. Getting involved at the design phase could have helped shape the project so that it considered your use cases. The maintainers have never been opposed to such contributions, AFAIK.

Regarding limiting OTel to just tracing: that would not be sufficient today, as teams want a single platform for all observability rather than different tools for different signals.

I have seen hundreds of companies switch to OpenTelemetry and save costs by being able to choose the best vendor supporting their use cases.

Lack of docs, a learning curve, etc. are just temporary things that can happen with any big project and should be fixed. Also, OTel maintainers and teams have always been seeking help in improving docs, showcasing use cases, etc. If everyone cares about the bigger picture, the community and existing vendors should get more involved in improving things rather than just complaining.
no_circuit 11 months ago
IMO this boils down to how one gets paid to understand or misunderstand something. A telemetry provider/founder is being commoditized by an open specification in whose development they do not participate - implied by the post saying the author doesn't know anyone on the spec committee(s). No surprise here.

Of course, implementing a spec from the provider point of view can be difficult. And take a look at all the names in the OTEL community and notice that Sentry is not there: https://github.com/open-telemetry/community/blob/869410738168a7d8227f5ad4ecfd58d32c1d28e9/community-members.md. This really isn't news. I'd guess that a Sentry customer should just be able to use the OTEL API and configure a proprietary Sentry exporter for all their compute nodes, if Sentry has some superior way of collecting and managing telemetry.

IMO most library authors do not have to worry about annotation naming or anything like that mentioned in the post. Just use the OTEL API for logs, or use a logging API where there is an OTEL exporter, and whoever is integrating your code will take care of annotating spans. Propagating span IDs is the job of "RPC" libraries, not general code authors. Your URL fetch library should know how to propagate the span ID, provided that it also uses the OTEL API.

It is the same as using something like Docker containers on a serverless platform. You really don't need to know that your code is actually being deployed on Kubernetes. Using the common Docker interface is what matters.
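The propagation this comment describes can be sketched without any OTel dependency: W3C Trace Context is just an HTTP header carrying a trace ID and a parent span ID. A minimal, stdlib-only illustration (the helper names here are invented for the sketch; only the `traceparent` header format itself is from the spec):

```python
import secrets

def new_trace_context():
    """Start a trace: fresh 128-bit trace ID, fresh 64-bit span ID."""
    return secrets.token_hex(16), secrets.token_hex(8)

def make_traceparent(trace_id, span_id, sampled=True):
    """Render a W3C `traceparent` header (version 00)."""
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header):
    """Split a `traceparent` header back into trace ID and parent span ID."""
    version, trace_id, parent_span_id, flags = header.split("-")
    return trace_id, parent_span_id

# An "RPC" library would do this on every outgoing request:
trace_id, span_id = new_trace_context()
outgoing_headers = {"traceparent": make_traceparent(trace_id, span_id)}

# ...and the callee recovers the caller's context, so its own spans
# join the same trace with the caller's span as their parent:
remote_trace_id, parent_id = parse_traceparent(outgoing_headers["traceparent"])
assert remote_trace_id == trace_id and parent_id == span_id
```

This is why an application author mostly doesn't touch propagation: it lives in the HTTP client and server middleware.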
serverlessmom 11 months ago
An argument that OpenTelemetry is somehow 'too big' is an example of motivated reasoning. I can understand that A Guy Who Makes Money If You Use Sentry dislikes that people are using OTel libraries to solve similar problems.

Context propagation and distributed tracing are cool OTel features! But they are not the only things OTel should be doing. OpenTelemetry instrumentation libraries can do a lot on their own; a friend of mine made massive savings in compute efficiency with the Node.js OTel library: https://www.checklyhq.com/blog/coralogix-and-opentelemetry-on-checkly/
wdb 11 months ago
Personally, I like OpenTelemetry - a nice, standardised approach. I just wish the vendors had better support for the semantic conventions defined for a wide variety of traces.

I quite like the idea of only needing to change one small piece of code to switch OTel exporters, instead of swapping out a vendor trace SDK.

My main gripe with OpenTelemetry is that I don't fully understand the exact difference between (trace) events and log records.
AndreasBackx 11 months ago
I have been trying to find an equivalent for `tracing` - first in Python, and this week in TypeScript/JavaScript. At my work I created an internal post called "Better Python Logging? Tracing for Python?" that basically asks this question. OpenTelemetry was also what I looked at, and since then I have looked at other tooling.

It is hard to explain how convenient `tracing` is in Rust and why I sorely miss it elsewhere. The simple part of adding context to logs can be solved in a myriad of ways, yet they all boil down to a similar "span-like" approach. I'm very interested in helping bring what `tracing` offers to other programming communities.

It is very likely worth having some people from the space involved, possibly from the `tracing` crate itself.
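The "span-like" approach to attaching context to logs that this comment wants from Rust's `tracing` can be approximated in Python with `contextvars`. A stdlib-only sketch (all names here are invented for illustration, not any library's API):

```python
import contextlib
import contextvars
import json

_ctx = contextvars.ContextVar("log_context", default={})

@contextlib.contextmanager
def log_span(**fields):
    """Attach key/value context to every log line emitted inside the block."""
    token = _ctx.set({**_ctx.get(), **fields})
    try:
        yield
    finally:
        _ctx.reset(token)

def log(message):
    """Emit a structured log line carrying the current span context."""
    return json.dumps({"msg": message, **_ctx.get()})

lines = []
with log_span(request_id="r-42"):
    lines.append(log("start"))
    with log_span(user="alice"):      # spans nest; context accumulates
        lines.append(log("checking out"))
    lines.append(log("done"))         # inner context is popped again
```

Because `contextvars` is async-aware, the same pattern keeps context straight across concurrent tasks, which is the part ad-hoc global state tends to get wrong.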
wvh 11 months ago
I have surveyed this landscape for a number of years, though I'm not involved enough to have strong opinions. We're running a lot of Prometheus-ecosystem and even some OpenTelemetry stacks across customers. OpenTelemetry does seem like one of these projects with an ever-expanding scope. That makes it hard to integrate the parts you like and keep things lightweight - both computationally and mentally - without going all-in.

It's no longer about "hey, we'll include this little library or protocol instead of rolling our own, so we can hope to be compatible with a bunch of other industry-standard software." It's a large stack with an ever-evolving spec. You have to develop your applications and infrastructure around it. It's very seductive to roll your own simpler solution.

I appreciate that it's not easy to build industry-wide consensus across vendors, platforms and programming languages. But be careful with projects that fail to capture developer mindshare.
fractalwrench 11 months ago
The main interest I've seen in OTel from Android engineers has been driven by concerns around vendor lock-in. Backend/devops teams in their organisations are typically using OTel tooling already & want to see all telemetry in one place.

From this perspective, it doesn't matter if the OTel SDK comes bundled with a bunch of unnecessary code or version conflicts, as is suggested in the article. The whole point is to regain control over telemetry & avoid paying $$$ to an ambivalent vendor.

FWIW, I don't think the OTel implementation for mobile is perfect - a lot of the code was originally written with backend JVM apps in mind & that can cause friction. However, I'm fairly optimistic those pain points will get fixed as more folks converge on this standard.

Disclaimer: I work at a Sentry competitor.
markl42 11 months ago
At the risk of hijacking the comments: I've been trying to use OTel recently to debug the performance of a complex webpage with lots of async sibling spans, and finding it very, very difficult to identify the critical path / bottlenecks.

There are no causal relationships between sibling spans. In theory, "span links" solve this, but AFAICT this is not a widely used feature in SDKs or UI viewers.

(I wrote about this here: https://github.com/open-telemetry/opentelemetry-specification/issues/4079)
tnolet 11 months ago
A recent example of OTel confusion.

I could not for the life of me get the Python integration to send traces to a collector. Same URL, same setup, same API key as for Node.js and Go.

Turns out the Python SDK expects a URL-encoded header, e.g. “Bearer%20somekey”, whereas all the other SDKs just accept a string with a whitespace.

The whole split between HTTP, protobuf over HTTP and gRPC is also massively confusing.
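The quirk described above is easy to reproduce with nothing but the standard library: the value has to survive URL-decoding, so a literal space must be sent as `%20`. A stdlib-only sketch of the round trip (`OTEL_EXPORTER_OTLP_HEADERS` is the real env var name; the parsing function here is a simplified illustration, not the SDK's actual code):

```python
from urllib.parse import quote, unquote

# What the other SDKs accept verbatim:
raw_value = "Bearer somekey"

# What the Python SDK effectively requires in OTEL_EXPORTER_OTLP_HEADERS,
# because it URL-decodes each header value:
encoded = quote(raw_value)  # 'Bearer%20somekey'

def parse_otlp_headers(env_string):
    """Simplified sketch: split 'key1=val1,key2=val2' and URL-decode values."""
    headers = {}
    for pair in env_string.split(","):
        key, _, value = pair.partition("=")
        headers[key.strip()] = unquote(value)
    return headers

headers = parse_otlp_headers(f"authorization={encoded}")
assert headers["authorization"] == "Bearer somekey"
```

Passing the un-encoded `Bearer somekey` through the same decode step still yields a valid-looking string, which is exactly why the mismatch fails silently rather than loudly.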
NeutralForest 11 months ago
This resonates. As an intern, I had to add OTEL to a Python project, and I had to spend a lot of time in the docs to understand the concepts and the implementation. Also, the Python impl has a lot of global state that makes it hard to use properly, IMO.
BiteCode_dev 11 months ago
100% agree.

Every time I tried to use OT, I was reading the docs and whispering "but, why? I only need...".
spullara 11 months ago
There is a huge hole in using spans as they are specified. Without separating the start of a span from the end of a span, you can never see things that never complete, fail hard enough to not close the span, or travel through queues. This is a compromise they made because typical storage systems for tracing aren't really good enough to stitch it all back together quickly. Everyone should be sending events and stitching them together to create the view. But instead we get a least-common-denominator solution.
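The alternative this comment sketches - emit separate start and end events and reassemble spans at query time - can be illustrated in a few lines of stdlib Python (the event shape and field names are invented for this sketch):

```python
# Each telemetry event is emitted independently; a span only exists
# if we can later pair its "start" with its "end".
events = [
    {"span_id": "a1", "kind": "start", "name": "checkout", "ts": 100.0},
    {"span_id": "b2", "kind": "start", "name": "charge-card", "ts": 100.5},
    {"span_id": "b2", "kind": "end", "ts": 101.5},
    # "a1" never emits an end event - e.g. the process crashed.
]

def stitch(events):
    """Pair start/end events into spans; report the ones left dangling."""
    starts, spans = {}, []
    for ev in events:
        if ev["kind"] == "start":
            starts[ev["span_id"]] = ev
        else:
            start = starts.pop(ev["span_id"])
            spans.append({"name": start["name"],
                          "duration": ev["ts"] - start["ts"]})
    # Anything left in `starts` is visible as "started but never
    # finished" - exactly what single-record spans cannot show.
    open_spans = [s["name"] for s in starts.values()]
    return spans, open_spans

completed, never_finished = stitch(events)
```

Here `completed` holds the one finished span and `never_finished` surfaces the crashed `checkout` operation, which a span exported only on completion would have dropped entirely.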
drewbug01 11 months ago
As a contributor to (and consumer of) OpenTelemetry, I think critique and feedback are most welcome - and sorely needed.

But this ain't it. In the opening paragraphs, the author dismisses the hardest parts of the problem (presumably because they are *human* problems, which engineers tend to ignore), and betrays a complete lack of interest in understanding why things ended up this way. It also seems they've completely misunderstood the API/SDK split in its entirety - because they argue for having such a split. It's there - that's exactly what exists!

And it goes on and on. I think it's fair to critique OpenTelemetry; it can be really confusing. The blog post is evidence of that, certainly. But really it just reads like someone who got frustrated that they didn't understand how something worked - and so instead of figuring it out, they've decided that it's just hot garbage. I wish I could say this was unusual amongst engineers, but it isn't.
shaqbert 11 months ago
OTel is indeed quite complex. And the docs are not meant for quick wins...

Otelbin [0] has helped me quite a bit in configuring it, making sense of it, and getting stuff done.

[0]: https://www.otelbin.io/
epgui 11 months ago
Anyone else finding this very difficult to read? I’d really recommend feeding this through a grammar checker, because poor grammar betrays unclear thinking.
grenbys 11 months ago
I think there are two separate perspectives here. For developers, OpenTelemetry is a clear win - high-quality, vendor-agnostic instrumentation backed by reputable orgs. I instrumented many business-critical repos at my company (a major customer-support SaaS) with OTEL traces in Ruby, Python and JS. Not once was I confused/blocked/distracted by the presence of logs/metrics in the spec. However, I can't say much from the perspective of an observability vendor trying to be fully compatible with the OTEL spec, including metrics/logs.

The article mentions customers having issues with the tracing instrumentation - it would have been great to back this up with corresponding GitHub issues explaining the problems. Based on the presented JS snippet (just my guess), maybe the issue is with async code, where the "span.operation" span gets immediately closed without waiting for doTheThing()? Yeah - that's tricky in JS given its async primitives. We ended up just maintaining a global reference to the currently active span and patching some OTEL packages to respect it.

FWIW, Sentry's JS instrumentation IS really good and practical. It would have been great if Sentry could donate/contribute/influence specific improvements to the OTEL JS SIG - that would be a win-win. As much as I hate DataCanine pricing, they did effectively donate their Ruby tracing instrumentation to OTEL, which I think is one of the best ones out there.
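The async pitfall guessed at above - a span closing before the awaited work finishes - reproduces in any language with async primitives. A stdlib-only Python sketch (the `span` helper is invented for illustration, not an OTel API):

```python
import asyncio
import contextlib
import time

@contextlib.contextmanager
def span(name, durations):
    """Toy span: records wall time between enter and exit."""
    start = time.monotonic()
    try:
        yield
    finally:
        durations[name] = time.monotonic() - start

async def do_the_thing():
    await asyncio.sleep(0.05)  # stands in for real async work

async def main():
    durations = {}

    # Pitfall: the coroutine is scheduled but not awaited inside the
    # span, so the span closes in microseconds - the work is unmeasured.
    with span("broken", durations):
        task = asyncio.create_task(do_the_thing())
    await task

    # Fix: keep the span open across the await.
    with span("correct", durations):
        await do_the_thing()

    return durations

durations = asyncio.run(main())
```

`durations["broken"]` comes out near zero while `durations["correct"]` reflects the real ~50 ms of work - the same shape of bug as a JS span closed before a promise resolves.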
hobofan 11 months ago
This seems to be more of a branding problem than anything.

OP (rightfully) complains that there is a mismatch between what vendors (can) advertise ("We support OTEL") and what they actually provide to the user. I have the same pain point from the consumer side, where I have to trial multiple tools and services to figure out which of them actually supports the OTEL feature set I care about.

I feel like this could be solved by introducing better branding with a clearly defined scope of features inside the project (e.g. "OTEL Tracing"), which could serve as a direct signifier to customers of what feature set to expect.
antonyt 11 months ago
OTel is flawed, for sure, but I don't understand the stance against metrics and logs. Traces are inherently sampled unless you're lighting all your money on fire, or operating at so small a scale that these decisions have no real impact. There are kinds of metrics and logs you always want to emit because they're mission-critical in some way. Is this a Sentry-specific thing? Does it just collapse these three kinds of information into a single thing called a "trace"?
dboreham 11 months ago
I've used OTel quite a bit (in JVM systems) and honestly didn't know it did more than tracing.

That said, I think this rot comes from the commercial side of the sector - if you're a successful startup with one product (e.g. graphing counters), then your investors are going to start beating you up about why you don't expand into other adjacent product areas (e.g. tracing). Repeat the previous sentence reversed, and so you get Grafana, New Relic, et al. OpenTelemetry is just mirroring that arrangement.
edenfed 11 months ago
You can absolutely use just the OTel APIs and use something else besides the OTel SDK. Here is a blog post about how we did it with eBPF: https://odigos.io/blog/Integrating-manual-and-auto
prymitive 11 months ago
I only learned about OT after Prometheus announced some deeper integration with it. Reading the OT docs about metrics feels like every little problem has a dedicated solution in the OT world, even if a more generalised one already covers it. Which is quite striking coming from the Prometheus world.
PeterZaitsev 11 months ago
OpenTelemetry is interesting. On the one hand, it is designed as the "commodity feeder" for a number of proprietary backends such as DataDog; on the other hand, we see good development of open-source solutions such as SigNoz and Coroot with good OTel support.
ris 11 months ago
1. The main reason I want to use OTel is so I can have one sidecar for my observability, not three, each with subtly different quirks and expectations (plus the associated collection/aggregation infrastructure).

2. I honestly think the main reason OTel appears so complex is that the existing resources attempting to explain the various concepts around it do a poor job and are very hand-wavy. You know the main thing that made OTel "click" for me? Reading the protobuf specs. Literally nothing else explained succinctly the relationships between the different types of structure and what the possibilities with each were.
esafak 11 months ago
This caught my eye:

> Logs are just events - which is exactly what a span is, btw - and metrics are just abstractions out of those event properties. That is, you want to know the response time of an API endpoint? You don't rewind 20 years and increment a counter, you instead aggregate the duration of the relevant span segment. Somehow though, Logs and Metrics are still front and center.

Is anyone replacing logs and metrics with traces?
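The quoted idea - deriving a latency metric by aggregating span durations rather than incrementing a counter - can be sketched in a few stdlib lines (the span records below are invented sample data):

```python
from statistics import mean, quantiles

# Finished spans as flat records: name plus start/end timestamps (seconds).
spans = [
    {"name": "GET /api/items", "start": 0.00, "end": 0.12},
    {"name": "GET /api/items", "start": 1.00, "end": 1.30},
    {"name": "GET /api/items", "start": 2.00, "end": 2.18},
    {"name": "POST /api/cart", "start": 3.00, "end": 3.50},
]

def endpoint_latency(spans, name):
    """Aggregate span durations into metric-style summaries."""
    durations = [s["end"] - s["start"] for s in spans if s["name"] == name]
    return {
        "count": len(durations),
        "avg": mean(durations),
        "p95": quantiles(durations, n=100)[94],  # 95th percentile
    }

stats = endpoint_latency(spans, "GET /api/items")
# No counter was ever incremented; the metric falls out of the spans.
```

The catch, per the sibling comments, is sampling: this only yields trustworthy metrics if every span (or an unbiased sample) is retained, which is exactly where always-on counters still earn their keep.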
dtjohnnymonkey 11 months ago
> That means what we actually want is a way to say "hey OpenTelemetry SDK, give us all the current spans in the buffer".

Isn't this exactly what the SpanExporter API is for? It's in the Go SDK; I suppose it may not be available in other SDKs.

I have used this API to convert OTel spans into log messages, as we currently don't have a distributed tracing vendor.
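The spans-to-log-lines trick mentioned above amounts to a custom exporter that formats each finished span instead of shipping it to a vendor. A stdlib-only sketch of the idea (the class and span shape here are invented; the real OTel SDKs define their own SpanExporter interface and result types):

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("spans")

class LoggingSpanExporter:
    """Receives batches of finished spans and emits one log line each."""
    def export(self, spans):
        lines = [
            f"span name={s['name']} trace_id={s['trace_id']} "
            f"duration_ms={(s['end'] - s['start']) * 1000:.1f}"
            for s in spans
        ]
        for line in lines:
            log.info(line)
        return lines  # a real exporter would return a success/failure result

exporter = LoggingSpanExporter()
lines = exporter.export([
    {"name": "db.query", "trace_id": "beef", "start": 0.0, "end": 0.25},
])
```

With this shape, "give us the spans in the buffer" is just a matter of which processor feeds the exporter - batched, or one span at a time as they finish.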
dan-allen 11 months ago
I keep checking in on OpenTelemetry every few months to see if the bits we need are stable yet. There’s been very little progress on the things we’re waiting for.<p>I don’t follow closely enough to comment on possible causes.<p>What I do know is that the surface area of code and infrastructure that telemetry touches means adopting something unfinished is a big leap of faith.
cogman10 11 months ago
Perhaps the real problem with OTel (IMO) is that it's trying to be everything for everyone in every language. It's trying to have a common interface so that you can write OTel in Java or JavaScript, Python or Rust, and have basically the exact same API.

I suspect OP is seeing this directly when talking about the kludginess of the JavaScript API.
zellyn 11 months ago
Are they basically just saying that the OpenTelemetry client APIs should be split from the rest of the pieces of the project, and versioned super-conservatively?

The simple API they describe is basically there in OTel. The API is larger because it also does quite a few other things (personally, I think (W3C) Baggage is important too), but as a library author I should need only the client APIs to write to.

When implementing, you're free to plug in Providers that use OTel-provided plumbing, but you can equally well plug in Providers from DataDog or Sentry or whatever.

Unless I'm missing something, any further complaints could be solved by making sure the client APIs (almost) never have backward-incompatible changes, and are versioned separately.
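The API/SDK split described here - a thin, stable API that libraries code against, with a swappable provider doing the real work - can be sketched in stdlib Python (all names are invented for illustration; the real OTel API packages define their own interfaces):

```python
# The "API" half: a stable surface that library code depends on.
class NoOpTracer:
    """Default tracer: does nothing, costs (almost) nothing."""
    def start_span(self, name):
        return name  # a real API would return a span handle

_provider = NoOpTracer()

def set_tracer_provider(provider):
    """The application - not the library - decides which SDK to plug in."""
    global _provider
    _provider = provider

def get_tracer():
    return _provider

# The "SDK" half: one possible provider; a vendor could ship another.
class RecordingTracer:
    def __init__(self):
        self.spans = []
    def start_span(self, name):
        self.spans.append(name)
        return name

# Library code only ever touches the API...
def library_function():
    get_tracer().start_span("library_function")

library_function()        # no provider installed: silently a no-op

sdk = RecordingTracer()
set_tracer_provider(sdk)  # the app wires in an implementation
library_function()
assert sdk.spans == ["library_function"]
```

The versioning argument then falls out naturally: only the tiny API surface (here, `get_tracer`/`set_tracer_provider` and the span methods) has to stay backward-compatible; providers can churn freely behind it.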
EdSchouten 11 months ago
> It's not a hard problem, [...]. At its core it's structured events that carry two GUIDs along with them: a trace ID and a parent event ID. It is just building a tree.

I've always wondered: what's the point of the trace ID? What even is a trace?

- It could be a single database query invoked on a distributed database, giving you information about everything that went on inside the cluster while processing that query.

- Or it could be all the database calls made by a single page request on a web server.

- Or it could be a collection of page requests made by a single user as part of a shopping checkout process. Each page request could make many outgoing database calls.

Which of these three you should choose depends merely on what you want to visualize at a given point in time. My hope is that at some point we get a standard for tracing that does away with the notion of trace IDs. Just treat everything going on in the universe as a graph of inter-connected events.
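The "two GUIDs, just build a tree" model quoted above is compact enough to demonstrate directly. A stdlib-only sketch (the event records are invented sample data):

```python
from collections import defaultdict

# Structured events: each carries an ID and a parent event ID.
events = [
    {"id": "req",   "parent": None,  "name": "GET /checkout"},
    {"id": "db1",   "parent": "req", "name": "SELECT cart"},
    {"id": "db2",   "parent": "req", "name": "SELECT prices"},
    {"id": "cache", "parent": "db2", "name": "price-cache lookup"},
]

def build_tree(events):
    """Group events by parent ID, then render the tree from the root."""
    children = defaultdict(list)
    for ev in events:
        children[ev["parent"]].append(ev)
    lines = []
    def walk(parent, depth):
        for ev in children[parent]:
            lines.append("  " * depth + ev["name"])
            walk(ev["id"], depth + 1)
    walk(None, 0)
    return lines

tree = build_tree(events)
```

Notably, the parent IDs alone are enough to reconstruct the tree; in practice, the trace ID's main job is letting a backend fetch this particular set of events cheaply in the first place - which is part of what the comment is questioning.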
noname120 11 months ago
tl;dr: OpenTelemetry eats Sentry's cake by commoditizing what they do, and the reaction of Sentry's founder is to be very upset about it rather than to innovate.
jiveturkey 11 months ago
> Everyone and their mother is running a shoddy microservice-coupled stack,

Buried the lede!
syngrog66 11 months ago
Up my alley. I'm the author of a FOSS Golang span-instrumentation library for latency (LatLearn on my GitHub). And I was part of the team that, back in 2006/2007, built an in-house distributed tracing solution for Orbitz.