Begging people to recognize that a person who sells a solution is going to view these problems through the lens of being rewarded for applying their solution to your problem, even if it's not appropriate.

> Yet, per my own experience it's still extremely hard to explain what Charity meant by "logs are trash", let alone the fact that logs and traces are essentially the same things. Why is everyone so confused?

Charity is not confused, Charity is *incentivized*. What she means by "logs are trash" is "I do not sell a logging product." (And, to be clear, I'm only naming Charity individually here because that's who the author named in their article.)

> When I was working at Meta, I wasn't aware that I was privileged to be using the best observability system ever.

The observability system that is appropriate for Meta is not necessarily appropriate for your project. Those tools are cool, but they also require a pretty serious investment to build and operate correctly. It's *very* easy to wade into a cardinality explosion when you tag and index everything you can imagine (five tags with a hundred possible values each is potentially 100^5 = 10 billion distinct series); it's *very* easy to wade into mixed-retention-policy problems when some events are important and others are less important; it's *very* easy to wade into latency-sensitivity issues if you're building log/event collection infra that is never allowed to lose data; etc. As it turns out, observability is a large topic.

The idea that there's one "best" way to do observability is a little ridiculous. When I worked at Etsy, some of the data was literally money; when I worked at Jackbox Games, we made fart joke games (Quiplash, Drawful, Fibbage, You Don't Know Jack, etc.) and the infrastructure was nothing but pure cost. The observability needs of those two orgs were *phenomenally* different, because the products were different, the revenue models were different, the needs of the users were different, etc.

Also, the notion that "all you need is wide events" seems ... really shallow. A data point is an unordered set of key-value pairs? That's how ... a LOT of logging, metrics, and tracing infra already expresses things at the level of an individual record/event (see the sketch at the end of this comment). The difference is in the relationships between the keys and values, the relationships between the individual records, etc.

And "stop sampling" is just a bizarre marketing angle. If you get the same squiggly line out of analyzing 1 million records as out of 10 million, congrats: you have inflated the size of a dataset that nobody will ever look at. There is only one person this benefits, and it's the person who charges you for the pipeline, which is *exactly* why people who sell a pipeline are *incentivized* to tell you that sampling is bad: if you are sampling, you are sending and storing and querying fewer data points, so they are charging you less money. They are getting *paid* to tell you that sampling is bad. Sampling is not good or bad; sampling is sampling. The reality is that in a lot of these systems, the vast majority of the information will never, ever be looked at or used. Whether or not that matters is *entirely* context dependent.
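To make the "wide events" point concrete, here's a toy sketch in Python; every field name here is made up for illustration, not any vendor's schema:

    # A hypothetical "wide event": a flat map of key-value pairs.
    wide_event = {
        "timestamp": "2024-05-01T12:00:00Z",
        "service": "checkout",
        "route": "/cart/confirm",
        "status": 500,
        "duration_ms": 412,
        "user_plan": "free",
    }

    # At the level of a single record, a structured log line, a metric
    # sample, and a trace span all have the same shape:
    log_record   = {"timestamp": "...", "level": "error", "msg": "payment failed"}
    metric_point = {"timestamp": "...", "name": "request_duration_ms", "value": 412}
    span         = {"timestamp": "...", "trace_id": "abc123", "parent_id": "def456"}

    # The interesting differences are relational, not per-record: spans
    # link to each other via trace_id/parent_id, metric points form a
    # series via (name, labels) over time, and log records join on
    # whatever keys you happen to share. "Unordered set of key-value
    # pairs" describes all of them and distinguishes none of them.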
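And on the sampling point, a minimal sketch using synthetic data and uniform 10% sampling (real systems often do smarter weighted or tail-based sampling, but the point survives):

    import random

    random.seed(42)

    # A million synthetic request latencies, vaguely log-normal like real traffic.
    latencies_ms = [random.lognormvariate(3.0, 1.0) for _ in range(1_000_000)]

    # Keep ~10% of records, chosen uniformly at random.
    sampled = [x for x in latencies_ms if random.random() < 0.10]

    def p99(xs):
        """99th percentile by nearest rank."""
        xs = sorted(xs)
        return xs[int(0.99 * (len(xs) - 1))]

    print(f"full:   n={len(latencies_ms):>9,}  p99={p99(latencies_ms):6.1f} ms")
    print(f"sample: n={len(sampled):>9,}  p99={p99(sampled):6.1f} ms")

The two p99 values typically land within a percent or two of each other: the same squiggly line, from a tenth of the storage, transfer, and query cost. Whether that tradeoff is acceptable depends on whether you ever need the individual rare records, which is, again, entirely context dependent.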