Not much I agree with in this article. It seems to be based on little operational experience with the product, as indicated in particular by a couple of major mistakes and assumptions (compaction does happen, and the manual's guidance on deployment configurations wasn't read carefully).

Loki has its idiosyncrasies, but they are there for a good reason. Anyone who has sat waiting hours for a Kibana or Splunk query to run just to get some information out will know what I'm referring to. You don't dragnet your entire log stream unless your logs are terrible, which needs to be fixed, or you don't know when something happened, which also needs fixing. I regularly watch people run queries that scan terabytes of data with gay abandon on older platforms and still never get what they need out.

The structured metadata distinction is important because when you query against it you are not using an index, just parsed-out data. That means you're explicitly not filtering, you're scanning, and that is expensive.

If you have a problem with finding things, then it's not the logging engine, it's the logs!
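For illustration, a minimal LogQL sketch of that filtering-versus-scanning distinction, assuming a hypothetical indexed stream label `service` and a structured-metadata field `trace_id`:

```logql
# Stream selector on indexed labels: Loki consults the index and only
# opens chunks belonging to the matching streams. This is filtering.
{service="checkout", env="prod"} |= "timeout"

# Label filter on structured metadata: trace_id is stored alongside each
# entry but is not in the index, so every chunk of the selected streams
# in the time range is read and compared line by line. This is scanning.
{service="checkout"} | trace_id="abc123"
```

Keeping the stream selector as narrow as possible before any structured-metadata or line filters is what keeps the scanned data volume down.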
Has anyone used both Grafana Loki and Kibana? Does Loki have any advantages over Kibana? I am mostly interested in resource usage and versatility of filtering.

In Kibana, if something is there I will find it with ease, and it doesn't take a lot of time to investigate issues in a microservice-based application. It is also quite fast.
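For what it's worth, a hedged sketch of what field-level filtering looks like on the Loki side in LogQL (the `app`, `status`, and `path` names are hypothetical):

```logql
# Narrow by indexed labels first, then parse JSON and filter on the
# extracted fields; roughly the equivalent of a Kibana field query
# such as: status:500 AND path:"/api/orders"
{app="orders", env="prod"}
  | json
  | status == 500
  | path = "/api/orders"
  | line_format "{{.method}} {{.path}} -> {{.status}}"
```

The trade-off is that everything after the stream selector is computed at query time, whereas Elasticsearch has already indexed those fields at ingest.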
@valyala, as others have noted, you are the CEO of VictoriaMetrics and have written (most of?) VictoriaLogs. How is VictoriaLogs coming along? This is an older blog post.
It's also not ideal to have a different query language for each Grafana data source (LogQL, PromQL, TraceQL). Are there any plans for a unified Grafana query language?
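There is already partial overlap today: LogQL metric queries deliberately borrow PromQL's outer syntax, so the divergence is mostly in the inner selector and pipeline. A hypothetical example:

```logql
# LogQL metric query: sum/rate and the "by" grouping come from PromQL,
# while the stream selector and the |= line filter are LogQL-specific.
sum by (service) (
  rate({env="prod"} |= "error" [5m])
)

# The PromQL near-equivalent against a metrics data source would be:
#   sum by (service) (rate(http_errors_total[5m]))
```

Whether that convergence ever becomes one unified language is a Grafana roadmap question rather than something the individual data sources can solve.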