Hi HN, I'm one of the builders of Quickwit, a cloud-native OSS search engine for observability. As of 2023 we support logs and traces; metrics will come in 2024.

You know the pitch: software like Datadog or Splunk is great, but it often comes with a hefty price tag. Our mission is to offer an affordable alternative. So we built Quickwit, made it compatible with the observability ecosystem (OpenTelemetry, Jaeger, Grafana; there's a minimal ingest sketch at the end of this post), and above all we made it cost-efficient and "easy" to scale (well, it's never easy to scale to petabytes...).

To give you a sense of the engine's performance, I ran a benchmark on the GitHub Archive dataset, 23 TB of events. The main observations (back-of-the-envelope math at the end of the post):

Indexing: costs $2 per ingested TB. With 4 CPUs, throughput is 20 MB/s; on simpler datasets, like logs and traces, you'll see more than 30 MB/s.

Search: a typical query costs $0.0002 per scanned TB (counting both CPU time and GET request costs). With 8 CPUs, a simple query on 23 TB completes in under a second.

Storage: on S3, the GitHub Archive dataset costs $8 per ingested TB per month. With logs and traces, you might see around $5 per ingested TB thanks to a roughly 2x better compression ratio.

I'm eager to get your thoughts on this!

Benchmark: https://quickwit.io/blog/benchmarking-quickwit-engine-on-an-adversarial-dataset

GitHub repo: https://github.com/quickwit-oss/quickwit/

Website: https://quickwit.io/
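PS: for anyone who wants to sanity-check the numbers above, here is the back-of-the-envelope math in a few lines of Python. The instance price and the S3 rate are my working assumptions for this sketch, not figures from the benchmark post:

    # Rough check of the benchmark numbers; the two prices are assumptions.
    INSTANCE_HOURLY_USD = 0.15   # assumed price of a 4-vCPU instance
    S3_USD_PER_TB_MONTH = 23.0   # ~$0.023/GB-month, S3 Standard first tier

    # Indexing: 20 MB/s sustained on 4 vCPUs.
    hours_per_tb = (1_000_000 / 20) / 3600    # 1 TB = 1e6 MB -> ~13.9 hours
    print(f"indexing: ~${hours_per_tb * INSTANCE_HOURLY_USD:.2f} per ingested TB")  # ~$2.08

    # Storage: paying $8/month per ingested TB at ~$23/TB-month stored
    # implies GitHub Archive shrinks roughly 2.9x on disk.
    print(f"storage: ~{S3_USD_PER_TB_MONTH / 8:.1f}x compression implied")

    # Search: $0.0002 per scanned TB, so one query over the 23 TB dataset:
    print(f"search: ~${0.0002 * 23:.4f} per query")   # ~$0.0046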
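PPS: if you'd rather poke at the engine itself, here is a minimal ingest-and-search sketch against a local node. It assumes the default REST port (7280) and an index named "hn-logs" created beforehand; the index id and field names are placeholders, so check the repo docs for the exact API and index config:

    import json
    import requests

    QW = "http://localhost:7280/api/v1"

    # The ingest endpoint expects newline-delimited JSON documents.
    events = [
        {"timestamp": 1700000000, "severity": "INFO", "message": "app started"},
        {"timestamp": 1700000001, "severity": "ERROR", "message": "disk is full"},
    ]
    ndjson = "\n".join(json.dumps(e) for e in events)
    requests.post(f"{QW}/hn-logs/ingest", data=ndjson).raise_for_status()

    # Full-text search over the ingested documents.
    hits = requests.get(f"{QW}/hn-logs/search", params={"query": "severity:ERROR"})
    print(hits.json())

One caveat: ingestion is batched, so freshly posted documents typically become searchable after the next commit rather than instantly.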