This table schema: <a href="https://github.com/timescale/tsbs/blob/bcc00137d72d889e6059e247fcc343a5bc45991d/pkg/targets/clickhouse/creator.go#L146=" rel="nofollow">https://github.com/timescale/tsbs/blob/bcc00137d72d889e6059e...</a><p>...seems like quite an odd way to store time series in ClickHouse. If I understood that code correctly (and I am really not sure), they partition their data by some tag value (the first one in a list?) and sort each partition by "tags ID", while TimescaleDB partitions by time, afaik.<p>Of course there will be large discrepancies if data is sorted one way in one database schema and another way in another. It seems that at least their "ORDER BY time LIMIT 10" query would greatly benefit from partitioning or sorting the table by time.<p>Whether that makes sense depends on your use case. But I don't think a benchmark with completely different schemas, partitioning and primary keys across databases is fair.<p>Another thing I noticed is that their version of ClickHouse is quite old, at least around the time the test was written. The shown CREATE TABLE syntax has been deprecated for a few versions now and can no longer be found in recent docs, only on GitHub: <a href="https://github.com/ClickHouse/ClickHouse/blob/v18.16/docs/en/operations/table_engines/mergetree.md" rel="nofollow">https://github.com/ClickHouse/ClickHouse/blob/v18.16/docs/en...</a>
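To illustrate, here is a rough sketch (in the modern, non-deprecated syntax) of what a time-partitioned ClickHouse schema could look like; the table and column names below are made up for the example, not the ones TSBS actually generates:

```sql
-- Sketch only: a MergeTree table partitioned by day and sorted by time,
-- which a query like "ORDER BY time LIMIT 10" can exploit by reading
-- only the first few parts instead of scanning everything.
CREATE TABLE cpu
(
    created_at DateTime,
    tags_id    UInt32,
    usage_user Float64
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(created_at)
ORDER BY (created_at, tags_id);
```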
Really disappointing post from QuestDB. I would have expected them to do some research on how to design a ClickHouse table schema before running this kind of benchmark.
The queries used do not take the primary key / ORDER BY fields into account. Depending on the query to be optimized, one could use projections or materialized views. Perhaps a bit more work is needed, but that's the ClickHouse way of doing it.
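For example, a projection can keep a copy of the data re-sorted by a different key so time-ordered queries don't have to pay for the base table's sort order. A sketch, using a made-up table name:

```sql
-- Sketch only: add a projection that stores the rows re-sorted by time,
-- so queries ordering by created_at can read the projection instead of
-- the base table's tags-first order.
ALTER TABLE cpu ADD PROJECTION by_time
(
    SELECT * ORDER BY created_at
);

-- Rewrite existing parts so the projection covers old data too.
ALTER TABLE cpu MATERIALIZE PROJECTION by_time;
```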
Last year we released QuestDB 6.0 and achieved an ingestion rate of 1.4 million rows per second (per server). We compared those results to popular open source databases [1] and explained how we dealt with out-of-order ingestion under the hood while keeping the underlying storage model read-friendly. Since then, we have focused our efforts on making queries faster, in particular filter queries with WHERE clauses. To do so, we once again decided to build things from scratch and wrote a JIT (just-in-time) compiler for SQL filters, with tons of low-level optimisations such as SIMD. We then parallelized query execution to improve execution time even further. In this blog post, we first look at some benchmarks against ClickHouse and TimescaleDB, before digging deeper into how this all works within QuestDB's storage model. Once again, we use the Time Series Benchmark Suite (TSBS) [2], developed by TimescaleDB: it is an open-source and reproducible benchmark.<p>We'd love to get your feedback!<p>[1]:<a href="https://news.ycombinator.com/item?id=27411307" rel="nofollow">https://news.ycombinator.com/item?id=27411307</a><p>[2]:<a href="https://github.com/timescale/tsbs" rel="nofollow">https://github.com/timescale/tsbs</a>
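To give an idea of the workload, here is a hypothetical example (table and column names are illustrative, not from the benchmark) of the kind of filter-heavy query this targets; instead of interpreting the predicate row by row through an expression tree, the JIT compiles it into a tight native loop over the column data that SIMD can vectorize:

```sql
-- Illustrative filter query: the whole WHERE clause becomes one
-- compiled, vectorized scan over the usage_user and region columns.
SELECT count(), avg(usage_user)
FROM cpu
WHERE usage_user > 90 AND region = 'eu-west-1';
```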
This looks cool. I've been looking at time-series DBs lately and mostly landed on Timescale because of the freedom to query the dataset with the full Postgres kitchen sink.<p>The post here really focuses on one query, and that one, weirdly, has no time sort.
Would similar queries also be fast? What about joins, aggregates, lag() over windows, subqueries, unions, and similar queries?
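For instance, one of the query shapes I mean (made-up table, purely illustrative): the delta from the previous reading per series, via a window function.

```sql
-- Illustrative window query: compute each sensor's change from its
-- previous reading using lag() over a time-ordered window.
SELECT ts, sensor_id, value,
       value - lag(value) OVER (PARTITION BY sensor_id ORDER BY ts) AS delta
FROM readings;
```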
It is definitely useful to be able to consume a lot of data quickly, especially high-cardinality data. But an unbounded flood of data will eventually exhaust any finite storage. I'm wondering what QuestDB's story for data aggregation and cleanup looks like?