
Show HN: Query 1.6B rows in milliseconds, live

271 points by bluestreak almost 5 years ago

25 comments

bluestreak almost 5 years ago

Author here.

A few weeks ago, we wrote about how we implemented SIMD instructions to aggregate a billion rows in milliseconds [1], thanks in great part to Agner Fog's VCL library [2]. Although the initial scope was limited to table-wide aggregates into a unique scalar value, this was a first step towards very promising results on more complex aggregations. With the latest release of QuestDB, we are extending this level of performance to key-based aggregations.

To do this, we implemented Google's fast hash table, aka "Swisstable" [3], which can be found in the Abseil library [4]. In all modesty, we also found room to slightly accelerate it for our use case. Our version of Swisstable is dubbed "rosti", after the traditional Swiss dish [5]. There were also a number of improvements thanks to techniques suggested by the community, such as prefetch (which, interestingly, turned out to have no effect in the map code itself) [6]. Besides C++, we used our very own queue system written in Java to parallelise the execution [7].

The results are remarkable: millisecond latency on keyed aggregations that span billions of rows.

We thought it could be a good occasion to show our progress by making this latest release available to try online with a pre-loaded dataset. It runs on an AWS instance using 23 threads. The data is stored on disk and includes a 1.6 billion-row NYC taxi dataset, 10 years of weather data with around 30-minute resolution, and weekly gas prices over the last decade. The instance is located in London, so folks outside of Europe may experience different network latencies. The server-side time is reported as "Execute".

We provide sample queries to get started, but you are encouraged to modify them. However, please be aware that not every type of query is fast yet. Some are still running under an old single-threaded model. If you find one of these, you'll know: it will take minutes instead of milliseconds. But bear with us, this is just a matter of time before we make these instantaneous as well. Next in our crosshairs is time-bucket aggregations using the SAMPLE BY clause.

If you are interested in checking out how we did this, our code is available open-source [8]. We look forward to receiving your feedback on our work so far. Even better, we would love to hear more ideas to further improve performance. Even after decades in high performance computing, we are still learning something new every day.

[1] https://questdb.io/blog/2020/04/02/using-simd-to-aggregate-billions-of-rows-per-second
[2] https://www.agner.org/optimize/vectorclass.pdf
[3] https://www.youtube.com/watch?v=ncHmEUmJZf4
[4] https://github.com/abseil/abseil-cpp
[5] https://github.com/questdb/questdb/blob/master/core/src/main/c/share/rosti.h
[6] https://github.com/questdb/questdb/blob/master/core/src/main/c/share/vec_agg.cpp#L155
[7] https://questdb.io/blog/2020/03/15/interthread
[8] https://github.com/questdb/questdb
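As a concrete illustration, a keyed aggregation of the kind described above can be run against the demo dataset with a query along these lines (the table and column names are taken from queries used elsewhere in this thread, not necessarily from the official sample queries, so adjust as needed):

    select cab_type, payment_type, avg(fare_amount), count() from trips;

The non-aggregated columns (cab_type, payment_type) act as the grouping keys, and the aggregates are computed once per key without an explicit GROUP BY clause.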
sa46 almost 5 years ago

How timely! I've done a deep dive into column-store databases for the past couple of weeks. Reading through the QuestDB docs, I'd give it the following characteristics. Are these accurate?

- Single-node database, not [yet] distributed.

- Primary focus is time-series data, specifically in-order time-series data (the `designated timestamp` extension).

- Physical data layout is an append-only column store.

- Implements a small subset of SQL with some affordances for time series (LATEST BY, SAMPLE BY).

- Doesn't support explicit GROUP BY or HAVING clauses. Instead, QuestDB implicitly assumes GROUP BY or HAVING based on the presence of aggregation functions in the select clause (see the sketch below).

- Small standard library of functions: only 4 text functions.

Based on these characteristics, it seems QuestDB is well positioned against Influx. It's probably faster than TimescaleDB but significantly less flexible, given that Timescale has all of Postgres behind it. QuestDB might eventually compete with ClickHouse, but it's a long way out given that it's not distributed and implements a much smaller subset of SQL.

I'd love to get any insight into join performance. Quite a few column stores handle large joins poorly (ClickHouse, Druid).
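A minimal sketch of the implicit GROUP BY behaviour noted above, assuming the trips table and columns referenced elsewhere in this thread:

    -- standard SQL would require an explicit GROUP BY:
    select cab_type, count(*) from trips group by cab_type;

    -- per the characterization above, QuestDB instead infers the grouping
    -- keys from the non-aggregated columns in the select list:
    select cab_type, count() from trips;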
dvnguyen almost 5 years ago

Ask HN: what's the market for a new database? Who do you build it for, and how do you sell it? Asking as an uninformed user who hasn't needed a niche database.
twoodfin almost 5 years ago

Very cool. Major props for putting this out there and accepting arbitrary queries.

Couple comments / questions:

- Correct that there's no GROUP BY support?

- EXPLAIN or similar would be nice, both to get a peek at how your engine works and to anticipate whether a query is using parallel / SIMD execution and will take milliseconds vs. minutes.
calibas almost 5 years ago

I abused LEFT JOIN to create a query that produces 224,964,999,650,251 rows. Only 3.68ms execution time; now that's impressive!
gregwebs almost 5 years ago

This seems very similar to Victoria Metrics. Victoria Metrics is very much based on the design of ClickHouse and currently shows best-in-class performance numbers for time-series data, so it would be a lot more interesting to see a comparison to Victoria Metrics than to ClickHouse (which is not fully optimized for time series). Victoria Metrics is Prometheus-compatible, whereas QuestDB now supports Postgres compatibility. Both have compatibility with InfluxDB.
dnadler almost 5 years ago

I'm not much of a database expert by any stretch, but this query took about 80 seconds, which seems like quite a long time; maybe it's more complicated than it appears? My understanding is that the GROUP BY is handled automatically, and the results seem to support that:

select cab_type, payment_type, count() from trips;
lux almost 5 years ago

I was looking for dev/production specs for QuestDB the other day and didn't see them in the docs. Being that it's in Java, which can be quite memory-hungry, what's the minimum and recommended RAM/CPU required to run it?
gubby almost 5 years ago

FYI, this query took 81 seconds:

select count(*) from trips where fare_amount > 0

Presumably this is Hacker News load?
pachico almost 5 years ago

It looks very promising, congrats. I use ClickHouse in production and I'd love to see how this project evolves. My main disappointment is the small number of aggregation functions: https://questdb.io/docs/functionsAggregation

ClickHouse provides hundreds of functions, many of which I use. It would be hard to even consider QuestDB with so few functions. I'll stay tuned, anyway. Keep up the good work!
lykr0n almost 5 years ago

`select distinct pickup_latitude, count(*) from trips` takes 241.37s.

But some of the more top-level queries are quite fast.
Bedon292 almost 5 years ago

Any plans for geo support? Looks like you have lat/lon stored as two independent doubles, which does not lend itself well to any sort of geo operations.
keshavmr almost 5 years ago

SELECT vendor_id, cab_type, avg(fare_amount) from trips;

This takes ~86 seconds. Ran it multiple times.

SIMD is one of the ingredients for better query performance, but NOT THE ONLY ONE. See this for more info: https://www.youtube.com/watch?v=xJd8M-fbMI0
timbowhite almost 5 years ago

Just a heads up - I tried this random query:

    select * from trips where pickup_latitude < -74 order by pickup_longitude desc limit 10;

Didn't get a result set. The little box in the upper right-hand corner stated "Maximum number of pages (16) breached in MemoryPages".
strikelaserclaw almost 5 years ago

Very impressive. I think building your own (performant) database from scratch is one of the most impressive software engineering feats. Can you let me know a little about how an interested person with a CS background can approach this daunting topic?
heybrendan almost 5 years ago

Could you take a moment to comment on why one would choose to use QuestDB in lieu of Google's Bigtable or BigQuery? Furthermore, how does this compare to other proprietary and open-source solutions currently available in the marketplace? In short, I'm struggling to understand where this technology fits and the overall value proposition.

Asked a different way: if I were a lead of IT, a director-level executive, a CTO, etc., how and why should I begin to evaluate the QuestDB service / solution?

Thank you for sharing, bluestreak and team; very interesting and exciting work!
tpayet almost 5 years ago
Cool project :) How would you compare QuestDB against TimescaleDB?
jinmingjian almost 5 years ago

Sharing some thoughts here, since I am currently developing a similar thing:

1. "Query 1.6B rows in milliseconds, live" is just like "sum 1.6B numbers from memory in ms".

In fact, if full SQL functionality isn't supported, a naive SQL query is just a tight loop on top of arrays (as partitions for naive data parallelism) and multi-core processors.

So this kind of thing is just a several-line benchmark (ignoring the data preparation and threading wrapper) to see how fast the sum loop can finish.

In fact, this is just naive memory-bandwidth benchmark code.

Let's count: a 6-channel Xeon-SP can provide ~120 GB/s of bandwidth, so a sum loop over 1.6B 4-byte ints without compression in such a processor's memory could finish in about 1.6*4/120 ≈ 50 ms.

So if you see 200 ms in some database, you have in fact wasted 75% of the time (150 ms) on things other than what your own hand-rolled small C program would need for such a toy analysis.

2. Some readers would like to see comparisons to ClickHouse (referred to as CH below).

The fact is that CH is a little slow for such naive cases (as seen in the benchmarks [1] that others have pointed to).

This is because CH is a real-world product. The optimizations discussed here reflect ten years of research and usage in the database industry, all of which is included in CH, and much much more.

Can you hold such a statement in the title when you enable reading from persistent disk? Or when doing a high-cardinality aggregation in the query (note that a low-cardinality aggregation is just a tight loop + a hash table that fits in L2)?

[1] https://tech.marksblogg.com/benchmarks.html
yumraj almost 5 years ago
A peek into the hardware specs behind this: CPU, memory, storage, network would be nice.
yingw787 almost 5 years ago

Very cool and impressive!! Is full PostgreSQL wire compatibility on the roadmap? I like Postgres compatibility :)
greatNespresso almost 5 years ago

Mind-blowing, I did not know about QuestDB. The back button seems broken on Chrome mobile.
RedShift1 almost 5 years ago
Will we be able to put data from the past into it at some point?
LordOfWolves almost 5 years ago

How nice of you to hijack the back button (to return to HN after some sample queries), despite that being how myself and others found this in the first place.

I was really liking the product up until that point.
patelh almost 5 years ago
Any comparisons to Druid or Pinot?
woodgrainz almost 5 years ago

I don't know anything about this database, but this link did break the browser <back> button. Perhaps you can look into fixing that in a future release.