Ask HN: What is the best data storage solution for BIG data and fast queries?

7 points | by harrisreynolds | over 3 years ago

I'm working with a client now that has a very large Postgres database. It is currently in the terabyte range but needs to support petabytes.

What is the best solution for storing this data that is fast and supports very large datasets?

For context, the product competes in a geo-spatial market and loads GPS data from a large number of vehicles that are updating every 5-10 seconds.

We are considering Apache Pinot, but I am curious what the HN community would recommend here.

Thank you for any input!!
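A quick back-of-envelope on the ingest rate helps scope the answers below. The fleet size and record size here are illustrative assumptions, not figures from the post; only the 5-10 second update interval is given:

```python
# Hypothetical ingest estimate for the workload described above.
# Fleet size and bytes-per-record are illustrative assumptions;
# only the update interval comes from the post.

def daily_ingest_bytes(vehicles: int, update_interval_s: float, record_bytes: int) -> int:
    """Bytes written per day for a fleet reporting at a fixed interval."""
    updates_per_day = 86_400 / update_interval_s
    return int(vehicles * updates_per_day * record_bytes)

# e.g. 500k vehicles, one update every 7.5 s, ~100 bytes per GPS point:
per_day = daily_ingest_bytes(500_000, 7.5, 100)
print(f"{per_day / 1e9:.1f} GB/day, {per_day * 365 / 1e12:.1f} TB/year")
# → 576.0 GB/day, 210.2 TB/year
```

Even at these modest assumptions the raw feed alone adds hundreds of terabytes a year, which is consistent with the "terabytes now, petabytes soon" framing in the question.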

9 comments

stocktech | over 3 years ago
We'd need a lot more info to make a meaningful suggestion, but I'd at least investigate TimescaleDB to see if it fits. The fact it sits on Postgres should be attractive to your client.
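Since the client is already on Postgres, the usual TimescaleDB pattern is a plain table converted into a time-partitioned hypertable. A minimal sketch, with hypothetical table and column names (the `create_hypertable()` call is Timescale's standard API):

```python
# Sketch of a TimescaleDB setup for a GPS feed like the one described.
# Table/column names are hypothetical; the DDL is plain Postgres plus
# TimescaleDB's create_hypertable(), which chunks the table by time.

DDL = """
CREATE TABLE vehicle_positions (
    time        TIMESTAMPTZ       NOT NULL,
    vehicle_id  BIGINT            NOT NULL,
    lat         DOUBLE PRECISION,
    lon         DOUBLE PRECISION
);
-- Convert the regular table into a time-partitioned hypertable:
SELECT create_hypertable('vehicle_positions', 'time');
"""

print(DDL)
```

The appeal for this client is that existing Postgres tooling, drivers, and SQL all keep working unchanged.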
zX41ZdbW | over 3 years ago
I would consider ClickHouse. It is perfect for interactive analytical queries on large datasets.

> the product competes in a geo-spatial market and loads GPS data from a large number of vehicles that are updating every 5-10 seconds

There are multiple companies from this field that are using ClickHouse: https://clickhouse.com/docs/en/introduction/adopters/
ammar_x | over 3 years ago
I recommend Google BigQuery. Its storage is cheap ($0.02/GB) and can become even cheaper. You can process huge amounts of data quickly and pay $5 for each terabyte your query processes.

It's easy to use too, and its version of SQL is quite powerful.

On AWS, there is Athena, which works on data stored in S3 and has the same processing price as BigQuery ($5/TB). However, from my experience, I recommend BigQuery.
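The on-demand prices quoted in this comment make cost easy to estimate up front. A small worked example (the stored volume and monthly scan volume are made-up inputs for illustration):

```python
# Cost check using the on-demand prices quoted above:
# $0.02 per GB-month of storage, $5 per TB scanned by queries.
# The example workload numbers are illustrative assumptions.

STORAGE_PER_GB_MONTH = 0.02
QUERY_PER_TB = 5.0

def monthly_cost_usd(stored_tb: float, tb_scanned_per_month: float) -> float:
    storage = stored_tb * 1000 * STORAGE_PER_GB_MONTH  # TB -> GB
    queries = tb_scanned_per_month * QUERY_PER_TB
    return storage + queries

# e.g. 100 TB stored, queries scanning 50 TB in total each month:
print(monthly_cost_usd(100, 50))  # → 2250.0
```

Note that with per-TB-scanned pricing, partitioning and clustering the tables (so queries scan less data) directly reduces the bill.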
samspenc | over 3 years ago
If you want an open-source solution, I would recommend HBase or Cassandra -- those have been battle-tested and used in a variety of small and large companies.

They allow you to store huge amounts of data and, as long as you design the primary key properly, allow you to make really fast queries (milliseconds) to find the needle in the haystack as well.

There are some tradeoffs, of course: most engineers I've worked with who come from RDBMS to these tools find the lack of first-class support for secondary indices and SQL or SQL-like queries to be a bummer.
karmakaze | over 3 years ago
The large amount of data and number of vehicles seem to be naturally partitioned. In that case you could use anything you want with sharding. Or is it the case that any vehicle can read/write data for any location or perform global analytics?
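The "naturally partitioned" idea above amounts to routing each write by vehicle id so one vehicle's data always lands on the same shard. A minimal sketch (the shard count is an arbitrary example):

```python
# Sketch of hash-based sharding by vehicle id, so any datastore can be
# scaled horizontally. The shard count (16) is an arbitrary example.

import hashlib

def shard_for(vehicle_id: str, n_shards: int = 16) -> int:
    # md5 gives a stable, well-distributed hash across processes
    # (unlike Python's builtin hash(), which is salted per run).
    digest = hashlib.md5(vehicle_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_shards

assert shard_for("veh-123") == shard_for("veh-123")  # deterministic
assert 0 <= shard_for("veh-999") < 16
```

This is also exactly where the comment's caveat bites: location-based queries and global analytics cut across vehicle shards, so they need scatter-gather queries or a separate analytical copy of the data.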
evv555 | over 3 years ago
Not enough information provided, but if the data can be organized into meaningful partitions, then S3 using a Hive partition schema. Pinot should be able to consume from there as well.
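For readers unfamiliar with the Hive partition scheme this comment refers to: it is just `key=value` path segments in the object keys, which engines like Pinot, Athena, or Spark can prune on to skip irrelevant data. The bucket and partition column names below are hypothetical:

```python
# Sketch of a Hive-style partition layout on S3 for the GPS feed.
# Bucket name and partition columns (dt, hour, region) are hypothetical.

from datetime import datetime, timezone

def partition_prefix(ts: datetime, region: str) -> str:
    # key=value segments let query engines prune whole partitions.
    return (f"s3://fleet-gps/events/"
            f"dt={ts:%Y-%m-%d}/hour={ts:%H}/region={region}/")

ts = datetime(2022, 1, 10, 13, 45, tzinfo=timezone.utc)
print(partition_prefix(ts, "us-east"))
# → s3://fleet-gps/events/dt=2022-01-10/hour=13/region=us-east/
```

A query filtered to one day and region then only reads the objects under that prefix instead of scanning the whole bucket.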
prirun | over 3 years ago
Recent HN topic: https://news.ycombinator.com/item?id=29825490
ubadair | over 3 years ago
Have a look at https://www.ocient.com

Not affiliated, but I know people who work there.
nojito | over 3 years ago
ClickHouse, depending on the true size and retention requirements you have.