I'm working with a client that has a very large Postgres database. It is currently in the terabyte range but needs to support petabytes.<p>What is the best solution for storing this data that is fast and supports very large datasets?<p>For context, the product competes in a geo-spatial market and loads GPS data from a large number of vehicles that are updating every 5-10 seconds.<p>We are considering Apache Pinot, but I am curious what the HN community would recommend here.<p>Thank you for any input!!
We'd need a lot more info to make a meaningful suggestion, but I'd at least investigate TimescaleDB to see if it fits. The fact that it sits on Postgres should be attractive to your client.
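A minimal sketch of what that could look like (table, column, and connection details are all invented for illustration), using psycopg2 against a Postgres instance with the timescaledb extension available:

    # Sketch: turn a plain Postgres table of GPS points into a
    # TimescaleDB hypertable chunked by time (all names hypothetical).
    import psycopg2

    conn = psycopg2.connect("dbname=fleet user=fleet")
    with conn, conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS timescaledb;")
        cur.execute("""
            CREATE TABLE IF NOT EXISTS gps_points (
                vehicle_id BIGINT NOT NULL,
                ts         TIMESTAMPTZ NOT NULL,
                lat        DOUBLE PRECISION,
                lon        DOUBLE PRECISION
            );
        """)
        # create_hypertable is TimescaleDB's entry point: it transparently
        # partitions the table into time-based chunks.
        cur.execute("SELECT create_hypertable('gps_points', 'ts', if_not_exists => TRUE);")

Inserts and time-range queries then stay plain SQL, which is a big part of the appeal.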
I would consider ClickHouse.
It is perfect for interactive analytical queries on large datasets.<p>> the product competes in a geo-spatial market and loads GPS data from a large number of vehicles that are updating every 5-10 seconds<p>There are multiple companies in this field using ClickHouse:
<a href="https://clickhouse.com/docs/en/introduction/adopters/" rel="nofollow">https://clickhouse.com/docs/en/introduction/adopters/</a>
I recommend Google BigQuery. Its storage is cheap ($0.02/GB) and drops even lower for long-term (rarely modified) storage. You can process huge amounts of data quickly and pay $5 for each terabyte your query scans.<p>It's easy to use too, and its dialect of SQL is quite powerful.<p>On AWS there is Athena, which works on data stored in S3 and has the same processing price as BigQuery ($5/TB). However, from my experience, I recommend BigQuery.
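For a feel of the model (project/dataset/table names are made up), one analytical query with the google-cloud-bigquery client -- since you pay for the bytes a query scans, filter and select narrowly:

    # Sketch: count pings per vehicle for one day; the hypothetical
    # table my_project.fleet.gps_pings stands in for your data.
    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT vehicle_id, COUNT(*) AS pings
        FROM `my_project.fleet.gps_pings`
        WHERE DATE(ts) = '2023-01-01'
        GROUP BY vehicle_id
    """
    for row in client.query(query).result():
        print(row.vehicle_id, row.pings)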
If you want an open-source solution, I would recommend HBase or Cassandra -- both have been battle-tested at a variety of small and large companies.<p>They let you store huge amounts of data and, as long as you design the primary key properly, make really fast (millisecond) needle-in-a-haystack queries.<p>There are some tradeoffs, of course: most engineers I've worked with who come to these tools from an RDBMS find the lack of first-class support for secondary indices and SQL or SQL-like queries to be a bummer.
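To make the primary-key point concrete, here's a rough Cassandra sketch with the Python cassandra-driver (keyspace and schema are hypothetical): partitioning by (vehicle_id, day) means a "where was vehicle X on day Y" lookup reads exactly one partition, in time order:

    # Sketch: composite partition key + clustering column, so lookups
    # by vehicle and day are single-partition reads (names made up).
    import datetime
    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("fleet")  # keyspace assumed to exist
    session.execute("""
        CREATE TABLE IF NOT EXISTS gps_pings (
            vehicle_id bigint,
            day        date,
            ts         timestamp,
            lat        double,
            lon        double,
            PRIMARY KEY ((vehicle_id, day), ts)
        ) WITH CLUSTERING ORDER BY (ts DESC)
    """)
    rows = session.execute(
        "SELECT ts, lat, lon FROM gps_pings "
        "WHERE vehicle_id = %s AND day = %s LIMIT 10",
        (42, datetime.date(2023, 1, 1)),
    )

Get the key wrong, though, and you're back to full scans -- that's the design work an RDBMS index used to do for you.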
The large amount of data and the number of vehicles suggest the dataset is naturally partitioned by vehicle. In that case you could use almost anything you want with sharding. Or is it the case that any vehicle can read/write data for any location, or that you need to perform global analytics?
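If vehicle_id really is the natural partition key, the routing layer can be almost trivial -- a sketch (shard names invented):

    # Sketch: stable hash-based routing of each vehicle's writes
    # to one of N shards; any database could sit behind this.
    import hashlib

    SHARDS = ["pg-shard-0", "pg-shard-1", "pg-shard-2", "pg-shard-3"]

    def shard_for(vehicle_id: int) -> str:
        # Hash the id so vehicles spread evenly and deterministically.
        digest = hashlib.md5(str(vehicle_id).encode()).digest()
        return SHARDS[int.from_bytes(digest[:4], "big") % len(SHARDS)]

Global analytics across shards is exactly what this scheme makes painful, hence the question.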
Not enough information provided, but if the data can be organized into meaningful partitions, then S3 with a Hive-style partitioning scheme. Pinot should be able to consume from there as well.
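Sketch of what that layout looks like with pyarrow (bucket and columns invented) -- Hive-style partitioning just means the partition values live in the object keys, e.g. s3://my-bucket/pings/dt=2023-01-01/vehicle_id=42/...:

    # Sketch: write GPS pings as Parquet in Hive-style partitions;
    # engines like Pinot, Athena, or Spark can then prune by dt.
    import pyarrow as pa
    import pyarrow.parquet as pq

    table = pa.table({
        "dt":         ["2023-01-01", "2023-01-01"],
        "vehicle_id": [42, 43],
        "lat":        [40.71, 40.72],
        "lon":        [-74.00, -74.01],
    })
    pq.write_to_dataset(table, root_path="s3://my-bucket/pings",
                        partition_cols=["dt", "vehicle_id"])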
Have a look at <a href="https://www.ocient.com" rel="nofollow">https://www.ocient.com</a><p>Not affiliated, but I know people who work there.