Back when I was working with shapefiles, this was the type of thing that tended to be far more convenient to process in-process using something like GDAL [1] (which can operate directly on in-memory copies, gzipped files, SQLite databases and far more) and query with GDAL's SQL support, especially when built with SpatiaLite [2], rather than loading it into a separate database. It would have been interesting if the author had talked about what's stopping him from that approach, given he's clearly aware of GDAL and given that 130M records and a few tens of GB isn't a particularly big GIS dataset.

[1] https://gdal.org

[2] https://www.gaia-gis.it/fossil/libspatialite/index
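To make the idea concrete, here's a rough sketch of what I mean, assuming GDAL's Python bindings and a hypothetical parcels.shp (the same SQLITE dialect is available from ogrinfo/ogr2ogr on the command line):

    from osgeo import ogr

    # Any OGR-readable source works here: shapefiles, GeoPackage,
    # /vsigzip/- or /vsizip/-wrapped files, SQLite databases, ...
    ds = ogr.Open("parcels.shp")

    # With GDAL built against SpatiaLite, the SQLITE dialect exposes
    # spatial functions like ST_Area directly over the shapefile.
    result = ds.ExecuteSQL(
        "SELECT COUNT(*) AS n, SUM(ST_Area(geometry)) AS total_area FROM parcels",
        dialect="SQLITE",
    )
    for feature in result:
        print(feature.GetField("n"), feature.GetField("total_area"))
    ds.ReleaseResultSet(result)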
Very cool to see a walkthrough with actual benchmarks. Not entirely surprised that Parquet shines here. Another big advantage of Parquet over CSV is that you don't have to worry about data integrity. Perhaps less relevant for GIS data, but not having to think about things like string escaping is rather nice.

> "It would be great to see data vendors deliver data straight into the Cloud Databases of their customers. It would save a lot of client time that's spent converting and uploading files."

Hear hear! Shameless plug: this is exactly what we enable at prequel.co. If there are any data vendors reading this, or anyone who wants easier access to data from their vendor, we're here to help.

edit: quote fmt
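As a tiny illustration of the escaping point, using pyarrow with made-up values:

    import pyarrow as pa
    import pyarrow.parquet as pq

    # A string full of CSV hazards (comma, quote, newline) round-trips
    # untouched in Parquet; no quoting or escaping rules to get wrong.
    table = pa.table({"name": ['O"Hare, Terminal 1\nGate B'], "lon": [-87.9], "lat": [41.98]})
    pq.write_table(table, "poi.parquet")
    print(pq.read_table("poi.parquet").column("name")[0])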
For someone whose interaction with spatial data is very limited, I found the article to be a treasure trove of information.

Also, thanks for sharing S2! It'll be nice to look at.
I really love working with Parquet and the general Arrow ecosystem. The performance and cost ratios you can get out of it are really insane. AWS S3 + Parquet + Athena is one of the best and cheapest databases I've ever used.
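For anyone curious what that looks like in practice, a minimal sketch with pyarrow (bucket, prefix and column names are made up; Athena queries the same files with plain SQL):

    import pyarrow.dataset as ds

    # Scan partitioned Parquet straight off S3, reading only the columns
    # and row groups the filter needs.
    dataset = ds.dataset("s3://my-bucket/points/", format="parquet")
    table = dataset.to_table(columns=["lon", "lat"], filter=ds.field("lat") > 40)
    print(table.num_rows)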
Anything to go the other way? I'd like to use BQ as the warehouse and for exploration, but PG for heavy analytics, because of the cost once you really start running many repeated queries.

I guess I could just dump directly to CSVs and download, but BQ is a nice, convenient, bottomless data bucket.
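If it helps, what I had in mind for the dump step is roughly this, using the BigQuery Python client (project, table and bucket names are made up):

    from google.cloud import bigquery

    client = bigquery.Client()

    # Export the table as Parquet to GCS; from there it can be downloaded
    # and loaded into Postgres, or read directly with pyarrow.
    job = client.extract_table(
        "my-project.gis.points",
        "gs://my-bucket/exports/points-*.parquet",
        job_config=bigquery.ExtractJobConfig(destination_format="PARQUET"),
    )
    job.result()  # wait for the export job to finish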
Nice to read this. I had a similar assignment 15 years ago: visualizing the rollout of the fiber-optic network across the city. But we had a lot less data to deal with.