I've been working on building an alternative to the Google Maps Elevation API, making use of the high-quality open elevation data released over the last few years: the 30m Copernicus global dataset, improved 1m coverage of the US and England, and EU countries releasing national datasets under open licenses.

I'd love to know more about how HN uses elevation data and APIs. I have a 9-question survey here: https://forms.gle/1EhX4c2mLHuRTR1C9 and will share the results with the community.

It would also be great for people to share their use cases in this thread!
Some of the use cases for elevation data I've worked on or have seen:

• Raster data for flood modelling

• Point queries comparing elevation to flood models

• Using elevation to improve the accuracy of hyper-local weather modelling

• Elevation profiles of activities: like Strava, but for various niches

• Flight planning for aerial activities: drones, general aviation, hang gliding, paragliding, gliding

Compared to self-hosting, APIs add latency and remove control. But people seem to use them to avoid dealing with multi-TB datasets, to abstract away a lot of the geospatial complexity around projections, tiling, and geoids, and to avoid juggling lots of different datasets from different sources.

And with the Google Maps API in particular, people struggle with the high cost, of course, but also with the lack of provenance for the data used and the accuracy reduction of batch queries.
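To make the self-hosting side of that trade-off concrete, here is a minimal sketch of a point query against a single locally downloaded Copernicus GLO-30 tile using rasterio. The filename is hypothetical and this is not any particular API's implementation; it just shows the raster handling (CRS conversion, nodata, and the geoid caveat) that an API hides for you.

    # Sketch only: sample one point from a local DEM tile with rasterio.
    # The tile name below is hypothetical; GLO-30 heights are relative to
    # the EGM2008 geoid, not the WGS84 ellipsoid.
    import rasterio
    from rasterio.crs import CRS
    from rasterio.warp import transform

    def point_elevation(dem_path: str, lon: float, lat: float) -> float:
        with rasterio.open(dem_path) as dem:
            # Reproject WGS84 lon/lat into the raster's CRS in case the
            # tile is not already stored in EPSG:4326.
            xs, ys = transform(CRS.from_epsg(4326), dem.crs, [lon], [lat])
            # sample() yields one array of band values per coordinate pair.
            value = next(dem.sample(zip(xs, ys)))[0]
            if dem.nodata is not None and value == dem.nodata:
                raise ValueError("point falls on a nodata cell")
            return float(value)

    # Example: a point in London from a (hypothetical) local GLO-30 tile.
    print(point_elevation("Copernicus_DSM_N51_00_W001_00_DEM.tif", -0.1276, 51.5072))

The fiddly parts an API usually absorbs are exactly the ones outside this snippet: working out which tile covers the point, stitching across tile edges, and converting between geoid and ellipsoid heights when a downstream tool expects the other one.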
abstreet.org has an offline import process that combines data from OpenStreetMap, city-specific GIS datasets, and elevation into a single file. The process has to be deterministic given the same input and code, so calling out to an external API is a non-starter. We use https://github.com/eldang/elevation_lookups, a Python library that downloads missing LIDAR or SRTM data and uses GDAL to handle batch queries. Two issues with it are that it has too many dependencies (so we run it in Docker) and that lookups can't be parallelized without blowing up memory, due to some GDAL caching internals.
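For what it's worth, one common culprit for that kind of memory blow-up is GDAL's per-process raster block cache. The sketch below is not elevation_lookups' code and may not address its specific issue; it just shows one general mitigation, capping the cache in each worker and using processes rather than threads, with a hypothetical DEM path and points.

    # Sketch only: bound GDAL's block cache per worker when parallelizing
    # point lookups across processes. Paths and coordinates are hypothetical.
    from multiprocessing import Pool
    from osgeo import gdal

    def init_worker():
        # Limit GDAL's raster block cache to 256 MB in this worker process.
        gdal.SetCacheMax(256 * 1024 * 1024)

    def lookup(args):
        path, x, y = args
        ds = gdal.Open(path)
        gt = ds.GetGeoTransform()
        # Convert georeferenced coords to pixel/line indices
        # (assumes a north-up raster with no rotation terms).
        px = int((x - gt[0]) / gt[1])
        py = int((y - gt[3]) / gt[5])
        band = ds.GetRasterBand(1)
        return float(band.ReadAsArray(px, py, 1, 1)[0, 0])

    if __name__ == "__main__":
        points = [("dem.tif", -122.33, 47.61), ("dem.tif", -122.30, 47.65)]
        with Pool(processes=4, initializer=init_worker) as pool:
            print(pool.map(lookup, points))

Keeping each GDAL dataset handle inside its own process sidesteps the thread-safety and shared-cache questions entirely, at the cost of reopening the raster per worker.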