The author notes that you need a lot of points and a fancy transformation to correct images while accounting for the differences in elevation within the scene. More points do help, but the better approach is to also take the elevation of the identified points into account using a digital elevation model (DEM). That increases the accuracy of the transformation considerably and reduces the number of points needed. The idea is that you build a transformation from R^3 -> R^2 instead of just R^2 -> R^2, usually a rational polynomial function.

If anybody is interested, the term to search for is orthorectification.

Shameless plug: I recently published a post on my blog on how to calculate a projective transformation for an image if you know a few parameters of your camera (focal length and sensor size) plus its position and orientation. My use case is satellite imagery, so this is always available: http://maxwellrules.com/math/looking_through_a_pinhole.html
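To make the R^3 -> R^2 idea concrete, here is a minimal pinhole-projection sketch in Python/numpy. All the camera parameters, the pose, and the sample ground point are made-up illustrative values (not taken from the linked post); in practice the elevation of each point would come from a DEM lookup.

    import numpy as np

    # Illustrative camera parameters (assumed values, not from the linked post)
    focal_length_mm = 35.0
    sensor_width_mm = 36.0
    image_width_px = 4000
    image_height_px = 3000

    # Intrinsics: focal length in pixels, principal point at the image centre
    f_px = focal_length_mm / sensor_width_mm * image_width_px
    K = np.array([[f_px, 0.0,  image_width_px / 2],
                  [0.0,  f_px, image_height_px / 2],
                  [0.0,  0.0,  1.0]])

    # Extrinsics: nadir-looking camera 10 km up (180 deg rotation about x,
    # so the camera's optical axis points down at the ground)
    R = np.diag([1.0, -1.0, -1.0])
    camera_position = np.array([0.0, 0.0, 10_000.0])

    def project(point_xyz):
        """Project a 3D ground point (x, y, elevation in metres) to pixel coordinates."""
        p_cam = R @ (point_xyz - camera_position)  # world frame -> camera frame
        uvw = K @ p_cam                            # pinhole projection
        return uvw[:2] / uvw[2]

    # The elevation (last coordinate) would come from the DEM; a constant here
    ground_point = np.array([500.0, -250.0, 430.0])
    print(project(ground_point))

Orthorectification essentially runs this the other way around: for each pixel of the output map you look up the terrain elevation in the DEM, project the resulting 3D point into the photo, and sample the colour there.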
Despite the author's criticisms, it seems like there's a lot of opportunity for UAV-generated open source imagery, but I can't really find an active community for sharing it.

OpenAerialMap[1] seems like a good start, but doesn't appear to be particularly active.

Seems like we could use a "Mapillary[2] but from above" type of project, only one that doesn't end up getting acquired by Facebook.

[1] https://openaerialmap.org/

[2] https://www.mapillary.com/
You can also do this to create stereo photography! A long baseline between the two shots ("hyper stereo") works well for distant scenery:

https://en.wikipedia.org/wiki/Stereo_photography_techniques#:~:text=Longer%20base%20line%20for%20distant%20objects%20%E2%80%93%20%22Hyper%20Stereo%22%5Bedit%5D
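For anyone who wants to try it: a rough sketch (assuming Pillow and placeholder filenames) of combining two window shots, taken a few seconds apart so the plane's motion gives a very long baseline, into a red-cyan anaglyph.

    from PIL import Image

    # Two frames taken a few seconds apart out of the window; the plane's motion
    # provides the long "hyper stereo" baseline. Filenames are placeholders.
    left = Image.open("frame_left.jpg").convert("RGB")
    right = Image.open("frame_right.jpg").convert("RGB")
    right = right.resize(left.size)

    # Red channel from the left-eye image, green and blue from the right-eye image
    r, _, _ = left.split()
    _, g, b = right.split()
    anaglyph = Image.merge("RGB", (r, g, b))
    anaglyph.save("anaglyph.jpg")

View the result with red-cyan glasses; which shot counts as "left" depends on the direction of flight, so swap the inputs if the depth looks inverted.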
> But other than taking a few photos of holiday mementos and lens-flaring sunsets, what’s the point?

OT, but for me the point is not having my body absolutely panic from experiencing all kinds of rotation and sudden lateral displacement without anything happening visually. Honestly, I have no idea how people manage to sit anywhere else on the plane; I wouldn’t be able to. The speeds and forces experienced even on a calm commercial flight are, as far as human evolution goes, total nonsense.
To explain the underwhelmed response: I guess most people were expecting 'Google Maps quality' 3D models. That's not an unreasonable expectation, given that with an aerial platform such as a drone, turning photos into 3D models of large areas is a commodity. Just dump the photos into an application such as Agisoft Metashape or Luma, wait a bit, and you can get something like this, for example: https://skfb.ly/6DvVP
This is very cool! How feasible would it be to take a video instead of a photo, then use a feature detector such as SIFT and a stitching algorithm to cover a larger survey area?
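A rough sketch of what that could look like, assuming OpenCV and a placeholder flight_video.mp4: sample every Nth frame and let cv2.Stitcher, which does the feature detection, matching, and blending internally (SIFT or a similar detector can also be wired up by hand), build the mosaic.

    import cv2

    # Sample every Nth frame from a window-seat video (placeholder filename),
    # then stitch the sampled frames into a single mosaic.
    video = cv2.VideoCapture("flight_video.mp4")
    frames = []
    frame_index = 0
    SAMPLE_EVERY = 30  # roughly one frame per second at 30 fps

    while True:
        ok, frame = video.read()
        if not ok:
            break
        if frame_index % SAMPLE_EVERY == 0:
            frames.append(frame)
        frame_index += 1
    video.release()

    # SCANS mode assumes a roughly planar scene, which suits high-altitude views
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, mosaic = stitcher.stitch(frames)
    if status == cv2.Stitcher_OK:
        cv2.imwrite("mosaic.jpg", mosaic)
    else:
        print(f"Stitching failed with status {status}")

The catch is that a long flight segment has real parallax, so a single planar mosaic degrades over large areas; a proper structure-from-motion pipeline handles that better.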
This post reminds me that I have been unable to find good data on local elevation that isn't an OS map, a spheroid, or an estimate from Google Earth.

Still disappointed that the good free map sources are all flat, as far as I can tell.
This is not photogrammetry as the word is usually understood these days.

Photogrammetry usually means constructing a 3D model out of a number of 2D photos from lots of different angles, although there are broader definitions as well [1].

This is just skewing a photo you took out the window to overlay it on a map.

From the title, I was expecting this to be something about constant super-high-res photography attached to commercial flights that would actually let you build 3D models of the landscape...

[1] https://en.wikipedia.org/wiki/Photogrammetry