Eric Bruning gave a good talk on this kind of stuff last year at SciPy [1]. His primary work seems to be focused on 3D lightning mapping using VHF antenna arrays - a slightly different approach than trying to take simultaneous pictures from multiple locations, but you get many more datapoints, especially in west Texas [2]. The downside is that you aren't getting a "true" 3D model of a single strike - rather, a 3D model of a storm built from the samples generated by the mapping data (IIRC).

[1] https://www.youtube.com/watch?v=0Z17Q22HEMI

[2] http://pogo.tosm.ttu.edu/about/
<i>"It is immediately clear that they are taken from about the same direction but different heights: the second bolt looks squashed vertically."</i><p>Isn't it also possible that the photos were taken at slightly different times and the path of the lightning bolt shifted slightly over that period?<p>For example, if you look at this 9-second video of a high voltage electrical arc[1], you'll see that its path shifts around quite a bit over time.<p>[1] <a href="https://www.youtube.com/watch?v=euW4NerLAPg" rel="nofollow">https://www.youtube.com/watch?v=euW4NerLAPg</a>
I'm still not sure how this was done. Here's what I have gathered so far from the post: the author has two images taken from unknown locations. He scaled them, marked points on each, and then matched up the points between the two images manually. Now he has a dx and dy for each point on one image, relative to the other. He then asserts that a bigger dx means the point is nearer to the camera, so I'm thinking he takes some proportionality constant to get z = c * dx.

But wouldn't that produce a pretty arbitrary shape depending on the value of c?
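To make the question concrete, here's a minimal sketch of what I imagine that step looks like (pure guesswork on my part - the matched points and the constant c are made up):

    # My guess at the depth step, not the author's actual code.
    # Each match is (x, y, dx): a pixel position in image A plus the
    # horizontal offset of the same point found in image B.
    matches = [(120, 340, 14), (135, 310, 11), (150, 280, 7)]  # made-up values

    c = 5.0  # arbitrary proportionality constant

    # Depth is taken to be proportional to the horizontal disparity.
    points_3d = [(x, y, c * dx) for (x, y, dx) in matches]
    print(points_3d)

If that's the gist, then picking a different c stretches or flattens the whole model along the depth axis, which is exactly the arbitrariness I'm asking about.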