
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

3D Lightning Reconstruction (2013)

76 points by nkron about 11 years ago

4 comments

kastnerkyle about 11 years ago
Eric Bruning gave a good talk on this kind of stuff last year at SciPy [1]. His primary work seems to be focused on 3D lightning mapping using VHF antenna arrays - a slightly different approach than trying to take simultaneous pictures from multiple locations, but you get many more datapoints, especially in west Texas [2]. The downside is you aren't getting a "true" 3D model of a single strike - rather, a 3D model of a storm based on the samples generated from the mapping data (IIRC)

[1] https://www.youtube.com/watch?v=0Z17Q22HEMI

[2] http://pogo.tosm.ttu.edu/about/
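The mapping-array idea mentioned above can be sketched as toy time-of-arrival multilateration: several stations record when an RF pulse arrives, and the 3-D source point plus emission time are fit to the arrival times. Everything below (station coordinates, solver) is invented for illustration and is not Bruning's actual system.

```python
import numpy as np

C = 3.0e8  # propagation speed, m/s

# Five hypothetical station positions (meters), roughly on the ground.
STATIONS = np.array([
    [0.0, 0.0, 0.0],
    [10000.0, 0.0, 0.0],
    [0.0, 10000.0, 0.0],
    [10000.0, 10000.0, 200.0],
    [5000.0, 5000.0, 100.0],
])

def arrival_times(src, t0):
    """Time each station hears a pulse emitted at `src` at time `t0`."""
    return t0 + np.linalg.norm(STATIONS - src, axis=1) / C

def locate(times, guess, t0_guess=0.0, iters=30):
    """Gauss-Newton fit of source position and emission time.

    Residual per station: r_i = t_i - t0 - |x - s_i| / C.
    Don't start the guess exactly on a station (division by zero).
    """
    x = np.asarray(guess, dtype=float).copy()
    t0 = t0_guess
    for _ in range(iters):
        d = np.linalg.norm(STATIONS - x, axis=1)
        r = times - t0 - d / C
        # Jacobian of r with respect to (x, y, z, t0).
        J = np.hstack([-(x - STATIONS) / (C * d[:, None]),
                       -np.ones((len(STATIONS), 1))])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x += step[:3]
        t0 += step[3]
    return x, t0
```

With five stations the four unknowns are overdetermined, which is why arrays with many sensors yield so many usable datapoints per storm: every detected pulse becomes one 3-D sample.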
greenyoda about 11 years ago
<i>&quot;It is immediately clear that they are taken from about the same direction but different heights: the second bolt looks squashed vertically.&quot;</i><p>Isn&#x27;t it also possible that the photos were taken at slightly different times and the path of the lightning bolt shifted slightly over that period?<p>For example, if you look at this 9-second video of a high voltage electrical arc[1], you&#x27;ll see that its path shifts around quite a bit over time.<p>[1] <a href="https://www.youtube.com/watch?v=euW4NerLAPg" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=euW4NerLAPg</a>
sytelus about 11 years ago
I'm still not sure how this was done. Here's what I have gathered so far from the post: The author has two images taken at unknown locations. He scaled them and then marked points on each. Then he matched up the points on the two images manually. Now he has dx and dy for each point on one image, relative to the other. He asserts that bigger dx means nearer to the camera, so I'm thinking he takes some proportionality constant to get z = c * dx.

But wouldn't that produce a pretty arbitrary shape depending on the value of c?
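The reading of the method described in the comment above, and the scale ambiguity it asks about, can be sketched in a few lines. The point coordinates, offsets, and constants here are all invented; this is not the post author's actual data or code.

```python
import numpy as np

# Invented matched features: (x, y) in the first photo plus the
# horizontal offset dx of the same feature in the second photo.
points = np.array([[100.0, 200.0],
                   [150.0, 180.0],
                   [220.0, 300.0]])
dx = np.array([12.0, 8.0, 20.0])

def reconstruct(points, dx, c):
    """Lift 2-D points to 3-D using the assumed rule z = c * dx."""
    return np.column_stack([points, c * dx])

# Two different constants give the same point cloud up to a stretch
# along the depth axis: c fixes the aspect ratio of the model, not
# the relative depth ordering of the points.
a = reconstruct(points, dx, 1.0)
b = reconstruct(points, dx, 2.5)
```

So the shape is indeed recovered only up to a one-parameter stretch in depth. For comparison, with a calibrated stereo pair the textbook relation is z = f·B / dx (depth inversely proportional to disparity), and even there the unknown baseline B leaves an overall scale ambiguity if the camera positions aren't known.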
dllthomas about 11 years ago
I like how it has a shadow.