Lytro Lightfield Gallery

158 points by ideamonk · about 14 years ago

10 comments

jannes · about 14 years ago
Could this be used for fully automated extraction of objects based on their focus range? I think a clever algorithm that analyzes the sharpness of all these layers might make it possible. I don't know whether the technique also extends to moving images, but if so, maybe it could be used to composite them automatically, without any need for a green screen. Basically, you would be separating the image layers based on distance instead of chroma.
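As a rough illustration of that idea, here is a minimal Python sketch of focus-range keying, assuming a pre-captured focal stack and OpenCV/NumPy; `stack`, `near_layer`, and `far_layer` are hypothetical inputs, and Laplacian energy is just one possible sharpness measure:

```python
# Focus-range object extraction from a focal stack (sketch).
# `stack` is a list of images of the same scene, each focused at a
# different depth, ordered nearest-first.
import cv2
import numpy as np

def sharpness_maps(stack, blur=9):
    """Per-pixel sharpness for each focal layer (local Laplacian energy)."""
    maps = []
    for img in stack:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
        lap = cv2.Laplacian(gray, cv2.CV_32F)
        # Smooth the squared response so the map reflects neighborhood sharpness.
        maps.append(cv2.GaussianBlur(lap * lap, (blur, blur), 0))
    return np.stack(maps)  # shape: (layers, H, W)

def extract_by_focus_range(stack, near_layer, far_layer):
    """Mask pixels whose sharpest layer falls inside [near_layer, far_layer]."""
    s = sharpness_maps(stack)
    best = np.argmax(s, axis=0)          # index of sharpest layer per pixel
    mask = (best >= near_layer) & (best <= far_layer)
    return mask.astype(np.uint8) * 255   # binary matte, no green screen needed
```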
pyrtsa · about 14 years ago
Reminds me of that multi-lens camera Adobe was playing with some years ago.

http://visualnary.com/2008/04/13/lens-that-takes-multiple-pictures-at-once.html
http://visualnary.com/2008/04/13/nab-predictions.html
iandanforth · about 14 years ago
The initial applications of this are interesting, but it's what comes AFTER that will be really cool.

This is the capture half of capturing and displaying true 3D images.

What do I mean by 'true'? Imagine a screen that works like a window.

If you think of a window or a mirror as a display screen, you can imagine every point on the screen as a tiny hemispherical lens, with light exiting the screen in all directions through these lenses. By producing light in every direction (as opposed to just perpendicular to the screen, plus diffusion), you could let your eye decide what to focus on. Such a system would also be view-angle agnostic, so you could look from the side and see a wider 'view' into the scene (and this works for n viewers).

Such a display would be complex to implement, but even if you had one, you'd need image capture like Lytro is providing to make it work.

Exciting times!
est · about 14 years ago

http://lytro.com/gallery/fieldsofgold2.dat
http://lytro.com/gallery/lytro_50_00084.dat
http://lytro.com/gallery/lytro_50_00087.dat
http://lytro.com/gallery/lytro_50_00090.dat

Custom binary format?

And I suppose this is the key:

http://lytro.com/include/refocus.swf
irq · about 14 years ago
Anyone know anything else about the company? Founders, investors, etc.? The only thing I could dig up is that Manu Kumar has Lytro's Twitter account on his "portfolio" list [1] and that the domain was, interestingly, created in 2003. Formerly known as "Refocus Imaging".

EDIT: And a few job listings [2] [3].

[1]: https://twitter.com/#!/ManuKumar/portfolio/members
[2]: http://www.indeed.com/q-Lytro-l-Mountain-View,-CA-jobs.html
[3]: http://www.jobnum.com/Manufacturing-jobs/296905.html
dvse · about 14 years ago
Couldn't you do something like this with a Kinect? Depth information is all you need for a plausible "software focus" effect. Not sure how effective it would be outdoors, though.
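A minimal sketch of that depth-driven "software focus", assuming a registered RGB-D pair such as a Kinect might provide; `rgb`, `depth`, and `focus_m` are hypothetical inputs:

```python
# Depth-dependent blur: pixels far from the chosen focal plane get a
# larger blur kernel, mimicking a shallow depth of field.
import cv2
import numpy as np

def software_focus(rgb, depth, focus_m, strength=8.0, layers=8):
    """rgb: HxWx3 image; depth: HxW array in meters; focus_m: focus depth."""
    out = np.zeros_like(rgb, dtype=np.float32)
    edges = np.linspace(depth.min(), depth.max(), layers + 1)
    for i in range(layers):
        mid = 0.5 * (edges[i] + edges[i + 1])
        # Circle of confusion grows with distance from the focal plane.
        radius = int(strength * abs(mid - focus_m))
        k = 2 * radius + 1
        blurred = cv2.GaussianBlur(rgb, (k, k), 0) if radius > 0 else rgb
        sel = (depth >= edges[i]) & (depth < edges[i + 1] + 1e-6)
        out[sel] = blurred[sel]
    return out.astype(rgb.dtype)
```

(A real implementation would also have to handle occlusion edges and holes in the depth map, which is where this naive per-bucket version breaks down.)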
sfgfdhgfdshdhhd · about 14 years ago
Their method captures a light field instantaneously at the expense of spatial resolution. They place a microlens array where the film would be, followed by a sensor. Each microlens forms a disk on the sensor with an angular light distribution. Developed in Marc Levoy's group: http://graphics.stanford.edu/papers/lfcamera/

I guess Ren Ng is behind the company, as he was first author: http://graphics.stanford.edu/~renng/

This method works nicely for photography, where all the dimensions involved are much bigger than the wavelength. I work on something similar in fluorescence microscopes; I can tell you it is much harder when you have to consider wave optics.

Here is a related talk: http://www.youtube.com/watch?v=THzykL_BLLI
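For intuition, here is a minimal shift-and-add refocusing sketch in the spirit of the paper linked above, assuming the raw microlens-disk data has already been resampled into a (U, V, H, W, 3) array of sub-aperture views; `lf` and `alpha` are hypothetical inputs:

```python
# Synthetic refocusing over a 4D light field by shift-and-add (sketch).
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(lf, alpha):
    """lf: (U, V, H, W, 3) sub-aperture views; alpha: virtual focal plane
    (1.0 reproduces the as-shot focus)."""
    U, V, H, W, C = lf.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros((H, W, C), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Each view is translated in proportion to its angular offset;
            # summing aligns the rays that converge on the chosen focal plane.
            dy = (u - cu) * (1 - 1 / alpha)
            dx = (v - cv) * (1 - 1 / alpha)
            acc += nd_shift(lf[u, v], (dy, dx, 0), order=1, mode='nearest')
    return (acc / (U * V)).astype(lf.dtype)
```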
sfgfdhgfdshdhhd · about 14 years ago
If you're interested in this and you have an iPhone, you could try this app: http://sites.google.com/site/marclevoy/

It captures video while you move the iPhone camera and combines it into an image that looks like it was captured with a big aperture.

I don't have an iPhone and have never seen this app in real life, though.
hackermom · about 14 years ago
What they are doing is simply grabbing frames from DSLR video: a short 1-2 second recording with the focus manually pulled from one subject to the other, with a number of frames saved out of that short clip.
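If that were the case, the gallery could indeed be produced by sampling a handful of evenly spaced frames from the clip, e.g. (a minimal sketch; the file name and frame count are hypothetical):

```python
# Sample evenly spaced frames from a focus-pull video clip.
import cv2

cap = cv2.VideoCapture("focus_pull.mp4")
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
for i in range(10):  # ten evenly spaced focus "layers"
    cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // 10)
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(f"layer_{i:02d}.jpg", frame)
cap.release()
```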
JoeAltmaier · about 14 years ago
Kind of interesting. Somebody has to do it, I guess. But it's not as flashy as they think - refocus the picture? Or just focus right the first time. Zoom? Enough megapixels, and what else? Nothing, I suppose.

We've seen some really interesting stuff on HN about tracking through crowds, reconstructing images from fragments, etc. If these folks can do anything like that, they aren't showing it.