Could this be used to automatically cut out objects based on their focus range? I think with a clever algorithm that analyzes the sharpness of all these layers, it might be possible.
I don't know if this technique also extends to moving images, but if so, maybe it could be used to composite video automatically, without any need for a green screen. Basically, you would be separating the image into layers based on distance instead of chroma.
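Something like this, maybe; a rough Python/OpenCV sketch of the sharpness analysis (the stack of refocused layers is a hypothetical input):

    import numpy as np
    import cv2

    def sharpness_map(img):
        # Per-pixel sharpness: squared Laplacian response, smoothed so
        # the measure is stable over small regions.
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64)
        lap = cv2.Laplacian(gray, cv2.CV_64F)
        return cv2.GaussianBlur(lap * lap, (9, 9), 0)

    def layer_masks(stack):
        # stack: list of images of the same scene, each refocused at a
        # different depth. For every pixel, pick the layer where it is
        # sharpest; each boolean mask is then a depth-based cutout.
        sharpness = np.stack([sharpness_map(img) for img in stack])
        best = np.argmax(sharpness, axis=0)
        return [best == i for i in range(len(stack))]

Each mask is effectively a matte for one focus range, which is exactly what you'd need for green-screen-free compositing.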
Reminds me of that multi-lens camera Adobe was playing with some years ago.<p><a href="http://visualnary.com/2008/04/13/lens-that-takes-multiple-pictures-at-once.html" rel="nofollow">http://visualnary.com/2008/04/13/lens-that-takes-multiple-pi...</a>
<a href="http://visualnary.com/2008/04/13/nab-predictions.html" rel="nofollow">http://visualnary.com/2008/04/13/nab-predictions.html</a>
The initial applications of this are interesting, but it's what comes AFTER that will be really cool.<p>This is the capture half of capturing and displaying true 3D images.<p>What do I mean by 'true'? Imagine a screen that works like a window.<p>If you think of a window or a mirror as a display screen, every point on that screen acts like a tiny hemispherical lens: light exits in all directions, not just perpendicular to the screen plus some diffusion. By producing light in every direction, a display could let your eye decide what to focus on. Such a system would also be view-angle agnostic, so you could look from the side and see a wider 'view' into the scene, and it works for any number of viewers at once.<p>Such a display would be complex to implement, but even if you had one, you'd need image capture like what Lytro is providing to feed it.<p>Exciting times!
Anyone know anything else about the company? Founders, investors, etc? The only thing I could dig up is that Manu Kumar has Lytro's Twitter account on his "portfolio" list [1] and that the domain was, interestingly, created in 2003. Formerly known as "Refocus Imaging".<p>EDIT: And a few job listings [2] [3].<p>[1]: <a href="https://twitter.com/#!/ManuKumar/portfolio/members" rel="nofollow">https://twitter.com/#!/ManuKumar/portfolio/members</a><p>[2]: <a href="http://www.indeed.com/q-Lytro-l-Mountain-View,-CA-jobs.html" rel="nofollow">http://www.indeed.com/q-Lytro-l-Mountain-View,-CA-jobs.html</a><p>[3]: <a href="http://www.jobnum.com/Manufacturing-jobs/296905.html" rel="nofollow">http://www.jobnum.com/Manufacturing-jobs/296905.html</a>
Couldn't you do something like this with a Kinect? Depth information is all you need for a plausible "software focus" effect. Not sure how well it would work outdoors, though.
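A minimal sketch of that, assuming you already have a depth map aligned to the color image (the names and the five blur levels are made up):

    import numpy as np
    import cv2

    def software_focus(img, depth, focus_depth, levels=5):
        # depth: float array aligned with img, normalized to [0, 1],
        # e.g. from a Kinect. Pixels are blurred more the farther
        # their depth is from the chosen focal plane.
        error = np.abs(depth - focus_depth)
        bins = np.minimum((error * levels).astype(int), levels - 1)
        out = np.zeros_like(img)
        for i in range(levels):
            k = 2 * i + 1  # odd Gaussian kernel size: 1, 3, 5, ...
            blurred = img if k == 1 else cv2.GaussianBlur(img, (k, k), 0)
            mask = bins == i
            out[mask] = blurred[mask]
        return out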
Their method captures a light field instantaneously at the expense of spatial resolution. They place a microlens array where the film would be, followed by a sensor; each microlens forms a disk on the sensor that records the angular distribution of the incoming light. It was developed in Marc Levoy's group: <a href="http://graphics.stanford.edu/papers/lfcamera/" rel="nofollow">http://graphics.stanford.edu/papers/lfcamera/</a><p>I guess Ren Ng is behind the company, as he was first author: <a href="http://graphics.stanford.edu/~renng/" rel="nofollow">http://graphics.stanford.edu/~renng/</a><p>This method works nicely for photography, where all the dimensions involved are much bigger than the wavelength of light. I work on something similar in fluorescence microscopes, and I can tell you it is much harder when you have to consider wave optics.<p>Here is a related talk:
<a href="http://www.youtube.com/watch?v=THzykL_BLLI" rel="nofollow">http://www.youtube.com/watch?v=THzykL_BLLI</a>
If you're interested in this and you have an iPhone, you could try this app: <a href="http://sites.google.com/site/marclevoy/" rel="nofollow">http://sites.google.com/site/marclevoy/</a><p>It captures a video while you move the iPhone camera and combines it into an image that looks like it was captured with a big aperture.<p>I don't have an iPhone and have never seen this app in real life, though.
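The trick there is a synthetic aperture: align every frame on the subject and average them, so parallax smears everything off the subject plane into a blur. A rough OpenCV sketch (the template ROI marking the subject is a hypothetical input):

    import numpy as np
    import cv2

    def synthetic_aperture(frames, roi):
        # roi = (x, y, w, h): the subject's position in the first
        # frame. Each frame is translated so the subject stays
        # registered, then all frames are averaged; the misaligned
        # background blurs out.
        x, y, w, h = roi
        template = frames[0][y:y + h, x:x + w]
        acc = np.zeros(frames[0].shape, dtype=np.float64)
        for f in frames:
            res = cv2.matchTemplate(f, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, (bx, by) = cv2.minMaxLoc(res)
            M = np.float32([[1, 0, x - bx], [0, 1, y - by]])
            acc += cv2.warpAffine(f, M, (f.shape[1], f.shape[0]))
        return (acc / len(frames)).astype(np.uint8)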
What they are doing is simply grabbing frames from DSLR video: a short 1-2 second recording with the focus manually pulled from one subject to the other, then saving a number of frames ripped out of that short clip.
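Which would be trivial to script; for example (filename and frame step are made up):

    import cv2

    cap = cv2.VideoCapture("focus_pull.mov")  # short manual focus pull
    i = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % 5 == 0:  # keep every 5th frame as one "focus layer"
            cv2.imwrite("layer_%02d.png" % saved, frame)
            saved += 1
        i += 1
    cap.release()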
Kind of interesting. Somebody has to do it, I guess. But it's not as flashy as they think: refocus the picture? Or just focus right the first time. Zoom? That's just a matter of enough megapixels. And what else? Nothing, I suppose.<p>We've seen some really interesting stuff on HN about tracking people through crowds, reconstructing images from fragments, etc. If these folks can do anything like that, they aren't showing it.