I agree with most of the statements, but there is one thing I think a Lytro camera is better at: street photography.

Quick shots of street moments, where you have no time to focus properly but still want a sharp subject: a light-field camera is great for these situations, and I think this isn't played up enough.
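To make the refocus-after-capture idea concrete: the classic algorithm is shift-and-add over the sub-aperture views that the lenslet array captures. A minimal Python sketch, assuming you already have the views as a 4D array (the array layout and names here are illustrative, not Lytro's actual format):

```python
# Shift-and-add refocusing. `subviews` is a hypothetical 4D array of
# sub-aperture images with shape (U, V, H, W).
import numpy as np
from scipy.ndimage import shift

def refocus(subviews, alpha):
    """Shift each sub-aperture view in proportion to its (u, v) offset
    from the aperture center, then average. alpha picks the virtual
    focal plane; alpha = 1 reproduces the original focus."""
    U, V, H, W = subviews.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Views farther from the aperture center shift more.
            du = (u - cu) * (1.0 - 1.0 / alpha)
            dv = (v - cv) * (1.0 - 1.0 / alpha)
            out += shift(subviews[u, v].astype(float), (du, dv), order=1)
    return out / (U * V)
```

Sweeping alpha moves the virtual focal plane, so the "focus later" workflow is literally just re-running this sum with a different parameter.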
I took a class that explained light field photography, but I found it difficult to understand the concepts through static diagrams. I made a little Mac app that simulates a 2D scene of light sources, lenses, and light-field sensors. The sensors show their captured output in a graph.

The source is at https://github.com/bridger/optic-workbench

A bit of explanation (and a link to the pre-built binary) is at http://www.bridgermaxwell.com/index.php/blog/optic-workbench-light-field-capture/

(Sorry, it is only for Macs. I would love to redo it in JavaScript someday, once I know JavaScript.)
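To give a flavor of what such a simulator computes: in flatland, the light field a sensor records is just (position, angle) pairs. A minimal Python sketch under paraxial assumptions (the scene geometry and names are mine, not taken from the app):

```python
# Flatland light-field capture: trace a fan of rays from a point source
# through an ideal thin lens and record (position, angle) at the sensor.
# Geometry and numbers are illustrative.
import numpy as np

def capture(source_x, source_z, lens_z, sensor_z, focal_len, n_rays=200):
    samples = []
    for theta in np.linspace(-0.3, 0.3, n_rays):  # fan of ray angles (radians)
        # Propagate from the source to the lens plane.
        x_lens = source_x + np.tan(theta) * (lens_z - source_z)
        # Paraxial thin lens: the ray's angle changes by -x/f.
        theta2 = theta - x_lens / focal_len
        # Propagate from the lens to the sensor plane.
        x_sensor = x_lens + np.tan(theta2) * (sensor_z - lens_z)
        samples.append((x_sensor, theta2))
    return np.array(samples)  # rows of (sensor position, incoming angle)

# f = 50 and a source 100 units away focus 100 units behind the lens,
# so a sensor at z = 180 (80 behind the lens) sees it defocused.
field = capture(source_x=0.0, source_z=0.0, lens_z=100.0,
                sensor_z=180.0, focal_len=50.0)
```

Plotting the samples as position vs. angle gives the kind of graph the app draws: an in-focus source collapses to a vertical line, while an out-of-focus one shears into a slanted band.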
The problem with light-field photography, beyond the inherent image-quality trade-off of splitting a sensor into a bunch of lower-resolution, lower-illumination, differently-focused subfields, is that it's so easily simulated if anyone actually attempts to do scene capture properly: through adaptations of stereo-vision algorithms to higher-N arrays of semi-calibrated sensors, through near-infrared structured light (like the Kinect) for a 4D scene, or through structure from motion for a static 3D scene.

The new HTC One M8 uses crap smartphone sensors, *only two of them*, without any structured light, with a first-generation stereo-blurring algorithm dumb enough to run quickly on a smartphone, and it produces a rough approximation of the same product as Lytro.
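For a sense of how little machinery that kind of stereo-blur pipeline needs, here's a rough Python/OpenCV sketch of the general approach: block-matching disparity, then depth-weighted blur. (File names and tuning values are placeholders; this is not HTC's actual algorithm.)

```python
# Two views in, fake bokeh out: block-matching disparity, then a blur
# weighted by distance from a chosen focal plane.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Nearer objects shift more between the two views; StereoBM returns
# fixed-point disparities with 4 fractional bits.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Normalize to [0, 1]: 1 = nearest, 0 = farthest.
d = cv2.normalize(disparity, None, 0.0, 1.0, cv2.NORM_MINMAX)

# Keep the chosen focal plane sharp and blend in a blurred copy
# everywhere else, weighted by distance from that plane.
focal = 0.8  # normalized disparity of the plane to keep in focus
blurred = cv2.GaussianBlur(left, (21, 21), 0)
weight = np.clip(np.abs(d - focal) * 2.0, 0.0, 1.0)
result = (left * (1 - weight) + blurred * weight).astype(np.uint8)
cv2.imwrite("fake_bokeh.png", result)
```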
The thing that excites me most about light field photography is that LF cameras are the only thing that could take a photo that would actually feel at home in an Oculus Rift. Binocular cameras fake the 3D experience by capturing two 2D images and fixing one to each eye. But in the Oculus you can move your head around, so two 2D images aren't enough: you need an actual depth map to generate new images in response to head movements, and LF cameras can provide one.

http://www.cs.berkeley.edu/~ravir/lightfield_ICCV.pdf
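The core operation is straightforward once you have per-pixel depth: reproject the photo to a nearby viewpoint. A minimal numpy sketch for a purely horizontal head shift with a pinhole camera (parameter names are mine; real renderers also have to fill the disocclusion holes this leaves):

```python
# Depth-image-based rendering: reproject one photo plus its depth map to
# a horizontally shifted viewpoint. Pinhole parameters are assumptions.
import numpy as np

def reproject(image, depth, baseline, focal_px):
    """Shift each pixel by its parallax, disparity = focal_px * baseline
    / depth, so nearer pixels move more. depth is assumed > 0 everywhere."""
    h, w = depth.shape
    out = np.zeros_like(image)  # holes stay black; real renderers inpaint
    xs = np.arange(w)
    for y in range(h):
        disparity = (focal_px * baseline / depth[y]).astype(int)
        new_x = np.clip(xs + disparity, 0, w - 1)
        order = np.argsort(-depth[y])     # painter's order: far pixels first,
        out[y, new_x[order]] = image[y, xs[order]]  # so near ones overwrite
    return out
```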
I'm not sure that consumers don't care about bokeh; they just don't know how to achieve it. I've been using the Google camera app that does lens blur, and everyone I show it to loves it. It's a little cumbersome to use, though, and requires a still subject. Maybe light field could be one of the features that keeps compact cameras around in the face of competition from phones.

I also wonder if there are computer vision applications. I don't know how accurate depth maps from stereoscopic images are, and maybe two cameras are not always practical...
I don't see the $1.5k Lytro gaining enough market share to be profitable, either. The only option I see for Lytro is to miniaturize their technology so that it is significantly better than Google's software offering, and to partner with a major phone manufacturer.
Could LFP be used to 3D-map a scene, assuming you can extract the data from their proprietary format? It would be great to be able to turn a collection of photos into a virtual world.
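In principle, yes: once a depth map has been recovered from the light field, back-projecting it through a pinhole camera model gives a point cloud. A minimal sketch with placeholder intrinsics (extracting the data from the proprietary format is the hard part and isn't shown):

```python
# Back-project a depth map into a point cloud with a pinhole model.
# fx, fy, cx, cy are placeholder intrinsics, not Lytro's calibration.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """For each pixel (u, v) with depth z:
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)  # N x 3 points
```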
> "Holographic Recording ... remains irrelevant to consumers until we have convenient holographic displays to match."<p>I'm not sure why Lytro hasn't touted this side of the technology... the first consumer holographic display (oculus rift) is due relatively soon.