Revolutionary "Light Field" camera tech - shoot-first, focus-later

385 points by hugorodgerbrown, almost 14 years ago

29 comments

sbierwagen, almost 14 years ago
The downsides, which, of course, this press release doesn't mention:

- Greatly, greatly reduced image resolution. Great big dedicated-camera sized lens and image sensor, cellphone-camera sized pictures. 1680×1050, at most. (1.76MP)

- Color aberration. The microlenses have to be small, of course, so they're going to be made of single physical elements, rather than doublets.[1]

- Various amusing aliasing problems. (Note the fine horizontal lines on some of the demo shots.)

- Low FPS. Each image requires lots of processing, which means the CPU will have to chew on data for a while before you can take another image.

- Proprietary toolchain for the dynamic images. Sure, cameras all have their particular RAW sensor formats, but this is also going to have its own output image format. No looking at thumbnails in file browsers. Photoshop won't have any idea what to do with it. Can't print it, of course.

- You can just produce a composite image that's sharp all over, but why not use a conventional camera with a stopped-down[2] lens, then?

- It's going to be really thrillingly expensive. This is a given, of course, with new camera technology.

[1]: http://en.wikipedia.org/wiki/Doublet_(lens)
[2]: http://en.wikipedia.org/wiki/F/stop#Effects_on_image_quality
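(For the curious: the "focus later" trick from Ng's dissertation boils down to shift-and-add over the sub-aperture views that the microlens array captures. A minimal numpy sketch of the idea; the `subapertures` input format and the integer-pixel shifts are simplifying assumptions, since real pipelines interpolate sub-pixel shifts:

```python
import numpy as np

def refocus(subapertures, alpha):
    """Shift-and-add refocusing over sub-aperture views.

    subapertures: dict mapping microlens offsets (u, v) to 2D images
    alpha: refocus parameter; each view is shifted by alpha * (u, v)
    before averaging, which moves the synthetic focal plane.
    """
    acc = None
    for (u, v), img in subapertures.items():
        # integer-pixel shift as a stand-in for proper interpolation
        shifted = np.roll(img, (int(round(alpha * u)), int(round(alpha * v))),
                          axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(subapertures)
```

Averaging many slightly-shifted copies is also why each refocused output pixel costs a whole patch of sensor pixels, which is exactly the resolution hit described above.)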
pgbovine, almost 14 years ago
FYI, Ren Ng (the founder of this company) won the 2006 ACM Doctoral Dissertation Award for the research that turned into this product: http://awards.acm.org/doctoral_dissertation/
ricardobeat, almost 14 years ago
I remember seeing this "news" years ago...

edit: here's the article from 2005: http://graphics.stanford.edu/papers/lfcamera/

The company that flourished from this research in 2008: http://www.crunchbase.com/company/refocus-imaging

And another startup already doing this for mobile phones: http://dvice.com/archives/2011/02/pelican-imaging.php
schwabacher, almost 14 years ago
One awesome use for this technology is in microscopes! Instead of having to focus on each slide, slides can be run through much faster, photographed once, and interesting objects (like cells in a culture) can be found by processing afterwards.

And even cooler, IMO, is that a display panel with proportionately sized microlenses can be used (after a little image processing) to recreate the light field for a glasses-free 3D display.
ggchappell, almost 14 years ago
Very nice. But it leaves me wondering about a couple of things:

(1) Given the info captured by the camera, can we, without further human input, create an image in which *everything* is in focus?

(2) *What the heck are these people thinking?* Going into the camera business? That means that, in order to get my hands on this technology, I am stuck with whatever zillions of other design decisions they made. One product. No competition. No multiple companies trying different ways to integrate this idea into a product. And if this company goes belly-up, then the good ol' patent laws mean that the tech is just gone for more than a decade. <sigh> *Please* license this.

P.S. FTA:

> Once images are captured, they can be posted to Facebook and shared via any modern Web browser, including mobile devices such as the iPhone.

Surely there must be a more straightforward, but still understandable to non-techies, way to say "the result is an ordinary image file".
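(On question (1): yes in principle. Once one exposure can be rendered at many focal depths, an all-in-focus composite is just "keep the sharpest slice per pixel". A rough numpy sketch; the focal `stack` input and the gradient-magnitude sharpness measure are illustrative assumptions, not anything Lytro has described:

```python
import numpy as np

def all_in_focus(stack):
    """Merge a focal stack into a single everywhere-sharp image.

    stack: sequence of 2D arrays, each rendered with a different
    focal plane (a light field camera could synthesize such a
    stack from one exposure). Local gradient magnitude serves as
    the sharpness proxy; per pixel, keep the slice where it peaks.
    """
    stack = np.asarray(stack, dtype=float)
    gy, gx = np.gradient(stack, axis=(1, 2))   # row/col gradients per slice
    sharpness = gx ** 2 + gy ** 2
    best = np.argmax(sharpness, axis=0)        # sharpest slice index per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

No human input needed, which is presumably why sbierwagen's "sharp all over" composite is considered table stakes.)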
EdgarZambrana, almost 14 years ago
Imagine it being combined with technology that tracks your eye movements, automatically focusing the part of the image you're looking at.
jianshen, almost 14 years ago
Looking forward, I see interesting applications of this tech in motion graphics and film. Where 3D movies have failed by forcing the audience to focus on something, I can see this bringing photos, and eventually film, to life in ways that let the audience control more of what they want to experience.

On the motion graphics side, I imagine all kinds of creative potential in compositing photography together with procedural or rendered graphics.
shadowpwner, almost 14 years ago
The ability to focus afterwards comes at the cost of image size and quality, assuming they use a microlens array similar to the one in the study here: http://graphics.stanford.edu/papers/lfcamera/. However, this is cleverly marketed towards the social media crowd, which has little use for high-resolution photos.
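(The size of that tradeoff is easy to ballpark: each microlens spends a whole patch of sensor pixels on ray direction rather than image detail, so spatial resolution drops by the patch area. All numbers below are assumptions for illustration, not Lytro's actual specs:

```python
# Back-of-envelope plenoptic resolution tradeoff. Every figure here
# is an illustrative assumption, not an actual Lytro specification.
sensor_pixels = 10_000_000   # a typical consumer sensor of the era
angular_samples = 10 * 10    # pixels behind each microlens (ray directions)

# Each microlens yields one spatial sample in the refocused image.
spatial_samples = sensor_pixels // angular_samples
print(spatial_samples)       # 100000, i.e. roughly a 0.1 MP image
```

A 100x hit is why the demo images look so small next to contemporary point-and-shoots.)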
gmatty, almost 14 years ago
I'm not an optics expert, but couldn't this be used to generate 3D depth maps? By stepping through each field depth you could find the edges of objects (by how clear they were at each depth) and map those edges onto a mesh. Effectively, doing what the Kinect does but without any of the infrared projections...
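(Sketching that idea: sweep the synthetic focal plane, score local sharpness at each depth, and take the per-pixel argmax as the depth estimate. A toy numpy version; the focal `stack`/`depths` inputs and the gradient-based sharpness cue are assumptions, and real depth-from-focus systems add windowed measures and regularization:

```python
import numpy as np

def depth_from_focus(stack, depths):
    """Crude per-pixel depth map from a refocusable image.

    stack:  sequence of 2D arrays, each refocused at a known distance
    depths: focus distance of each slice, same length as stack
    The slice where a pixel's neighborhood is sharpest is taken as
    that pixel's depth -- no infrared projector needed.
    """
    stack = np.asarray(stack, dtype=float)
    gy, gx = np.gradient(stack, axis=(1, 2))   # sharpness cue per slice
    best = np.argmax(gx ** 2 + gy ** 2, axis=0)
    return np.asarray(depths, dtype=float)[best]
```

Textureless regions give no focus cue at all, which is one reason the Kinect projects its own infrared pattern.)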
sajid, almost 14 years ago
Raytrix already have a plenoptic camera on the market:

http://raytrix.de/index.php/r11.185.html
DanielBMarkham, almost 14 years ago
Now just give me this in full stereoscopic, hi-res, for my cellphone. With video.

Of course (hopefully), that's version 4 or 5. This initial roll-out is looking great! Can't wait to play around with one of the units in the local photo shop.

Looking at the demos, I wonder what the depth of field is? Is it entirely calculable, or is it just a few feet and then the user sets the target? It looks like it is tiny, but I'm guessing it's set that way to show off the cool features of the technology.
revorad, almost 14 years ago
This sounds very exciting. To play the devil's advocate, however: on most of the example photos on Lytro's site, you really only need two points of focus, roughly near and far. Clicking on those two shows you everything there is to see in a picture.

If someone comes up with software to allow refocusing on two distance points with existing photos, they could eat Lytro's lunch. Can Picasa do something like this?
erikpukinskis, almost 14 years ago
Once it can capture high-speed video of the light field, so that you can actually change the timing and exposure of each shot as well as the focus... then we'll really be somewhere. Then you can just aim the camera, click the button some time shortly after something cool happens, and go back and get the perfect shot. Hell, capture a 360-degree panorama and you can even aim after the fact!
sp332, almost 14 years ago
This is much more useful than a simple depth map, since it works with translucent and amorphous things like steam, and other things that are hard to model with meshes like motes of dust. Also, if you have a shiny object, focusing at one depth might show the surface of the object in focus, but focusing at a different depth would show the reflection in focus.
humanfromearth, almost 14 years ago
Doesn't Magic Lantern already do this? I mean, its focus bracketing shoots 3 (or maybe more) pictures, as fast as the camera can manage, at different focal distances. You just make sure the depth of field is wide enough to cover the distances between those points, and you should have a similar effect.
ugh, almost 14 years ago
It's possible to buy a light field camera right now, for example from a German company named Raytrix (http://raytrix.de/index.php/home.181.html). I don't know whether they are the only example or whether there are other companies.

They don't name a price on their website (write them to find out) and, looking at the applications they name on their website (http://www.raytrix.de/index.php/applications.html), they certainly do not target consumers.

Here are their camera models if you are interested: http://www.raytrix.de/index.php/models.html
clc, almost 14 years ago
This is a very interesting concept; I would be doubly interested to see this technology used for video in camcorders. However, I'm curious whether they have the resources to go toe-to-toe with Nikon, Canon, and Olympus. The camera industry is so competitive... and the life cycles on digital cameras are so quick nowadays. They may find it difficult to keep up.
dennisgorelik, almost 14 years ago
These guys clearly target investors' money, not consumers. 1) Lots of hype long before the product is released. 2) Ignoring the market trend (consumers prefer smartphone integration over picture quality). 3) Instead of focusing on refining and selling the technology, they want to reinvent the wheel and produce their own camera.

I'd say that investors will lose lots of money on that venture.
ralfd, almost 14 years ago
See also the discussion three weeks ago on HN: http://news.ycombinator.com/item?id=2596377

There is also an (unrelated) iPhone app by the inventor for playing with depth of field: http://sites.google.com/site/marclevoy/
mikecane, almost 14 years ago
Foveon was supposed to revolutionize digital photography too. Hell, there was even a book written about it.
romansanchez, almost 14 years ago
Props for the innovation, but I doubt the appeal will suffice for widespread reach in the consumer market. Even if it did, a licensing deal would be more appropriate, since innovation is what they're good at, not distribution.
mortenjorck, almost 14 years ago
How closely is Lytro's method related to Adobe's Magic Lens demonstrated last summer? http://www.youtube.com/watch?v=-EI75wPL0nU
DLarsen, almost 14 years ago
As a geek, I think this is totally awesome. As a casual photographer, I'm less excited. I've pretty much got the hang of focusing my shots as I'm taking them. Why focus later what I can focus now?
epo, almost 14 years ago
Another Segway? Let's see if any reviewers ever get their hands on one.
benjoffe, almost 14 years ago
Combine this with eye tracking (to the level that my focal depth can be detected) and an automatic lens over my monitor and you'd have a pretty immersive picture.
SocratesValmost 14 years ago
Didn't Adobe showcase the same technology in September 2009?

Of course, they haven't delivered a consumer product with it yet... but neither has this company.

Let's wait and see...
MasterScrat, almost 14 years ago
I think some kind of Ken Burns effect with transitions between the different focal planes would make a good screensaver.
hdeo, almost 14 years ago
There may be an opportunity in "doing something" with all the data collected...
jaekwon, almost 14 years ago
Can this light field tech be used in reverse to create 3D holographic images?