It depends on your definition of "lensless".<p>You know those kids' books that have the bumpy plastic coating, where you turn the book one way and see one image, then look at it from a different angle and see another?<p>This is the same concept. They have a bumpy plastic coating that sends the incoming light in different directions. They do some processing on standard images to determine how the scattering works and then use that scattering pattern to reconstruct new images.<p>I would view the bumpy coating as a myriad of lenses that change the character of the incoming light.<p>We have one of those bathroom windows with warped glass that breaks up the light enough to give you privacy. I've often thought it would be a fun project to build a camera system that you could calibrate to decode that scattered image, by placing a known image behind the window and pre-determining how the light is refracted. It's cool to see that someone implemented something similar.
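If light transport through the warped glass is linear, the "calibrate against a known image" idea amounts to estimating a mixing matrix from known scene/measurement pairs and then inverting it. A toy numpy sketch of that assumption (not the paper's actual method, and the sizes are unrealistically small):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64                                   # tiny flattened "scene"

    # Unknown scattering of the window (we only get to probe it)
    A_true = rng.normal(size=(n, n))

    # Calibration: show known patterns behind the window, record the sensor
    X_calib = rng.normal(size=(n, 200))      # 200 known test images
    B_calib = A_true @ X_calib               # what the camera sees

    # Fit the transport matrix by least squares: A X = B  =>  X.T A.T = B.T
    A_est = np.linalg.lstsq(X_calib.T, B_calib.T, rcond=None)[0].T

    # "Decode" a new scene from its scattered measurement
    scene = rng.normal(size=n)
    measurement = A_true @ scene
    recovered = np.linalg.lstsq(A_est, measurement, rcond=None)[0]
    print("relative error:", np.linalg.norm(recovered - scene) / np.linalg.norm(scene))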
Brilliant and amazing. "This is a very powerful direction for imaging, but requires designers with optical and physics expertise as well as computational knowledge." It’s a little crazy to me how much can be accomplished by tackling hard problems in one domain with ideas and expertise from a seemingly unrelated[1], unexpected one. Specialization is at once super important and a potential bottleneck to innovation. This fascinates me.<p>[1]Not that physics expertise in imaging is unrelated, but I feel like it’s being used in very non-traditional ways here.
Here’s the research paper:
<a href="https://pdfs.semanticscholar.org/9cff/c0c80b1ae3c1b773b761f37c66e58890639e.pdf" rel="nofollow">https://pdfs.semanticscholar.org/9cff/c0c80b1ae3c1b773b761f3...</a>
This is really neat. It seems to me like they use a see-through material of some sort that scatters light randomly as a filter in front of the camera. They then move a small light around and use that to figure out how the material scatters the light coming from each point?
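If that's right, then in a linear model (sensor = A · scene) each position of the small light records one column of A - the caustic that point throws onto the sensor. A hypothetical sketch of that calibration loop, where place_point_light and capture_sensor_image stand in for whatever hardware hooks you'd actually have:

    import numpy as np

    def calibrate_scatter_matrix(n_scene_points, n_sensor_pixels,
                                 place_point_light, capture_sensor_image):
        """Record the sensor response to each scene point, one at a time."""
        A = np.zeros((n_sensor_pixels, n_scene_points))
        for j in range(n_scene_points):
            place_point_light(j)              # move the small light to position j
            A[:, j] = capture_sensor_image()  # its caustic becomes column j of A
        return A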
Can someone explain why they can't apply the same reconstruction technique to the data that would be captured without a diffuser; i.e. why the diffuser is required?
This makes the Holographic Imager [1] a reality, hooray!<p>[1] <a href="http://memory-alpha.wikia.com/wiki/Holographic_imager" rel="nofollow">http://memory-alpha.wikia.com/wiki/Holographic_imager</a><p>To me this is the biggest breakthrough I've heard of in recent years, one that can and hopefully will affect everything. Using the extra CMOS chip on your flagship smartphone would allow taking 3D and, soon, holographic pictures!<p>How amazing! I remember there was an AI trained to turn 2D pictures into 3D [2]; combining that with the NPU chip on smartphones could truly make this happen very soon.<p>[2] <a href="http://www.dailymail.co.uk/sciencetech/article-4904298/The-AI-turn-selfie-3D-image.html" rel="nofollow">http://www.dailymail.co.uk/sciencetech/article-4904298/The-A...</a>
The research article (PDF):
<a href="https://www.osapublishing.org/DirectPDFAccess/3ADDF00A-E071-C39B-B3DA82E301EA25C6_380297/optica-5-1-1.pdf?da=1&id=380297&seq=0" rel="nofollow">https://www.osapublishing.org/DirectPDFAccess/3ADDF00A-E071-...</a>
This could also be used to analyze the material used as the diffuser.<p>The 'shape' of the caustics captured by the sensor under a given electromagnetic 'light source' can probably yield some interesting information about the diffuser itself. Kind of like how a spectrograph works.
I wonder if unscrewing the lens on a cheap USB camera could lead to some interesting pictures.<p>Is there any open-source lensless camera project around?<p>EDIT: A library, not the camera itself.
TL;DR: this is an approach that simplifies the production of light-field cameras (cameras that measure both the color and the angle of incoming light): instead of building a grid of microscopic lenses, you use a "random" piece of translucent plastic like Scotch tape and figure out how it modifies incoming light in a calibration phase.
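As a toy of what the calibration-then-reconstruction could look like in the simplest case: if the scatter pattern merely shifts when a scene point moves, the sensor image is the scene convolved with one calibrated caustic, and recovery is a regularized deconvolution. A 2D, single-depth Wiener-style sketch (my simplification, not the paper's full 3D solver):

    import numpy as np

    def wiener_deconvolve(measurement, psf, eps=1e-2):
        """Recover the scene from measurement ~ scene (*) psf, circular model."""
        H = np.fft.fft2(psf, s=measurement.shape)    # calibrated caustic pattern
        Y = np.fft.fft2(measurement)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + eps)  # regularized inverse filter
        return np.real(np.fft.ifft2(X))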
The article, and especially the title, are of pretty low quality.
Assuming the voxels are surface voxels, how is it even theoretically possible to turn 1 million pixels into 100 million voxels? That would mean getting 100x the x-y resolution AND depth information out of this process. I'm sceptical of that claim.<p>As crusso already mentioned, lenses and scanning are essential parts of image capture: a lens is needed to direct the light onto the sensor somehow, and scanning to actually read out the image.
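On the pixels-vs-voxels question: the usual compressed-sensing argument is that an underdetermined system y = Ax can still be solved when x is sparse in some basis, which is presumably how more voxels than pixels gets justified - whether real scenes are sparse enough is the fair thing to be sceptical about. A toy illustration of that general argument (not of the paper's exact numbers), using scikit-learn's Lasso:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    n_meas, n_voxels, n_nonzero = 100, 400, 10     # fewer measurements than unknowns

    A = rng.normal(size=(n_meas, n_voxels)) / np.sqrt(n_meas)
    x_true = np.zeros(n_voxels)
    x_true[rng.choice(n_voxels, n_nonzero, replace=False)] = rng.normal(size=n_nonzero)
    y = A @ x_true                                 # 100 numbers describing 400 unknowns

    x_hat = Lasso(alpha=1e-3, max_iter=100000).fit(A, y).coef_
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))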
"Using diffuse foils to replace microlens arrays" would probably be a more fitting and still teasing headline. Or "Diffuse foils can replace microlens arrays for 3D imaging", perhaps.<p>Article aide, the research seems very sound and very cool. It demonstrates another case of extracting high quality information from low quality sensors - something I think we'll be seeing a lot more of.
Another precisely manufactured piece of hardware is being replaced by a low-quality part plus software, through optimizations that, in spirit, remind me of the Google Pixel's camera and that drone that can fly (steer) with one rotor.