If you're interested in learning more about this, the head of the Pixel camera team (Marc Levoy, professor emeritus at Stanford) has an entire lecture series from a class he ran at Google, along with lecture notes: <a href="https://sites.google.com/site/marclevoylectures/home" rel="nofollow">https://sites.google.com/site/marclevoylectures/home</a><p>What's really cool is you can see him talk about a lot of these ideas well before they made it into the Pixel phone.
Prior work at Google Research before it made it into the product:<p><a href="https://ai.googleblog.com/2017/04/experimental-nighttime-photography-with.html" rel="nofollow">https://ai.googleblog.com/2017/04/experimental-nighttime-pho...</a><p>And by the original researcher in 2016:<p><a href="https://www.youtube.com/watch?v=S7lbnMd56Ys" rel="nofollow">https://www.youtube.com/watch?v=S7lbnMd56Ys</a>
What the Pixel cameras are doing is staggeringly good. My father is the founder of <a href="https://www.imatest.com/" rel="nofollow">https://www.imatest.com/</a>, and has a substantial collection of top-end cameras. He's probably in the top 0.0001% of image quality nerds. But most of the time, he's now entirely happy shooting on his Pixel.
> Google says that its machine learning detects what objects are in the frame, and the camera is smart enough to know what color they are supposed to have.<p>That is absolutely impressive.<p>The color and text on the fire extinguishers along with the texture detail seen in the headphones in the last picture are just stunning. Congratulations to anyone who worked on this project!
I would like super sensitive cameras like this to be used inside fridges to see the very faint glow of food going off.<p>Chemical reactions by bacteria breaking down food produce light, enough for humans to see in only the darkest of places (if you live in a city, you won't ever encounter dark enough situations).<p>A camera simulating a 1 hour exposure time in a closed refrigerator ought to be able to see it pretty easily.
This reminds me of a similar project: "Learning to See in the Dark". They used a fully-convolutional network trained on short-exposure night-time images and corresponding long-exposure reference images. Their results look quite similar to the Pixel photos.<p><a href="http://cchen156.web.engr.illinois.edu/SID.html" rel="nofollow">http://cchen156.web.engr.illinois.edu/SID.html</a>
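For a rough sense of how that training works, here's a toy sketch (not the paper's actual U-Net or raw-sensor pipeline; it assumes PyTorch and made-up layer sizes): the network takes a short-exposure image scaled up by the exposure ratio and is trained with an L1 loss against the long-exposure reference.<p><pre><code>import torch
import torch.nn as nn

# Toy stand-in for the paper's U-Net; the real model operates on packed raw
# sensor data, this only illustrates the supervision scheme.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(short_exposure, long_exposure, amplification):
    """short_exposure, long_exposure: (B, 3, H, W) tensors scaled to [0, 1]."""
    pred = model(short_exposure * amplification)  # brighten input by the exposure ratio
    loss = loss_fn(pred, long_exposure)           # compare to the long-exposure reference
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
</code></pre>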
It's notable that this 'accumulation' method effectively lets you have a near-infinite exposure time, as long as objects in the video frame are trackable (i.e. there is sufficient light in each frame to see at least <i>something</i>).<p>I'd be interested to see how night mode performs when objects in the frame are moving (it should work fine, since it will track the object) or changing (for example, turning pages of a book - I wouldn't expect it to work in that case).
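Here's a minimal sketch of that align-and-accumulate idea (not Google's actual pipeline; it assumes OpenCV and uses ECC registration for the per-frame alignment): estimate a homography from each frame to a reference, warp it into place, and average the burst.<p><pre><code>import cv2
import numpy as np

def stack_burst(frames):
    """Align each BGR frame of a burst to the first one, then average."""
    ref = frames[0]
    ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    h, w = ref_gray.shape
    acc = ref.astype(np.float64)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        warp = np.eye(3, 3, dtype=np.float32)
        # ECC registration needs *some* visible signal in every frame,
        # which is exactly the "trackable" requirement above.
        _, warp = cv2.findTransformECC(ref_gray, gray, warp,
                                       cv2.MOTION_HOMOGRAPHY, criteria)
        aligned = cv2.warpPerspective(frame, warp, (w, h),
                                      flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        acc += aligned
    # Averaging N aligned frames suppresses noise by roughly sqrt(N).
    return (acc / len(frames)).astype(np.uint8)
</code></pre>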
Damn! That's honestly impressive! I started reading thinking it was going to be a simple brightness-up kind of thing, but it's incredible how they are able to recreate the whole photograph from an initially dark raw input.<p>I have to imagine the sensor is doing an extra but imperceptible long exposure that is then used to correct the lighting of the dark version.
This might be a weird criticism but... making photos taken in the dark look like they are not actually dark seems kind of like a weird thing to do? I've struggled with my micro 4/3 camera to capture accurate night photographs, but the last thing I wanted of them was to be brighter than I was perceiving them to be.<p>That said, the effect of some of these photographs is striking, and I'm sure the tech is interesting.
Now if only we could get this on APS-C & 1" compacts like the Sony RX100 or Fujifilm XF10, with first-class smartphone integration and networking.
The Huawei P20 shipped in April with this feature -- I look forward to DxOMark's analysis of the Pixel 3 compared to the P20, which currently remains on top: <a href="https://www.dxomark.com/category/mobile-reviews/" rel="nofollow">https://www.dxomark.com/category/mobile-reviews/</a><p>Upgrading from a 3-year-old Samsung S6, where I could almost watch the battery percentage drop off percent by percent, the P20 Pro's 4000 mAh battery has been great (too bad wireless charging didn't appear until the new Mate 20 Pro).
Kind of a tangent, but it was really cool to see a picture of the author's Schiit Jotunheim headphone amp in the article. One of the founders wrote an <i>amazing</i> book on building a hardware startup: <a href="http://lucasbosch.de/schiit/jason-stoddard-shiit-happened-tablet-lblb.pdf" rel="nofollow">http://lucasbosch.de/schiit/jason-stoddard-shiit-happened-ta...</a>.
The biggest technical challenge here is getting the gyroscope data to work together with the stacking algorithm. It's hard to tune the gyro to work well across different phones, and a pure software solution that estimates the perspective transformation from the images alone would be too slow.
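For illustration, here's a back-of-the-envelope version of the gyro part (assuming a pinhole model with made-up intrinsics; sign and axis conventions depend on how the gyro and camera frames are aligned): for a pure camera rotation R between two frames, pixels map through the homography H = K R K^-1, and R can be had cheaply by integrating the gyro, so no image analysis is needed for the coarse alignment.<p><pre><code>import cv2
import numpy as np

def gyro_homography(angular_velocity, dt, K):
    """Homography relating two frames separated by a pure camera rotation.

    angular_velocity: gyro reading (wx, wy, wz) in rad/s, assumed constant
    over the short frame interval dt.  K: 3x3 camera intrinsic matrix.
    """
    rotvec = np.asarray(angular_velocity, dtype=np.float64) * dt  # small-angle rotation
    R, _ = cv2.Rodrigues(rotvec)             # rotation matrix between the frames
    return K @ R @ np.linalg.inv(K)          # homography in pixel coordinates

# Made-up intrinsics and a small hand-shake pan at 30 fps:
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
H = gyro_homography((0.0, 0.05, 0.0), 1 / 30, K)
# cv2.warpPerspective(frame, H, ...) then gives a coarse pre-alignment that a
# slower, image-based refinement can start from.
</code></pre>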
All those shots look amazing, but they're of stationary objects.<p>I really want to know how that works for people! 99% of photos I take are of people, and the lighting is always bad.<p>Are there any photos of people?
Wouldn't video still be extremely blurry? This seems to be mostly for still photos of things that aren't moving.<p>I wonder if this technology will eventually supersede military night vision goggles. Being able to add color perception at long distances could be useful for identifying things at night.
How are you going to do a review of Night Sight and not even go outside? Every photo just taken in a room with the lights turned off. Come on, man. Tell your editor he needs to wait until nightfall.
Interesting, but a tad rich with puffery.<p>Pre-OIS, Google did this with image stacking, which was a ghetto version of a long exposure (stacking many short-exposure photos and correcting the offsets via the gyro was necessary to compensate for inevitable camera shake). There is nothing new or novel about image stacking or long exposures.<p>What are they doing here? Most likely they're simply enabling OIS, allowing longer exposures than normal (note the smooth motion blur of moving objects, which is nothing more than a long exposure), and then doing noise removal. There are zero camera makers flipping their desks over this. It is usually a hidden "pro" feature because in the real world subjects move during long exposures and shooters are just unhappy with the result.<p>The contrived hype around the Pixel's "computational photography" (which seems more impressive in theory than in practice) has reached an absurd level, and the astroturfing is out of control.
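The arithmetic behind stacking is certainly well understood: averaging N short exposures with independent noise improves SNR by roughly sqrt(N). A quick synthetic check (illustrative numbers only, assuming Gaussian noise):<p><pre><code>import numpy as np

rng = np.random.default_rng(0)
signal = 10.0          # dim "true" pixel value
noise_sigma = 20.0     # per-frame read/shot noise of a single short exposure
N = 15                 # frames in the burst

frames = signal + rng.normal(0.0, noise_sigma, size=(N, 1_000_000))
snr_single = signal / frames[0].std()
snr_stacked = signal / frames.mean(axis=0).std()
print(snr_single, snr_stacked, snr_stacked / snr_single)  # ratio is ~sqrt(15) ~= 3.9
</code></pre>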