This argument only considers photography as a tool that produces images, while ignoring that it is also a tool that records reality.

As an image production apparatus, you can replace photography with machine learning (or really any other image creation process: drawing, painting, raytracing, whatever), and in those cases, yes, photography becomes a little less important.

However, a photograph is not only an image; it is a trace of actual photons that existed and imprinted a photographic material. When you look at a photograph, you're not only looking at a picture, you're also looking at an imprint of reality. It's the same emotional effect as looking at your kid's handprint on paper, or wearing your grandmother's ring. There is affect involved that comes from the reality of the experience.

A souvenir picture is not only an image that helps you remember the time when you and your friends were doing something; it's an actual fossil of that moment. That's what matters in photography, and that's why it's not going away any time soon.
This article is lame. The proposed approach would be unable to generate an accurate picture of the actual scene: there would be loss of detail, missing objects, missing people, etc. It would feel immensely artificial, too.

I was also hoping for something more ambitious, such as "once you can detect the full electromagnetic spectrum hitting your phone from any angle, you won't need a lens to reconstruct a focused picture," as mentioned by blixt.

At a minimum, I was hoping for conceptualization and theorized advancement of the current display-pixel-as-camera-pixel train of thought.
I was hoping for something more ambitious, like "once you can detect the full electromagnetic spectrum hitting your phone from any angle, you won't need a lens to reconstruct a focused picture." The proposed idea is very tedious and requires a tremendous amount of up-to-date data (even whether a door is open or closed can seriously affect the lighting conditions in a photo).
The article is a great imagining of our grim technological future. A family on vacation crowds in front of the Parthenon among thousands of other eager tourists posing for pictures in the rain. Sweaty, ruffled hair, exhausted smiles (except for Suzie's), the photo is sent to Google for processing. The result is spectacular: crowds gone, scaffolding removed, sun high in the sky, clothes unwrinkled, smiles whitened, acne softened, and everyone's face captured at just the right moment. The $2.99 GPets add-on lets you insert the family parrot into the picture too. Years later, over dinner, the family reminisces about how fun that sunny day at the Parthenon was.

Our memory is fragile and our eyes easily deceived. I'm hopeful this technology is introduced to the public first, so other institutions can't abuse it before most people understand how untrustworthy pictures are.
It strikes me that, in a world where "photos" are generated from non-optical data, there would likely be a counter-movement of artists who deliberately seek out or build scenes that could not yet be generated. (In some ways this is just a continuation of our current novelty-seeking; "interesting" images are those you couldn't simply find on the internet.)

I don't doubt for a second that automatically composed images will have a huge explosion in expressiveness and realism and become part of popular media. But I think there will be a very long period during which some datasets are much more salient than others. Think of how many orders of magnitude more photos we have of dogs and cats than of frogs and yaks... can they really take the camera off a product that can't image my friends petting a yak? I'm very curious when the data would ever be enough, because the human ability to contrive absurd scenarios is frustratingly extensive, and unique "outlier" moments are exactly the ones many people want to capture.

Also, I think I _have_ seen an art project like this, where the artist took a photo album of Paris(?) and gave people "cameras" that just identified the closest image location by GPS and would keep regurgitating that image no matter how many shots you took. I can't seem to find it now...
Yeah, like when I take pictures to remember what my kids were doing that day: really, all that needs to be done is to extrapolate their stage of growth from their birthdays (derived from publicly available databases), infer their likely location and pose from the sounds they're generating, etc. So much simpler than a CCD!
This doesn't make sense to me... is the article suggesting a picture is created from all previous pictures taken? That would mean 1. no more new pictures over time of how things have changed (a new building stands where the car park was), and 2. my picture of that bird that just landed on that branch is not quite going to turn out... hell, the tree might not even be in the picture.
If such a database existed, I think there'd be vastly more interesting use cases and applications implied than 'replacing an extremely specific, minimal subset of photography': seamless visual world-travelling in VR, for example?
It's a fascinating idea. But at the same time, none of the interesting photos could be captured this way.

Looking at my favorite photos I've taken, there are:

- Some photos of Iran (Google, being a US company, wouldn't operate there)

- The inside of a crashed plane

- A closeup of a dying hummingbird

- A laser that I use personally

- A boat adrift in the middle of the sea

None of these moments could have been captured with this method, even with a boatload of content-aware scaling and other people's photos stitched together.
This doesn't seem like something people would want.

For example: Banksy just tagged a building downtown. You rush there to get your picture in front of it. You can't take a photo of yourself with it, because it's not yet in any of the databases your quasi-camera uses to stitch together photos.

It seems like the world is too variable for something like this to function practically for any but the most boring photos.
What's described in this article is a kind of automated postcard production: when you're at a tourist attraction, just press a button on your phone and it produces an image of the location (with or without an insert of your face) based on a library of images.

But that's not really "photography" (it's to photography what miniature plastic Eiffel Towers are to architecture).

A much more interesting and much more futuristic approach would be a device capable of recording a whole scene without a conventional sensor and lens; something that would record photons not because it's hit by them, but because it knows where they are.

This device would record the position of every light ray in a scene at a given instant (the instant the image is taken), and then let you later reconstruct any image from any position in the scene, with any kind of focus or bokeh, or whatever.

It would also let you walk into the scene like in a real "mannequin challenge", etc.
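You can already get a taste of this with light field (plenoptic) capture. A minimal sketch of after-the-fact refocusing via the shift-and-add method, assuming you somehow have a 4D light field as a NumPy array of grayscale sub-aperture views (refocus and alpha are illustrative names, not anything from the article):

    import numpy as np
    from scipy.ndimage import shift

    def refocus(lightfield, alpha):
        """Synthesize a photo focused on a chosen depth plane by shifting
        each sub-aperture view toward the array centre and averaging.
        lightfield has shape (n_u, n_v, height, width)."""
        n_u, n_v, h, w = lightfield.shape
        cu, cv = (n_u - 1) / 2, (n_v - 1) / 2
        out = np.zeros((h, w))
        for u in range(n_u):
            for v in range(n_v):
                # Views further from the centre get proportionally larger
                # shifts; alpha selects which scene depth lands in focus.
                dy, dx = alpha * (u - cu), alpha * (v - cv)
                out += shift(lightfield[u, v], (dy, dx), order=1)
        return out / (n_u * n_v)

Sweep alpha and you get a whole focus stack from a single capture: no refocusing, no second shot.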
This is sort of like how the Star Trek Holodeck works as a content creation tool.

By modern definitions, the Holodeck clearly has the functionality of a 3D modeling tool and game engine: the crew frequently uses it to build VR prototypes of objects, situations and entire entertainment experiences.

It doesn't offer anything that we would recognize as modeling or animation tools, though. Instead the user describes objects and settings using any degree of precision -- "a steel table 3 meters long", "19th century London" -- to get a starting point, then iteratively adjusts the result by instructing the system.

(Nobody takes selfies on the Enterprise either. I guess they've lost their appeal when you can always just ask the computer: "Show me myself and Geordi smiling against a wall when we visited the Klingon High Council last year.")
> If you insist on inserting yourself or family members into the happy tourist snap, that shouldn't be too much of an imposition, either: just take a bunch of selfies in advance and the software will stitch the two images together for you.

So what's the endgame for selfie-takers?
There was a Raspberry Pi project with a screen, GPS, and 3G that was a camera without a lens. It would look up the Flickr picture closest to your current location (and possibly orientation, from metadata) and add it to your collection. Can't seem to find it though.
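The core of such a device is just a nearest-neighbour lookup over geotagged photos. A toy sketch, assuming a local index of (lat, lon, filename) tuples; PHOTOS and take_picture are made-up names, not from the actual project:

    import math

    # Illustrative stand-in for an index built from e.g. Flickr geotag metadata.
    PHOTOS = [
        (48.8584, 2.2945, "eiffel_tower.jpg"),
        (48.8606, 2.3376, "louvre.jpg"),
    ]

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two GPS fixes, in kilometres."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 6371 * 2 * math.asin(math.sqrt(a))

    def take_picture(lat, lon):
        """'Snap' by returning the nearest geotagged photo to the GPS fix."""
        return min(PHOTOS, key=lambda p: haversine_km(lat, lon, p[0], p[1]))[2]

    print(take_picture(48.858, 2.295))  # -> "eiffel_tower.jpg"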
So instead of snapping a picture, you just select one from a set of pre-made professional ones. Wait, we've been able to do this for decades, and yet people still take their own pictures.

This is also a bastardization of what Google's gcam technology is doing. It stitches together images at various exposures to get what would otherwise require a massive sensor or a super steady hand; it's not creating some deep-learning-based frankenimage, as this article suggests.
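The intuition behind that kind of burst merging, as a toy sketch (the real pipeline is far more sophisticated, with frame alignment and robust merging; this only shows why stacking N short exposures beats one noisy frame):

    import numpy as np

    def merge_burst(frames):
        """Average a burst of already-aligned frames; per-pixel noise
        falls roughly as 1/sqrt(N)."""
        stack = np.stack([f.astype(np.float64) for f in frames])
        return stack.mean(axis=0)

    # Simulate 8 noisy captures of the same flat grey scene.
    rng = np.random.default_rng(0)
    burst = [100 + rng.normal(0, 10, (4, 4)) for _ in range(8)]
    merged = merge_burst(burst)  # noise drops from ~10 to ~3.5 per pixel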
If you can synthesize an experience, why bother having any real experience?

For that matter, if we can simulate the lighting, etc. of an object to the finest detail, why do we need the object itself?