See also previous discussion at https://news.ycombinator.com/item?id=36562757

Unbelievably creative and impressive project, it's so much fun to see :) It's just cool that raytracing is accurate enough to make this possible.
Phenomenal work.

It makes me wonder, though: what's missing here that still puts it in the uncanny valley?

Do you need to deliberately deform these stock 3D models with small defects/"entropy"? Or is it simply that the base models don't have enough detail?
There was a similar effort in 2018 using Indigo Renderer (greetz to all who know it!), which can render these kinds of scenes more efficiently using bidirectional path tracing and Metropolis-Hastings sampling: https://youtu.be/y8mKtNCq5CI?t=1712
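For anyone who hasn't run into it, here's a minimal sketch of the Metropolis-Hastings idea in plain Python. It samples a toy 1D target rather than mutating light paths the way Indigo does, but the acceptance rule is the same:

    import random

    def metropolis_hastings(f, propose, x0, n_samples):
        """Draw samples with density proportional to f using a symmetric proposal.
        MLT-style renderers apply the same acceptance rule to whole light paths,
        which concentrates samples on the paths that contribute most to the image."""
        x, samples = x0, []
        for _ in range(n_samples):
            y = propose(x)                              # propose a mutated state
            accept = min(1.0, f(y) / max(f(x), 1e-12))  # acceptance probability
            if random.random() < accept:
                x = y                                   # keep the mutation
            samples.append(x)
        return samples

    # Toy usage: a 1D "brightness" function stands in for a path's image contribution.
    target  = lambda x: max(0.0, 1.0 - abs(x))          # triangle bump on [-1, 1]
    propose = lambda x: x + random.uniform(-0.2, 0.2)   # small symmetric perturbation
    samples = metropolis_hastings(target, propose, 0.0, 10_000)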
I thought this was going to be some sort of custom renderer doing its own physical simulation. But it's literally modeling a pinhole camera in Blender, then modeling the lens shapes... amazing!
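If anyone wants to poke at the lens half of that idea, here's a rough Blender Python sketch that drops a refractive glass element (a squashed UV sphere) into a Cycles scene. To be clear, this is only a toy stand-in, not the author's actual setup (their lens profiles, aperture, and camera body are modeled far more carefully), and the node/property names assume a reasonably recent Blender with Cycles enabled:

    import bpy

    # Use the path tracer and give transmission enough bounces to get through glass.
    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'
    scene.cycles.transmission_bounces = 16

    # A crude biconvex "lens": a dense UV sphere squashed along one axis.
    bpy.ops.mesh.primitive_uv_sphere_add(radius=0.05, segments=64, ring_count=32,
                                         location=(0.0, 0.0, 0.0))
    lens = bpy.context.active_object
    lens.scale = (1.0, 1.0, 0.3)   # flatten the sphere into a lens-like profile
    bpy.ops.object.shade_smooth()

    # A plain glass material so Cycles actually refracts rays through the element.
    mat = bpy.data.materials.new(name="LensGlass")
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    glass = nodes.new("ShaderNodeBsdfGlass")
    glass.inputs["IOR"].default_value = 1.52   # roughly crown glass
    links.new(glass.outputs["BSDF"], nodes["Material Output"].inputs["Surface"])
    lens.data.materials.append(mat)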
I'm not a 3D modeler (aside from CAD), photographer, or artist, but that must be one of the most impressive things I've seen lately. Can anyone comment on how CPU- and/or GPU-intensive this is?
Since modern 3D renderers are already designed to render photorealistically from fundamental principles of light transport, what does this technique actually offer beyond what the renderer itself provides? Path tracing follows the basic laws of global illumination, after all, and path-tracing renderers already offer simulation of all kinds of lens types, etc.
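For reference, the "basic laws of global illumination" a path tracer estimates boil down to the rendering equation, written here in its usual surface form (my summary, nothing specific to this project):

    % Rendering equation: outgoing radiance at point x in direction omega_o
    L_o(x,\omega_o) = L_e(x,\omega_o)
        + \int_{\Omega} f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i

The camera model sits outside that integral, though: stock renderer cameras are typically idealized pinhole or thin-lens models, which is presumably where tracing rays through actual modeled glass differs from what the renderer offers out of the box.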