I always wondered why Zorin's original work [1] never made it into photography or 3D rendering/gaming; we've basically had two decades of ugly perspective distortion that everybody got used to.<p>[1] <a href="http://graphics.stanford.edu/~dzorin/perception/sig95/index.html" rel="nofollow">http://graphics.stanford.edu/~dzorin/perception/sig95/index....</a>
I would have liked to see the results compared to ground truth, i.e. a picture of the person taken from the center of the lens.<p>And I would have liked to see more false-positive rejection, e.g. things at the edges of photos that get detected as a face but aren't. Really, though, that depends on the robustness of the face-detection heuristic; as it stands, it's a short and sweet heuristic that will make people look more normal at the edges of photos.
I think some of their examples are either fake or not shown in full.<p><a href="https://i.imgur.com/eRP0fZc.jpg" rel="nofollow">https://i.imgur.com/eRP0fZc.jpg</a><p>Look at the top-right corner.
Going into this paper, I expected they would be correcting for barrel distortion. I was disappointed to read that they instead:<p>>we formulate an optimization problem to create a content-aware warping mesh which locally adapts to the stereographic projection on facial regions, and seamlessly evolves to the perspective projection over the background.<p>Reminds me of the person who made a YouTube tutorial with the factually incorrect title "Manually correct perspective in Photoshop". He did not correct the (already correct) perspective; rather, he selectively distorted parts of scenery photographs so that bridges looked "vertical". In fact, he made those parts of the image no longer conform to the mathematical photo projection. Instead of picking a better-suited image projection, both that video and this paper selectively fudge parts of the image to use a different projection.<p><a href="https://youtu.be/BocAGkS8yRQ" rel="nofollow">https://youtu.be/BocAGkS8yRQ</a>
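For anyone wondering why the two projections the paper blends between disagree at all: here's a minimal sketch (plain Python, not from the paper; the focal length f is an arbitrary assumption) comparing the image-plane radius each projection assigns to a ray at angle theta from the optical axis. Perspective grows like tan(theta), so off-axis faces get stretched; stereographic grows like 2*tan(theta/2), which stays closer to linear and keeps faces looking natural at the cost of bending straight background lines.

```python
import math

def perspective_radius(theta, f=1.0):
    # Rectilinear (perspective) projection: r = f * tan(theta).
    # Keeps straight lines straight, but stretches objects near the edges.
    return f * math.tan(theta)

def stereographic_radius(theta, f=1.0):
    # Stereographic projection: r = 2f * tan(theta / 2).
    # Locally shape-preserving (conformal), so faces aren't stretched,
    # but straight lines in the scene become curved.
    return 2 * f * math.tan(theta / 2)

if __name__ == "__main__":
    for deg in (5, 20, 45):
        t = math.radians(deg)
        print(f"{deg:>2} deg: perspective={perspective_radius(t):.4f}  "
              f"stereographic={stereographic_radius(t):.4f}")
```

Near the optical axis the two are nearly identical, which is why the blend is only visible toward the frame edges; at 45 degrees off-axis they differ by roughly 17%, which is the stretching the paper's warping mesh is hiding on faces.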