Denoising is one of those machine learning applications where it just seems to work too well. I was thinking about using machine learning for denoising a few weeks ago, then stumbled on Nvidia's work in the area and was totally blown away. The strategy Nvidia are adopting for their GPUs, utilising both ray tracing and deep learning cores for real-time applications, is excellent.
> Surprisingly [sincosf fusion] only happened with -Ofast and not with -O3.

As noted, -Ofast turns on -ffast-math, which turns on -funsafe-math-optimizations, which "enables optimizations that allow arbitrary reassociations and transformations with no accuracy guarantees." [0] In this case, sincosf by itself is probably *more* accurate.

[0] https://gcc.gnu.org/wiki/FloatingPointMath
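If you want to reproduce the fusion, here is a minimal sketch in C; the rotate function, file name, and test values are just placeholders for illustration, and whether the transformation actually fires depends on your GCC version and target libm.

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* sinf and cosf share the same argument, which is the pattern the
       compiler can fuse into a single sincosf call. */
    void rotate(float angle, float *x, float *y)
    {
        float s = sinf(angle);
        float c = cosf(angle);
        float nx = c * *x - s * *y;
        float ny = s * *x + c * *y;
        *x = nx;
        *y = ny;
    }

    int main(int argc, char **argv)
    {
        /* Take the angle from the command line so the trig calls cannot
           be constant-folded away. */
        float angle = argc > 1 ? (float)atof(argv[1]) : 0.5f;
        float x = 1.0f, y = 0.0f;
        rotate(angle, &x, &y);
        printf("%f %f\n", x, y);
        return 0;
    }

Compiling with gcc -Ofast -S rotate.c and searching the generated assembly for sincosf should show the fused call on glibc targets, while plain -O3 keeps the separate sinf and cosf calls, which matches the behavior described above.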
Yep, Optix is so good that OctaneRender has dropped its Vulkan backend and is now using Optix 7 instead.

There is a talk from GTC 2020 about how they went about it.
Is Optix significantly better than the traditional (presumably faster) approaches used by photo editing software like Lightroom and RawTherapee? Denoising is, of course, an extremely well-studied image processing problem with a highly developed state of the art. I haven't looked at comparisons recently, but my recollection is that the answer was "no" as of about a year ago.