<i>> The concept was simple to understand; the GPU would examine several real-time frames of a game and analyze the construction of the images using the NPU. Next, the image’s resolution would be reduced, and a ray-traced version of the analyzed image would be generated. Then, the new AI-created frames ray-traced image would be scaled back up and presented to the monitor.</i><p>This is not simple to understand, at least not as described here, because the description has the pipeline backwards: DLSS does not generate a ray-traced version of anything. The game renders the frame at a reduced resolution, and a trained neural network upscales it to the output resolution using motion vectors and previous frames; ray tracing is a separate feature. See <a href="https://en.wikipedia.org/wiki/Deep_learning_super_sampling" rel="nofollow">https://en.wikipedia.org/wiki/Deep_learning_super_sampling</a> for a better description.
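<p>For what it's worth, the actual flow is roughly: render low, then let a trained network upscale using the previous output as history. A toy sketch in plain NumPy (the blend weights and nearest-neighbor upsample stand in for the trained network; nothing here is the real DLSS API):

```python
import numpy as np

def upscale_sketch(low_res, prev_high_res, scale=2):
    """Toy stand-in for a DLSS-style temporal upscaler.

    low_res:       (h, w) frame rendered at reduced resolution
    prev_high_res: (h*scale, w*scale) previous upscaled output (history)
    Returns an (h*scale, w*scale) frame. A real implementation would use
    a trained network plus motion vectors, not a fixed blend.
    """
    # Nearest-neighbor upsample of the current low-res frame
    up = np.kron(low_res, np.ones((scale, scale)))
    # Blend with reprojected history to accumulate detail over time
    return 0.7 * up + 0.3 * prev_high_res

h, w, scale = 540, 960, 2
low = np.random.rand(h, w)
history = np.zeros((h * scale, w * scale))
out = upscale_sketch(low, history, scale)
```

Note that the low-res frame is the input and the upscale is the last step, not a downscale of an already-analyzed image as the quoted text suggests.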