<p><pre><code> ~2   ms  (mouse)
  8   ms  (average time we wait for the input to be processed by the game)
 16.6 ms  (game simulation)
 16.6 ms  (rendering code)
 16.6 ms  (GPU is rendering the previous frame, current frame is cached)
 16.6 ms  (GPU rendering)
  8   ms  (average for missing the vsync)
 16.6 ms  (frame caching inside of the display)
 16.6 ms  (redrawing the frame)
  5   ms  (pixel switching)
</code></pre>
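<p>As a quick sanity check, here's that budget totaled up (a back-of-the-envelope tally; the values are just the numbers quoted above, nothing measured):<p><pre><code>#include <cstdio>

int main() {
    // The quoted stages, in milliseconds, in pipeline order.
    const double stages[] = {2, 8, 16.6, 16.6, 16.6, 16.6, 8, 16.6, 16.6, 5};
    double total = 0;
    for (double s : stages) total += s;
    std::printf("quoted total: %.1f ms\n", total);  // 122.6 ms
    // The three 16.6 ms stages I question below come to ~49.8 ms:
    std::printf("disputed:     %.1f ms\n", 3 * 16.6);
}
</code></pre>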
I'm not very familiar with graphics pipelines, but some of these numbers seem wrong. If a game is rendering at 60 fps, the <i>combined</i> compute time for simulation + rendering should be 16.6 ms. You can't start simulating the next tick while rendering the previous tick unless you do some kind of copy-on-write memory management for the entire game state. And with double buffering, the GPU should be writing frame <i>n</i> to the display cable at the same time as it's computing frame <i>n+1</i>, and the display should be writing the frame to its cache buffer at the same time as the GPU writes it to the cable.<p>By my count that's three 16.6 ms stages, a whole 50 ms, that shouldn't be there.<p>From the linked article:<p><i>One thread is calculating the physics and logic for frame N while another thread is generating rendering commands based on the simulation results of frame N-1.</i><p>Maybe modern games <i>do</i> use CoW memory?<p><i>[The GPU] might collect all drawing commands for the whole frame and not start to render anything until all commands are present.</i><p>It <i>might</i>, but is this typical behavior? It implies the GPU would just sit idle if it finished rendering a frame before the CPU finished sending the commands for the next one; why would it do that?<p><i>Most monitors wait until a new frame was completely transferred before they start to display it adding another frame of latency.</i><p>Maybe this is what the "16.6 ms (frame caching inside of the display)" item refers to? If so, that one might be real.
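<p>On the CoW question: as far as I know, full copy-on-write isn't actually required for the overlap the article describes; double-buffering just the state the renderer reads is enough. A toy sketch (GameState, simulate, and render are all made up for illustration):<p><pre><code>#include <cstdio>
#include <thread>

// Hypothetical state; a real engine would mirror only the
// renderer-visible subset (transforms, animation poses, etc.).
struct GameState { double player_x = 0; };

// Simulation reads the previous frame's state, writes the next one's.
void simulate(const GameState& prev, GameState& next) {
    next.player_x = prev.player_x + 0.1;
}

void render(const GameState& s, int frame) {
    std::printf("render frame %d: player_x = %.1f\n", frame, s.player_x);
}

int main() {
    GameState buf[2];  // ping-pong buffers; buf[0] starts as frame 0

    for (int frame = 1; frame < 5; ++frame) {
        GameState& prev = buf[(frame - 1) % 2];  // renderer reads this
        GameState& next = buf[frame % 2];        // simulation writes this

        // Frame N's simulation overlaps frame N-1's rendering.
        // The renderer only reads prev and the simulation only
        // writes next, so neither locking nor CoW is needed.
        std::thread sim([&] { simulate(prev, next); });
        std::thread draw([&] { render(prev, frame - 1); });
        sim.join();
        draw.join();
    }
    render(buf[0], 4);  // drain: the last simulated frame lives in buf[0]
}
</code></pre>Engines that do this typically mirror only the data the renderer actually consumes (transforms, command lists) rather than the whole world, which keeps the memory cost small.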