I wonder if a compositor, and possibly an entire compositing system, designed around adaptive sync could perform substantially better than current compositors.

Currently, updating a UI involves a whole pile of steps. The input system processes an event, some decision is made about when to rerender the application, then another decision is made about when to composite the screen, and hopefully all of this finishes before a frame is scanned out, but not too far before, because that would add latency. It's heuristics all the way down.

With adaptive sync, there is still a heuristic decision about whether to process an input event immediately or to wait and aggregate more events into the same frame. But once that is done, an application can update its state, redraw itself, and trigger an *immediate* compositor update. The compositor will render as quickly as possible, and it doesn't need to worry about missing scanout: scanout can begin as soon as the compositor finishes.

(There are surely some constraints on the intervals between frames sent to the display, but this seems quite manageable while still scanning out a frame immediately after compositing it nearly 100% of the time.)
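
To make the control flow concrete, here is a rough sketch (in Rust, standard library only) of what the main loop of such a compositor might look like. Everything here is made up for illustration: the names (InputQueue, app_update_and_redraw, composite, scan_out) stand in for real input, client, and display interfaces, and the VRR range is just an example. The point is the ordering: aggregate input briefly, redraw, composite, and present immediately, constrained only by the display's minimum and maximum frame intervals rather than by a fixed vsync deadline.

    use std::collections::VecDeque;
    use std::thread::sleep;
    use std::time::{Duration, Instant};

    // Hypothetical stand-ins for the real input, application, and display paths.
    struct InputQueue { events: VecDeque<&'static str> }
    impl InputQueue {
        fn poll(&mut self) -> Option<&'static str> { self.events.pop_front() }
    }
    fn app_update_and_redraw(_event: &str) { /* app applies the event and repaints */ }
    fn composite() { /* compositor assembles the final frame */ }
    fn scan_out() { /* hand the finished frame straight to the display */ }

    fn main() {
        // Example VRR range (~48-144 Hz): frames may not arrive closer together
        // than the minimum interval, nor farther apart than the maximum.
        let min_frame_interval = Duration::from_micros(6_944);
        let max_frame_interval = Duration::from_micros(20_833);
        // The one remaining heuristic: a short window to aggregate input bursts.
        let aggregation_window = Duration::from_millis(1);

        let mut input = InputQueue { events: VecDeque::from(["key", "move", "move"]) };
        let mut last_scanout = Instant::now();

        loop {
            // Block until input arrives or the VRR upper bound forces a refresh.
            let event = match input.poll() {
                Some(e) => Some(e),
                None if last_scanout.elapsed() >= max_frame_interval => None,
                None => { sleep(Duration::from_micros(100)); continue; }
            };

            // Wait briefly so a burst of events lands in one frame, not several.
            sleep(aggregation_window);
            if let Some(e) = event {
                app_update_and_redraw(e);
                while let Some(e) = input.poll() { app_update_and_redraw(e); }
            }

            composite();

            // Respect the display's minimum frame interval, then scan out
            // immediately; there is no fixed vsync deadline to chase.
            let since_last = last_scanout.elapsed();
            if since_last < min_frame_interval {
                sleep(min_frame_interval - since_last);
            }
            scan_out();
            last_scanout = Instant::now();

            if input.events.is_empty() { break; } // demo only: stop when the mock queue drains
        }
    }

In this sketch the only timing decisions are the aggregation window and the clamp to the display's minimum interval; everything downstream of the input decision runs as fast as it can and presents the moment it is done.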