<rant>
Our models of computer timers are woefully inadequate. These machines execute billions of instructions per second. Why shouldn't we be able to schedule a timer at sub-millisecond resolution?

Answer: we can. But the APIs are very old and assume conditions that no longer hold. Or something like that. Anyway, they don't get the job done.

Everybody seems to start a hardware timer at some fixed period, then simulate "timer interrupts" for applications off that periodic tick. If you want 12.5 ms but the ol' ticker is ticking at 1 ms intervals, you get 13 ms or so; depending on where in a tick interval you asked, it could be 12 (first sketch below).

Even if nobody is using the timer, it's ticking away wasting CPU time. So the tendency is to make the period as long as possible without pissing everybody off.

Even back in the 1980s, I worked on an OS running on the 8086 with a service called PIT (Programmable Interval Timer). You said what interval you wanted; it programmed the hardware timer for that. If the timer was already running and your interval was shorter than what remained, it would reprogram it for your shorter time, then when it went off it would reprogram it for the remainder.

It kept a whole chain of scheduled expirations sorted by time. When the interrupt occurred, it called the callback of the first entry and discarded it, then reprogrammed the timer for the remaining time on the next (second sketch below).

It took into account the time spent in the callback, and the time to take the interrupt and reprogram the timer. And it achieved sub-millisecond scheduling even on that old, sad hardware.

And when nobody was using the timer, it didn't run at all! Zero wasted CPU.

Imagine how precise timers could be today, on our super duper gigahertz hardware.

But what do we get? We get broken, laggy, high-latency, late timer callbacks at some abominable minimum period. Sigh.
</rant>
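
For concreteness, here is a toy model of the rounding a periodic tick forces on that 12.5 ms request. This is not any particular kernel's code; `actual_delay_ms`, `tick_ms`, and `phase_ms` are made-up names, and real kernels differ in exactly how they round, but the quantization works the same way.

```c
/* How a periodic tick quantizes a request: the kernel rounds 12.5 ms up to
 * 13 ticks of 1 ms, but the first tick arrives partway through the current
 * interval, so the delivered delay lands anywhere from just over 12 ms up
 * to 13 ms. tick_ms and phase_ms are illustrative, not any real API. */
#include <math.h>
#include <stdio.h>

static double actual_delay_ms(double request_ms, double phase_ms, double tick_ms)
{
    double ticks = ceil(request_ms / tick_ms); /* 12.5 ms -> 13 ticks of 1 ms */
    return ticks * tick_ms - phase_ms;         /* the first tick comes early  */
}

int main(void)
{
    printf("asked right on a tick:    %.1f ms\n", actual_delay_ms(12.5, 0.0, 1.0)); /* 13.0 */
    printf("asked 0.9 ms into a tick: %.1f ms\n", actual_delay_ms(12.5, 0.9, 1.0)); /* 12.1 */
    return 0;
}
```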
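And here is a minimal sketch of the one-shot ("tickless") approach that old PIT service took: a chain of pending expirations sorted by deadline, with the hardware armed only for the earliest one and stopped outright when the chain is empty. The `hw_timer_arm()`/`hw_timer_stop()` hooks and the simulated clock are stand-ins I invented for illustration; a real implementation would program the actual countdown hardware and also subtract the interrupt/reprogram overhead, as the original service did.

```c
/* Sketch of a one-shot timer service: sorted chain of deadlines,
 * hardware armed only for the earliest, idle when nothing is pending. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef void (*timer_cb)(void *arg);

struct pending {
    uint64_t        deadline_us;   /* absolute expiration time */
    timer_cb        cb;
    void           *arg;
    struct pending *next;
};

static struct pending *queue;       /* sorted by deadline, earliest first */
static uint64_t        fake_now_us; /* stand-in for a real hardware clock */

/* Hypothetical hardware hooks: arm a one-shot countdown, or stop it. */
static void hw_timer_arm(uint64_t delta_us)
{
    printf("  [hw] one-shot armed for %llu us\n", (unsigned long long)delta_us);
}
static void hw_timer_stop(void)
{
    printf("  [hw] timer stopped (nothing pending, zero tick overhead)\n");
}

/* Re-arm the hardware for whatever is due next, or stop it entirely. */
static void reprogram(void)
{
    if (!queue) {
        hw_timer_stop();
        return;
    }
    uint64_t delta = queue->deadline_us > fake_now_us
                   ? queue->deadline_us - fake_now_us : 0;
    hw_timer_arm(delta);
}

/* Schedule a callback delay_us from now; insert sorted by deadline. */
void timer_schedule(uint64_t delay_us, timer_cb cb, void *arg)
{
    struct pending *p = malloc(sizeof *p);   /* error handling omitted */
    p->deadline_us = fake_now_us + delay_us;
    p->cb  = cb;
    p->arg = arg;

    struct pending **pp = &queue;
    while (*pp && (*pp)->deadline_us <= p->deadline_us)
        pp = &(*pp)->next;
    p->next = *pp;
    *pp = p;

    /* If the new entry is now the earliest, shorten the running countdown. */
    if (queue == p)
        reprogram();
}

/* Interrupt handler: run every expired entry, then re-arm for the next. */
void timer_interrupt(void)
{
    while (queue && queue->deadline_us <= fake_now_us) {
        struct pending *p = queue;
        queue = p->next;
        p->cb(p->arg);
        free(p);
    }
    reprogram();
}

/* Tiny demo: pretend the hardware fires exactly at each programmed deadline. */
static void say(void *arg)
{
    printf("fired: %s at t=%llu us\n",
           (const char *)arg, (unsigned long long)fake_now_us);
}

int main(void)
{
    timer_schedule(12500, say, "12.5 ms timer");
    timer_schedule(3000,  say, "3 ms timer");   /* earlier: shortens countdown */

    fake_now_us = 3000;  timer_interrupt();
    fake_now_us = 12500; timer_interrupt();
    return 0;
}
```

Note how a new, earlier request simply shortens the running countdown, and how the remaining delta for the next entry is recomputed after each interrupt: that is what gives arbitrary, sub-tick resolution without any periodic tick at all.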