I tested my own system using Bruce's "measure_interval.cpp" program (on Windows 10, version 1909):<p>- Slack (sometimes) sets the global timer to 1ms when it is in the foreground, but restores it in the background<p>- Spotify sets the global timer to 1ms no matter what, even if it isn't playing.<p>- Skype sets 1ms if started at startup (which it defaults to), even though I am logged out and it just has a tray icon. But when I manually start it, it doesn't (always) set it to 1ms.<p>- VSCode will set it to 1ms when you are interacting with it, but will eventually revert to 15.6ms if left alone (even if it is still in the foreground).<p>- Firefox doesn't appear to set it (on its own; I presume that if I opened a tab that was using a low setTimeout or requestAnimationFrame it might).<p>Spotify is interesting. A lot of people probably have that app, and since it sets 1ms unconditionally, it would have been enabling fast-timer mode prior to the 2004 update, which could inadvertently "speed up" whatever games people were running.<p>That includes my own game, which uses a foreground sleep of as low as 1ms to try to hit its time target, and I don't call timeBeginPeriod. I guess I'll find out when I get the 2004 update.
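For illustration, here is a minimal sketch (hypothetical names, not the commenter's actual game code) of the kind of foreground sleep loop described above, and why the effective length of Sleep(1) matters to it:

```cpp
// Hypothetical frame limiter similar to what the comment describes:
// sleep in 1 ms slices until the next frame deadline. If Sleep(1)
// actually lasts ~15.6 ms because nothing raised the timer resolution,
// the loop overshoots and the frame rate drops.
#include <windows.h>
#include <chrono>

void WaitForNextFrame(std::chrono::steady_clock::time_point deadline)
{
    using namespace std::chrono;
    for (;;)
    {
        auto remaining = deadline - steady_clock::now();
        if (remaining <= milliseconds(0))
            break;                 // deadline reached (or already missed)
        if (remaining > milliseconds(2))
            Sleep(1);              // nominally 1 ms, really "until the next timer tick"
        else
            Sleep(0);              // close to the deadline: just yield
    }
}
```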
> A program might depend on a fast timer resolution and fail to request it. There have been multiple claims that some games have this problem (...)<p>Yup, I wrote such a (small, freeware) game 15+ years ago. I wasn't aware of timeBeginPeriod at the time, but I observed that for some inexplicable reason, the game ran more smoothly when Winamp was running in the background. :-)
<rant><p>Our models of computer timers are woefully inadequate. These things execute billions of instructions per second. Why shouldn't we be able to schedule a timer at sub-millisecond resolution?<p>Answer: we can. But the APIs are very old and assume conditions no longer present. Or something like that. Anyway, they don't get the job done.<p>Everybody seems to start a hardware timer at some regular period, then simulate 'timer interrupts' for applications off that timer's interrupt. If you want 12.5ms but the ol' ticker is ticking at 1ms intervals, you get 13 or so depending on where in an interval you asked, it could be 12.<p>Even if nobody is using the timer, its ticking away wasting CPU time. So the tendency is, to make the period as long as possible without pissing everybody off.<p>Even back in the 1980's, I worked on an OS running on the 8086 with a service called PIT (Programmable Interval Timer). You said what interval you wanted; it programmed the hardware timer for that. If it was already running, and your interval was shorter than what remained, it would reprogram it for your short time, then when it went off it reprogrammed it for the remainder.<p>It kept a whole chain of scheduled expirations sorted by time. When the interrupt occurred it'd call the callback of the 1st entry and discard it. Then it'd reprogram the timer for the remaining time on the next.<p>It took into account the time of the callback; the time to take the interrupt and reprogram. And it achieved sub-millisecond scheduling even back on that old sad hardware.<p>And when nobody was using the timer, it didn't run! Zero wasted CPU.<p>Imagine how precise timers could be today, on our super duper gigahertz hardware.<p>But what do we get? We get broken, laggy, high-latency, late timer callbacks at some abominable minimum period. Sigh.<p></rant>
I once spent ages trying to determine why a Python unit test that sorted timestamps constantly failed on Windows. In the test, we compared the timestamps of performed operations, and checked to confirm that the operations happened in sequence based on their timestamp (I'm sure many of you see where this is going). On Windows, the timestamp for all the actions was exactly the same, so when sorted, the actions appeared out-of-order. It was then that I discovered Python's time library on Windows only reports times with a resolution of ~1ms, whereas on Linux the same code reports times with a resolution of ~10us. That one was actually super fun to track down, but super disappointing to discover it's not something that's easily remedied.<p>(For those about to suggest how it should have been done, the application also stored an atomic revision counter, so the unit test was switched to that instead of a timestamp.)
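For reference, the granularity of a given clock source is easy to probe directly; a small sketch (in C++ rather than the Python test described above) that spins until the reported time changes and records the smallest step it sees:

```cpp
// Rough probe of system-clock granularity: spin until the reported time
// changes and record the smallest step observed. What you see depends on
// which underlying clock source the runtime maps to; coarse sources are
// what the Python test above was hitting.
#include <algorithm>
#include <chrono>
#include <cstdio>

int main() {
    using namespace std::chrono;
    system_clock::duration smallest = hours(1);
    for (int i = 0; i < 1000; ++i) {
        auto start = system_clock::now();
        auto next = start;
        while (next == start)          // busy-wait for the clock to tick over
            next = system_clock::now();
        smallest = std::min(smallest, next - start);
    }
    std::printf("smallest observed step: %lld us\n",
                (long long)duration_cast<microseconds>(smallest).count());
}
```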
At work we have an application that calls `timeBeginPeriod(1)` to get timer callbacks (from `CreateTimerQueue`) firing at 5ms resolution, but we are not seeing the behaviour described in the article: we observe no change to the timer resolution after calling `timeBeginPeriod(1)`, which unfortunately is a breaking change for our app.<p>The lack of information and response from Microsoft on this has been quite frustrating.
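For anyone wanting to check their own machines, a stripped-down sketch in the spirit of that setup (not the commenter's actual code): raise the resolution with timeBeginPeriod(1), create a 5 ms timer-queue timer, and print the spacing of the callbacks that actually arrive.

```cpp
// Stripped-down repro sketch: request 1 ms timer resolution, ask a
// timer-queue timer for 5 ms callbacks, and print the spacing observed.
#include <windows.h>
#include <cstdio>

#pragma comment(lib, "winmm.lib")   // timeBeginPeriod/timeEndPeriod

static LARGE_INTEGER g_freq, g_last;

static VOID CALLBACK TimerCallback(PVOID, BOOLEAN)
{
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    double ms = 1000.0 * (now.QuadPart - g_last.QuadPart) / g_freq.QuadPart;
    g_last = now;
    std::printf("callback after %.2f ms\n", ms);
}

int main()
{
    timeBeginPeriod(1);                       // request 1 ms global resolution
    QueryPerformanceFrequency(&g_freq);
    QueryPerformanceCounter(&g_last);

    HANDLE timer = nullptr;
    // 5 ms period; whether callbacks really arrive every ~5 ms depends on
    // the effective timer resolution seen by this process.
    CreateTimerQueueTimer(&timer, nullptr, TimerCallback, nullptr,
                          5 /*due*/, 5 /*period*/, WT_EXECUTEDEFAULT);
    Sleep(1000);

    DeleteTimerQueueTimer(nullptr, timer, INVALID_HANDLE_VALUE);
    timeEndPeriod(1);
}
```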
Ah yes, this reminds me how on my previous project I was in charge of writing a server to mix audio for multiple clients in real time. The server worked well on my local Windows 10 machine, but when deployed to a cloud instance of Windows Server 2016 it ran very, very poorly, just barely quickly enough to process data in time.<p>That's when I discovered that a "process more data if there is any, if not - sleep(1)" loop is a <i>very</i> bad way of doing it, as on Windows Server 2016 "sleep(1)" means "sleep 16ms". It all worked fine once the timer resolution was changed to 1ms, but yeah, the default value will screw you over if you have anything this time-sensitive and are using sleeps or waits on Windows.
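A sketch of the two patterns, with a hypothetical work queue: the poll-and-sleep loop inherits the timer resolution as its latency floor, while waiting on an event signaled by the producer does not.

```cpp
// The pattern from the comment, with hypothetical helpers. The poll loop's
// latency is tied to the timer resolution (Sleep(1) may really be ~15.6 ms);
// waiting on an event that producers signal is not.
#include <windows.h>

bool TryProcessSomeAudio();      // hypothetical: returns false when there is no work
extern HANDLE g_dataAvailable;   // hypothetical auto-reset event set by the producer

void PollLoop()                  // resolution-sensitive version
{
    for (;;)
    {
        if (!TryProcessSomeAudio())
            Sleep(1);            // "1 ms" is really "until the next timer tick"
    }
}

void EventLoop()                 // wakes as soon as data is signaled
{
    for (;;)
    {
        while (TryProcessSomeAudio()) {}
        WaitForSingleObject(g_dataAvailable, INFINITE);
    }
}
```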
Seems like this was reported over 4 months ago! [1]<p>[1] <a href="https://developercommunity.visualstudio.com/content/problem/1093078/timebeginperiod-function-dont-change-anymore-the-r.html" rel="nofollow">https://developercommunity.visualstudio.com/content/problem/...</a>
The author might want to disable the WordPress "pingback" feature, as it seems to be abused. WTF is that? It looks like bots are copying the content, swapping random words, and reposting it on generic-looking sites... What's even the purpose of this?
From a technical point of view, this is an interesting change, and I'm not sure if it's a bug or not. From a scientific point of view, I definitely bristled at "cleaned up to remove randomness"... :P
<i>It shouldn’t be doing this, but it is</i><p>In my opinion this still remains the conclusion, as it has been for the past decades. I cannot remember when I read a bit on Sleep() behavior and timeBeginPeriod() but I remember that what I read was enough to make clear you just shouldn't rely on these (unless you're 100% sure the consequences are within your spec and will remain so), also not because the workarounds are also widely known (IIRC - things like using WaitForSingleObject if you need accurate Sleep).
About the game-fixing utilities: while it is annoying that these won't work at the moment, they should still be able to work by installing a hook that attaches itself to the game's process and calls timeBeginPeriod (several other unofficial game patches already work like this).
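A minimal sketch of the kind of shim described here: a DLL that, once loaded into the game's process (the injection mechanism itself is not shown), requests the resolution on that process's behalf. Calling into winmm from DllMain is normally discouraged, so a real patch would likely defer the call.

```cpp
// Minimal sketch of such a shim: a DLL that, once loaded into the game's
// process by an injector/hook (not shown), calls timeBeginPeriod so the
// resolution request comes from that process itself.
#include <windows.h>

#pragma comment(lib, "winmm.lib")

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
{
    switch (reason)
    {
    case DLL_PROCESS_ATTACH:
        // Caveat: doing real work under loader lock is risky; a production
        // patch would defer this to a worker thread or hook entry point.
        timeBeginPeriod(1);     // request 1 ms resolution on behalf of the host process
        break;
    case DLL_PROCESS_DETACH:
        timeEndPeriod(1);       // balance the request when unloaded
        break;
    }
    return TRUE;
}
```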
> and the timer interrupt is a global resource.<p>Shouldn't this at least be per-core rather than global? Then most cores can keep scheduling at a low tick rate and only one or two have to take care of the jittery processes.
That's an interesting read. I recall reading about some airline communication system that used to freeze when a 32-bit counter in Windows overflowed. Would the way Windows implements timer interrupts have anything to do with this?
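Whatever that specific system was, the classic 32-bit timer overflow on Windows is GetTickCount() wrapping after roughly 49.7 days: code that compares raw tick values breaks at the wrap, while unsigned subtraction (or GetTickCount64) survives it.

```cpp
// GetTickCount() counts milliseconds since boot in a DWORD and wraps after
// ~49.7 days. Comparing raw values breaks at the wrap; unsigned subtraction
// of ticks, or GetTickCount64(), does not.
#include <windows.h>

bool HasElapsed(DWORD startTicks, DWORD intervalMs)
{
    // DWORD subtraction is modulo 2^32, so this stays correct across one wrap.
    return (GetTickCount() - startTicks) >= intervalMs;
}

bool HasElapsed64(ULONGLONG startTicks, ULONGLONG intervalMs)
{
    return (GetTickCount64() - startTicks) >= intervalMs;   // no practical wrap
}
```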
This seems deliberate... It's trying to prevent one application 'randomly breaking' when another application is running.<p>Seems like a good move to me - just a bit of a shame a few applications might break.
> One case where timer-based scheduling is needed is when implementing a web browser. The JavaScript standard has a function called setTimeout which asks the browser to call a JavaScript function some number of milliseconds later. Chromium uses timers (mostly WaitForSingleObject with timeouts rather than Sleep) to implement this and other functionality. This often requires raising the timer interrupt frequency.<p>Why does it require that? Timeouts should normally be on the order of minutes. Why does Chrome need timer interrupts to happen many times per second?