Except that when doing a pcap, I actually want the real wall-clock time. Maybe in some cases you'd explicitly want an offset, but not in general. It's really a plea to get your clocks synced up, so you aren't forced to choose between reporting an incorrect time or an incorrect duration. If I'm running a pcap and the system time drifts by several seconds over a day, I'd prefer each packet to report the closest thing to the right time rather than being increasingly off as time goes by.<p>Not to mention: if the monotonic clock could keep such accurate time on its own, everyone would just use it and NTP wouldn't be so necessary.<p>Really: under what conditions do you have a usefully functioning system when the clock is so far off that it needs multi-minute jumps? Even Hyper-V, with the utterly atrocious w32time, manages to keep it within a minute or two (and a Linux guest can easily have ~ms accuracy).<p>The leap second point is valid, but that's an argument against leap seconds, which serve no purpose in today's society other than to introduce unnecessary problems. Even Google just gives up and purposely introduces inaccuracies in their clocks for a day so that when the leap second comes around they're synced again. A leap hour would be a far better solution: it's something many people are (unfortunately) used to from DST, and it wouldn't bother us for a dozen centuries.
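For reference, the Google approach mentioned above is a linear "leap smear": the extra second is absorbed gradually over a fixed window so no clock ever steps. A minimal sketch of the idea (the 24-hour window is the commonly published choice; the exact boundaries here are illustrative, not Google's actual configuration):

```python
def smeared_fraction(t, smear_start, smear_end):
    """Fraction of the leap second a smearing clock has absorbed at time t.

    Linear ramp: 0.0 before the window, 1.0 after it, proportional inside.
    All arguments are in seconds on the same timescale.
    """
    if t <= smear_start:
        return 0.0
    if t >= smear_end:
        return 1.0
    return (t - smear_start) / (smear_end - smear_start)

# Example: a 24-hour smear window. Halfway through, the clock has
# deliberately drifted half a second from true UTC -- the "purposeful
# inaccuracy" the comment describes.
half_way = smeared_fraction(43200, 0, 86400)
```
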
I wish that HN would use the Public Suffix List (<a href="https://publicsuffix.org/" rel="nofollow">https://publicsuffix.org/</a>) in its algorithm to display domain names of submissions. That way, we wouldn’t get things like this, where the domain given (pp.se) does not say anything about what the actual site is.
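For illustration, here's a simplified sketch of the PSL lookup: find the longest matching public suffix, then display that suffix plus one more label. The rule set below is a tiny hand-picked subset, and real PSL processing also handles wildcard (`*.foo`) and exception (`!bar.foo`) rules, which this toy version ignores:

```python
# Toy subset of the Public Suffix List. The real list has thousands of
# entries, including "pp.se", which is why a bare "pp.se" tells you
# nothing about the actual site.
PUBLIC_SUFFIXES = {"com", "se", "pp.se", "uk", "co.uk"}

def registrable_domain(hostname):
    """Return the public suffix plus one label, or None if the hostname
    is itself a public suffix. This is the part worth displaying."""
    labels = hostname.lower().rstrip(".").split(".")
    # Candidates from longest suffix to shortest; the first hit is the
    # longest matching rule, as the PSL algorithm requires.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            if i == 0:
                return None  # the hostname IS a public suffix
            return ".".join(labels[i - 1:])
    # No rule matched: the PSL's default rule is the last label alone.
    return ".".join(labels[-2:]) if len(labels) >= 2 else None
```

With this, a submission from `blog.example.pp.se` would display as `example.pp.se` rather than the meaningless `pp.se`.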
OS time handling is feeble across the industry. It's inherited from 20-year-old ideas about APIs: not just the antique time structures that are useful for rendering but not much else, but also the abominable Sleep() and such.<p>Imagine you want to do something every second. You Sleep(1000) or some such. But it takes time to do the thing, so it's actually a bit longer between loops. Maybe it doesn't matter; maybe it does. But you're stuck doing stuff like that.<p>Why not Wait(timetowaitfor)? Not a duration; the actual time you want to be woken up. It still takes time to wake up and run, and it takes time to make the call. But now your stuff actually runs, say, 60 times per minute (e.g. if you wait for successive seconds), hour after hour and day after day.<p>Also, what's with the limited resolution on the time? It's due to the common implementation of timers as a counter of ticks, where a tick is whatever regular interval some hardware timer is set to interrupt at. Why not instead interrogate a free-running counter? And if I want to wait 1 second plus 150 nanoseconds, then I Wait for that time to arrive, and the library (or OS) sets a real timer interrupt to go off when that time has arrived. Sure, there's latency in calling me back; that's inevitable. What's not inevitable is some limited multi-millisecond tick resolution.<p>Anyway, whenever I'm in charge of designing an OS or application environment, I provide real timers like this. It's about time the big OS providers caught up to the 21st century.
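The standard userspace workaround, when no absolute-time Wait() is available (Wait/Sleep above are the commenter's hypothetical API names, not real calls), is to keep an absolute deadline and sleep only for whatever remains of it, so per-iteration work doesn't accumulate as drift. A sketch:

```python
import time

def run_periodic(task, period, ticks):
    """Run task() every `period` seconds on an absolute schedule.

    Each wake-up targets start + k*period, so the time spent inside
    task() does not stretch the interval the way a naive
    sleep(period)-after-work loop would.
    """
    deadline = time.monotonic()
    for _ in range(ticks):
        task()
        deadline += period  # next absolute wake-up time
        # Sleep only for what remains until the deadline; if the task
        # overran, don't sleep at all and catch up on the next tick.
        time.sleep(max(0.0, deadline - time.monotonic()))
```

With a naive `sleep(period)` after each task, a 5 ms task on a 20 ms period drifts by 5 ms per tick; with the absolute schedule it stays locked to the grid (modulo wake-up latency, which, as the comment notes, is inevitable).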
It's a similar situation on iOS, where new developers sometimes use (in Objective-C) `[[NSDate date] timeIntervalSince1970]`, which is natural, but wrong. NSDate draws from the network-synchronized clock and will occasionally hiccup when re-syncing against the network, among other reasons.<p>If you're measuring relative timing (for example, for games or animation), you should instead use `double currentTime = CACurrentMediaTime();`. That's the correct way.
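The same trap exists on every platform, just with different names. In Python, for instance, the wall clock and the monotonic clock are separate calls, and durations should always come from the latter:

```python
import time

# Monotonic clock: immune to NTP steps and manual clock changes.
# Right for measuring intervals, meaningless as a timestamp.
start = time.monotonic()
time.sleep(0.05)
elapsed = time.monotonic() - start  # trustworthy duration

# Wall clock (seconds since the Unix epoch): right for timestamps,
# wrong for durations -- it can jump when the clock is re-synced,
# exactly the NSDate hiccup described above.
wall = time.time()
```
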
Let's talk about the sad state of clocks today. There exist a few ways to query NTP time on Linux: (1) directly through NTP, (2) the adjtimex syscall, and (3) the ntp_gettime call. I found it hard to find many codebases using proper NTP. In fact, codebases that need reliable time, like Cassandra and OpenLDAP, don't use the NTP time APIs to check whether the system clock is in sync, or to get accurate time. Even if we were to make PTP accessible to the world, it would be some time before its usage actually became ubiquitous. The understanding of timekeeping and clock discipline in our community is a sore point.
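To make the adjtimex route concrete, here's a sketch of the sync check those codebases skip. It assumes Linux with glibc on x86-64, with the struct layout taken from the adjtimex(2) man page; other libcs and ABIs may differ. Calling with `modes = 0` is a pure read and needs no privileges:

```python
import ctypes

class timeval(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_usec", ctypes.c_long)]

class timex(ctypes.Structure):
    # Field order per adjtimex(2) on x86-64 glibc; not portable.
    _fields_ = [
        ("modes", ctypes.c_uint),
        ("offset", ctypes.c_long),
        ("freq", ctypes.c_long),
        ("maxerror", ctypes.c_long),
        ("esterror", ctypes.c_long),
        ("status", ctypes.c_int),
        ("constant", ctypes.c_long),
        ("precision", ctypes.c_long),
        ("tolerance", ctypes.c_long),
        ("time", timeval),
        ("tick", ctypes.c_long),
        ("ppsfreq", ctypes.c_long),
        ("jitter", ctypes.c_long),
        ("shift", ctypes.c_int),
        ("stabil", ctypes.c_long),
        ("jitcnt", ctypes.c_long),
        ("calcnt", ctypes.c_long),
        ("errcnt", ctypes.c_long),
        ("stbcnt", ctypes.c_long),
        ("tai", ctypes.c_int),
        ("padding", ctypes.c_int * 11),
    ]

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def clock_state():
    """Return the kernel's clock state via adjtimex.

    0 (TIME_OK) through 4 are synchronized states; 5 (TIME_ERROR)
    means the clock is not being disciplined -- the check that a
    time-sensitive daemon could make before trusting timestamps.
    """
    buf = timex()
    buf.modes = 0  # read-only query
    return libc.adjtimex(ctypes.byref(buf))
```
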
I think the article and much of the discussion miss a larger point: time is hard, and very, very hard when there are multiple systems with different clocks. The APIs are the way they are because there just aren't good solutions, especially since all systems ultimately have unreliable connections to good time sources.<p>The miserable APIs are New Jersey/worse-is-better answers to intractable problems.
the semantics you'd like the OS + standard library to provide would be some kind of gettime() call that returns a time thingie, and a secondsbetween(a,b) call that reliably tells you the time between the two time thingies.<p>the fact that it doesn't already work this way is a design fail.<p>all the nonsense about NTP and clock slew and monotonicity are implementation details that should be hidden below this layer.
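A minimal sketch of that two-call surface in Python, with the clock-slew nonsense hidden underneath (the names `gettime` and `secondsbetween` are taken from the comment, purely illustrative):

```python
import time

def gettime():
    """Return an opaque 'time thingie'.

    It captures both clocks at once: the monotonic clock for reliable
    durations and the wall clock for human-readable display.
    """
    return (time.monotonic(), time.time())

def secondsbetween(a, b):
    """Elapsed seconds between two time thingies.

    Computed from the monotonic component, so NTP steps and manual
    clock changes between a and b cannot corrupt the result.
    """
    return b[0] - a[0]
```

The point of the opaque tuple is exactly the layering the comment asks for: callers never learn which clock answered, so the implementation is free to change underneath them.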
The last time I tested them, on Red Hat 6, clock_gettime(CLOCK_REALTIME) and gettimeofday were slightly faster than clock_gettime(CLOCK_MONOTONIC), and gettimeofday is much faster than any clock_gettime on older platforms.
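This is easy to spot-check, since Python exposes the same clocks on Linux. Interpreter overhead dominates the absolute numbers, so treat this only as a rough relative comparison; results will differ with kernel version and vDSO support:

```python
import time
import timeit

def cost_per_call(clock_id, n=200_000):
    """Average seconds per time.clock_gettime() call for a given clock."""
    return timeit.timeit(lambda: time.clock_gettime(clock_id), number=n) / n

realtime = cost_per_call(time.CLOCK_REALTIME)
monotonic = cost_per_call(time.CLOCK_MONOTONIC)
print(f"CLOCK_REALTIME:  {realtime * 1e9:.1f} ns/call")
print(f"CLOCK_MONOTONIC: {monotonic * 1e9:.1f} ns/call")
```

On modern Linux both clocks are usually served from the vDSO without a syscall, so the gap the comment saw on older platforms has largely closed.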