The requirements for time in computers have increased drastically since C was invented.<p>Time used to mean what we write down or see on a clock: 2024-08-13 02:27 PM. The computer had an electronic clock built in, so it could save you a little effort of looking at the clock and copying the numbers. And that was all it did. If your clock was a few minutes off, that was no big deal. People knew clocks only agreed to within one or two minutes. People knew clocks were different in far away lands. Some people knew that you have to adjust your clocks twice per year. The computer clock was just like any other clock but happened to be inside a computer.<p>Now we expect a globally synchronized unique identifier for each instant, regardless of timezone. This is hard to deliver. Computers use these for synchronization amongst themselves, so they have to be accurate to milliseconds or better. This is hard to deliver. We expect computers to handle requests from far away lands with as much grace as requests from the local operator, and deliver results in a format the people in those lands expect. This is hard to deliver. We expect computers to process requests about the past using information that was current in the past, all over the world. This is hard to deliver. We expect computers to automatically adjust their own clocks twice a year, not just on the dates everyone in your local area does, but for users in all parts of the world on their respective dates. This is hard to deliver. And we still haven't got graceful handling of completely different calendar systems.
I fail to see the force of TFA's concerns, or to take them very seriously.<p>> time() unnecessarily takes a pointer argument to write to<p>Minor cosmetic issue.<p>> strftime() has to write to a string of a fixed length it can not dynamically allocate (This is less legacy than it is bad design)<p>This is often a good way to structure string functions in C. The fact that TFA repeated the constant 40 instead of using sizeof() immediately signals that they are unfamiliar with the idioms. A "you problem".<p>Doing heap allocation where it is not required could be a problem for some use cases.<p>> localtime() needs the pointer to a time_t value even though it does not change it because of register size concerns on PDP-11’s<p>Also minor and cosmetic.<p>> sleep() cannot sleep for sub-second amounts of time, usleep() is deprecated and it’s alternative nanosleep() requires you to define variables<p>sleep(3) is not really a "time function" in the sense of the others mentioned, it is a thread scheduler function. As such it kind of exists in a different universe. This is also shown by the fact that it's part of POSIX rather than the C standard, unlike time(), which is.
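For reference, a minimal sketch of the idiom (buffer size and format string chosen arbitrarily; localtime_r is POSIX, plain localtime() works in ISO C):<p><pre><code> #include <stdio.h>
 #include <time.h>

 int main(void) {
     char buf[64];                      /* fixed-size buffer on the stack */
     time_t now = time(NULL);
     struct tm tm_now;
     localtime_r(&now, &tm_now);
     if (strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", &tm_now) > 0)
         puts(buf);
     return 0;
 }</code></pre>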
95% of the supposed issues with C could be solved by a new standard library, integrating the debugger into the compiler as the default build/run environment (with auto address sanitisation, frame protection, etc. etc.), and default strict-mode error checking.<p>It would then be actually really hard to successfully run a C program (in the debugger) with any problems. Under these conditions it'd be easy to imagine most C programs running with fewer bugs (leaks, etc.) than Rust programs.
I wasn’t aware that on non-x86 platforms long double is often implemented with quadruple precision. I had assumed it was an x87-specific hack. On ARM64 Windows/macOS long double is apparently 64 bits, which could be a problem.<p>Personally, something about that solution is unsatisfying. Feels like it’d be slow, even though that wouldn’t matter 95% of the time. I’d rather have a 128-bit integer of nanoseconds.<p><a href="https://en.wikipedia.org/wiki/Long_double" rel="nofollow">https://en.wikipedia.org/wiki/Long_double</a>
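If you're curious what a given toolchain actually gives you, <float.h> will tell you; a quick sketch (output is of course platform- and ABI-dependent):<p><pre><code> #include <float.h>
 #include <stdio.h>

 int main(void) {
     /* 53 = plain double, 64 = x87 extended, 113 = IEEE quad */
     printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
     printf("LDBL_MANT_DIG       = %d bits\n", LDBL_MANT_DIG);
     printf("LDBL_DIG            = %d decimal digits\n", LDBL_DIG);
     return 0;
 }</code></pre>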
In an article like this I would have liked to see some mention of TAI (<a href="https://en.wikipedia.org/wiki/International_Atomic_Time" rel="nofollow">https://en.wikipedia.org/wiki/International_Atomic_Time</a>) as one of the alternatives to UTC. Unfortunately there are several different universal times. Apparently there's also a "Galileo System Time", for example.
Among the many improvements, time is one area where C++ has become better than old-school C cruft. In C++20/std::chrono, the Lua-like code is just this -<p><pre><code> using namespace std::chrono;  // needs <chrono>, <format>, <iostream>
auto now = system_clock::now();
zoned_time local_time{current_zone(), now};
std::cout << std::format("{:%a %b %d %T}\n", local_time);</code></pre>
Updated link to referenced work <i>Time, Clock, and Calendar Programming in C:</i><p><a href="http://www.catb.org/esr/time-programming/" rel="nofollow">http://www.catb.org/esr/time-programming/</a>
I don't think having strftime return a malloc'd pointer is a good idea. The string won't be large at all and can easily fit onto the stack (just like it was done in the example code). If I want to use a custom allocator to store the string, I can. If I want to malloc the string I can.
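And if someone really does want a heap-allocated result, wrapping it is a few lines. A sketch (strftime_dup is a made-up name here; strdup is POSIX/C23):<p><pre><code> #include <stdlib.h>
 #include <string.h>
 #include <time.h>

 /* Heap-allocating wrapper for callers who want one; returns NULL on
    failure or zero-length output. */
 char *strftime_dup(const char *fmt, const struct tm *tm) {
     char buf[256];                  /* format on the stack first */
     size_t n = strftime(buf, sizeof(buf), fmt, tm);
     return n ? strdup(buf) : NULL;
 }</code></pre>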
Time parsing and formatting is prone to extended bikeshedding. I once raised the issue that Python had five parsers for ISO 8601 date formats, and they were all broken in some way. It took a decade to resolve that. By then I'd moved on to Rust.
> keep in mind that Integers support One percision, and there’s a trade off between resolution and the bounds of your epoch, Floating point values support all percisions, there is no such trade off.<p>Yeah, except with integers you get guaranteed precision across your entire data range, while with floating point it is ridiculously easy to accidentally lose precision without noticing when, e.g., shifting time deltas from the past into the future.<p>Not to mention that using a floating-point number of seconds since the epoch means that times near the epoch always get better precision than timestamps near the current time, which is really not what you want, and the situation only worsens over time.
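The loss is easy to demonstrate (numbers are illustrative; a double has 53 mantissa bits, so around 1.7e9 seconds its step size is roughly 2e-7 s):<p><pre><code> #include <stdio.h>

 int main(void) {
     double now = 1.7e9;            /* roughly "seconds since epoch" today */
     double later = now + 1e-9;     /* try to add one nanosecond */
     printf("%d\n", later == now);  /* prints 1: the nanosecond is gone */
     return 0;
 }</code></pre>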
Indeed, when I wrote a C utility and just wanted to output its start, finish and run time, I spent MORE TIME THAN IT TOOK TO WRITE THE WHOLE PROGRAM to figure out how the whole date/time garbage works! This was a painful, maddening experience. As if this entire API was designed to drive you mad.<p>But I still love C anyway.
> strftime() has to write to a string of a fixed length it can not dynamically allocate (This is less legacy than it is bad design)<p>There’s a good reason for this. I disagree that it’s a bad design.<p>strftime can legitimately produce zero-length strings in a non-error state. You do not want a heap allocation that is empty.<p>You’d end up with more error states to track, and more confusion about whether the function had succeeded. (Especially when using %c.)
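For example, "%p" (the AM/PM designation) is an empty string in many locales, so a return value of 0 is not necessarily an error; a rough sketch:<p><pre><code> #include <locale.h>
 #include <stdio.h>
 #include <time.h>

 int main(void) {
     char buf[64];
     time_t now = time(NULL);
     struct tm *tm = localtime(&now);
     setlocale(LC_TIME, "");        /* honour the user's locale */
     /* 0 here can mean "empty result" or "buffer too small" */
     size_t n = strftime(buf, sizeof(buf), "%p", tm);
     printf("strftime returned %zu\n", n);
     return 0;
 }</code></pre>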
> Out of all the components of C, its time API is probably the one most plagued with legacy cruft.<p>First off, no, locales and wide characters exist. This statement is just laughable.<p>But even as to time: that seems really unfair. This whole area is a footgun and has been the source of bad implementation after bad implementation, in basically every environment. But among those: The "struct tm" interface is notable for being:<p>1. Very early, arriving in C89 and the first drafts of POSIX, with working implementations back into the mid 80's.<p>2. Complete and correct, able to give correct calendar information for arbitrary named time zones in an extensible and maintainable way. LOTS of other attempts got stuff like this wrong.<p>3. Relatively easy to use, with a straightforward struct and a linear epoch value, with conversion functions in each direction, and only a few footguns (the mix of 0- and 1-indexing was unfortunate). There are even a few quality-of-life additions like support for "short" month names, etc...<p>Really, these routines <i>remain useful even today</i>, especially since their use is guaranteed to integrate with your distro's time zone database, which must be constantly updated to track legal changes.<p>There's stuff to complain about, but... no, I think the premise of the article is dead wrong.
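For what it's worth, the whole round trip is a handful of lines (setenv/tzset/localtime_r are POSIX; the zone name is just an example):<p><pre><code> #include <stdio.h>
 #include <stdlib.h>
 #include <time.h>

 int main(void) {
     setenv("TZ", "Europe/Berlin", 1);   /* named zone from the tz database */
     tzset();

     time_t now = time(NULL);
     struct tm local;
     localtime_r(&now, &local);          /* epoch -> broken-down local time */

     char buf[64];
     strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S %Z", &local);
     puts(buf);

     time_t back = mktime(&local);       /* broken-down local time -> epoch */
     printf("round trip ok: %d\n", back == now);
     return 0;
 }</code></pre>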
The author could use a lesson in visual design.<p><a href="https://www.contrastrebellion.com" rel="nofollow">https://www.contrastrebellion.com</a>