It’s funny how relevant this niche fact is to me. When I started my last job we were at 1.3, and I remember watching it go through 1.4, 1.5, and 1.6, since I debugged a lot of data with timestamps. I remember commenting to my team about the 1.5 change and getting some “so what” faces, so I’m glad someone else looks at these changes as a sort of long-term metronome like I do.
Such an excellent coincidence that it happens to be on my birthday! In fact, to celebrate, I set up the only livestream in existence on YouTube (AFAIK) to capture this: https://www.youtube.com/live/DN1SZ6X7Vfo
Dang, I missed this by roughly an hour :(

I still remember when we were at 1.2 billion seconds. Time flies.

While we're still here: my favorite way to appreciate the scale of a million versus a billion is with seconds: 1 million seconds is approximately 12 days, whereas 1 billion seconds is approximately 31 years.
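A quick back-of-the-envelope check, sketched in JS to match the other snippets in this thread:

    // 1 million seconds in days, 1 billion seconds in years
    const DAY = 86_400;           // seconds per day
    const YEAR = 365.2425 * DAY;  // mean Gregorian year, in seconds
    (1e6 / DAY).toFixed(1);       // "11.6" -> roughly 12 days
    (1e9 / YEAR).toFixed(1);      // "31.7" -> roughly 31.7 years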
I watched in `deno repl`, neatly sandboxed :)

    new Date().valueOf() / 1000  // current epoch time in seconds

I was counting down by thousands of seconds rather than millions of milliseconds, which is why I divided instead of using the native JS value.

Happy 1.7 gigaseconds!
As for what the future holds:

    $ date --date="@1800000000"
    Fri Jan 15 03:00:00 AM EST 2027
    $ date --date="@1900000000"
    Sun Mar 17 01:46:40 PM EDT 2030
    $ date --date="@2000000000"
    Tue May 17 11:33:20 PM EDT 2033
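If you'd rather check from a browser console, a rough JS equivalent (printing UTC rather than US Eastern):

    // Epoch seconds -> ISO-8601 UTC strings for the next three milestones
    for (const s of [1.8e9, 1.9e9, 2.0e9]) {
      console.log(new Date(s * 1000).toISOString());
    }
    // 2027-01-15T08:00:00.000Z
    // 2030-03-17T17:46:40.000Z
    // 2033-05-18T03:33:20.000Z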
I opened Node.js and did

    // print the current epoch milliseconds as fast as the timer fires
    setInterval(() => console.log(Date.now()), 1);

to watch the transition. Happy 1.7B seconds since Jan 1, 1970!
I want to use this opportunity to flog one of my favorite topics: whether or not to store epoch time using variable-length numbers in protobuf.

TL;DR: never do this.

If you are storing the epoch offset in seconds, you could store it as int32 or fixed32 (assuming the range is adequate for your application). But the int32 will need 5 bytes, while the fixed32 field only uses 4. So you never save space, and always spend time, using int32.

Similarly, if you are storing the offset as nanoseconds, never use int64. Except for a few years on either side of the epoch, the offset always fits optimally in a fixed64; an int64 will tend to take 9 bytes. Fixed64 nanoseconds has adequate range for most applications.

You'll note that the "well-known" google.protobuf.Timestamp message commits both of these errors. It stores the seconds part as a varint, which will usually be at least 5 bytes when it could have been 4, and it stores the nanoseconds separately in an int32, even though that field is more or less an RNG and is virtually guaranteed to need 5 bytes, if present. So nobody should use *that* protobuf.

Thus ends this episode of my irregular advice on how to represent the time.
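To make the byte math concrete, a minimal sketch of the size calculation (varintBytes is just an illustrative helper, not part of any protobuf library; BigInt because nanosecond offsets overflow Number.MAX_SAFE_INTEGER):

    // Bytes needed to varint-encode a non-negative integer (7 payload bits per byte)
    function varintBytes(n) {
      let bytes = 1;
      while (n >= 128n) { n >>= 7n; bytes++; }
      return bytes;
    }
    varintBytes(1_700_000_000n);              // 5 -- vs. 4 bytes as fixed32
    varintBytes(1_700_000_000_000_000_000n);  // 9 -- vs. 8 bytes as fixed64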
The REAL big non-event that no one cares about is this one:

https://www.epochconverter.com/countdown?q=2000000000
I don't use Unix time. If someone gives you a Unix timestamp x, it doesn't mean much unless you check it against a list of leap seconds. By default, your fancy Unix timestamp doesn't point to a unique second in history: dozens of times, it has pointed to some pretty arbitrary 2-second interval. TAI is the only sane choice if you understand what you are doing.

Btw, if you already know that the leap second is dead and are wondering what happens next: they are going to implement the leap minute, and the good news is you are unlikely to see one in your lifetime. They are meeting next week to decide on this leap-minute proposal.

https://www.nytimes.com/2023/11/03/science/time-leap-second.html
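You can see the leap-second blindness directly: JS Date, like POSIX time, simply pretends the inserted second never happened.

    // 2016-12-31T23:59:60Z was a real UTC second (the most recent leap second),
    // but epoch-based time just normalizes it onto the next second:
    new Date(Date.UTC(2016, 11, 31, 23, 59, 60)).toISOString();
    // "2017-01-01T00:00:00.000Z" -- indistinguishable from the moment after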
Since we're less than 15 years away from the doomsday point, I wonder if it would be easier to transition from signed to unsigned 32 bits, as it would buy everyone multiple decades to transition to something else.

Also, this first transition should be less disruptive than any other one, since the unsigned format is backwards compatible with the current signed one in its current usage (positive numbers).
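For reference, a sketch of the rollover, simulating 32-bit arithmetic with JS's bitwise coercions (which operate on 32-bit values):

    const last = 2 ** 31 - 1;                           // 2147483647
    new Date(last * 1000).toISOString();                // "2038-01-19T03:14:07.000Z"
    (last + 1) | 0;                                     // -2147483648: signed wrap
    new Date(((last + 1) | 0) * 1000).toISOString();    // "1901-12-13T20:45:52.000Z"
    // The same bits read as unsigned keep working until 2106:
    new Date(((last + 1) >>> 0) * 1000).toISOString();  // "2038-01-19T03:14:08.000Z"
    new Date((2 ** 32 - 1) * 1000).toISOString();       // "2106-02-07T06:28:15.000Z"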
OT, but this may be of interest to folks who find this kind of numerology fun.

Your 10_000th day passes when you're about 27.4 years old. I had a celebration with friends, as it seemed more significant than any of the other milestones that are usually celebrated: you won't reach 100_000 and don't remember 1_000. Can recommend!
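If you want to catch yours, a quick sketch (the birthday below is just a placeholder):

    // Your 10,000th day: 10_000 days after your date of birth
    const birth = new Date('1990-01-01T00:00:00Z');  // hypothetical birthday
    const day10k = new Date(birth.getTime() + 10_000 * 86_400_000);
    day10k.toISOString();  // "2017-05-19T00:00:00.000Z" for this example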
Dividing the Unix time by 100,000 produces a (currently) 5-digit number that increments every ~28 hours, and behaves pleasingly like a stardate.<p>(It doesn't align with any of the various stardate systems actually used in the different Star Trek series and films, but aesthetically it's similar enough to be fun.)
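A sketch, if you want one on your prompt:

    // Unix seconds / 100,000 -> a 5-digit pseudo-stardate (ticks every ~28 h)
    const stardate = (Date.now() / 1000 / 100_000).toFixed(1);
    console.log(`Stardate ${stardate}`);  // "Stardate 17000.0" right at 1.7 Gs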
It should be 63835596800 (63.8 billion), because it was kind of self-centred to start counting from 1970 instead of year 1. It doesn't make sense to make memorizing a 4-digit number a prerequisite to becoming a programmer.
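That figure checks out if you extend the Gregorian calendar backwards (JS Date is proleptic Gregorian, so it can do the arithmetic):

    // Seconds from 0001-01-01T00:00:00Z to the 1.7 Gs moment
    const d = new Date(0);
    d.setUTCFullYear(1, 0, 1);           // year 1, not 1901: setUTCFullYear takes the literal year
    1_700_000_000 - d.getTime() / 1000;  // 63835596800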
To all of you who wrote ad hoc scripts to “see” this happen: what are you doing to preserve them so they work for 1.8 gigaseconds on January 15, 2027?

I lost my 1.6 gigasecond script because it was on a work laptop at a previous role.
Not to be a wet blanket, but I'm really surprised to see posts about this all over social media today. This isn't even a nice round number, and we hit a new 0.1-billion milestone every 3 years or so.
And ... ??

Not only that, it's almost 9 AM.

Sorry, but I can't see what the big deal is supposed to be. Maybe I'm missing something.