I work with a proprietary programming language that internally models a <i>date_time</i> as a UNIX epoch offset. The <i>to_string</i> method of a <i>date_time</i> produces a nice human-readable string in the local timezone, but crucially does not include the TZ itself. There are also accessors for the HH:MM:SS parts of the <i>date_time</i>.<p>At some point the question must have come up: "if it's 5pm here in SFO, what time is it in MIA?", or some variation on that theme.<p>Someone decided that the best way to answer this was to write a function, e.g. <i>time_local_to_tz</i>, that took a <i>date_time</i> and shifted it by the offset between the two timezones.<p>So you could take a <i>date_time</i> in SFO, call <i>time_local_to_tz</i> (supplying Miami's TZ) and get back a <i>date_time</i> value that would <i>to_string</i> in SFO to show "the time in MIA". These functions made their way into a standard library and from there into a lot of code.<p>The only problem is that the assumptions are all wrong. Adding the offset changes the actual point in time being represented, which can change the UTC offset in effect at either location (think DST transitions), so the result skews. This was compounded by some developers assuming that maybe they should convert their times to UTC before persisting them.<p>Of course, the usage of these functions is now embedded in a bunch of code no-one dares to touch, because it is full of hacks to "make it work", and quite possibly there is other code somewhere else (separated by a network connection, a file, or persistence into a database) that is predicated on undoing that same set of hacks.
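<p>To make the skew concrete, here is a minimal Python sketch (requires 3.9+ for <i>zoneinfo</i>). The original language is proprietary, so <i>time_local_to_tz</i> and <i>to_string</i> below are hypothetical reconstructions of the behaviour described, not the real API. The trick is to pick an instant where the two zones are mid-way through a DST transition:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def to_string(epoch: int, tz: str) -> str:
    """Like the language's to_string: wall-clock time in the given zone, TZ not shown."""
    return datetime.fromtimestamp(epoch, tz=ZoneInfo(tz)).strftime("%Y-%m-%d %H:%M:%S")

def time_local_to_tz(epoch: int, local_tz: str, target_tz: str) -> int:
    """Hypothetical reconstruction of the flawed helper: shift the epoch value
    by the offset between the two zones, computed at the original instant."""
    t = datetime.fromtimestamp(epoch, tz=timezone.utc)
    delta = (t.astimezone(ZoneInfo(target_tz)).utcoffset()
             - t.astimezone(ZoneInfo(local_tz)).utcoffset())
    return epoch + int(delta.total_seconds())

# 2024-03-10 08:30 UTC: New York has already sprung forward (at 07:00 UTC),
# Los Angeles has not yet (it does so at 10:00 UTC).
epoch = int(datetime(2024, 3, 10, 8, 30, tzinfo=timezone.utc).timestamp())

print(to_string(epoch, "America/New_York"))       # actual Miami wall time: 04:30
shifted = time_local_to_tz(epoch, "America/Los_Angeles", "America/New_York")
print(to_string(shifted, "America/Los_Angeles"))  # the hack shows 05:30 -- off by an hour
```

Shifting the epoch by four hours (the between-zone offset at the original instant) lands the new epoch past Los Angeles's own spring-forward, so rendering it in SFO's local zone gains an extra hour: the hack reports 05:30 when Miami's clock actually reads 04:30.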