The reason ext4 and XFS both use nanosecond resolution is that the kernel's high-precision timekeeping structure is the timespec structure (originally defined by POSIX), which uses tv_sec and tv_nsec. Certainly in 2008, when ext4 was declared "stable", the hardware of the time was nowhere near having the resolution needed to give us nanosecond accuracy. However, that's not really the point. We want to be able to store an arbitrary timespec value, encode it in the file system timestamp, and then decode it back to a bit-identical timespec value. That's why it makes sense to use at least nanosecond granularity.<p>Why not use a finer granularity? Because space in the on-disk inode structure is precious. We need 30 bits to encode nanoseconds, which leaves two spare bits in a 32-bit extension field that can be added to the 32-bit "time in seconds since the Unix epoch". For full backwards compatibility, where a "negative" tv_sec corresponds to times before 1970, that gets you to the 25th century. If we <i>really</i> cared, we could add an extra 500 years by stealing a bit somewhere from the inode (maybe an unused flag bit --- but since there are four timestamps in an inode, you would need to steal four bits for each doubling of the time range). However, there is no guarantee that ext4 or XFS will still be in use 400-500 years from now; and if either <i>is</i> still in use, it seems likely that there will be plenty of time to do another format bump. XFS has had 4 incompatible format bumps in the last 27 years; ext2/ext3/ext4 has been around for 28 years, and depending on how you count, there have been 2-4 major version bumps (we use finer-grained feature bits, so it's a bit hard to count). In the next 500 years, we'll probably have a few more. :-)
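<p>To make the bit arithmetic concrete, here is a minimal sketch of such an encoding, along the lines of ext4's 32-bit "extra" timestamp fields: the low 2 bits of the extra word extend the epoch range, and the upper 30 bits hold nanoseconds. This is an illustration under those assumptions, not the kernel's actual code; names like disk_ts and hi_res_ts are made up for the example.
<pre><code>
  #include <assert.h>
  #include <stdint.h>

  #define EPOCH_BITS  2
  #define EPOCH_MASK  ((1u << EPOCH_BITS) - 1)   /* 0x3 */

  struct disk_ts {                /* what goes in the inode */
      uint32_t seconds;           /* low 32 bits of tv_sec */
      uint32_t extra;             /* bits 0-1: extra epoch bits, bits 2-31: nanoseconds */
  };

  struct hi_res_ts {              /* stand-in for the kernel's timespec64 */
      int64_t  tv_sec;
      uint32_t tv_nsec;           /* 0..999,999,999 fits in 30 bits */
  };

  static struct disk_ts ts_encode(struct hi_res_ts ts)
  {
      /* How far tv_sec lies beyond what a sign-extended 32-bit value
       * can express, in units of 2^32 seconds; that is what the two
       * extra epoch bits record. */
      uint32_t epoch = (uint32_t)((ts.tv_sec - (int32_t)ts.tv_sec) >> 32) & EPOCH_MASK;
      struct disk_ts d = {
          .seconds = (uint32_t)ts.tv_sec,
          .extra   = epoch | (ts.tv_nsec << EPOCH_BITS),
      };
      return d;
  }

  static struct hi_res_ts ts_decode(struct disk_ts d)
  {
      struct hi_res_ts ts;

      /* Sign-extend the 32-bit seconds (so pre-1970 times still work),
       * then add back 0-3 "epochs" of 2^32 seconds each. */
      ts.tv_sec  = (int64_t)(int32_t)d.seconds +
                   ((int64_t)(d.extra & EPOCH_MASK) << 32);
      ts.tv_nsec = d.extra >> EPOCH_BITS;
      return ts;
  }

  int main(void)
  {
      /* A timestamp around the year 2191 survives the round trip bit-for-bit. */
      struct hi_res_ts in  = { .tv_sec = 7000000000LL, .tv_nsec = 123456789 };
      struct hi_res_ts out = ts_decode(ts_encode(in));

      assert(in.tv_sec == out.tv_sec && in.tv_nsec == out.tv_nsec);
      return 0;
  }
</code></pre>
<p>With a sign-extended 32-bit seconds field plus two epoch bits, this covers 2^34 seconds starting at -2^31 (roughly 1901 out to the mid-25th century), which is the range described above.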