For some reason I thought this article would be about how to estimate (free) disk space, with the problem being how to estimate (or know, if possible) the available disk space using the least effort (or at least a fixed amount of effort).

I was expecting a deep dive into how different OSes handle storage and indexing, which file systems and drive types make it easier or harder, and the tradeoffs between truly random sampling versus a sampling scheme that accounts for typical drive fragmentation patterns and access speed, and I was very excited.

I'm hoping this comment will nerd-snipe someone who likes to write.

Edit: to comment on the article itself, 2**10 = 1024 is handy to know: then 2**20 is about a million and 2**30 is about a billion. That helps you estimate common log2 values, which in turn lets you estimate the cost of sorting, searching, and tree-type structures that have some logarithmic aspect to their time or space complexity.
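The 2**10 ≈ 10**3 trick can be sketched in a few lines of Python (a minimal illustration of the mental arithmetic, not anything from the article; the function name is mine):

```python
import math

# Rule of thumb: 2**10 = 1024 ≈ 10**3, so every factor of ~1000
# in n adds roughly 10 to log2(n).
def approx_log2(n: float) -> int:
    est = 0
    while n >= 1024:
        n /= 1024   # peel off one factor of 2**10
        est += 10
    # remaining factor is below 1024; round its exact log2
    return est + round(math.log2(n)) if n >= 1 else est

print(approx_log2(1_000_000))      # ~20: binary search over a million items
print(approx_log2(1_000_000_000))  # ~30: the log factor in an O(n log n) sort of a billion items
```

So searching a sorted million-element array costs about 20 comparisons, and sorting a billion elements costs on the order of 30 * 10**9 comparisons, all estimated without a calculator.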