This looks rather neat!

I like to think that I know enough Clojure to decipher that the database and its metadata are locked during compaction. Is that true? See http://github.com/mmcgrana/fleetdb/blob/master/src/clj/fleetdb/embedded.clj#L99

I wrote something called LogStore based on the general notion of log-structured data (and then learned about the work of Ousterhout et al. in the 90s). For what it's worth, I avoided locking the metadata and the database during compaction, which allows the log to keep growing while compaction runs.

Instead of working through the locked metadata (one offset per entry) and rewriting the entries to a new file, the compactor works from the end of the log file back toward the start. Each record's descriptor is actually written after its record data, which lets the compactor easily skip over older versions of records it has already processed during the current compaction pass (a rough sketch follows below).

Once the compactor reaches the start of the file, it checks whether the file grew (more appends, new data) since the compaction began, and if so starts over from the new EOF, stopping at the previous EOF. This repeats until the compactor fully catches up.

Then the database is briefly locked, the new metadata is swapped in, and the existing log file is overwritten with the compacted log file.
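For anyone curious, here is a minimal Python sketch of the idea of trailing descriptors and a backward-scanning compactor. The on-disk layout (a fixed-size struct of key length and data length) and the function names are illustrative assumptions, not LogStore's actual format, and it omits the catch-up loop for data appended during compaction:

    import os
    import struct

    # Illustrative descriptor: key length and data length as two 32-bit ints,
    # written *after* the record data so a backward scan can find it at a
    # known offset from the record's end.
    DESC = struct.Struct("<II")

    def append_record(f, key: bytes, data: bytes) -> None:
        # Data first, then key, then the fixed-size trailing descriptor.
        f.write(data)
        f.write(key)
        f.write(DESC.pack(len(key), len(data)))

    def compact(path: str, out_path: str) -> None:
        seen = set()   # keys whose newest version has already been kept
        kept = []      # (key, data) pairs, newest first
        with open(path, "rb") as f:
            pos = f.seek(0, os.SEEK_END)
            while pos > 0:
                # Read the trailing descriptor of the record ending at `pos`.
                f.seek(pos - DESC.size)
                key_len, data_len = DESC.unpack(f.read(DESC.size))
                rec_start = pos - DESC.size - key_len - data_len
                f.seek(rec_start)
                data = f.read(data_len)
                key = f.read(key_len)
                if key not in seen:      # older versions are simply skipped
                    seen.add(key)
                    kept.append((key, data))
                pos = rec_start
        with open(out_path, "wb") as out:
            for key, data in reversed(kept):   # restore oldest-first order
                append_record(out, key, data)

Because the scan moves from newest to oldest, the first occurrence of each key is by definition the live version, so older versions can be dropped without consulting any shared metadata; that is what lets compaction run without locking the database.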