It depends, but purely as a matter of logic:

All filesystem/drive access is managed by the OS. As a rule, DB systems get no raw access to sectors and no direct raw access to files; they go through the filesystem like any other program.

Having the database in one file on disk gives you a "cluster" of successive blocks on the hard drive (as long as it isn't fragmented), so the drive head only travels short distances to seek the necessary sectors. The same sectors stay occupied even after vast insert/write/delete operations; the DB file's position on the drive doesn't change. With SSDs this doesn't matter, of course.

So the access path looks like this:

client -> DB -> OS -> filesystem

You can already see that the DB part is an extra layer. Take it away and, all else being equal, execution gets "faster" in terms of time. Always. If the direct route is slower, you're using settings that aren't optimal for your use case/filesystem.

My father did this once. He took H2 and made it even faster :) Incredibly fast on Windows in a direct comparison of stock H2 against his modified H2 on the same data.

So a DBMS is a convenience, built from design decisions that serve certain domains and their problems. Convenient, yes, but that doesn't mean it's the most optimized way of doing it. Two rough sketches of both points below.
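First, the seek/locality point, as a rough Java sketch: it reads 4 KiB pages from a ~100 MB file once sequentially and once at random offsets. On a spinning drive the random pass pays for head movement; on an SSD or with a warm page cache the gap mostly disappears. The file name and sizes are made up for illustration; this is a toy, not a benchmark.

    import java.io.RandomAccessFile;
    import java.util.Random;

    public class SeekSketch {
        public static void main(String[] args) throws Exception {
            final int PAGE = 4096, PAGES = 25_000;   // ~100 MB of 4 KiB pages
            byte[] buf = new byte[PAGE];

            try (RandomAccessFile f = new RandomAccessFile("big.bin", "rw")) {
                // Fill the file with real data once, so reads hit actual blocks
                // instead of a sparse hole.
                if (f.length() < (long) PAGE * PAGES) {
                    for (int i = 0; i < PAGES; i++) f.write(buf);
                }

                // Pass 1: sequential read, the "successive blocks" case.
                long t0 = System.nanoTime();
                f.seek(0);
                for (int i = 0; i < PAGES; i++) f.readFully(buf);
                long seq = (System.nanoTime() - t0) / 1_000_000;

                // Pass 2: same amount of data, but a seek before every page.
                Random rnd = new Random(42);
                long t1 = System.nanoTime();
                for (int i = 0; i < PAGES; i++) {
                    f.seek((long) rnd.nextInt(PAGES) * PAGE);
                    f.readFully(buf);
                }
                long rand = (System.nanoTime() - t1) / 1_000_000;

                System.out.println("sequential: " + seq + " ms, random: " + rand + " ms");
            }
        }
    }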
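And the extra-layer point: write the same rows once straight to a flat file (client -> OS -> filesystem) and once through embedded H2 (client -> DB -> OS -> filesystem). This assumes the H2 driver is on the classpath; the class name and file names are mine, and the batching/single commit is there so the comparison isn't unfair to the DB. Whatever the absolute numbers, the second path does strictly more work per row.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class LayerSketch {
        static final int ROWS = 100_000;

        public static void main(String[] args) throws Exception {
            // Path 1: plain file append, no DB layer involved.
            long t0 = System.nanoTime();
            try (var out = Files.newBufferedWriter(Path.of("plain.csv"))) {
                for (int i = 0; i < ROWS; i++) out.write(i + ",payload\n");
            }
            long direct = (System.nanoTime() - t0) / 1_000_000;

            // Path 2: the same rows through embedded, file-based H2.
            long t1 = System.nanoTime();
            try (Connection c = DriverManager.getConnection("jdbc:h2:./bench")) {
                c.createStatement().execute("DROP TABLE IF EXISTS t");
                c.createStatement().execute(
                        "CREATE TABLE t(id INT PRIMARY KEY, payload VARCHAR)");
                c.setAutoCommit(false);            // batch into one commit
                try (PreparedStatement ps =
                             c.prepareStatement("INSERT INTO t VALUES (?, ?)")) {
                    for (int i = 0; i < ROWS; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "payload");
                        ps.addBatch();
                    }
                    ps.executeBatch();
                }
                c.commit();
            }
            long viaDb = (System.nanoTime() - t1) / 1_000_000;

            System.out.println("direct: " + direct + " ms, via H2: " + viaDb + " ms");
        }
    }

That extra work per row is what buys you transactions, recovery, and SQL, which is exactly the convenience trade-off above.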