> Despite the title, we do still need a small backend for writes. Every time a user modifies the data, they will need to POST to your server, which will modify the SQLite database. This leads us to the big question: how do we update the database? Cloud object stores like S3 do not allow partial file updates, so what happens when your database becomes 1 GB or larger?<p>> For me, the solution was lying inconspicuously in the SQLite source tree: an optional extension that lets you multiplex a SQLite database across multiple files. Choose a chunk size, and your database will automatically be broken into multiple files as you add data to it. Now you can use a tool like rclone to copy only the parts that have changed, instead of the entire 1+ GB database.<p>> This is not just theoretical. The technique above is how I built ANSIWAVE BBS. The entire BBS is hosted on S3, and every time someone writes a post, the SQLite database is updated there.<p>I strongly recommend authoring a tutorial on your discovery and submitting it to HN. I don't think most folks, myself included, realized that ANSIWAVE BBS was serving its SQLite database via HTTP range requests, or that you figured out how to update a multi-gigabyte SQLite database in production.<p>You’re on the cusp of something big.
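<p>For anyone who wants the chunk-diff idea in miniature, here's a toy Python sketch. To be clear: this is not SQLite's actual multiplex VFS, and the chunk size, hashing scheme, and function names are my own invention for illustration — rclone does its own change detection (size/modtime or checksums). It just shows why splitting one big file into fixed-size parts means only the modified parts need re-uploading.

```python
import hashlib


def split_into_chunks(path, chunk_size):
    """Yield (index, bytes) pairs, mimicking how a multiplexing
    layer breaks one logical database into fixed-size part files."""
    with open(path, "rb") as f:
        index = 0
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            yield index, data
            index += 1


def changed_chunks(old_hashes, path, chunk_size):
    """Compare each chunk's hash to a previous snapshot and return
    (indices that changed, new snapshot). The changed indices are
    the only part files a sync tool would need to re-upload."""
    changed = []
    new_hashes = {}
    for index, data in split_into_chunks(path, chunk_size):
        digest = hashlib.sha256(data).hexdigest()
        new_hashes[index] = digest
        if old_hashes.get(index) != digest:
            changed.append(index)
    return changed, new_hashes
```

With, say, 1 KiB chunks, rewriting bytes inside the second chunk flags only index 1 as dirty — so a post that touches a few pages of a 1+ GB database results in uploading a few small part files, not the whole thing.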