We've been using SQLite in production as our exclusive means for getting bytes to/from disk for going on 6 years now. To this day, not one production incident can be attributed to our choice of database or how we use it.<p>We aren't using SQLite exactly as intended either. We have databases in the 100-1000 gigabyte range that are concurrently utilized by potentially hundreds or thousands of simultaneous users. Performance is hardly a concern when you have reasonable hardware (NVMe/SSD) and utilize appropriate configuration (PRAGMA journal_mode=WAL).<p>In our testing, our usage of SQLite vastly outperformed an identical schema on top of SQL Server. It is my understanding that something about not having to take a network hop and being able to directly invoke the database methods makes a huge difference. Are you able to execute queries and reliably receive results within microseconds with your current database setup?<p>Sure, there is no way we are going to be able to distribute/cluster our product by way of our database provider alone, but this is a constraint we decided was worth it, especially considering all of the other reduction in complexity you get with single machine business systems. I am aware of things like DQLite/RQLite/et.al., but we simply don't have a business case that demands that level of resilience (and complexity) yet.<p>Some other tricks we employ - We do not use 1 gigantic SQLite database for the entire product. It's more like a collection of microservices that live inside 1 executable with each owning an independent SQLite database copy. So, we would have databases like Users.db, UserSessions.db, Settings.db, etc. We don't have any use cases that would require us to write some complex reporting query across multiple databases.
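The setup described above (WAL mode plus small per-service database files) takes only a few lines to wire up. A minimal sketch using Python's stdlib sqlite3; the file name and timeout value are illustrative assumptions, not the commenter's actual configuration:

```python
import os
import sqlite3
import tempfile

def open_service_db(path):
    """Open one per-service database (e.g. Users.db) with WAL enabled."""
    con = sqlite3.connect(path)
    # WAL lets readers proceed concurrently while a single writer
    # appends to the write-ahead log.
    con.execute("PRAGMA journal_mode=WAL")
    # Wait up to 5s for a competing writer instead of failing immediately.
    con.execute("PRAGMA busy_timeout=5000")
    return con

path = os.path.join(tempfile.mkdtemp(), "Users.db")
con = open_service_db(path)
mode = con.execute("PRAGMA journal_mode").fetchone()[0]
```

Each service would call `open_service_db` on its own file (Users.db, UserSessions.db, ...), which keeps lock contention isolated per database.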
I believe SQLite is about to explode in usage into areas it’s not been used in before.<p>SQL.js[0] and the incredible “Absurd SQL”[1] are making it possible to build PWAs and hybrid mobile apps with a local SQL db. Absurd SQL uses IndexedDB as a block-store fs for SQLite, so you don’t have to load the whole db into memory, and you get atomic writes.<p>Also I recently discovered the Session Extension[2], which would potentially enable offline distributed updates with eventual consistency!<p>I can imagine building a SaaS app where each customer has a “workspace”, each a single SQLite db, and a hybrid/PWA app which either uses a local copy of the SQLite db synced with the session extension or uses a serverless backend (like CloudFlare workers) where a lightweight function performs the db operations. I haven’t yet found a nice way to run SQLite on CloudFlare workers, it needs some sort of block storage, but it can’t be far off.<p>0: <a href="https://sql.js.org/" rel="nofollow">https://sql.js.org/</a><p>1: <a href="https://github.com/jlongster/absurd-sql" rel="nofollow">https://github.com/jlongster/absurd-sql</a><p>2: <a href="https://www.sqlite.org/sessionintro.html" rel="nofollow">https://www.sqlite.org/sessionintro.html</a>

I've always thought it interesting that there was a time when large(ish) websites were hosted on servers that would struggle to outperform a modern smart toaster or wristwatch, and yet modern web applications tend to demand a dramatic distributed architecture. I like the examples in this article showing what a single modern server can do when you're not scaling to Google's level.<p>As an aside, what about distributed derivatives of sqlite, like rqlite, as a response to the criticism that sqlite requires your database server to also be your web server? Could something like rqlite also provide a way for an sqlite database to grow into a distributed cluster at a later point?<p><a href="https://github.com/rqlite/rqlite" rel="nofollow">https://github.com/rqlite/rqlite</a>
There are some important things that SQLite does not do.<p>It is not client/server; a process must be able to fopen() the database file. NFS and SMB are options that can convey access to remote systems, but performance will not likely be good.<p>Only a single process can write to the database at any time; it does not support concurrent writers.<p>The backup tools do not support point-in-time recovery to a specific past time.<p>If your application can live with these limitations, then it does have some wonderful features.
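The single-writer limitation is easy to see in practice. A hedged sketch using Python's stdlib sqlite3, in which a second connection fails to start a write transaction while the first holds the write lock (names and values are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
# isolation_level=None: manage transactions by hand.
# timeout=0: fail fast instead of waiting in the busy handler.
a = sqlite3.connect(path, timeout=0, isolation_level=None)
b = sqlite3.connect(path, timeout=0, isolation_level=None)
a.execute("CREATE TABLE t(x)")

a.execute("BEGIN IMMEDIATE")      # connection a takes the write lock
a.execute("INSERT INTO t VALUES (1)")

try:
    b.execute("BEGIN IMMEDIATE")  # a second writer cannot acquire it
    locked = False
except sqlite3.OperationalError:  # "database is locked"
    locked = True

a.execute("COMMIT")
```

In real applications the usual mitigation is a nonzero busy timeout, so the second writer queues briefly rather than erroring, but only one writer ever proceeds at a time.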
I don't doubt the power of SQLite, but it's difficult to see why it's worth using over Postgres anyway.<p>This is what it takes to run a basic postgres database on my own PC (in a docker compose file):<p><pre><code>postgres:
  image: postgres:12.7
  container_name: postgres
  environment:
    - PGDATA=/var/lib/postgresql/data/pgdata
    - POSTGRES_PASSWORD=<pw>
  volumes:
    - ./volumes/postgres/:/var/lib/postgresql/data/
</code></pre>
For someone who's completely allergic to SSH and linux, a managed Postgres service will take care of all that too.<p>SQLite seems simple in that it's "just a file". But it's not. You can't pretend a backup is just copying the file while the DB is operating and expect it to be consistent. You can't put the file on NFS, have multiple writers, and expect it to work. You can't use complex datatypes or have the database catch simple type errors for you. It's "simple" in precisely the wrong way - it looks simple, but actually using it well is not simple. It doesn't truly reduce operational burden; it only hides it until you find that it matters.<p>Similarly, postgres is not automatically complex simply because it _can_ scale. It really is a good technology that can be simple at small scale yet complex if you need it.
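On the backup caveat: SQLite does ship an online-backup API that takes a transactionally consistent snapshot of a live database, exposed in Python as `Connection.backup`. A minimal sketch, with table and file names purely illustrative:

```python
import os
import sqlite3
import tempfile

src = sqlite3.connect(os.path.join(tempfile.mkdtemp(), "app.db"))
src.execute("CREATE TABLE users(name TEXT)")
src.execute("INSERT INTO users VALUES ('alice')")
src.commit()

# A plain file copy of a live database can capture a torn state; the
# backup API copies page-by-page under SQLite's locking protocol and
# yields a consistent snapshot instead.
dst = sqlite3.connect(os.path.join(tempfile.mkdtemp(), "backup.db"))
with dst:
    src.backup(dst)

rows = dst.execute("SELECT name FROM users").fetchall()
```

So the "it's just a file" criticism stands for naive `cp`, but consistent hot backups are available without stopping the application.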
I use SQLite exclusively on a high performance crypto sniper project - <a href="https://bsctrader.app" rel="nofollow">https://bsctrader.app</a> - and I could not be happier with it.<p>It performs much better than postgres in terms of query latency, which is ultra important for the domain we operate in.<p>I take machine-level backups every 2 hours, so in the event of an outage, I just boot the disk image on a new vm and it's off.<p>I would never do this at my professional job due to the stigma, but for this side project, it has been incredible.
Am I the only one who thinks SQLite is still too complicated for many programs? Maybe it's just the particular type of software I normally work on, which tends towards small, self-hosted networking services[0] that would often have a single user, or maybe federated with <100 users. These programs need a small amount of state for things like tokens, users accounts, and maybe a bit of domain-specific things. This can all live in memory, but needs to be persisted to disk on writes. I've reached for SQLite several times, and always come back to just keeping a struct of hashmaps[1] in memory and dumping JSON to disk. It's worked great for my needs.<p>Now obviously if I wanted to scale up, at some point you would have too many users to fit in memory. But do programs at that scale actually need to exist? Why can't everyone be on a federated server with state that fits in memory/JSON? I guess that's more of a philosophical question about big tech. But I think it's interesting that most of our tech stack choices are driven by projects designed to work at a scale most of us will never need, and maybe nobody needs.<p>As an aside, is there something like SQLite but closer to my use cases? So I guess like the nosql version of SQLite.<p>[0]: <a href="https://boringproxy.io/" rel="nofollow">https://boringproxy.io/</a><p>[1]: <a href="https://github.com/boringproxy/boringproxy/blob/master/database.go" rel="nofollow">https://github.com/boringproxy/boringproxy/blob/master/datab...</a>
I have tried adopting SQLite in my side projects. The problem I encountered is that using managed PostgreSQL/MySQL is still more convenient and more reliable than using SQLite on a bare metal VPS. I like to use Heroku or the Digital Ocean App platform because I want to spend time creating and not managing the infrastructure (ci/cd, ssl certs, reverse proxy, db backup, scaling, container management and what not). I tried looking for a managed SQLite but could not find one. On an unrelated note, I found Redis a good lightweight alternative to the classic psql/MySQL. Although it's still multi-tier and more difficult to model data in, it's initially cheaper and easier to manage than its relational counterparts. Has anyone had a similar setup/preference?
In the past I had a website with hundreds of gigs of data that needed updating regularly, but could be read-only from the web server's perspective.<p>I used sqlite for that, and had a mysql server for the user data and stuff that needed to be written to. Performance was fantastic, users were happy, and data updates were instantaneous: copy the new data to the server, then repoint a symlink.<p>Most of my work is modeling and simulation. Sqlite is almost always my output format; one case per database is really natural and convenient, both for analysis and run management.<p>Anyway. Sqlite is amazing.
I'm exactly at a point where I'm considering SQLite for its <i>single file db</i> advantage, but I'm struggling to find solutions for my use case.<p>I need to import some 30k JSONs of external monitor data from Lunar (<a href="https://lunar.fyi" rel="nofollow">https://lunar.fyi</a>) into a normalized form so that everyone can query it.<p>I'd love to get this into a single SQLite file that can be served and cached through CDN and local browser cache.<p>But is there something akin to Metabase that could be used to query the db file <i>after</i> it was downloaded?<p>I know I could have a Metabase server that could query the SQLite DB on my server, but I'd like the db and the queries to run locally for faster iteration and less load on my server.<p><i>Besides, I'm reluctant to run a public Metabase instance given the log4j vulnerabilities that keep coming.</i>
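One possible shape for such an import, sketched with Python's stdlib. The record fields here are hypothetical stand-ins, since the actual Lunar diagnostics schema isn't shown:

```python
import sqlite3

# Hypothetical shape of one monitor-data record; the real Lunar
# field names will differ.
records = [
    {"serial": "ABC123", "vendor": "Dell", "features": ["ddc", "gamma"]},
    {"serial": "XYZ789", "vendor": "LG", "features": ["ddc"]},
]

con = sqlite3.connect(":memory:")  # would be a file served via CDN
con.executescript("""
    CREATE TABLE monitor(id INTEGER PRIMARY KEY, serial TEXT, vendor TEXT);
    CREATE TABLE feature(monitor_id INTEGER REFERENCES monitor(id), name TEXT);
""")
for rec in records:
    cur = con.execute("INSERT INTO monitor(serial, vendor) VALUES (?, ?)",
                      (rec["serial"], rec["vendor"]))
    # Normalize the nested list into a child table keyed by rowid.
    con.executemany("INSERT INTO feature VALUES (?, ?)",
                    [(cur.lastrowid, f) for f in rec["features"]])
con.commit()

ddc_count = con.execute(
    "SELECT COUNT(*) FROM feature WHERE name = 'ddc'").fetchone()[0]
```

The resulting single file can then be shipped as a static asset; in-browser querying of it is exactly the niche tools like sql.js target.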
I've had great success using SQLite as both a desktop application file format and web server database. I'll mention just one thing I like about it in the desktop application realm: undo/redo is implemented entirely within SQLite using in-memory tables and triggers following this as a starting point: <a href="https://www.sqlite.org/undoredo.html" rel="nofollow">https://www.sqlite.org/undoredo.html</a><p>It's not perfect, but it fills the niche nicely.
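The undoredo.html recipe boils down to triggers that log inverse SQL statements into an in-memory table, then replaying them to undo. A heavily simplified sketch covering only INSERT, with illustrative table names (the real recipe also handles UPDATE, DELETE, and undo barriers):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE note(id INTEGER PRIMARY KEY, body TEXT)")
# In-memory log of inverse SQL statements, per the sqlite.org recipe.
con.execute("CREATE TEMP TABLE undolog(seq INTEGER PRIMARY KEY, sql TEXT)")
con.execute("""
    CREATE TEMP TRIGGER note_insert AFTER INSERT ON note BEGIN
        INSERT INTO undolog(sql) VALUES
            ('DELETE FROM note WHERE id=' || new.id);
    END
""")

con.execute("INSERT INTO note(body) VALUES ('draft')")
undo_sql = con.execute(
    "SELECT sql FROM undolog ORDER BY seq DESC LIMIT 1").fetchone()[0]
con.execute(undo_sql)          # replay the inverse statement: undo
remaining = con.execute("SELECT COUNT(*) FROM note").fetchone()[0]
```

Because the log lives in a TEMP table, it vanishes with the connection, which is usually what a desktop app wants for per-session undo.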
I love to see that more projects are using SQLite as their main database.<p>One thing that I always wondered though: does anyone know a big project/service that uses Golang and is backed by SQLite? This is because SQLite would require CGO, and CGO generally adds extra complexity and performance costs. I wonder how big Golang applications fare with this.
More info about WAL mode concurrency [0]<p>No reader-writer lock. Still only 1 concurrent writer, but write via append to WAL file is cheaper. Can adjust read vs write performance by syncing WAL file more or less often. Can also increase performance with lower durability by not syncing WAL file to disk as often<p><a href="https://www.sqlite.org/wal.html" rel="nofollow">https://www.sqlite.org/wal.html</a>
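The durability/performance knobs described above are single pragmas. A short sketch (the checkpoint interval is an arbitrary example value, not a recommendation):

```python
import os
import sqlite3
import tempfile

con = sqlite3.connect(os.path.join(tempfile.mkdtemp(), "app.db"))
con.execute("PRAGMA journal_mode=WAL")
# NORMAL syncs the WAL only at checkpoints: commits get cheaper, and a
# power loss can drop the most recent commits but never corrupts the db.
con.execute("PRAGMA synchronous=NORMAL")
# Checkpoint (fold the WAL back into the main file) every ~1000 pages.
con.execute("PRAGMA wal_autocheckpoint=1000")

# PRAGMA synchronous reads back as an integer: 1 means NORMAL.
sync_level = con.execute("PRAGMA synchronous").fetchone()[0]
```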
SQLite is great, but it's not the simple drop-in replacement for DB servers that HN often suggests it is.<p>My team at work has adopted it and generally likes it, but the biggest hurdle we've found is that it's not easy to inspect or fix data in production the way we would with postgres.
Nim forum has used SQLite as its db since 2012, and it fits the article’s use case perfectly. The code is available and it can be used to run a Discourse-inspired forum (although much less featured).<p><a href="https://github.com/nim-lang/nimforum" rel="nofollow">https://github.com/nim-lang/nimforum</a>
Shout out to litestream[0] for backups<p>[0] <a href="https://github.com/benbjohnson/litestream" rel="nofollow">https://github.com/benbjohnson/litestream</a>
My 2 cents on sqlite:<p><a href="https://corecursive.com/066-sqlite-with-richard-hipp/" rel="nofollow">https://corecursive.com/066-sqlite-with-richard-hipp/</a><p>An interview with the creator, Richard Hipp - for a better and deeper understanding of the pitch they took, the industries they were in, and their approach to overcoming the db world they saw in front of them. See the obstacles and the solutions, and how that underestimated 'sqlite' came to power a good chunk of all the mobile actions triggered by your apps - but just read the interview; I cannot reproduce the drama here in my own words.
I've recently used SQLite for my personal project rigfoot.com<p>It's a small, "read only" website (at least for now), with just a bunch of daily visitors - a perfect use case for SQLite.<p>Funny thing is that in my case the database is so small that it's pushed directly to the repo.<p>Especially for startups and little projects, SQLite is your best friend.
Don’t consider SQLite for cloud based webservers.<p>The supposed 10-50x query latency improvement is a scant upside, likely to be worth little. In the extreme this is low single-digit milliseconds, so it will be dwarfed by network hops.<p>In return for the above, you’ve coupled your request handler and its state, so you won’t be able to treat services as ephemeral. Docker and Kubernetes, for instance, become difficult. You now require a heap of error-prone gymnastics to manage your service.<p>If the query latency really matters, use an in-memory db such as Redis.<p>SQLite is great for embedded systems where you’re particularly hardware constrained and know your machine in advance. It could also be a reasonable option for small webservers running locally. For anything remote, or with the slightest ambition of scale, using SQLite is very likely a bad trade-off.
This excellent article doesn't even mention rqlite, which will synchronize an arbitrary number of SQLite instances using the Raft protocol.<p>There must be some scaling limits to encounter using this combination, but wouldn't you love to have that problem?
I wrote a script to find all csv files in a directory and figure out the best way to load them into SQLite.<p>It gives me a handy way to run queries on data.<p>I tried to make it super smart too and really make educated guesses on what data types to use and even linking foreign keys.
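A stripped-down sketch of that kind of loader, guessing INTEGER, then REAL, then TEXT per column. The inference here is deliberately crude and omits the foreign-key linking the commenter describes:

```python
import csv
import io
import sqlite3

def guess_type(values):
    """Crude column-type inference: INTEGER, then REAL, else TEXT."""
    try:
        [int(v) for v in values]
        return "INTEGER"
    except ValueError:
        try:
            [float(v) for v in values]
            return "REAL"
        except ValueError:
            return "TEXT"

def load_csv(con, table, text):
    rows = list(csv.reader(io.StringIO(text)))
    header, data = rows[0], rows[1:]
    # Inspect each column's values to pick a storage type.
    types = [guess_type(col) for col in zip(*data)]
    cols = ", ".join(f"{h} {t}" for h, t in zip(header, types))
    con.execute(f"CREATE TABLE {table} ({cols})")
    con.executemany(
        f"INSERT INTO {table} VALUES ({', '.join('?' * len(header))})", data)

con = sqlite3.connect(":memory:")
load_csv(con, "trips", "city,miles\nBoston,12.5\nAustin,3.0\n")
total = con.execute("SELECT SUM(miles) FROM trips").fetchone()[0]
```

A real version would also glob a directory for `*.csv`, sanitize identifiers, and handle empty cells; this only shows the core type-guessing idea.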
<i>Postgres is 9.5× slower when running on the same machine as the one doing the query</i><p>I'm surprised by this; sure, in-process is always going to be faster, but I still find it hard to believe that sqlite can beat postgres on a single machine.
Just a note that there are significant features of SQLAlchemy that don’t work with SQLite such as ARRAY columns, UUID primary keys and certain types of foreign key constraints.
I've been doing some ETL exploration with opening a sqlite :memory: connection, ingesting small-to-medium data, and then doing<p>"VACUUM INTO somefile.sqlite;"<p>to dump the RAM copy to disk.<p>What a great tool.
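That workflow is only a few lines end to end; a sketch with made-up sample data (VACUUM INTO needs SQLite >= 3.27):

```python
import os
import sqlite3
import tempfile

mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE events(id INTEGER, payload TEXT)")
mem.executemany("INSERT INTO events VALUES (?, ?)",
                [(i, f"row-{i}") for i in range(100)])
mem.commit()

# VACUUM INTO writes a compact, defragmented copy of the whole
# in-memory database to disk in a single statement.
out = os.path.join(tempfile.mkdtemp(), "somefile.sqlite")
mem.execute(f"VACUUM INTO '{out}'")

count = sqlite3.connect(out).execute(
    "SELECT COUNT(*) FROM events").fetchone()[0]
```

Ingesting into :memory: keeps the hot loop off the disk entirely, and the final file is as compact as a freshly vacuumed database.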
I have so much love for SQLite. Consider me, getting my first internship at a startup. They have a bunch of contractors doing work for them as part of their service. The whitelabel application they got developed would export data in CSV. My job was to take that data and get some meaning from it. Data included availability, locations, etc (imagine data about delivery drivers). I had no idea what to do but realized I could definitely parse this CSV with Python.<p>Once I had this data in Python I needed a way to analyze it. I had never worked with databases but decided to install a local copy of SQLite. The rest is history. I feel like I learned how to use databases in an organic way: by looking for a solution from raw data. A couple of queries later and python was exporting excel sheets with color-coded boxes that indicated something based on the analysis I did.<p>Of course this could be done with any database application, but the lightweight nature of sqlite allowed me to prototype a solution so easily. We just backed up that native sqlite dump to the cloud and had an easy (super easy) solution for analyzing raw data.
I love this approach. I've run sqlite in production for a variety of products over the years and for the most part it's great. One "backup" solution I did was dump tables to text and commit the diff in git. Taking file copies while transactions are in progress can lead to corrupted DBs.
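That dump-to-text route is available directly from Python: `Connection.iterdump` yields the same SQL-text format as the CLI's `.dump` command, which is stable, diffable, and git-friendly. A minimal sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cfg(key TEXT, value TEXT)")
con.execute("INSERT INTO cfg VALUES ('theme', 'dark')")
con.commit()

# iterdump() yields the database as SQL text, suitable for committing
# to git and diffing between backups.
dump = "\n".join(con.iterdump())

# Restoring is just replaying the script into a fresh database.
restored = sqlite3.connect(":memory:")
restored.executescript(dump)
value = restored.execute(
    "SELECT value FROM cfg WHERE key = 'theme'").fetchone()[0]
```

Unlike a raw file copy, the dump is produced through a normal read transaction, so it is consistent even while other writers are active.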
Before you start throwing your opinions into HN about sqlite, please read: <a href="https://corecursive.com/066-sqlite-with-richard-hipp/?utm_source=pocket_mylist" rel="nofollow">https://corecursive.com/066-sqlite-with-richard-hipp/?utm_so...</a>
Also, consider just using the filesystem with binary encoding. Serialize / deserialize directly from / to data structures. It is faster and simpler than any database you'll ever use, assuming you don't need the functionality a database provides.
Is anyone here running production workloads which perform read and write operations on a remote SQLite database?<p>Currently using Postgres and I'm open to switching but I haven't seen any libraries or implementations of SQLite being used as a client/server database.
I happily use SQLite via `sqflite`, a Flutter library, to store and retrieve data for an offline-first mobile app. This is my first time really using it, and I’m quite pleased with the experience and the familiar feel of using SQL.
SQLite database can be stored in git which seems like a great benefit. But I wonder would it also be possible to have different "branches" of the database and then merge them at some point?
Wow. So much tech - sqlite this time - and so many opinions to discuss with words/characters. Maybe I could create my own alphabet to have some peace of mind in the end? I don't think so - somebody would make a story of it - wait ... happened!<p>Read the interview with the creator of sqlite for more information on all the running topics that make you go 'Uh' up to this point in time: <a href="https://corecursive.com/066-sqlite-with-richard-hipp/" rel="nofollow">https://corecursive.com/066-sqlite-with-richard-hipp/</a>