As a non-dev intruder I have to say that I love SQLite. I do a lot of data analysis and it makes everything easy, from quick SQL wizardry to sharing the DB by just copying a file! Just how amazing is that?!<p>It must sound naive to some of you, but the first time I stumbled upon SQLite I was so excited!
For our B2B application, we've been using SQLite as the exclusive means of reading and writing important bytes to/from disk for over 3 years now.<p>We still have not encountered a scenario that has caused us to consider switching to a different solution. Every discussion that has come up regarding high availability or horizontal scaling ended at "build a business-level abstraction for coordination between nodes, with each node owning an independent SQLite datastore". We have yet to go down this path, but we now have a really good picture of how it will work for our application.<p>For the single-node-only case, there is literally zero reason to use anything but SQLite if you have full autonomy over your data and do not have near-term plans to move to a massive Netflix-scale architecture. Performance is absolutely not an argument, as properly implemented SQLite will make localhost calls to Postgres, SQL Server, Oracle, et al. look like a joke. You cannot get much faster than an in-process database engine without losing certain durability guarantees (and you can even turn those off in SQLite if you dare to go faster).
While reading the documentation for iif(), I noticed the command-line shell function edit(), which is pretty cool.<p><pre><code> UPDATE docs SET body=edit(body) WHERE name='report-15';
UPDATE pics SET img=edit(img,'gimp') WHERE id='pic-1542';</code></pre>
While I love SQLite as much as the next person (and the performance and reliability are really quite remarkable), I can’t understand all the effusive praise when you can’t do basic things like dropping columns. How do people get around this? Do you just leave columns in forever? Or go through the dance of recreating tables every time you need to drop a column?
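For what it's worth, the table-recreation dance is mechanical enough to script. A minimal sketch using Python's built-in sqlite3 module, following the procedure described in SQLite's ALTER TABLE documentation (the `docs` table and its columns are invented for illustration):

```python
import sqlite3

# Dropping a column by recreating the table: copy the columns you
# want to keep into a new table, drop the old one, rename.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE docs(id INTEGER PRIMARY KEY, body TEXT, obsolete TEXT);
    INSERT INTO docs(body, obsolete) VALUES ('hello', 'junk');

    BEGIN;
    CREATE TABLE docs_new(id INTEGER PRIMARY KEY, body TEXT);
    INSERT INTO docs_new(id, body) SELECT id, body FROM docs;
    DROP TABLE docs;                      -- old table (and column) gone
    ALTER TABLE docs_new RENAME TO docs;  -- new table takes the old name
    COMMIT;
""")
cols = [row[1] for row in con.execute("PRAGMA table_info(docs)")]
print(cols)  # ['id', 'body'] -- the obsolete column is gone
```

Any indexes, triggers, and views on the old table have to be recreated as well, which is what makes the full dance tedious in practice.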
One of the great things one can learn from SQLite is the degree to which they unit (and integration) test their source code. It's honestly the best unit test document I have read in my career to date: <a href="https://www.sqlite.org/testing.html" rel="nofollow">https://www.sqlite.org/testing.html</a>.
SQLite is great, but its decision not to have a standard datetime/timestamp datatype -- standard in all other relational databases -- has always struck me as a surprising omission, though in retrospect I kind of understand why. Datetimes are undeniably difficult.<p>So SQLite leaves the datetime storage decision to the user: either TEXT, REAL or INTEGER [1]. This means certain datetime optimizations are not available, depending on what the user chooses. If one needs to ETL data containing datetimes, a priori knowledge of the datetime encoding used in a file is needed.<p>In that sense, SQLite really is a "file format with a query language" rather than a "small database".<p>[1] <a href="https://stackoverflow.com/questions/17227110/how-do-datetime-values-work-in-sqlite" rel="nofollow">https://stackoverflow.com/questions/17227110/how-do-datetime...</a>
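To illustrate the point: the same instant can be stored as TEXT or INTEGER, and every query has to know which encoding was chosen. A small sketch using Python's built-in sqlite3 (the table, column names, and data are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events(name TEXT, created_text TEXT, created_int INTEGER)")
# One instant, encoded two of the three documented ways: ISO-8601 text
# and Unix epoch seconds.  REAL (Julian day number) would be a third.
con.execute("""
    INSERT INTO events
    VALUES ('deploy', '2020-05-22 12:00:00', strftime('%s', '2020-05-22 12:00:00'))
""")
# Range queries must match the chosen encoding:
by_text = con.execute(
    "SELECT name FROM events WHERE created_text < '2021-01-01'").fetchone()
by_int = con.execute(
    "SELECT name FROM events WHERE datetime(created_int, 'unixepoch') < '2021-01-01'"
).fetchone()
print(by_text, by_int)  # ('deploy',) ('deploy',)
```

The TEXT comparison only works because ISO-8601 strings sort chronologically; the INTEGER column needs an explicit conversion through SQLite's date functions.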
>Increase the default upper bound on the number of parameters from 999 to 32766.<p>I don't want to know the use case for this.<p>Keep rocking on, SQLite. It's the first tool I reach for when prototyping anything that needs a DB.
Thanks so much for SQLite. Amazing and stable database. Yazz Pilot (<a href="https://github.com/zubairq/pilot" rel="nofollow">https://github.com/zubairq/pilot</a>) is built on it
Is it reasonable to assume that in most current deployments of PostgreSQL or MySQL, SQLite would be at least an equally good choice?<p>I was recently choosing a database for a medium-size website and SQLite seemed like an obvious choice. One thing I was worried about was that the database locks for each write, but with the write-ahead log enabled, writes apparently no longer block readers.
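Enabling the write-ahead log is a one-line pragma. A sketch using Python's sqlite3 with a throwaway file path (WAL needs a real database file, not :memory:):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "site.db")
con = sqlite3.connect(path)
# journal_mode is persistent: set it once and the database file
# remembers it for every future connection.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # 'wal'
```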
I’ve been using SQLite on GCP for a few small projects and it seems to work well.<p>I use docker volumes to write to disk. I pass the disk directory to my process via a CLI arg.<p>When running on a VM these disk writes are replicated between zones (this is default for regional GCP disks). So you get zero config high availability (if you can tolerate down time during a reboot).
Are there resources for good practices on database schema design? I feel that what I make 'works', but I'd be curious what schemas designed by experienced people look like.<p>For example, I have an app that you upload files through. Files can be local to the server or on S3, and they have metadata. I end up making a new table for each API endpoint: a table for listing files/directories, a table for local files, a table for S3 files, then a table for the metadata, a table for the kind of file it is, etc. It works, but it feels like a heavy hammer.
A few nice little conveniences like iif(). I like reading SQLite release notes because they seem good at avoiding adding cruft.
(The refusal to implement JSONB comes to mind.) Now if only I could get my shared web host to upgrade to a recent version...
Recommendations for learning SQL with SQLite? I've recently started doing the Khan Academy videos, and am liking them, but I'd like more practice problems and explanatory text.
One possible disadvantage of SQLite is that it only allows one writer at a time (though writes don't block readers with the write-ahead log enabled). I'm really curious whether Postgres performs better at concurrent writing; I couldn't find any benchmarks. In theory, disk writes are always sequential anyway, so I'm skeptical Postgres would do substantially better.
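The single-writer limit is easy to see with two connections. This sketch uses Python's sqlite3 and disables the retry timeout so the second writer fails fast instead of waiting (paths and table names are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

w1 = sqlite3.connect(path, isolation_level=None)  # autocommit mode
w1.execute("PRAGMA journal_mode=WAL")
w1.execute("CREATE TABLE t(x)")

w2 = sqlite3.connect(path, isolation_level=None)
w2.execute("PRAGMA busy_timeout = 0")  # fail immediately instead of retrying

w1.execute("BEGIN IMMEDIATE")          # first writer takes the write lock
try:
    w2.execute("BEGIN IMMEDIATE")      # second writer cannot
    blocked = False
except sqlite3.OperationalError:       # "database is locked"
    blocked = True
w1.execute("COMMIT")

w2.execute("BEGIN IMMEDIATE")          # fine once the first writer commits
w2.execute("COMMIT")
print(blocked)  # True
```

In practice a nonzero busy_timeout makes the second writer retry for a while rather than error out, which is usually enough for moderate write concurrency.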
I have a SQLite-powered service for the free GeoNames gazetteer running in production. It's a read-only service, so it fits perfectly and provides really good performance.
I also use it to work with data coming in CSV format.
What a great piece of software!
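CSV ingestion needs nothing beyond the standard library. A sketch with made-up data (in practice the CSV would come from a file):

```python
import csv
import io
import sqlite3

# Hypothetical CSV payload with a header row.
csv_text = "name,pop\nOslo,700000\nLima,9750000\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cities(name TEXT, pop INTEGER)")
# Named parameters bind directly from the DictReader's dicts.
con.executemany("INSERT INTO cities VALUES (:name, :pop)", rows)

big = [r[0] for r in con.execute("SELECT name FROM cities WHERE pop > 1000000")]
print(big)  # ['Lima']
```

The sqlite3 command-line shell can do the same without any code at all, via `.mode csv` followed by `.import file.csv tablename`.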
Where can you use SQLite?<p>Embedded: Yes<p>Raspberry Pi: Yes<p>Mobile Apps: Yes<p>Desktop Apps: Yes<p>Microservices: Yes<p>Big Monolith: Yes<p>Browsers: No