I don't really work with SQLite much (I mainly work with MySQL and in-memory structures, like hashes), but here are a few observations that kept me from using it:<p>1. It uses a lot of I/O unnecessarily. (Actually, necessarily, given their motto, but 99% of developers aren't aware that this isn't required.) SQLite-based applications often make my hard drive squeak like it's on its last spin - SQLite does A LOT of HDD activity (Chrome, Firefox, and AdWords Editor are notorious for that). It's SQLite's motto - 100% consistency first! Journal everything! But that's my HDD, and I don't want it dead because SQLite thinks it's a great idea to write every byte it receives to insure me against a 0.01% chance of data loss.<p>2. In head-to-head runs of MySQL vs. SQLite in real-world applications, SQLite loses badly - and I mean badly. I use a lot of hash-style databases (i.e. id-value storage, two columns only); in some of my tests SQLite came out 10 times slower than MySQL (again, if it didn't try to journal everything - the way MySQL with MyISAM doesn't - it would've stood a chance).<p>That being said, in-memory SQLite tables are very performant! Very! But almost nobody uses them. As a result, Chrome/Firefox downloads a phishing-list update and my HDD lights up like a Christmas tree (download FileMon from SysInternals and watch how much Chrome or another SQLite-based engine hits your HDD for simple data that MUST be in memory - or at least fronted by an in-memory cache, like in MySQL - so your users' HDDs eventually die from this). I know it's not developers' job to worry about their software doing unnecessary I/O and causing early hard-drive replacement, but still, that's a big reason why I avoid SQLite.<p>I mean, there are a lot of in-memory structures that can (and should) be used, but once a developer sees how convenient SQLite is, he jumps right on it. I mean..
99% of the time SQLite should be used like this: load data from disk into a :memory: table (i.e. replicate it to memory), work with :memory:, and periodically dump it back to disk (say, once every couple of minutes, plus on the "Save" command). Instead, every popular program uses disk-based tables. Why? Because it's the lazy way. Who wants to add in-memory pre-caching when you can just open a table from disk in fewer commands?<p>It's sad to see the current state of programming. I come from an era where programs had to fit into 64 KILObytes of memory on a 3.5 MHz CPU - about 1000 times slower than a modern single core - the ZX Spectrum... and we had whole huge games on it, running in real time, with color graphics and almost unlimited galaxies (Elite). It's just sad to see how available resources are used these days. People had to use a lot of tricks to make things run at acceptable speeds... And now? It's 3 GHz and gigabytes of RAM, yet I have to wait 10 seconds for Word (or Firefox) to start up?<p>My main points are: 1) SQLite is convenient; 2) disk-based SQLite (which is what 99% or more of software uses) is nowhere near the possible performance of in-memory; 3) disk-based SQLite puts a heck of a load on the HDD (fire up FileMon during your Chrome session and be amazed); 4) SQLite is nowhere near a one-size-fits-almost-all solution, the way mp3 is; 5) even so, it will keep gaining popularity because it's convenient for developers.<p>I have a MySQL table with 72 billion denormalized items (72 million rows) in a real-time application... and it performs! I can't even imagine what SQLite would be like with that kind of data.<p>Still, please note that I'm not saying SQLite is bad - I'm saying the default route almost all developers take with it has some serious issues, and yet nobody seems to care.
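For what it's worth, the load-to-memory / periodic-flush pattern I'm describing can be sketched in a few lines with Python's sqlite3 module, using `Connection.backup` (available since Python 3.7) to copy a whole database between connections. The file name `app.db` and the two-column `kv` table here are made-up examples, not anything from a real application:

```python
import sqlite3

DB_PATH = "app.db"  # hypothetical application database file

# Load: disk -> :memory:. Connection.backup copies the source connection's
# database into the target connection, so after this the whole DB lives in RAM.
mem = sqlite3.connect(":memory:")
disk = sqlite3.connect(DB_PATH)
disk.backup(mem)
disk.close()

# All reads and writes now touch RAM only - no per-statement journal I/O.
mem.execute("CREATE TABLE IF NOT EXISTS kv (id INTEGER PRIMARY KEY, value TEXT)")
mem.execute("INSERT INTO kv (value) VALUES ('hello')")
mem.commit()  # make the changes visible to the backup below

def flush_to_disk():
    """Dump: :memory: -> disk. Call every couple of minutes and on 'Save'."""
    disk = sqlite3.connect(DB_PATH)
    mem.backup(disk)
    disk.close()

flush_to_disk()
```

The trade-off is explicit: you lose per-statement durability (a crash loses everything since the last flush), which is exactly the insurance I argued most desktop apps don't need for cache-like data.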