> bundling an 80MB+ SQLite file to our codebase slowed down the entire Github repository and hindered us from considering more robust hosting platforms

This seems like a decent reason to stop committing the database to GitHub, but not a reason to move off SQLite.

If you have a small, read-only workload, SQLite is very hard to beat. You can embed it ~everywhere without any network latency.

I'm not sure why they wouldn't just switch to uploading it to S3. Heck, if you really want a vendor involved, that's basically what https://turso.tech/ has productized.
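For what it's worth, here's a rough sketch of the S3 approach (the bucket, key, and paths are made up, not anything from the post): pull the read-only snapshot at boot instead of committing it to the repo, then open it in read-only mode.

```python
# Rough sketch, not their code: download the SQLite snapshot from S3 at
# startup and open it read-only. Bucket/key/paths are hypothetical.
import os
import sqlite3
import boto3

BUCKET = "pricing-data"        # hypothetical bucket
KEY = "pricing.sqlite"         # hypothetical object key
LOCAL_PATH = "/tmp/pricing.sqlite"

def fetch_database() -> None:
    """Download the current snapshot once at startup (or on a schedule)."""
    s3 = boto3.client("s3")
    s3.download_file(BUCKET, KEY, LOCAL_PATH)

def open_readonly() -> sqlite3.Connection:
    """Open the local copy read-only so the app can't mutate the snapshot."""
    return sqlite3.connect(f"file:{LOCAL_PATH}?mode=ro", uri=True)

if __name__ == "__main__":
    if not os.path.exists(LOCAL_PATH):
        fetch_database()
    conn = open_readonly()
    print(conn.execute("SELECT count(*) FROM sqlite_master").fetchone())
```

Same deploy-time convenience as committing the file, without the repo bloat.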
"Overall, this migration proved to be a massive success" but their metrics shows this migration resulted in, on average, slower response times. Wouldn't this suggest the migration was not successful. Postgres can be insanely fast, and given the volume of data this post suggests, it baffles me that the performance is so bad.
What a bizarre article… performance ended up being worse, so how can that be considered a resounding success? Doesn't seem like a slam-dunk case for using Neon.
Lots of comments about the drop in performance. No matter how well you tune networked PostgreSQL, it's going to have trouble coming close to the performance you can get from a read-only 80MB SQLite file.

They didn't make this change for performance reasons.
If most queries take ~1s on a relatively small 80MB dataset, then it sounds to me like they really needed to run EXPLAIN on their most complex queries and then tune their indexes to match.

They could probably have stayed with SQLite, in fact, because most likely it's a serious indexing problem, and then found a better way to distribute the 80MB file rather than committing it to GitHub. (Although there are worse ideas, especially with LFS.)
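For the curious, a hypothetical illustration of that kind of indexing pass against a SQLite file (the quotes table and its columns are invented, not from the article):

```python
# Hypothetical illustration: check whether a slow query is doing a full
# table scan instead of using an index, then add one. Names are invented.
import sqlite3

conn = sqlite3.connect("pricing.sqlite")

query = "SELECT * FROM quotes WHERE state = ? AND effective_date >= ?"

# EXPLAIN QUERY PLAN prints "SCAN quotes" for a full table scan vs
# "SEARCH quotes USING INDEX ..." once a suitable index exists.
for row in conn.execute("EXPLAIN QUERY PLAN " + query, ("CA", "2024-01-01")):
    print(row)

conn.execute(
    "CREATE INDEX IF NOT EXISTS idx_quotes_state_date "
    "ON quotes(state, effective_date)"
)
conn.execute("ANALYZE")  # refresh planner statistics
```

An 80MB dataset with decent indexes should answer most lookups in single-digit milliseconds.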
I don't see any mention of the data size or volume of transactions. Also, your API response times were worse after you finished and optimized, and that's a success? Or are you comparing historical SQLite vs new PostgreSQL? I kinda see this more as a rewrite than a database migration (which I'm going through now, from SQL Server to PostgreSQL).
> 79.15% of our pricing operations averaged 1 second or less response time

These numbers are thrown out there like they're supposed to be impressive. They must be doing some really complex stuff to justify that. For a web server to have a p79 of 1 second is generally terrible.

> 79.01% to average 2 seconds or less

And after the migration it gets FAR worse.

I get that it's a finance product, but from what they wrote it doesn't seem like a large dataset. How is this the best performance they're getting?

Also, a migration where your p79 (p-anything) doubled is a gigantic failure in my books.

I guess latency really mustn't be critical to their product.
> Ensure database is in same region as application server

People tend to forget that using The Cloud (tm) still means there's copper between the database server and the application server, and physics still exists.
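A crude way to see this (assumes psycopg2 and a placeholder DSN, nothing from the post): time a trivial query and multiply by how many round trips one request makes.

```python
# Back-of-the-envelope latency check. The DSN is a placeholder; a chatty
# request that runs N sequential queries pays the network tax N times.
import time
import psycopg2

conn = psycopg2.connect("postgresql://user:pass@db.example.com/app")  # placeholder DSN
cur = conn.cursor()

samples = []
for _ in range(20):
    start = time.perf_counter()
    cur.execute("SELECT 1")   # trivial query: measures mostly network round trip
    cur.fetchone()
    samples.append((time.perf_counter() - start) * 1000)

print(f"median round trip: {sorted(samples)[len(samples) // 2]:.1f} ms")
# Roughly: ~1 ms same-AZ vs tens of ms cross-region. A handful of sequential
# queries at cross-region latency adds up to user-visible delay fast.
```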
The latency before/after histograms unfortunately use different scales, but it appears that, e.g., the under-200ms bucket is only a few percentage points smaller after the change, maybe 38% before and 33% after.

What I'm curious about is whether Neon can run pg locally on the app server. The company's SaaS model doesn't seem to support that, but it looks technically doable, particularly with a read-only workload.
If they had started with Elixir and Postgres from the get-go, all of this could have been avoided - including the async pains. Said another way: don't write your backend in JS, and just use Postgres.
Where is the CTO or senior technical leader in this? The team seems to be trying hard and keeping the lights on, but honestly there are several red flags here. I’m especially skeptical about the painful and complex manual process that is now 1-click. I want to hope they succeed, but this sounds awfully naive.
PSA: If you're running a business and some databases store vital customer or financial data, consider EnterpriseDB (EDB). It funds Postgres development and can be used almost as a drop-in for Oracle DBMS. And definitely send encrypted differential backups to Tarsnap for really important data.
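A loose sketch of that backup suggestion (database name, paths, and archive naming are placeholders): dump with pg_dump, then let Tarsnap's client-side encryption and block dedup make repeated archives effectively differential.

```python
# Loose sketch of the backup idea above; all names/paths are placeholders.
import subprocess
from datetime import datetime, timezone

DUMP_PATH = "/var/backups/app.dump"
ARCHIVE = "app-db-" + datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

# pg_dump custom format (-Fc) is compressed and restorable with pg_restore.
subprocess.run(["pg_dump", "-Fc", "-f", DUMP_PATH, "appdb"], check=True)

# tarsnap encrypts client-side with the local key and only uploads blocks
# it hasn't already stored, so daily archives stay cheap.
subprocess.run(["tarsnap", "-c", "-f", ARCHIVE, DUMP_PATH], check=True)
```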
Shepherd raised $13.5M earlier this year. Imagine being an investor in this company and seeing this post. They seriously wrote a lengthy post publicizing their struggles with an 80MB database and running some queries. The entire technical team at this company needs to be jettisoned.<p>These are the sort of technical struggles a high school student learning programming encounters. Not a well-funded series A startup. This is absolutely bonkers.
I wonder if DuckDB with Parquet storage on S3 (or equivalent) would have been a nice drop-in replacement. Plus, DuckDB probably would have done quite well in the ETL pipeline.
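Something like this, hypothetically (the bucket and columns are invented; it relies on DuckDB's httpfs extension):

```python
# Hypothetical sketch: query Parquet files sitting in S3 directly from
# DuckDB, no database server involved. Bucket/columns are made up.
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs; LOAD httpfs;")    # enables s3:// paths
con.execute("SET s3_region = 'us-east-1';")

rows = con.execute("""
    SELECT state, avg(premium) AS avg_premium
    FROM read_parquet('s3://pricing-data/quotes/*.parquet')
    GROUP BY state
    ORDER BY avg_premium DESC
""").fetchall()
print(rows[:5])
```

Read-only analytical lookups over a dataset this small would likely stay in-memory fast, and the same engine covers the ETL side.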
> Furthermore, bundling an 80MB+ SQLite file to our codebase slowed down the entire Github repository and hindered us from considering more robust hosting platforms.

It's... an 80MB database. It couldn't be smaller. There are local apps that have DBs bigger than that. There is no scale issue here.

And... it's committed to GitHub instead of just living somewhere. And they switched to Neon.

To me, this screams "we don't know backend and we refuse to learn".

To their credit, I will say this: they clearly were in a situation like "we have no backend, we have nowhere to store a DB, but we need to store this data, what do we do?" and someone came up with "store it in git and that way it's deployed and available to the app". That's... clever. Even if terrible.