I evaluated D1 for a project a few months ago and found that global performance was pretty terrible. I don't know exactly what the issue with their architecture is, but if you look at the time-to-first-byte numbers here[1], you can see that even for the D1 demo database the numbers outside Europe are abysmal, and even within Europe a TTFB of > 200ms isn't great.<p>This post helps frontend developers understand some basic DB pitfalls, but I wouldn't use D1 regardless. If you can figure out how to use D1 as a frontend dev, you can use a hosted Postgres solution and get much more power and performance.<p>[1] <a href="https://speedvitals.com/ttfb-test?url=https://northwind.d1sql.com/products?_data=routes/products" rel="nofollow">https://speedvitals.com/ttfb-test?url=https://northwind.d1sq...</a>
The optimizations listed in the article address pitfalls common to all serverless databases. Unless you are super diligent about writing your queries, it's going to be costly. The only real applications I've found so far are small projects that need fewer than 5 tables and no JOINs. That means projects like page visitor counts, mailing lists, and website pageview tracking are a perfect fit for serverless databases.<p>I used Mongo serverless a few years ago when it was first released; I didn't know how the pricing worked, so I wasn't aware how much full table scans would cost me, even on a small collection with 100k records...<p>For example, in logdy.dev[1] I'm using D1 to collect all of the things listed above, and it works like a charm with Cloudflare Workers (a minimal sketch of this setup follows the links below).
Just last week I published a post on how to export D1 data and analyze it with Metabase[2]; in the next post I think I'm going to describe the whole stack.<p>[1]<a href="https://logdy.dev/" rel="nofollow">https://logdy.dev/</a>
[2]<a href="https://logdy.dev/blog/post/how-to-connect-cloudflare-d1-sqlite-database-with-metabase-for-powerful-analytics" rel="nofollow">https://logdy.dev/blog/post/how-to-connect-cloudflare-d1-sql...</a>
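To give a concrete picture, here is a minimal sketch of the kind of Worker code this boils down to; the DB binding name and the pageviews schema are just illustrative, not lifted from my actual setup:<p><pre><code> // Minimal sketch (assumed names): a Worker that records one pageview per request.
// Hypothetical schema: CREATE TABLE pageviews (path TEXT, ts INTEGER);
export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise&lt;Response&gt; {
    const path = new URL(request.url).pathname;
    // One parameterized INSERT per request: no JOINs, no table scans,
    // which is exactly the kind of workload that stays cheap on D1.
    await env.DB.prepare("INSERT INTO pageviews (path, ts) VALUES (?1, ?2)")
      .bind(path, Date.now())
      .run();
    return new Response("ok");
  },
};
</code></pre>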
Another fun limitation is that a transaction cannot span multiple D1 requests, so you can't select from the database, execute application logic, and then write back to the database atomically. At most, you can combine multiple statements into a single batch request that is executed atomically.<p>When I needed to ensure atomicity in such a multi-part "transaction", I ended up making a batch request whose first statement checks a precondition and forces a JSON parsing error if it is not met, aborting the remaining statements in the batch:<p><pre><code> SELECT
  -- If <precondition> is false, json_extract() is applied to a plain string
  -- that is not valid JSON, which raises an error and aborts the whole batch.
  IIF(<precondition>, 1, json_extract('inconsistent', '$')) AS consistent
FROM ...
</code></pre>
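For concreteness, this is roughly how the trick looks from the Worker side; the inventory table, its columns, and the DB binding below are made up for illustration:<p><pre><code> // Both statements run in a single atomic D1 batch; if the guard statement
// errors, D1 rolls back and the UPDATE is never applied.
const itemId = 42;
const quantity = 3;
await env.DB.batch([
  // Guard: when the precondition fails, json_extract() runs on a plain
  // non-JSON string, throws, and aborts the whole batch.
  env.DB.prepare(
    "SELECT IIF((SELECT stock FROM inventory WHERE id = ?1) >= ?2, " +
    "1, json_extract('inconsistent', '$')) AS consistent"
  ).bind(itemId, quantity),
  // The actual write, only committed if the guard passed.
  env.DB.prepare("UPDATE inventory SET stock = stock - ?2 WHERE id = ?1")
    .bind(itemId, quantity),
]);
</code></pre>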
I was lucky here. For anything more complex, one would probably need to create tables to store temporary values, and translate a lot of application logic into SQL statements to achieve atomicity.