A couple of extra points I've found useful:<p>The first test of a class of tests is the hardest, but it's almost always worth adding. Second and subsequent tests are much easier, especially when using:<p>Parametrised tests. They let you test more things without copying boilerplate, but don't throw in more variants <i>just</i> to get the count up. Having said that:<p>Exhaustive validation of constraints, when it's feasible. We have ~100k tests of our translations for one project, validating that every string/locale pair can be resolved (see the sketch below). Three lines of code, two seconds of wall-clock time, and we know that everything works. If there are too many variants to run them all, then:<p>Property-based tests, if you can get into them. Again, validate consistency and invariants.<p>And make sure that you're actually testing what you think you're testing by using mutation testing. It's great for gaining some confidence that your tests actually catch failures.
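<p>For the translation check, here's a minimal sketch of how that kind of exhaustive validation might look with pytest's parametrisation. The module and names (myapp.i18n, MESSAGE_KEYS, LOCALES, resolve) are hypothetical stand-ins for whatever your i18n layer exposes:<p><pre><code># Exhaustively check that every message key resolves in every locale.
# MESSAGE_KEYS, LOCALES and resolve() are placeholder names.
import itertools

import pytest

from myapp.i18n import MESSAGE_KEYS, LOCALES, resolve  # hypothetical module


@pytest.mark.parametrize("key,locale", itertools.product(MESSAGE_KEYS, LOCALES))
def test_every_message_resolves(key, locale):
    assert resolve(key, locale)  # fails on any missing or broken pair
</code></pre><p>The matrix grows automatically as strings and locales are added, and the actual test logic stays at a handful of lines.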
The testing pyramid makes sense. The problem for (perhaps) a lot of us is that we're working on things where some or even all of the levels are missing, and we have to try to bring them to a sensible state as fast as possible... but we have limited ability to do so. It's managing imperfection.<p>We're also possibly working with multiple teams on products that interact, and it ends up being "nobody's job" to fill in the e2e layer, for example.<p>Then, when someone bites the bullet to get on with it... the whole thing isn't designed to be tested. E.g. how does anyone do testing with Auth0 as their auth mechanism? How do you even get a token to run an E2E-type test? I have to screen-scrape it, which is awful (one less-awful option is sketched below).<p>Without those E2E tests - even just a test that you can log in - the system can break, and even when it's only a test environment, that makes the environment useless for other developers and gets in everyone's way. It becomes the victim's job to debug which change broke login and push the perpetrator to fix it. With automated e2e tests, the deployment that broke something is easy to see and roll back before it does any damage.<p>I suppose I'm challenging the focus in a sense - I care about e2e more because some of those issues block teams from working. If you can't work because of some stupid e2e failure, you can't get fixes out for issues you found in the unit/integration tests.
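<p>The one alternative to scraping I'm aware of is asking Auth0 for a token directly via its /oauth/token endpoint using the Resource Owner Password grant, assuming that grant can be enabled on a dedicated test application and a test user exists - so it isn't always an option. A rough sketch (all environment variable names are placeholders):<p><pre><code># Fetch an access token for E2E tests via the Resource Owner Password grant.
# Requires the grant to be enabled for a test application in the Auth0 tenant;
# whether that is acceptable depends on your security requirements.
import os

import requests


def get_test_token() -> str:
    resp = requests.post(
        f"https://{os.environ['AUTH0_DOMAIN']}/oauth/token",
        data={
            "grant_type": "password",
            "username": os.environ["E2E_TEST_USER"],
            "password": os.environ["E2E_TEST_PASSWORD"],
            "client_id": os.environ["AUTH0_CLIENT_ID"],
            "client_secret": os.environ["AUTH0_CLIENT_SECRET"],
            "audience": os.environ["AUTH0_AUDIENCE"],
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
</code></pre>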
I also enjoyed this in video format: <a href="https://youtu.be/JLlIAWjvHxM?feature=shared&t=2049" rel="nofollow">https://youtu.be/JLlIAWjvHxM?feature=shared&t=2049</a><p>I've always been envious of the performance-testing setup shown there.
Thanks for the post... I did not know about this.<p>I just went through the website and read some of the documents. It's quite easy to read and understand.<p>One part I couldn't follow was security - other than the items listed (file system, HTTPS, IP-based filtering), is it correct to say that if you know and have access to the API endpoints, any query can be run directly against the db with curl or similar tools (roughly as sketched below)? How is this aspect managed in production? Sorry if this question is inappropriate or too dumb.
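<p>To be concrete, this is the kind of thing I have in mind; the endpoint path and payload shape are made up, not the project's actual API:<p><pre><code># Illustration of the concern: an unauthenticated client that can reach the
# HTTP API runs arbitrary SQL. The URL and payload shape are hypothetical.
import requests

resp = requests.post(
    "http://db.example.internal:4001/query",  # hypothetical exposed endpoint
    json={"sql": "SELECT * FROM users"},
    timeout=5,
)
print(resp.status_code, resp.text)
</code></pre>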
Now that there is testing of the kind described in <a href="https://turso.tech/blog/introducing-limbo-a-complete-rewrite-of-sqlite-in-rust" rel="nofollow">https://turso.tech/blog/introducing-limbo-a-complete-rewrite...</a> going on, the testing described in the linked piece honestly doesn't seem that impressive.
1. Is this a one-man project? Why? What if the author dies?<p>2. Why Go? Go is garbage collected; how is that even a good idea for a database engine in the first place?