Since the title was getting long we couldn't quite fit in that it's also open source, and that it's written in Zig. But it is, and it is. :)<p><a href="https://github.com/tigerbeetledb/tigerbeetle">https://github.com/tigerbeetledb/tigerbeetle</a><p>Also, if there are any Africa-based devs here (but of course, you're welcome to come from wherever you are) we're running a systems programming conference with Derek Collison of NATS, Jamie Brandon of TigerBeetle and HYTRADBOI, Andrew Kelley of Zig, and many other great folks.<p>Next week, Feb 9th and 10th in Cape Town. Maybe we'll see you there!<p><a href="https://systemsdistributed.com" rel="nofollow">https://systemsdistributed.com</a>
I've wrangled with ledgers on and off for 10+ years now. I can see some value in pushing some application-level concerns lower down the stack. One could call these Domain Specific Databases. It'll be interesting to see how this evolves.<p>Adding a few thoughts off the top of my head.<p>From my experience I can say that money transfers are rarely, if ever, done atomically between two end-user accounts, i.e., <Account A debit, Account B credit> as one operation. Instead the money first moves from Account A to a pooled account, some business checks/operations take place, and then it moves from a pooled account (could be the same or a different one) to Account B. The reason to do so is to run a host of business checks, which can take quite some time, in some cases even a day or more. This is why the two-phase account transfer semantic of "Reserve" then "Settle" scales so well: one can build all kinds of higher-order business flows on top of it. Subscriptions, scheduled payments, and so on.<p>Account transfers are also subject to a bunch of velocity limits, such as daily/weekly/monthly counts and volumes of transactions. These depend on factors such as the user's KYC level, fraud rules, the customer's age in the system, etc. An account starts off at a lower limit, and those limits are gradually increased as the account ages in the system. This was a big pain to pull off in a scalable fashion: the running counter for the limits ends up being the bottleneck. For example, consider a popular merchant account doing a few million transactions a day. Subjecting that account to velocity limit checks millions of times means the merchant account becomes a hotspot.<p>Maybe I should do a blog post sometime, just to give a structure to my brain dump :-P.
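The "Reserve" then "Settle" semantic described above can be sketched roughly as follows. This is a minimal illustrative model, not TigerBeetle's actual API or any real payment system: a reserve moves funds from the payer into a pending hold, and a later settle releases the hold to the payee (or a void refunds the payer if the business checks fail).

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    balances: dict = field(default_factory=dict)  # account -> available funds
    pending: dict = field(default_factory=dict)   # transfer_id -> (src, dst, amount)

    def reserve(self, transfer_id, src, dst, amount):
        # Phase 1: funds leave the payer immediately and are held in pending.
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[src] -= amount
        self.pending[transfer_id] = (src, dst, amount)

    def settle(self, transfer_id):
        # Phase 2: business checks passed, release the hold to the payee.
        src, dst, amount = self.pending.pop(transfer_id)
        self.balances[dst] = self.balances.get(dst, 0) + amount

    def void(self, transfer_id):
        # Business checks failed: return the held funds to the payer.
        src, dst, amount = self.pending.pop(transfer_id)
        self.balances[src] += amount


ledger = Ledger(balances={"alice": 100})
ledger.reserve("t1", "alice", "bob", 30)  # hold the funds
# ... hours or days of fraud/KYC checks can happen here ...
ledger.settle("t1")                       # release to the payee
assert ledger.balances == {"alice": 70, "bob": 30}
```

Note how the payee never sees the funds until settle, which is what lets long-running checks sit between the two phases.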
Joran Greef (TigerBeetle's CEO) had a fascinating conversation with Richard Feldman on his Software Unscripted podcast. Highly recommend giving it a listen!<p><a href="https://twitter.com/sw_unscripted/status/1584695054563954689" rel="nofollow">https://twitter.com/sw_unscripted/status/1584695054563954689</a>
They have made great contributions to async I/O in the Zig community:<p>- io_uring support in the std lib<p><a href="https://github.com/ziglang/zig/pull/6356">https://github.com/ziglang/zig/pull/6356</a><p>- a cross-platform io_uring-based event loop<p><a href="https://github.com/tigerbeetledb/tigerbeetle/tree/main/src/io">https://github.com/tigerbeetledb/tigerbeetle/tree/main/src/i...</a><p><a href="https://tigerbeetle.com/blog/a-friendly-abstraction-over-iouring-and-kqueue/" rel="nofollow">https://tigerbeetle.com/blog/a-friendly-abstraction-over-iou...</a><p>Thanks team
I feel like the branding of "financial accounting database" is underselling the broad potential of TigerBeetle. Finance feels "niche" to developers, but think about how often we use "transactions" in everyday database concepts. Double-entry accounting is the obvious next step in terms of auditability of transactions, and it's thousands of years old. Now we have it as a database.
As someone who worked on a financial system where we did basically what you described in the blog post (create in house API backed by some database) this would’ve been amazing.
I've been looking through the docs, and I can't find how you're intending to support the metadata that would go along with a transaction. Say I want to post a journal: where would things like department, customer/vendor (entity), cost centre, etc. live? Or header information? Or would they have to be linked externally? Or is big ERP software not the target market? If not, what is?<p>Just flipped back to the docs again, and I think I've found it: 'Set user_data to a "foreign key" — that is, an identifier of a corresponding object within another database.' -- This is all well and good, but if you're having to write to this other database at the same time in order to store said other data, doesn't it make your ledger a bit pointless? I'm just struggling to see the use cases. Can anyone help me "get it"? (For the record, I work with ERP/accounting systems in my day job.)
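For what it's worth, the user_data pattern the docs describe can be sketched like this: the ledger keeps only the hot path (accounts, amounts), while rich ERP metadata lives in a relational database keyed by the same identifier you put in user_data. Everything below is illustrative (SQLite standing in for the app DB, a plain dict standing in for the ledger transfer), not TigerBeetle's client API.

```python
import sqlite3

# App-side database holding the rich metadata the ledger doesn't store.
app_db = sqlite3.connect(":memory:")
app_db.execute("""
    CREATE TABLE journal_metadata (
        transfer_id INTEGER PRIMARY KEY,
        department  TEXT,
        cost_centre TEXT,
        vendor      TEXT
    )""")

def post_journal(transfer_id, debit, credit, amount, **metadata):
    # 1. Write the metadata to the app DB first (idempotent on transfer_id),
    # 2. then submit the transfer to the ledger carrying user_data=transfer_id,
    #    so either side can be joined back to the other later.
    app_db.execute(
        "INSERT OR IGNORE INTO journal_metadata VALUES (?, ?, ?, ?)",
        (transfer_id, metadata.get("department"),
         metadata.get("cost_centre"), metadata.get("vendor")))
    app_db.commit()
    ledger_transfer = {"id": transfer_id, "debit_account_id": debit,
                       "credit_account_id": credit, "amount": amount,
                       "user_data": transfer_id}  # the "foreign key"
    return ledger_transfer

t = post_journal(1001, debit=10, credit=20, amount=500,
                 department="sales", cost_centre="CC-7")
row = app_db.execute(
    "SELECT department FROM journal_metadata WHERE transfer_id = ?",
    (t["user_data"],)).fetchone()
assert row == ("sales",)
```

The write-metadata-first ordering matters: if the ledger write fails, you have an orphaned metadata row (harmless, and idempotent to retry), rather than a ledger entry whose metadata is missing.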
I was heavily influenced by the LMAX architecture when I was CTO'ing a transaction-based system. We had Postgres in the backend and heavily used Redis streams for event sourcing, but we were only targeting in the region of 1000s of TPS, and planned to shard heavily by client for growth. No longer doing that, but very interested to see other players in the field succeed.
I'm not convinced by the separate DB/API.<p>Advantages of using Postgres (assuming a double-entry schema[1], and that you're already using Postgres for your main app DB):<p><pre><code> - You can do financial transactions inside business db transactions, and roll back both atomically
- Adding up numbers is one of the things computers are best at, and Postgres can easily handle a huge amount of financial transactions
- Re-use PaaS Postgres hosting/scaling/clustering/backups
- Easier integration with the rest of your app with foreign keys to relevant records relating to the financial transaction
- Easy integration with BI tools given Postgres is well connectable
</code></pre>
[1]
Roughly `account(id, cached_balance)`, `transaction_lines(src_account, dst_account, amount)`<p>This gem does literally billions of dollars worth of financial accounting for various companies at scale: <a href="https://github.com/envato/double_entry">https://github.com/envato/double_entry</a><p>It's dated, the API is a bit messy and needs work, as it was initially written 10+ years ago, but for a web-based app I would choose a v2 of it over a non-Postgres solution (assuming you are using Postgres for your app).
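The footnote's schema and the first advantage in the list above (balance updates and the journal line commit or roll back together) can be sketched like this. SQLite stands in for Postgres here purely so the example is self-contained; the table names follow the footnote, everything else is an assumption.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE account (
        id INTEGER PRIMARY KEY,
        cached_balance INTEGER NOT NULL
    );
    CREATE TABLE transaction_lines (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        src_account INTEGER REFERENCES account(id),
        dst_account INTEGER REFERENCES account(id),
        amount INTEGER NOT NULL CHECK (amount > 0)
    );
    INSERT INTO account VALUES (1, 1000), (2, 0);
""")

def transfer(db, src, dst, amount):
    # One atomic DB transaction: both balance updates and the journal
    # line commit together, or none of them do.
    with db:
        cur = db.execute(
            "UPDATE account SET cached_balance = cached_balance - ? "
            "WHERE id = ? AND cached_balance >= ?", (amount, src, amount))
        if cur.rowcount == 0:
            raise ValueError("insufficient funds")  # rolls everything back
        db.execute(
            "UPDATE account SET cached_balance = cached_balance + ? "
            "WHERE id = ?", (amount, dst))
        db.execute(
            "INSERT INTO transaction_lines (src_account, dst_account, amount) "
            "VALUES (?, ?, ?)", (src, dst, amount))

transfer(db, 1, 2, 250)
assert db.execute(
    "SELECT cached_balance FROM account ORDER BY id").fetchall() == [(750,), (250,)]
```

In real Postgres you'd get the same guarantee from `BEGIN ... COMMIT`, plus the ability to include your business-domain rows in the very same transaction, which is the point being made in the list above.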
The particular technical challenges TigerBeetle is taking on (protocol-aware recovery, io_uring, etc.) are interesting, so my question is... why build a double-entry system when you could build a new... database?
Does TigerBeetle have a formal proof of serializability or has it been verified by the Jepsen tests? It's mentioned in the blog post and I'm curious how it fares in that department.<p>Cool project!
Curious, how does this compare/contrast to Formance? (I haven't used either, but I'm looking at Formance right now for tracking "IOU points" between multiple parties.)
This will be an uphill battle against established solutions like MS Dynamics.<p>That's man-millennia of painfully evolved application code, catering to the tax legislation of many countries and The Way Accountants Want It, where database performance/serializability is a fourth-order concern.