> And why 32 entries? I ran this benchmark with a bunch of different bucket sizes and 32 worked well. I have no idea why that worked out to be the best.<p>If you were using 2-byte ints, this is likely because cache lines are 64 bytes: 32 two-byte entries fill exactly one cache line, so each cache line holds an entire bucket, reducing those expensive main-memory transfers.
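A quick sanity check of that arithmetic (a sketch only; 64-byte cache lines are typical of x86-64, not universal, and the 2-byte entry size is the parent's assumption):

```typescript
// 32 entries of 2-byte ints occupy exactly one typical 64-byte cache line.
const CACHE_LINE_BYTES = 64;        // common on x86-64; not guaranteed everywhere
const ENTRY_BYTES = 2;              // assuming 2-byte ints, e.g. a Uint16Array
const bucket = new Uint16Array(32); // one bucket of 32 entries

console.log(bucket.byteLength);                      // 64
console.log(bucket.byteLength === CACHE_LINE_BYTES); // true
console.log(CACHE_LINE_BYTES / ENTRY_BYTES);         // 32 entries per line
```

With any other bucket size, a bucket would either waste part of a line or straddle two of them.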
What are some real-world apps using CRDTs that have really good experiences?<p>IIRC Notion was supposed to be one of them, but realistically, taking notes with two people in Notion is almost unusable compared to Google Docs.
CRDTs are powerful, but it's unfortunate that they leave behind a trail of historical operations (or elements), in both their op-based and state-based variants. Even with compression, it's still a downside that makes me hesitant to adopt them.<p>Even so, the discussion surrounding them got me excited about the possibility of implementing conflict-free (or fine-grained conflict-resolution) algorithms on top of file-based storage providers (Dropbox, Syncthing, etc.).
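To make that "trail of history" concrete, here is a minimal hypothetical sketch (not any real library's API) of a sequence CRDT where deletes only tombstone entries, so the internal state grows monotonically even as the visible text shrinks:

```typescript
// Hypothetical minimal sequence CRDT: deletes leave tombstones behind,
// so internal state only grows, even when the visible document shrinks.
type Item = { id: string; char: string; deleted: boolean };

class TombstoneSeq {
  items: Item[] = [];

  insert(index: number, id: string, char: string): void {
    // Map the visible index to a position among live (non-deleted) items.
    let live = 0, pos = 0;
    while (pos < this.items.length && live < index) {
      if (!this.items[pos].deleted) live++;
      pos++;
    }
    this.items.splice(pos, 0, { id, char, deleted: false });
  }

  delete(index: number): void {
    let live = 0;
    for (const item of this.items) {
      if (item.deleted) continue;
      if (live === index) { item.deleted = true; return; } // tombstone, not removal
      live++;
    }
  }

  visible(): string {
    return this.items.filter(i => !i.deleted).map(i => i.char).join("");
  }
}

const doc = new TombstoneSeq();
"hi!".split("").forEach((c, i) => doc.insert(i, `a${i}`, c));
doc.delete(2);                 // the visible text shrinks...
console.log(doc.visible());    // "hi"
console.log(doc.items.length); // 3 -- the tombstone is still there
```

Real CRDTs compress and garbage-collect this metadata far more aggressively, but the underlying asymmetry (history never truly shrinks without coordination) is the concern being raised.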
This is one of those rare articles which, although much of the material is over my head, I couldn't stop reading because it's written so well.
Quoting the current GitHub README [0]:
>And since that blog post came out, performance has increased another 10-80x (!).<p>[0] <a href="https://github.com/josephg/diamond-types">https://github.com/josephg/diamond-types</a>
Can someone explain to me please why CRDTs are slow?<p>This article suggests the future to me: <a href="https://joelgustafson.com/posts/2023-05-04/merklizing-the-key-value-store-for-fun-and-profit" rel="nofollow">https://joelgustafson.com/posts/2023-05-04/merklizing-the-ke...</a><p>Take a look at this and compare it to Y.js or automerge: <a href="https://github.com/canvasxyz/okra-js">https://github.com/canvasxyz/okra-js</a>
> Why is WASM 4x slower than native execution?<p>I thought it was because every string had to be copied into WASM memory and then back into JS once the result was computed. Am I wrong? Am I misunderstanding the context? Genuinely curious!
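That copy does exist at the boundary, though it's a per-call cost rather than a slowdown of the WASM code itself. A hedged sketch of what the JS glue code does (hand-written here; toolchains like wasm-bindgen generate the equivalent):

```typescript
// Sketch of the JS <-> WASM string handoff: strings can't cross the boundary
// directly, so glue code encodes them to UTF-8, copies the bytes into the
// module's linear memory, and decodes results back out on the way home.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page

function copyStringIn(s: string, offset: number): number {
  const bytes = new TextEncoder().encode(s);        // JS string -> UTF-8 bytes
  new Uint8Array(memory.buffer).set(bytes, offset); // copy into WASM memory
  return bytes.length;                              // callee receives (ptr, len)
}

function copyStringOut(offset: number, len: number): string {
  const view = new Uint8Array(memory.buffer, offset, len);
  return new TextDecoder().decode(view);            // copy back out as a JS string
}

const len = copyStringIn("hello", 0);
console.log(copyStringOut(0, len)); // "hello"
```

So the boundary tax applies when strings cross in and out; it doesn't by itself explain a constant-factor slowdown of compute that stays inside the module.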
Seeing the hierarchical structure used, I wonder whether they tried a nested set model instead. No idea whether a possible gain in read operations would offset the losses on insertions.
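For readers unfamiliar with the nested set model: each node gets (left, right) bounds from a depth-first walk, so "all descendants" becomes a cheap interval check, but inserting a node renumbers everything after it, which is the write cost alluded to above. A hypothetical sketch (names and structure are illustrative, not from the article):

```typescript
// Nested set model sketch: (left, right) intervals from a depth-first walk.
// X is a descendant of Y iff Y.left < X.left && X.right < Y.right.
type TreeNode = { name: string; children: TreeNode[] };
type Labeled = { name: string; left: number; right: number };

function label(root: TreeNode): Labeled[] {
  const out: Labeled[] = [];
  let counter = 0;
  function walk(n: TreeNode): void {
    const left = ++counter;        // assigned on the way down
    n.children.forEach(walk);
    out.push({ name: n.name, left, right: ++counter }); // and on the way up
  }
  walk(root);
  return out;
}

const tree: TreeNode = {
  name: "root",
  children: [
    { name: "a", children: [{ name: "a1", children: [] }] },
    { name: "b", children: [] },
  ],
};

const labeled = label(tree);
// Reads are cheap: descendants of "a" are the nodes strictly inside a's interval.
const a = labeled.find(n => n.name === "a")!;
const descendants = labeled.filter(n => a.left < n.left && n.right < a.right);
console.log(descendants.map(n => n.name)); // ["a1"]
// The catch: inserting a node shifts left/right for every node after it,
// which is why insert-heavy workloads tend to suffer under this model.
```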
Yeah, new rule: I don't believe anything in a published scientific paper until it has been independently verified for the third time. I don't even want to <i>hear</i> about it, before then, unless I read the journal the original (or second) paper was published in. What I'd really like, and would subscribe to even as a lay person, is the JOURNAL OF STUDIES WHOSE FINDINGS HAVE BEEN SUCCESSFULLY REPRODUCED FOR THE THIRD TIME. I'd pay for a subscription to that.