I've seen them mentioned in a few conversations. From what I understand, they effectively allow you to record and replay browsing sessions. However, if I understand correctly, it isn't a video recording but rather some sort of serialized version of the UI.<p>How does that serialization/deserialization work?
Core team member of rrweb here. rrweb and other tools that do user session recording depend heavily on the MutationObserver <a href="https://developer.mozilla.org/en-US/docs/Web/API/MutationObserver" rel="nofollow">https://developer.mozilla.org/en-US/docs/Web/API/MutationObs...</a> to surface changes to a webpage; each of those changes gets serialized into a JSON event and captured for later viewing.
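To make that concrete, here's a minimal sketch of turning MutationObserver-style records into timestamped JSON events. The node-ID mirror and the event shape are made up for illustration; rrweb's actual wire format differs.

```javascript
// Assign each live node a stable numeric ID so events can reference it
// without serializing the node itself (a simplified "mirror").
let nextId = 1;
const idMap = new WeakMap();

function mirrorId(node) {
  if (!idMap.has(node)) idMap.set(node, nextId++);
  return idMap.get(node);
}

// Convert one MutationRecord-like object into a plain JSON event.
function serializeMutation(record, now = Date.now()) {
  const event = { type: 'mutation', kind: record.type, timestamp: now };
  if (record.type === 'attributes') {
    event.target = mirrorId(record.target);
    event.attribute = record.attributeName;
    event.value = record.target.getAttribute(record.attributeName);
  } else if (record.type === 'characterData') {
    event.target = mirrorId(record.target);
    event.value = record.target.data; // new text content
  }
  return event; // safe to JSON.stringify and ship to a backend
}
```

In a real recorder these functions would be fed by `new MutationObserver(records => records.map(r => serializeMutation(r)))` observing `document` with `{ subtree: true, attributes: true, characterData: true, childList: true }`.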
Apart from the things we can get directly from the MutationObserver, we also end up monkey-patching things like the <a href="https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext" rel="nofollow">https://developer.mozilla.org/en-US/docs/Web/API/WebGLRender...</a> and recording every call made to it, so that we can play those back in the same order on replay.<p>Before we do all of the diff recording we start with a full snapshot of the webpage, which captures every DOM node. And it does a lot more than just your plain old `document.documentElement.outerHTML`: it captures the contents of style sheets, images, scroll positions, etc. We use those full snapshots as keyframes to base our diffs off of. This works similarly to how video files work, but the snapshots don't contain any pixel data, just DOM information.<p>On replay we rebuild the DOM as closely as possible to how it was recorded and then animate the changes as they occurred.<p>Let me know if you'd like some more detail on a specific part of any of this.<p>There is also a video which explains some of this which might be helpful: <a href="https://youtu.be/cWxpp9HwLYw?t=1163" rel="nofollow">https://youtu.be/cWxpp9HwLYw?t=1163</a>
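The keyframe-plus-diff idea above can be sketched on a plain-object "DOM": take a full JSON snapshot once, then apply mutation events in order to reconstruct the state at any later point. The tree and event shapes here are illustrative, not rrweb's real ones.

```javascript
// Full snapshot: a deep JSON copy of the tree (the "keyframe").
function takeSnapshot(node) {
  return {
    tag: node.tag,
    attrs: { ...node.attrs },
    text: node.text,
    children: (node.children || []).map(takeSnapshot),
  };
}

// Apply one diff event. The target is addressed by a child-index path
// from the root (a stand-in for rrweb's node-ID mirror).
function applyEvent(tree, event) {
  let node = tree;
  for (const i of event.path) node = node.children[i];
  if (event.kind === 'attributes') node.attrs[event.attribute] = event.value;
  if (event.kind === 'characterData') node.text = event.value;
  return tree;
}

// Replay: clone the keyframe, then fold the diffs over it in order,
// so the original snapshot stays reusable for seeking.
function replay(snapshot, events) {
  const tree = JSON.parse(JSON.stringify(snapshot));
  return events.reduce(applyEvent, tree);
}
```

Seeking backwards works the same way: jump to the nearest earlier keyframe and replay the diffs up to the desired timestamp, much like decoding video from the last I-frame.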
OR is open source, so you can pretty much explore it on your own, but the TL;DR is that it records diffs of the DOM, network, and state, along with mouse movements, and sends them as byte-array batches to the backend, which gzips those files; then the same process runs the other way (unpack, bytes to diffs to display) in the player.<p>Video would take too much space, plus the API is limited as far as I can see on MDN.