Can someone explain why the "Quick Fix" even works? It seems like if having the DOM updates separated by one statement triggers two reflows, then two consecutive updates would as well. Or more generally, what does "batching DOM updates" really mean?<p>Does the browser just pay attention to whether each line of JS updates the DOM, and queue up its updates until it encounters one that doesn't? That doesn't fit my model of how the JS engine fits into the browser. I guess I don't really know, but I always assumed it just reflowed on a fixed timeout.<p>Edit: Never mind, I get it: it's that the intervening statement <i>reads</i> from the DOM, thus triggering a flush. I just missed that in the article.
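For anyone else who missed it, here's a toy model of the behavior (assumption: this is a grossly simplified stand-in for the browser's lazy layout, with a fake element instead of a real DOM node). Writes just mark layout dirty; a read forces a flush; so interleaving reads between writes causes extra reflows while grouping them doesn't:

```javascript
// Toy model of lazy layout (simplified; not how a real engine is structured).
function makeElement() {
  let dirty = false;
  let reflows = 0;
  return {
    setStyle() { dirty = true; },   // a write only marks layout as dirty
    get offsetWidth() {             // a read must flush any pending layout
      if (dirty) { reflows++; dirty = false; }
      return 100;
    },
    get reflows() { return reflows; }
  };
}

// Thrashing: write, read, write, read → two forced reflows.
const el = makeElement();
el.setStyle(); void el.offsetWidth;
el.setStyle(); void el.offsetWidth;
console.log(el.reflows); // 2

// Batched: both writes first, then both reads → one reflow.
const el2 = makeElement();
el2.setStyle(); el2.setStyle();
void el2.offsetWidth; void el2.offsetWidth;
console.log(el2.reflows); // 1
```

So "batching" doesn't require the browser to inspect your code at all; it just means you order your own reads and writes so no read lands between two writes.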
This seems to be the equivalent of Flex's 'callLater()', which was the bane of my life back when I did Flex, as it almost completely decouples the called code from the calling code -- very difficult to work out what called the code if it's failing and very difficult (without good comments) to know <i>why</i> it was added.<p>We had a rule: If you think you need a callLater(), you don't need to use callLater(). If you still need a callLater(), you need to get someone to come and look at your code <i>now</i> to tell you that you don't need to use callLater(). If you both agree that you need to use a callLater(), you've still got to justify it at code review time.<p>The biggest difference I can see at the moment is that Flex doesn't recompute layout until the end of the frame, even if you do read from it. JS does recompute, so you need to defer for performance rather than (as in Flex) correctness. In either environment, the sane thing to do is to avoid having to defer your calls at all. It may be more work now, but your sanity will thank you later.<p>As an example of how bad things can get, Adobe's charting components would take more than 13 frames to settle rendering, because of all the deferred processing. This is a good example of how deferring your calls can actually cost you quite a lot of performance.
If you have the default sidebar on Ubuntu 12.04 and a 2560-pixel-wide screen, and you give Chrome exactly half of the width (I have a grid plugin), some Wikipedia pages will resize themselves about 15 times a second: they decide they should change their layout, but the change means they should shrink back to the previous version, and then that change... I can't find a page that triggers it right now, or I'd link to a video.
Well-researched feature. The best part of the fastdom wrapper (<a href="https://github.com/wilsonpage/fastdom" rel="nofollow">https://github.com/wilsonpage/fastdom</a>) is that a setTimeout fallback is provided for browsers that don't support native animation frames. Good job.
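That fallback pattern is simple enough to sketch (assumption: this is illustrative, not fastdom's exact source): use requestAnimationFrame where it exists, otherwise a setTimeout tuned to roughly one 60fps frame.

```javascript
// Prefer native animation frames; fall back to a ~16ms timer elsewhere
// (e.g. old browsers, or Node, where requestAnimationFrame doesn't exist).
const raf = (typeof requestAnimationFrame === 'function')
  ? requestAnimationFrame.bind(globalThis)
  : cb => setTimeout(() => cb(Date.now()), 1000 / 60);

raf(ts => console.log('frame callback at', ts));
```

The timestamp argument in the fallback is only an approximation of what the native API passes, but it keeps the two call sites interchangeable.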
This is something of a solved problem for many of the major javascript frameworks. Sproutcore (just to pick an older example I'm familiar with) has had this licked since 2008; you put all your DOM-updating code in your view update calls, and Sproutcore pipelines the calls. I'm sure most of the other JS MVC frameworks have similar solutions.
This seems very similar to the way mobile devices use vsync/vblank to organize work into frames. [0] Very cool!<p>[0] Good explanation of how this process works on android: <a href="http://www.youtube.com/watch?v=Q8m9sHdyXnE" rel="nofollow">http://www.youtube.com/watch?v=Q8m9sHdyXnE</a>
We use a technique similar to this in Montage, which we call the draw cycle: <a href="http://montagejs.org/docs/draw-cycle.html" rel="nofollow">http://montagejs.org/docs/draw-cycle.html</a>. Because it's built into the components, <i>everything</i> in the webapp reads from the DOM at the same time, and then writes to the DOM at the same time, completely avoiding the thrashing.
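The two-phase idea can be sketched in a few lines (assumption: this is a simplified reconstruction of the pattern, not Montage's actual API; the component shape and names are made up). Every component measures in one pass, then mutates in a second pass:

```javascript
// Draw cycle sketch: all reads happen before any writes.
const components = [];

function drawAll() {
  const measurements = components.map(c => c.willDraw()); // phase 1: all DOM reads
  components.forEach((c, i) => c.draw(measurements[i]));  // phase 2: all DOM writes
}

// Two stand-in components that record call order so the phases are visible.
const order = [];
components.push(
  { willDraw() { order.push('read A'); return 1; },
    draw(m)    { order.push('write A:' + m); } },
  { willDraw() { order.push('read B'); return 2; },
    draw(m)    { order.push('write B:' + m); } }
);

drawAll();
console.log(order); // ['read A', 'read B', 'write A:1', 'write B:2']
```

Because no component's write can land between another component's reads, the layout is flushed at most once per cycle no matter how many components there are.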
There doesn't seem to be a definition of layout thrashing anywhere on the internet. Googling "what is layout thrashing" returns nothing.<p>Anyone want to offer the net's very first ever explicit definition of this term?
Wouldn't another solution to this be to have an object that would mimic the dom, performing reads immediately (or reading from its own cache of written attributes), but allowing explicit control over when writes get committed? It would then be easy to have atomic (wrt dom layout) functions.
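Something like this, maybe (assumption: a hypothetical sketch of the idea, not an existing library; a plain object stands in for a real element). Reads consult the buffer of pending writes first, and nothing touches the "DOM" until commit():

```javascript
// Write-buffering wrapper: reads are immediate, writes are deferred.
function bufferedStyle(el) {
  const pending = {};
  return {
    set(prop, value) { pending[prop] = value; },   // write: buffer only
    get(prop) {                                    // read: buffer first, then DOM
      return prop in pending ? pending[prop] : el.style[prop];
    },
    commit() {                                     // flush all writes atomically
      Object.assign(el.style, pending);
      for (const k of Object.keys(pending)) delete pending[k];
    }
  };
}

// Stand-in element instead of a real DOM node:
const el = { style: { width: '10px' } };
const buf = bufferedStyle(el);
buf.set('width', '20px');
console.log(buf.get('width')); // '20px' (served from the buffer)
console.log(el.style.width);   // '10px' (not committed yet)
buf.commit();
console.log(el.style.width);   // '20px'
```

The catch is reads of <i>computed</i> layout (offsetWidth and friends): those can't be answered from the write cache, so a read between a commit and the next frame would still force a reflow.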
The Sencha Touch guys passed everything, even events, through a requestAnimationFrame call using an object pool for their Facebook demo, and it seemed to scale well. I wonder why more people don't explore that approach.