I wrote up a response to this but haven't found a good place to put it; might as well post it here.

1) I think the idea that such an API encourages good practice is very much correct, in the sense that it makes it harder to interleave DOM modification and layout flushes. That might make it worth doing on its own.

2) DOM APIs are not necessarily _that_ cheap in and of themselves (though they are compared to layout flushes). Setting the various dirty flags can actually take a while: just marking everything dirty on DOM mutation is too expensive, so in practice UAs limit the dirty marking to varying degrees. This trades off time inside the DOM API call (figuring out what needs to be dirtied) for a faster relayout later. There is some unpredictability across browser engines in terms of which APIs mark what dirty and how long it takes them to figure that out.

3) The IDL cost is pretty small in modern browsers, if we mean the fixed overhead of going from JS to C++ and verifying things like the "this" value being of the right type. When I say "pretty small", I mean on the order of 10-40 machine instructions at most. On a desktop machine, that means the overhead of a thousand DOM API calls is no more than about 13 microseconds. On mobile it will be somewhat slower (lower clocks if nothing else, smaller caches, etc.). Measurement would obviously be useful.

There is also the non-fixed overhead of dealing with the arguments (e.g. converting from whatever string representation your JS implementation uses to whatever representation your layout engine uses, interning strings, copying strings, ensuring that provided objects, if any, are of the right type, etc.). This may not change much between the two approaches, since all the strings that cross the boundary still need to cross it in the end. There will be a bit less work for object arguments, sort of. But...

4) The cost of dealing with a JS object and getting properties off it is _huge_.
Last I checked, in Gecko a JS_GetProperty equivalent takes 3x as long as a call from JS into C++, and it's much worse than that in Chrome. Most of the tricks JITs use to optimize things go out the window with this sort of access, and you end up taking the slowest slow paths in the JS engine. What this means in practice is that foo.bar("x", "y", "z") will generally be much faster than foo.bar({arg1: "x", arg2: "y", arg3: "z"}). So the obvious encoding of the new DOM as some sort of JS object graph that then gets reified is actually likely to be much slower than a bunch of DOM calls is right now. It's possible that we could have a translation layer, written in JS so that JITs can do their magic, that desugars such an object graph into DOM API calls. But that can be done as a library just as well as it can be done in browsers. Of course, if we come up with some other way of communicating the information, we could possibly make this more efficient, at the expense of it being super-ugly and totally unnatural to JS developers. Or UAs could do some sort of heroics to make calling from C++ into JS faster, or something...

5) If we do provide a single big blob of instructions to the engine, it would be ideal if processing that blob were side-effect free, or at least if it were guaranteed that processing it cannot mutate the blob. That would simplify both specification and implementation (in the sense of not having to spell out a specific processing algorithm, allowing parallelized implementations, etc.). Instructions as an object graph obviously fail hard here.

Summary: I think this API could help with the silly cases by preventing layout interleaving. I don't think it would make the non-silly cases any faster, and it might well make them slower or uglier or both. I think libraries can already experiment with offering this sort of API, desugaring it to existing DOM manipulation, and see what the performance looks like and whether there is uptake.
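To make the library experiment concrete, here is a minimal sketch of what such a desugaring layer could look like. Everything here is hypothetical (the `desugar` function and the `{tag, attrs, children}` shape are invented for illustration, not any proposed API): it walks an instruction object graph once in plain JS, where JITs can optimize the property accesses, and flattens it into positional-call tuples. A real library would issue the corresponding createElement/setAttribute calls instead of recording tuples.

```javascript
// Hypothetical sketch: flatten a {tag, attrs, children} instruction graph
// into a flat list of positional "DOM call" tuples. Recording tuples (rather
// than touching a real DOM) keeps the example runnable anywhere and makes
// the desugared call sequence visible.
function desugar(node, ops = []) {
  ops.push(["createElement", node.tag]);
  for (const [name, value] of Object.entries(node.attrs || {})) {
    ops.push(["setAttribute", name, value]);
  }
  for (const child of node.children || []) {
    desugar(child, ops); // depth-first: each child right after its parent
  }
  return ops;
}

const ops = desugar({
  tag: "div",
  attrs: { id: "root" },
  children: [{ tag: "span", attrs: { class: "x" } }],
});
// ops is now a flat sequence: createElement "div", setAttribute "id",
// createElement "span", setAttribute "class"
```

The point of the flat output shape is exactly the positional-arguments observation above: the expensive object-graph traversal happens once, in JIT-friendly JS, and what crosses the JS/C++ boundary is a series of plain positional calls.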