If anybody else was curious, the performance win from this instruction looks to be about 1-2% on general JavaScript workloads: <a href="https://bugs.webkit.org/show_bug.cgi?id=184023#c24" rel="nofollow">https://bugs.webkit.org/show_bug.cgi?id=184023#c24</a>
Given that the instruction set already has a float-to-integer conversion, the overhead of implementing this is likely small, so with the performance (and presumably energy) win quoted elsewhere it seems like a good move.<p>It would be interesting to know the back story: how did the idea feed back from the JS implementation teams to ARM? WebKit via Apple, or V8 via Google?
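If I understand the motivation correctly (this is my own sketch of the ECMAScript ToInt32 behaviour, not anything from ARM's docs or the bug report), the catch is that JS's conversion doesn't match an ordinary truncating float-to-int instruction, so engines need extra fix-up code around it:<p><pre><code>  // ToInt32 (what x | 0 does) wraps modulo 2^32 and maps NaN/Infinity to 0,
  // rather than saturating like a plain truncating conversion
  (2 ** 32 + 5) | 0;   // 5
  -2147483649 | 0;     // 2147483647 (wraps around)
  1.9 | 0;             // 1 (truncates toward zero)
  NaN | 0;             // 0
  Infinity | 0;        // 0
</code></pre>So the new instruction mostly removes that fix-up path, and the hardware cost on top of the existing conversion should indeed be modest.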
Unless I've misread the current ARM docs, I don't think this is still present in the ISA as of 2020?<p>The whole RISC/CISC thing is long dead anyway, so I don't really mind having something like this on my CPU.<p>Bring on the Mill (I don't think it'll set the world on fire if they ever make it to real silicon, but it's truly different).
It strikes me as ironic that an architecture that used to pride itself on being RISC and simple is heading toward Intel-like masses of specialist instructions.<p>I don't mean this as a criticism; I just wonder whether this is really the optimal direction for a practical ISA.
Emery Berger argues that the systems community should be doing exactly this -- improving infrastructure to run JS and Python workloads:<p><a href="https://blog.sigplan.org/2020/10/12/from-heavy-metal-to-irrational-exuberance/" rel="nofollow">https://blog.sigplan.org/2020/10/12/from-heavy-metal-to-irra...</a><p><i>We need to incorporate JavaScript and Python workloads into our evaluations. There are already standard benchmark suites for JavaScript performance in the browser, and we can include applications written in node.js (server-side JavaScript), Python web servers, and more. This is where cycles are being spent today, and we need evaluation that matches modern workloads. For example, we should care less if a proposed mobile chip or compiler optimization slows down SPEC, and care more if it speeds up Python or JavaScript!</i>
It seems like every couple of months I feel the burn of JS not having more standard primitive types and choices for numbers. I get this urge to learn Rust or Swift or Go, which lasts about 15 minutes... until I realize how tied up I am with JS.<p>But I do think that one day (it might take a while) JS will no longer be the obvious choice for front-end browser development.
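To give a concrete example of the kind of burn I mean (just the standard behaviour of the single Number type, nothing obscure):<p><pre><code>  // Every JS number is an IEEE-754 double, so integer precision stops at 2^53
  2 ** 53 === 2 ** 53 + 1;   // true
  0.1 + 0.2 === 0.3;         // false
  9007199254740993;          // evaluates to 9007199254740992
</code></pre>BigInt covers the first case nowadays, but it's a separate type that can't be mixed with regular numbers in arithmetic, rather than a richer set of numeric primitives.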
A good follow up to "HTML5 Hardware Accelerator Card" from yesterday: <a href="https://news.ycombinator.com/item?id=24806089" rel="nofollow">https://news.ycombinator.com/item?id=24806089</a>
Anyone else remember the ARM Jazelle DBX extension? I wonder if they'll end up dumping this the same way.<p>I don't remember many phones supporting DBX, but IIRC the ones that did seemed to run J2ME apps much more smoothly.
So it was easier to add an instruction in silicon to cater for an ill-designed programming language than to change the language itself?<p>I mean, if float-to-integer performance is so critical, why wasn't this fixed a long time ago <i>in the language</i>? What am I missing?
FTA:<p>> Which is odd, because you don't expect to see JavaScript so close to the bare metal.<p>This seems to ignore all the work done on server-side JavaScript in projects such as Node.js and Deno, as well as the fact that cloud providers such as AWS have been developing their own ARM-based servers.