15% faster is great. But at what cost?

> Since Ruby 3.3.0-preview2 YJIT generates more code than Ruby 3.2.2 YJIT, this can result in YJIT having a higher memory overhead. We put a lot of effort into making metadata more space-efficient, but it still uses more memory than Ruby 3.2.2 YJIT.

I'm hoping/assuming the increased memory usage is trivial compared to the CPU-efficiency gains, but it would be nice to see some memory-overhead numbers as part of this analysis.
PHP went through some crazy performance improvements from PHP 5.6 to 7.0, in some cases running twice as fast.

It's good to see Ruby doing the same. There is something neat about the same code running faster, solely by being on an upgraded platform.
I'm probably misinterpreting the numbers, but it sounds like the 3.3 interpreter also got some significant performance improvements: if 3.3 YJIT got a 13% speedup over 3.2 YJIT and a 15% speedup over the 3.3 interpreter, then 3.2 YJIT would be only slightly faster than the 3.3 interpreter. Is that interpretation correct? If so, what were the improvements in the 3.3 interpreter, or was 3.2 YJIT just not much of a speedup?
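For what it's worth, a back-of-the-envelope check of that reading, assuming both percentages are speedups measured against the same benchmark set (which the post may or may not do):

3.3 YJIT ≈ 1.13 × (3.2 YJIT) and 3.3 YJIT ≈ 1.15 × (3.3 interpreter)
⇒ 3.2 YJIT ≈ (1.15 / 1.13) × (3.3 interpreter) ≈ 1.02 × (3.3 interpreter)

So under that assumption, 3.2 YJIT would come out only about 2% faster than the 3.3 interpreter, consistent with the interpretation above.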
Not to be pessimistic, but does this matter? Rails apps take 2-3x more resources to run than most other language stacks, including other dynamic languages (even Perl!).