This is just a reflection from recent work on interpreter internals, but I figured it could be relevant to someone walking the same paths.<p>Statically allocating registers for all local variables means scopes have different sizes, which in turn complicates slab allocation and/or reuse; in return, the scheme is easier to reason about (except for the main-scope issue) and saves space.<p>I opted for a fixed number of linearly assigned registers per scope in Snigl [0]; once the limit is reached, remaining variables are stored in a table. Which means I sort of get both, since additional scopes may be added using {} (it could make sense to add a scope: keyword to Python) if that becomes an issue.<p>It's all compromises, all the way down.<p>[0] <a href="https://gitlab.com/sifoo/snigl" rel="nofollow">https://gitlab.com/sifoo/snigl</a>
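A rough sketch of that fixed-registers-plus-table scheme (my own reading of the comment; the names and the register count are illustrative, not Snigl's actual internals):

```python
MAX_REGS = 8  # fixed register count per scope (value chosen for illustration)

class Scope:
    """Binds the first MAX_REGS locals to registers, the rest to a table."""

    def __init__(self):
        self.regs = {}      # name -> register index, assigned linearly
        self.overflow = {}  # name -> slot in the fallback table

    def bind(self, name):
        if name in self.regs:
            return ("reg", self.regs[name])
        if len(self.regs) < MAX_REGS:
            self.regs[name] = len(self.regs)
            return ("reg", self.regs[name])
        slot = self.overflow.setdefault(name, len(self.overflow))
        return ("table", slot)
```

Every scope now has the same fixed footprint (MAX_REGS slots plus a table pointer), which is what keeps slab allocation and reuse simple.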
Python has some unusual performance behaviors. IIRC you can also speed up your program a lot by assigning intermediate variables instead of repeatedly looking up attributes, for example:<p>[A.b[i] for i in range(100)]<p>is a lot slower than:<p>B = A.b<p>[B[i] for i in range(100)]
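An easy way to check this yourself is with timeit (the class and names here are made up for illustration; only the hoisting pattern is from the comment above):

```python
import timeit

class Holder:
    def __init__(self):
        self.b = list(range(100))

A = Holder()

# Repeated attribute lookup: A.b is resolved on every iteration.
slow = timeit.timeit("[A.b[i] for i in range(100)]",
                     globals={"A": A}, number=10000)

# Hoist the attribute into a local once; the loop only touches the local.
B = A.b
fast = timeit.timeit("[B[i] for i in range(100)]",
                     globals={"B": B}, number=10000)

print(f"attribute lookup each time: {slow:.3f}s, hoisted local: {fast:.3f}s")
```

The two comprehensions produce identical results; the difference is purely in how often the attribute lookup machinery runs.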
I don’t think I understand the problem. We have n distinct integers, and the array has already been partitioned. Why isn’t the answer just that there was exactly one pivot that could generate that partition?
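If the problem is instead asking which elements *could* act as a pivot (an assumption on my part: an element qualifies when everything before it is smaller and everything after it is larger, given distinct values), a brute-force check shows the answer is not always unique, e.g. in a sorted array every element qualifies:

```python
def possible_pivots(arr):
    """Indices whose element is larger than everything before it
    and smaller than everything after it (distinct values assumed)."""
    return [i for i, x in enumerate(arr)
            if all(y < x for y in arr[:i]) and all(y > x for y in arr[i + 1:])]

print(possible_pivots([3, 1, 2, 4, 6, 5, 7]))  # [3, 6]
print(possible_pivots([1, 2, 3]))              # [0, 1, 2]
```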
Found something weird with the code posted in the links: I am getting different outputs from the intermediate and the final versions for the input "5; 1,2,3,4,4". Can someone help?<p>1. <a href="https://imgur.com/a/u8O65AF" rel="nofollow">https://imgur.com/a/u8O65AF</a><p>2. <a href="https://imgur.com/a/uHniZof" rel="nofollow">https://imgur.com/a/uHniZof</a>
Slightly OT from the main takeaway, but I wonder if this could be sped up further by only doing a single pass, and maintaining a stack (implicitly sorted) of elements which are greater than everything seen so far, and popping them off when they're greater than the current element.
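A sketch of that single-pass idea (assuming, as elsewhere in the thread, that the task is collecting elements that could act as a partition pivot — greater than everything before them and smaller than everything after):

```python
def pivots_single_pass(arr):
    """One pass with a monotonic stack of candidate pivots.

    An element is pushed when it exceeds everything seen so far;
    candidates are popped when a later, smaller element invalidates them.
    The stack stays implicitly sorted ascending throughout.
    """
    stack = []
    max_so_far = float("-inf")
    for x in arr:
        # Anything on the stack greater than x has a smaller element
        # to its right, so it can no longer be a pivot.
        while stack and stack[-1] > x:
            stack.pop()
        if x > max_so_far:
            stack.append(x)   # greater than everything before it: candidate
            max_so_far = x
    return stack

print(pivots_single_pass([3, 1, 2, 4, 6, 5, 7]))  # [4, 7]
```

Each element is pushed and popped at most once, so the pass is O(n) overall despite the inner loop.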