I like the Arc idiom for this. Here's how to sort a list (say of stories) by some computed value (say a function called score):<p><pre><code> (sort (compare > score) stories)
</code></pre>
If score is expensive, you just wrap it in memo:<p><pre><code> (sort (compare > (memo score)) stories)
</code></pre>
Now it will compute each score once, save them in a table, and throw the table away when done.
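A rough Python analog of the memo trick (the `score` body here is a stand-in for some expensive computation): caching the key function pays off when the sort calls it more than once per item, which is what happens with a raw comparator. (Python's `sorted(..., key=score)` already computes each key once per element, so there the cache only helps with duplicates.)

```python
from functools import cache, cmp_to_key

calls = 0

@cache                       # plays the role of Arc's memo
def score(story):
    global calls
    calls += 1               # count the real computations
    return story * 7 % 13    # stand-in for an expensive scoring function

def compare_desc(a, b):      # plays the role of (compare > score)
    return score(b) - score(a)

stories = [3, 1, 4, 1, 5, 9, 2, 6]
ranked = sorted(stories, key=cmp_to_key(compare_desc))
# Without @cache, score runs O(n log n) times; with it, once per distinct story.
```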
A possibly counterintuitive case where "the key...is expensive to compute" is when accessing the key merely requires following a pointer to something not in cache. It's not the same thing syntactically, but, for example, Postgres sped up collation by packing bytes into integers:<p><a href="http://pgeoghegan.blogspot.com/2015/01/abbreviated-keys-exploiting-locality-to.html" rel="nofollow">http://pgeoghegan.blogspot.com/2015/01/abbreviated-keys-expl...</a><p>I've also seen speedups from sorting strings by their prefixes packed into uint64s (rather than sorting pointer/length references into the string data). And in an application where a friend needed to sort a lot of indices into an array, it turned out to greatly speed up the sort (by avoiding indirections) to copy the whole array, sort the indices and the copy of the array together, then toss out the copy.
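A sketch of the packed-prefix idea (function names are mine, and this is the general technique, not the Postgres implementation): pack the first 8 bytes of each string big-endian into one integer, so that comparing the integers agrees with comparing the prefixes, and fall back to the full string only on a prefix tie.

```python
def prefix_key(s: str) -> int:
    """Pack the first 8 bytes of s, big-endian, into one integer.

    Big-endian packing preserves the lexicographic order of the raw
    bytes, so comparing the integers agrees with comparing the prefixes.
    """
    b = s.encode("utf-8")[:8]
    return int.from_bytes(b.ljust(8, b"\x00"), "big")

def sort_by_prefix(strings):
    # Decorate each string with its packed prefix; the full string breaks
    # prefix ties. The hot path of each comparison touches one machine
    # word instead of chasing a pointer into string data.
    return [s for _, s in sorted((prefix_key(s), s) for s in strings)]
```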
tl;dr, a sort optimization: avoid recomputing the sort keys by temporarily associating them with the input items.<p>Wouldn't it be embarrassing to have one's name in the title of a long-winded Wikipedia article for a super simple and obvious optimization?
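That pattern is usually called decorate-sort-undecorate (the Schwartzian transform being alluded to). A minimal sketch, with a helper name of my own choosing:

```python
def sort_by(key, items):
    # Decorate: compute each key exactly once, pairing it with its item.
    decorated = [(key(x), i, x) for i, x in enumerate(items)]
    # Sort: tuples compare by key first; the index breaks ties, so the
    # items themselves never need to be comparable and the sort is stable.
    decorated.sort()
    # Undecorate: throw the keys away.
    return [x for _, _, x in decorated]
```

Python's built-in `sorted(items, key=...)` does exactly this internally, which is why its `key` function is called only once per element.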