I've been watching those mesmerizing YouTube videos visualizing sorting algorithms lately. The header of this article uses a screen cap from one of them.
<p>Them: So what shows do you watch?
<p>Me: ... It's complicated.
<p>There are a lot of different sorting algorithms. Like, <i>a lot</i>, a lot.
<p>As I watch them, I try to figure out what they were optimizing for. Some only scan in one direction. Some only use the swap operation. Some seem to do the minimum number of writes. Some are incremental performance improvements over others.
<p>When I see an algorithm like this, I don't assume the person who wrote it was an idiot. I assume they were optimizing for something that isn't obvious to me. Its only modifying operation is swap, so maybe that operation is faster than an arbitrary insert on whatever system or data structure they're using. There are no temporary variables besides loop counters, so maybe they're in a memory-constrained environment. There's barely any code here, so maybe this is for a microcontroller with precious little ROM. Or maybe they're applying it as a binary patch and have a strict ceiling on the number of ops they can fit in.
<p>Or maybe it's just the first sorting algorithm they could think of in an environment that doesn't ship with one, and the performance is adequate for their needs. In that case, it's optimized for developer time and productivity. And honestly, it's a far more elegant algorithm than my "naive" version would be.
<p>These are all valid reasons to use a "naive" sorting algorithm.
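<p>To make that concrete, here's my own sketch of what such a constrained sort might look like — gnome sort in C, chosen by me as an illustration (it's not from any particular video). Its only modifying operation is an adjacent swap, its only state is one index (plus the swap temporary), and the whole thing fits in a handful of instructions:

```c
#include <stddef.h>

/* Illustrative sketch: gnome sort. Swap-only, one loop counter,
   tiny code footprint -- the kind of "naive" sort that could fit
   in very little ROM. O(n^2), but that may be fine for small n. */
static void gnome_sort(int *a, size_t n)
{
    size_t i = 0;
    while (i < n) {
        if (i == 0 || a[i - 1] <= a[i]) {
            i++;                 /* pair in order: step forward */
        } else {
            int t = a[i];        /* swap adjacent elements */
            a[i] = a[i - 1];
            a[i - 1] = t;
            i--;                 /* step back and re-check */
        }
    }
}
```

<p>For a few dozen elements on a microcontroller, the quadratic runtime is often irrelevant next to the bytes saved.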