This is an interesting thought, I think. If a comparison is likely to be wrong 10% of the time, how do you structure the algorithm so that normal operation reduces the impact of that 10% of errors?

The bubble sort idea, as I understand it, was that since bubble sort compares things more often, a bad comparison is more likely to be corrected by a good comparison later. Insertion sort, by contrast, is more likely to compound the errors during the insertion phase: if an out-of-order item "poison pills" one side of the insert, it exacerbates the error.

I can see why that idea is appealing. I can also see why a sanity-check pass should help there.

A few questions. One, I'm curious how much this impacts the runtime of the insertion sort? Having it in an infinite loop until it looks sorted feels a bit scary. I'm also curious whether this is related to some sampling strategies out there? Finally, it would be fun to see the "robustness score" across different rates of comparison failure.
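For what it's worth, this is easy to poke at with a quick simulation. Below is a minimal sketch (not the original setup): the error model (each comparison independently flipped with probability `p_err`), the list size, the trial count, and the "displacement" robustness metric are all my own assumptions, just to illustrate how you could compare bubble vs. insertion sort under noisy comparisons and sweep the failure rate.

    import random

    def noisy_less(a, b, p_err):
        """Return whether a < b, but flip the answer with probability p_err."""
        truth = a < b
        return (not truth) if random.random() < p_err else truth

    def bubble_sort(xs, p_err):
        xs = list(xs)
        n = len(xs)
        for i in range(n):
            for j in range(n - 1 - i):
                # Swap when the (possibly wrong) comparator says they're out of order.
                if noisy_less(xs[j + 1], xs[j], p_err):
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    def insertion_sort(xs, p_err):
        xs = list(xs)
        for i in range(1, len(xs)):
            j = i
            while j > 0 and noisy_less(xs[j], xs[j - 1], p_err):
                xs[j], xs[j - 1] = xs[j - 1], xs[j]
                j -= 1
        return xs

    def displacement(xs):
        """Total distance of each element from its true sorted position (lower = better)."""
        ranks = {v: i for i, v in enumerate(sorted(xs))}
        return sum(abs(i - ranks[v]) for i, v in enumerate(xs))

    def robustness(sort_fn, p_err, n=100, trials=200):
        """Average displacement over many random inputs."""
        total = 0
        for _ in range(trials):
            data = random.sample(range(10 * n), n)  # distinct values
            total += displacement(sort_fn(data, p_err))
        return total / trials

    if __name__ == "__main__":
        for p_err in (0.0, 0.05, 0.10, 0.20):
            print(f"p_err={p_err:.2f}  "
                  f"bubble={robustness(bubble_sort, p_err):7.1f}  "
                  f"insertion={robustness(insertion_sort, p_err):7.1f}")

Running something like this across several values of `p_err` would give a rough version of the "robustness score" curve, and swapping in a repeat-until-it-looks-sorted loop would show how much extra work the sanity-check pass costs on average.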