C's qsort() is notoriously slow because the comparison function is called through a function pointer and can't be inlined, so the majority of the time is spent on call overhead.

Zhangxp1998 ported quadsort to C++, compared it with std::sort, and found it to be slower: https://gist.github.com/zhangxp1998/0e2fa30656c894017d183e0dbf2d862a

Shameless self-promotion: if you want a faster sorting implementation, check out pdqsort (https://github.com/orlp/pdqsort), which is almost twice as fast as libstdc++'s std::sort for randomly shuffled integers and has all sorts of tricks for input distributions with patterns (pre-sorted, reverse-sorted, a single out-of-order element appended, etc.).
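A minimal sketch of why the indirection hurts (illustrative, not from the linked benchmark): qsort only sees the comparator as an opaque function pointer, while std::sort receives it as a template parameter the compiler can inline into the sort loop.

    #include <algorithm>
    #include <cstddef>
    #include <cstdlib>

    // qsort: every comparison is an indirect call through this pointer.
    static int cmp_int(const void *a, const void *b) {
        int x = *(const int *)a;
        int y = *(const int *)b;
        return (x > y) - (x < y);  // avoids the overflow risk of x - y
    }

    void sort_with_qsort(int *data, std::size_t n) {
        std::qsort(data, n, sizeof(int), cmp_int);
    }

    // std::sort: the lambda's type is a template parameter, so the
    // comparison is typically inlined into the generated sort loop.
    void sort_with_std_sort(int *data, std::size_t n) {
        std::sort(data, data + n, [](int x, int y) { return x < y; });
    }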
How does this fare against Python's famous Timsort (used by several languages and runtimes)? How about the dual-pivot quicksort Java uses for primitive arrays?

Someone must have put together a good benchmark comparing many sorting algorithms. I wish the author had done some benchmarking first, so the proposed algorithm could be positioned properly against state-of-the-art techniques.
Summary: this is a non-recursive merge sort with improvements.

Benchmark of quadsort() versus C qsort():

* ~10x faster on forward-order items

* ~2x faster on reverse-order items

* ~equivalent on random-order items

Improvements:

* Ordering: when blocks of items are already in order, or in reverse order, they get special-case handling, which brings quadsort down to O(n) on such input instead of qsort's O(n log n).

* Boundaries: compare the data itself rather than performing a traditional merge sort's wasteful per-iteration boundary checks (see the sketch below).
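A hypothetical sketch of the boundary idea (my reading, not the author's actual code): if the left run's last element is known to be <= the right run's last element, the right run can never be exhausted first, so the hot loop needs only one index check instead of two.

    #include <cstddef>
    #include <vector>

    // Merge two sorted runs into out. Precondition (the caller swaps
    // the runs if needed): left.back() <= right.back(), so only the
    // left run can run out before the merge finishes.
    void merge_runs(const std::vector<int> &left,
                    const std::vector<int> &right,
                    std::vector<int> &out) {
        std::size_t i = 0, j = 0;
        while (i < left.size()) {  // single boundary check per step
            if (left[i] <= right[j])
                out.push_back(left[i++]);
            else
                out.push_back(right[j++]);
        }
        while (j < right.size())   // copy the right run's tail
            out.push_back(right[j++]);
    }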
Quicksort is not the fastest sorting algorithm. It would be nice to see a benchmark against other state-of-the-art algorithms like Timsort.
Interesting, but I'd be surprised if this were the first sorting algorithm that swaps more than two elements at a time. I would have guessed that every possible variation of sorting algorithm had already been explored, proven, and tested.
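Ordering more than two elements per step is in fact old: sorting networks apply a fixed sequence of compare-exchange operations to small groups. A sketch of the classic optimal 5-comparison network for 4 elements (names are mine, not quadsort's):

    #include <algorithm>  // std::swap

    // Compare-exchange: after the call, a <= b.
    static void cswap(int &a, int &b) {
        if (b < a) std::swap(a, b);
    }

    // Sorts four values with five fixed comparisons; this network is
    // known to be optimal (Knuth, TAOCP vol. 3).
    void sort4(int &a, int &b, int &c, int &d) {
        cswap(a, b);  // order the first pair
        cswap(c, d);  // order the second pair
        cswap(a, c);  // the minimum settles into a
        cswap(b, d);  // the maximum settles into d
        cswap(b, c);  // order the middle pair
    }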
This reminds me of a programming exercise I was given when I first learned programming: write a sort-program generator that, given N, produces a program that sorts an array of N elements optimally. The generated code has N! branches, one for each possible permutation. With some CSE help from the compiler, it can be really quite fast at the expense of code size.

The author's explanation isn't entirely clear, but it seems similar to the above construction with a fixed N, followed by a merge sort.
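For concreteness, here is roughly what such generated code might look like for N = 3 (hand-written to illustrate the idea, not actual generator output): a comparison tree with 3! = 6 leaves, one per permutation.

    // Sorts a[0..2]; each leaf of the comparison tree corresponds to
    // one of the six possible orderings and writes the result directly.
    void sort3_generated(int *a) {
        int x = a[0], y = a[1], z = a[2];
        if (x <= y) {
            if (y <= z)      { /* already x y z */ }
            else if (x <= z) { a[1] = z; a[2] = y; }           // x z y
            else             { a[0] = z; a[1] = x; a[2] = y; } // z x y
        } else {
            if (x <= z)      { a[0] = y; a[1] = x; }           // y x z
            else if (y <= z) { a[0] = y; a[1] = z; a[2] = x; } // y z x
            else             { a[0] = z; a[2] = x; }           // z y x
        }
    }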
In his benchmark the author assumes qsort() is implemented with quicksort, but that's not necessarily true. glibc, for example, uses merge sort (falling back to quicksort when the system is short on memory).
Since we're already using O(n) space, it would be interesting to see how this compares to radix sort [0], which also uses O(n) space but runs in O(n·w) time for w-bit keys, since it distributes elements by their digits rather than comparing them.

Like others have said, it would be cool to see quadsort stacked up against other current state-of-the-art sorting algorithms.

[0] https://en.wikipedia.org/wiki/Radix_sort
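A minimal LSD radix sort sketch for unsigned 32-bit keys using 8-bit digits, i.e. four stable counting-sort passes (illustrative, not tuned):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // LSD radix sort: four stable counting-sort passes, one per 8-bit
    // digit, from least to most significant.
    void radix_sort(std::vector<std::uint32_t> &a) {
        std::vector<std::uint32_t> buf(a.size());
        for (int shift = 0; shift < 32; shift += 8) {
            std::size_t count[256] = {};
            for (std::uint32_t x : a)            // histogram this digit
                count[(x >> shift) & 0xFF]++;
            std::size_t pos = 0;                 // prefix sums -> offsets
            for (std::size_t &c : count) {
                std::size_t tmp = c;
                c = pos;
                pos += tmp;
            }
            for (std::uint32_t x : a)            // stable scatter
                buf[count[(x >> shift) & 0xFF]++] = x;
            a.swap(buf);                         // even pass count: ends in a
        }
    }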
I've never seen a sorting algorithm that uses a non-binary comparison, i.e. one that orders more than two values at a time. Is that a novel technique?

It seems really obvious in hindsight, so I'm sure there's prior art I just don't know about.
This quicksort is almost twice as fast:

https://rextester.com/XHCGA23293
I'm a bit sceptical because I don't see any mathematical proof, only benchmarks, which don't prove much. It may be faster only by a constant factor on a particular machine; for sufficiently large n it would be no faster than merge sort.

1,000,000 elements is peanuts. We need to see how it behaves into the billions.