4chan discussion aside, the concept here is that you 'sleep' <i>n</i> units of time for each element, where <i>n</i> is linearly related to the element's value. The 'aha' is that the resulting threads 'wake up' (return) in sorted order.<p>Basically, if you transform set L into the time domain, you get it back sorted when you collect it.<p>It's a fun result, and as an exercise in systems analysis it can be enlightening to look at the impact of timer resolution, thread creation, and ordering, but ultimately the 'trick' is that you've exploited the 'insertion sort' the kernel does on sleep events. You could try its close cousin, "priority sort": create threads whose priorities are related to the value 'n' of each element, all fractionally lower than the parent thread (most UNIXes are not that flexible, but some realtime OSes are); then, as the last step, you 'yield' and the threads print out their contents in priority order and poof, they are sorted. But again, the sorting happened in the kernel when the threads were inserted into the runq in priority order.
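For the curious, here's a minimal sketch of the basic trick in C with POSIX threads (my illustration, not code from the thread; build with -pthread):<p><pre><code> #include <stdio.h>
 #include <unistd.h>
 #include <pthread.h>

 /* Each element sleeps for its own value; the wakeups (and hence
    the prints) arrive in sorted order, courtesy of the kernel's
    timer queue. */
 static void *wait_and_print(void *arg) {
     int n = *(int *)arg;
     sleep(n);           /* "transform into the time domain" */
     printf("%d\n", n);  /* "collect it back" sorted */
     return NULL;
 }

 int main(void) {
     int values[] = {3, 1, 4, 1, 5};
     pthread_t threads[5];
     for (int i = 0; i < 5; i++)
         pthread_create(&threads[i], NULL, wait_and_print, &values[i]);
     for (int i = 0; i < 5; i++)
         pthread_join(threads[i], NULL);
     return 0;
 }</code></pre>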
I was impressed by the brevity of the C solution using OpenMP (comment #81), especially compared to the significantly more verbose Erlang and Haskell examples. I kept reading it, thinking, "Where's the fork?" and seeing nothing but a simple for...and then remembered that # is a preprocessor directive, rather than a comment, in C, and then googled for "parallel omp for". So, I actually learned something today from that ridiculous thread: <a href="http://en.wikipedia.org/wiki/OpenMP" rel="nofollow">http://en.wikipedia.org/wiki/OpenMP</a><p>Also, the Perl 6 one-liner is sort of laughably cute. I'm not sure I believe it would actually work, but if it does, the conciseness of Perl 6 is even higher than I'd realized; it drops about 9 words from the Perl 5 one-liner (which is already very small). But I learned about the hyper operators in Perl 6, which is also cool: <a href="http://perl6advent.wordpress.com/2009/12/06/day-6-going-into-hyperspace/" rel="nofollow">http://perl6advent.wordpress.com/2009/12/06/day-6-going-into...</a><p>In short, that was awesome reading over my morning tea.
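Comment #81 isn't quoted above, but an OpenMP sleep sort presumably looks something like the sketch below (my reconstruction, not the original; build with -fopenmp):<p><pre><code> #include <stdio.h>
 #include <unistd.h>

 /* The pragma is where the hidden "fork" lives: OpenMP runs the
    loop iterations in parallel, so each element sleeps and prints
    concurrently. */
 int main(void) {
     int values[] = {3, 1, 4, 1, 5};
     #pragma omp parallel for
     for (int i = 0; i < 5; i++) {
         sleep(values[i]);
         printf("%d\n", values[i]);
     }
     return 0;
 }</code></pre>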
That was a brilliant joke, the hardest I've laughed all day. Some choice comments:<p><pre><code> I don't like to wait 218382 seconds to sort '(0 218382)
If you slept exp(n) instead of n it could easily include
negative integers too!
Oh shit, we're busted, there's a REALISTICALLY THINKING
PERSON in this thread!</code></pre>
For those few of you not conversant with 4chan, the <i>dis</i> subdomain is text only, like the original 2ch. The textboards tend to be disdainful of the imageboards, and incidentally, they get much less traffic, since it's impossible to post porn on them.<p>The mailto:sage links in some of the posts are essentially public downvotes; from the Japanese "下げ", meaning "down", and pronounced "sah ge".
Many of the posts here and on 4chan have deconstructed the algorithm and shown that it is actually O(n^2) or O(n * lg n) when you include the work done by the operating system's scheduler.<p>However, here's a different perspective to look at this from: what if this were implemented on an FPGA?<p>Let's ignore for a moment that the number of gates needed would increase linearly with the size of the input. I'll also simplify the math by assuming that a list of n numbers contains only values in the range [0 .. n].<p>Let's further assume that we're only sorting positive integers, and that each job sleeps for x clock cycles (where x is the input number). We could then sort a list in O(n) time.<p>Digging even deeper: quicksort performs an integer comparison followed by a branch O(n lg n) times. Even if your processor can compare two integers in one clock cycle, a branch is probably going to screw up your processor's pipelining and take quite a few clock cycles to execute. So we're possibly looking at a significant speed increase even before we consider the difference in order of growth.<p>So, assuming a predefined upper bound on the number of items in a list, this just might be a great way to perform hardware-assisted sorts.<p>Thoughts?
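One way to see why the hardware version is O(n): "fire on clock cycle x" is counting sort in disguise. A hypothetical software analogue in C, with one bucket per clock cycle and values assumed to lie in [0 .. max]:<p><pre><code> #include <stdio.h>
 #include <string.h>

 /* Each value x "fires" on clock cycle x; scanning the cycles in
    order is counting sort: O(n + max) time, O(max) space. */
 void cycle_sort(const int *in, int n, int max, int *out) {
     int fires[max + 1];            /* one bucket per clock cycle */
     memset(fires, 0, sizeof fires);
     for (int i = 0; i < n; i++)    /* "schedule" each job */
         fires[in[i]]++;
     int k = 0;
     for (int cycle = 0; cycle <= max; cycle++)  /* advance the clock */
         while (fires[cycle]-- > 0)
             out[k++] = cycle;
 }

 int main(void) {
     int in[] = {3, 1, 4, 1, 5}, out[5];
     cycle_sort(in, 5, 5, out);
     for (int i = 0; i < 5; i++) printf("%d ", out[i]);
     printf("\n");
     return 0;
 }</code></pre>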
Reminds me of a "ListBox sort" story I heard once, about a programmer who, wanting to sort strings and not knowing how, put them into a hidden ListBox GUI control (with its sorted property enabled) and then read them back in sorted order.
It's actually an interesting solution. If you reduced the time unit to a microsecond, it would take about a second to sort 1M elements. Definitely not an optimal solution, and it has a bit of potential for race conditions (if one thread/process got hung up on I/O), but interesting nonetheless.
This isn't interesting at all: every thread sleeps for that long. When you sleep, something like this happens:<p>The OS is told to wake your thread after x time has elapsed. It puts this request into some data structure that it consults periodically so it knows which thread to wake at what time. You are offloading your sort to the OS, which is probably putting it into a linked list of registered timeouts (maybe a heap?). After approximately the allotted time, some mechanism akin to a signal is used to tell your thread to wake up.<p>Also, a thread will usually sleep AT LEAST as long as you tell it to, but if the system is busy it could sleep a lot longer. Threads can be woken at the same time or slightly out of order, or 'woken up' only to find that all the CPUs are working on something else, and thus waiting for a time slice, getting one, then waiting again, and not getting to use stdout for <i>weeks</i>. It's uncommon to see machines with really high load these days, but a 1-core computer with 25,000 processes all competing for CPU time, and which is swapping heavily, will potentially run each process millions of times slower than running one process.<p>TL;DR: This is curious, but it will fail much of the time and has huge memory overhead. Lots of good sorts are in-place; this one requires something like 8 MB per thread on Linux, due to the stack allocated for each thread. At worst this potentially takes millions of years, and it just does some other sort anyway.
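To make the "you're offloading your sort to the OS" point concrete, here's a toy model of a timer queue as a min-heap (real kernels differ; Linux, for instance, uses a hierarchical timer wheel, so this is illustrative only). Every sleep() is an O(log n) insert, and the wakeups pop out in sorted order, so sleep sort is essentially heapsort with extra steps:<p><pre><code> #include <stdio.h>

 /* Toy min-heap of wakeup deadlines: the "sort" happens here, at
    insertion time, not in the sleeping threads. */
 #define MAX_TIMERS 64
 static int heap[MAX_TIMERS];
 static int len = 0;

 static void timer_add(int deadline) {     /* what sleep(n) does */
     int i = len++;
     heap[i] = deadline;
     while (i > 0 && heap[(i - 1) / 2] > heap[i]) {   /* sift up */
         int p = (i - 1) / 2;
         int t = heap[p]; heap[p] = heap[i]; heap[i] = t;
         i = p;
     }
 }

 static int timer_pop(void) {              /* next thread to wake */
     int min = heap[0];
     heap[0] = heap[--len];
     for (int i = 0;;) {                   /* sift down */
         int l = 2 * i + 1, r = l + 1, s = i;
         if (l < len && heap[l] < heap[s]) s = l;
         if (r < len && heap[r] < heap[s]) s = r;
         if (s == i) break;
         int t = heap[s]; heap[s] = heap[i]; heap[i] = t;
         i = s;
     }
     return min;
 }

 int main(void) {
     int input[] = {3, 1, 4, 1, 5};
     for (int i = 0; i < 5; i++) timer_add(input[i]); /* the sleeps */
     while (len > 0) printf("%d ", timer_pop());      /* the wakeups */
     printf("\n");
     return 0;
 }</code></pre>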
I used to work as an editor for freshmeat.net and we would get hilarious submissions like this quite frequently. For example, the "revolutionary new file compression algorithm" that quite literally converted a file into binary and removed all the 0s. Such a shame that the author "hadn't gotten around to" writing a de-compression tool!<p>We were often quite sad that we <i>couldn't</i> publish some of these gems, they were brilliant.
One of my less well received interview questions is: "Okay, we both know the theoretical lower limits on sorting. Can you come up with a terminating algorithm that is most /pessimal/?"<p>Usually I get a blank stare. Sometimes I get a great answer.
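One classic answer (the standard reference, not something from this thread) is "slowsort" from Broder and Stolfi's paper "Pessimal Algorithms and Simplexity Analysis": it always terminates and always appears to make progress, yet its running time grows faster than any polynomial. A sketch in C:<p><pre><code> #include <stdio.h>

 /* Slowsort: "multiply and surrender". Recursively find the max of
    each half, move the larger one to the end, then sort the rest. */
 static void slowsort(int a[], int i, int j) {
     if (i >= j) return;
     int m = (i + j) / 2;
     slowsort(a, i, m);        /* max of first half ends up at a[m] */
     slowsort(a, m + 1, j);    /* max of second half ends up at a[j] */
     if (a[m] > a[j]) {        /* larger max goes to the very end */
         int t = a[m]; a[m] = a[j]; a[j] = t;
     }
     slowsort(a, i, j - 1);    /* surrender: sort everything but a[j] */
 }

 int main(void) {
     int a[] = {3, 1, 4, 1, 5};
     slowsort(a, 0, 4);
     for (int i = 0; i < 5; i++) printf("%d ", a[i]);
     printf("\n");
     return 0;
 }</code></pre>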
What's interesting is that in theory, assuming an appropriate scaling factor to divide the input by, the wall-clock time for Sleep sort is O(1). This is basically like hashing numbers into buckets and then iterating over the buckets, except that the iteration is done over time. Given that, one could envision degenerate cases for which Sleep sort could in theory outperform a standard algorithm like quicksort.
Optimize it by making the sleep time log(x) instead of x or a fraction of x. That makes it O(log n) complexity :). Use any monotonically increasing function, really.
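A sketch of that variant, with an assumed scale factor of 100000 microseconds (any monotonically increasing transform preserves the wakeup order only in theory: squeezing values closer together makes out-of-order wakeups more likely; build with -pthread -lm):<p><pre><code> #include <math.h>
 #include <stdio.h>
 #include <unistd.h>
 #include <pthread.h>

 /* Log-scaled sleep sort: sleep log(1+x) scaled to microseconds
    instead of x seconds. The scale factor is an arbitrary choice. */
 static void *wake(void *arg) {
     int x = *(int *)arg;
     usleep((useconds_t)(log1p((double)x) * 100000));
     printf("%d\n", x);
     return NULL;
 }

 int main(void) {
     int v[] = {3, 1, 4, 1, 5};
     pthread_t t[5];
     for (int i = 0; i < 5; i++)
         pthread_create(&t[i], NULL, wake, &v[i]);
     for (int i = 0; i < 5; i++)
         pthread_join(t[i], NULL);
     return 0;
 }</code></pre>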
Hilarious, though not strictly correct. There is no guarantee that 'sleep n' sleeps for exactly n seconds, or that 'sleep n' wakes up before 'sleep n+1'. Granted, if scheduling is so far off that sleeps on the order of seconds wake out of order, you've probably got bigger problems...
Are those of you who are saying there is <i>anything at all</i> good or usable about this approach just trolling? <i>Pleaaase</i> just be trolling; if my opinion of humanity goes any lower, I'll have to start building a 'compound'.
I think the more interesting solutions are deeper in the thread, mainly #43, #44, and #83.<p>Mostly just for fun, but it's neat to see an Erlang solution, since this should be right up its alley.
I don't think this sort is dependable for sorting floats in close ranges. I tried it with:<p><pre><code> ./sleepsort.bash 1 1.01 1.01 1.02</code></pre><p>and the output was:<p><pre><code> 1
 1.01
 1.02
 1.01</code></pre><p>Clever hack, nonetheless.
My first reaction was that it's going to be O(9^n) worst case - but there is an & after f "$1", which spawns a separate process for each number. Not bad - maybe it's a good algorithm if all the numbers are single digits.