Tanenbaum wrote a textbook, Distributed Operating Systems[1], which covers in some detail the design and implementation of the Amoeba system. I recommend checking it out if you're interested in these things. The detail I enjoyed the most is how the use of multiple independent hardware platforms allowed a user on the overall system to seamlessly execute binaries for multiple architectures. Not the most useful feature, but still neat in my book. I also remember reading it around the time I was, unrelatedly, just getting started with the Python programming language, and was very interested to learn later that Python originated with the Amoeba project.<p>I enjoy all of Tanenbaum's textbooks, but their price here in the US is very high if you're just a hobbyist, or a professional for whom operating system design is a hobby interest, so you may want to get them from the library. (Or, for what it's worth, my personal copies are from India and were much cheaper.)<p>[1]: <a href="http://www.amazon.com/Distributed-Operating-Systems-Andrew-Tanenbaum/dp/0132199084" rel="nofollow">http://www.amazon.com/Distributed-Operating-Systems-Andrew-T...</a>
The current title ("A way to bring back Moore's law") is somewhat misleading. You could say that Amoeba was a way to bring back Moore's law at the time, but then again, in the 1990s it wasn't exactly threatened. Nowadays the techniques used in Amoeba have spread to your "regular" Linux computing clusters. I've done some work in distributed computing and I can say that sophisticated load balancing and, to a lesser degree, process migration are known and used in the field. Plenty of solutions for both have been published for the commonly used MPI family of frameworks [1].<p>It all comes down to Gustafson's Law [2].<p>[1] E.g., <a href="http://capinfo.e.ac.upc.edu/PDFs/dir25/file003041.pdf" rel="nofollow">http://capinfo.e.ac.upc.edu/PDFs/dir25/file003041.pdf</a>, <a href="http://www.mosix.org/pub/Process_Migration_for_OpenMPI.pdf" rel="nofollow">http://www.mosix.org/pub/Process_Migration_for_OpenMPI.pdf</a>, etc.<p>[2] <a href="https://en.wikipedia.org/wiki/Gustafson%27s_Law" rel="nofollow">https://en.wikipedia.org/wiki/Gustafson%27s_Law</a>
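To spell out why Gustafson's Law is the optimistic one: if you scale the problem size along with the machine, the achievable speedup grows nearly linearly in the number of workers. A quick sketch in Python (the 5% serial fraction is just an illustrative number, not anything from Amoeba):

```python
# Gustafson's law: S(N) = N - a*(N - 1), where a is the fraction of
# the (scaled-up) workload that remains serial.
def gustafson_speedup(serial_fraction, n_workers):
    return n_workers - serial_fraction * (n_workers - 1)

# With a 5% serial fraction, 1024 workers still buy a ~973x speedup,
# because the parallel part of the problem grew with the machine:
print(gustafson_speedup(0.05, 1024))  # ~972.85
```

That's the regime clusters actually live in: nobody runs the 1990s problem size on today's hardware, they run a bigger problem.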
There's only so much parallelism you can exploit, especially with network lags adding overhead.<p><a href="http://en.wikipedia.org/wiki/Amdahl%27s_law" rel="nofollow">http://en.wikipedia.org/wiki/Amdahl%27s_law</a>
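Concretely, Amdahl's law says the serial fraction alone caps the speedup of a fixed-size workload, no matter how many nodes you throw at it (and network latency effectively inflates that serial fraction). A quick sketch, again with an illustrative 5% serial fraction:

```python
# Amdahl's law: S(N) = 1 / (a + (1 - a)/N), where a is the serial
# fraction of a fixed-size workload.
def amdahl_speedup(serial_fraction, n_workers):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# Just 5% serial work caps the speedup below 20x, even at 1024 workers:
print(amdahl_speedup(0.05, 1024))   # ~19.64
print(amdahl_speedup(0.05, 10**9))  # approaches the 1/0.05 = 20x ceiling
```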
My first instinct was to bust out a link to the "Fallacies of Distributed Computing" and a snarky remark, but it occurred to me that it's thanks to projects like Amoeba, Sprite, and Plan 9 that we even really understand those fallacies.