Well, I'm impressed. I can't speak to efficiency, but reading through the spec immediately lets me see how one might implement an allocator atop the basic system, and gives me an idea of how one might wrangle predictive jumps if one were optimising aggressively, and so on.<p>This is one of the most _predictable_ instruction sets I've seen - and that I like. It takes time to learn nuances, but here the documentation is right up there with the best, making that learning curve considerably gentler.
An important bit of historical context: "micro" in "microprocessor" in this case refers to the fact that it is microcoded, not that it is implemented as a single VLSI chip.
Reading these historic documents always gives me an itch to "OK, what would a CADR look like today?" myself into designing a contemporary version with wider addresses, registers and so on. Of course, I never finish.
Looks interesting, but I'm too ignorant to understand it...<p>What is the practical significance of this? Is this the design for the processor that was used in the actual Lisp Machines, or a design for a hypothetical processor?
Note: the corresponding emulator is here: <a href="https://github.com/LM-3/usim" rel="nofollow">https://github.com/LM-3/usim</a>
<a href="https://lm-3.github.io/" rel="nofollow">https://lm-3.github.io/</a><p>Seems to be a better link; it points to other materials, etc.