Quick explanation of compiled sprites:<p>Most commonly a sprite is represented as a 2D array of pixels that you loop over (for X, for Y) and use math or branching to blend onto the screen. But that's a lot of reading and writing, and a lot of the math ends up doing nothing because many of the pixels are intentionally invisible.<p>So, you could do some sort of visible/invisible RLE to skip over the invisible pixels. That's better, but it's still a complicated loop and still a lot of reading pixels.<p>So, many crazy democoders have decided to write "sprite compilers" that read the 2D color array of a sprite and spit out the exact assembly instructions needed to write each visible pixel, one at a time, as a linear instruction sequence with no branching. The sprites are then assembled and linked into the program code as individual functions. I believe they can even exclusively use immediate values encoded inside the instructions rather than reading the colors from a separate memory address. So, rather than read instruction, read data, write data, it becomes just read instruction, write data: two straight lines through memory.
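For the curious, here's a minimal sketch of the idea in C. The sprite data, the transparent key colour 0, the 320-byte screen pitch, and the ES:DI calling convention are all made up for illustration; the actual compilers used for 8088/CGA demos do smarter things, like merging adjacent pixels into word-sized writes.<p><pre><code>#include &lt;stdio.h&gt;

/* Hypothetical 8x4 sprite; 0 is treated as the transparent key colour. */
#define SPR_W 8
#define SPR_H 4
static const unsigned char sprite[SPR_H][SPR_W] = {
    {0, 0, 4, 4, 4, 4, 0, 0},
    {0, 4, 15, 15, 15, 15, 4, 0},
    {0, 4, 15, 15, 15, 15, 4, 0},
    {0, 0, 4, 4, 4, 4, 0, 0},
};

/* Emit one unrolled, branch-free routine: one MOV with an immediate operand
   per visible pixel, so at run time it is "read instruction, write pixel". */
int main(void)
{
    const int pitch = 320;            /* assumed bytes per scanline of the target mode */
    puts("draw_sprite:                ; expects ES:DI = top-left screen address");
    for (int y = 0; y < SPR_H; y++)
        for (int x = 0; x < SPR_W; x++)
            if (sprite[y][x] != 0)    /* transparent pixels cost nothing at all */
                printf("    mov byte [es:di+%d], %d\n", y * pitch + x, sprite[y][x]);
    puts("    retf");
    return 0;
}
</code></pre><p>Each emitted MOV carries its colour as an immediate operand, so the only memory traffic at draw time is the instruction fetch itself and the pixel write.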
The party this was released at had some absolutely mind-blowing stuff. The 8k and 64k competitions were <i>amazing</i>. The demoscene continues to be an astonishing force in computing.<p>Some of the stuff would be completely at home as installation art in any top modern art museum. My favorite is <a href="https://www.youtube.com/watch?v=XF4SEVbxUdE" rel="nofollow">https://www.youtube.com/watch?v=XF4SEVbxUdE</a> done in 64kb!<p>Here are the results with links to productions (most of them have youtube videos by now):<p><a href="http://www.pouet.net/party.php?which=1550&when=2015" rel="nofollow">http://www.pouet.net/party.php?which=1550&when=2015</a>
I found the raw video awesome enough to submit a few days ago, but the super-detailed explanation in this blog raises this to a whole new level of epic.<p>I wonder if youngsters who didn't grow up thinking 1 MHz is a perfectly acceptable CPU speed and that 640 KB is a whole lot of RAM will understand what the fuss is about here...
These days I find it slightly weird that they don't share the source code of the demos or related tools. The demoscene has this wonderful alpha-male thing going.
Amazing. An 80×100 resolution 1024-color mode done with CRT controller (6845) tricks, on a plain IBM CGA adapter. Also fullscreen bitmap rotation and 3D rendering on a 4.77 MHz Intel 8088 CPU. Wow.
Now I wonder what people could be doing 10 or 20 years from now on today's hardware.<p>Probably a lot fewer tricks up the sleeve are possible (especially with 3D accelerators and the dependency on a lot of proprietary AND very complex software).<p>Or maybe just drop to the framebuffer and push pixels like it has always been done, as in the sketch below.
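By "push pixels" I mean something like the classic Linux /dev/fb0 route. A rough sketch, assuming a console framebuffer exposed at /dev/fb0 with a 32-bit XRGB pixel layout; real code should check bits_per_pixel and the colour offsets reported in fb_var_screeninfo.<p><pre><code>#include &lt;fcntl.h&gt;
#include &lt;linux/fb.h&gt;
#include &lt;stdint.h&gt;
#include &lt;sys/ioctl.h&gt;
#include &lt;sys/mman.h&gt;
#include &lt;unistd.h&gt;

/* Open the framebuffer, map it, and fill it with a simple colour gradient. */
int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) return 1;

    struct fb_var_screeninfo vinfo;
    struct fb_fix_screeninfo finfo;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) || ioctl(fd, FBIOGET_FSCREENINFO, &finfo))
        return 1;

    uint8_t *fb = mmap(NULL, finfo.smem_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) return 1;

    /* Assumes 32 bpp XRGB; a robust version would branch on vinfo.bits_per_pixel. */
    for (uint32_t y = 0; y < vinfo.yres; y++)
        for (uint32_t x = 0; x < vinfo.xres; x++) {
            uint32_t *px = (uint32_t *)(fb + y * finfo.line_length + x * 4);
            *px = ((x & 0xff) << 16) | ((y & 0xff) << 8);   /* red/green gradient */
        }

    munmap(fb, finfo.smem_len);
    close(fd);
    return 0;
}
</code></pre>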
I'm interested to know why it breaks all the emulators. Certainly I wouldn't expect emulators to reproduce all the graphical glitches that this takes advantage of, but what is it doing that actually <i>crashes</i> them?
Slight technical inaccuracy at the start: the Z80 also required a minimum of 4 clocks for a memory access, it wasn't better than the 8088 in that regard.