- LLMs/Deep learning in general - People have no idea how powerful the abstractions grown inside an LLM are. They could be many times more powerful than the output, because the output function could be lossy, and we'd never be able to tell from outside the black box. It's entirely possible that there are signals in there that do "theory of mind" far better than humans ever could.

It's like applying a billion years of directed evolution to Earth just to get a small set of ejecta to hit Mars. Sure, we've shipped a ton or two of stuff to Mars and colonized it with robots... but we've done a few other things along the way.

The fact that I can run trained models on my laptop and desktop without issue is amazing to me. Just as my VAX 11/780 eventually ran VMS 7.3, I hope to eventually run an LLM on my cheap smartphone.

- Capability Based Security - I lived through the era in the 1980s when you could just buy a stack of floppy disks full of programs and try them out on your PC with no worries at all. The only way we'll ever get back there is if we finally get capability-based security built into our operating systems.

Keeping WASM free of the POSIX virus is a close second, but not as generally useful.

- MIMO Software Defined Radio - $30 SDR dongles are amazingly useful, but I look forward to being able to build a passive radar system that can detect EVERYTHING in the sky overhead, no matter how stealthy, for my own amusement. I'd like to be able to do moonbounce communications from a sub-$1000 flat-panel MIMO array mounted on my garage roof.

- NanoVNA - These little things are freaking amazing, and dirt cheap. Quite handy for building an intuitive understanding of the otherwise black magic of RF design.

- Quantized Inertia - A new theory of physics about to get flight-tested in space.[1] It would be good to see some advancement toward making the human species interstellar.

- BitGrid - My own hallucination of the simplest possible petaflop-performance CPU. I'm stuck in analysis paralysis and need a shove. It's just a sea of 4x4 LUTs and latches, so it's slower than an FPGA, but it's massively parallel: if you can spread your algorithm across it like peanut butter, you can put data in one side and get a stream of a billion answers/second out the other side. (A minimal simulation sketch follows the footnote below.)

Imagine being able to run GPT-4 spread across 1000 of these things. It might take a second to get a token out, but you could have millions of simultaneous sessions going.

[1] https://celestrak.org/NORAD/elements/graph-orbit-data.php?CATNR=58338
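To make the "sea of 4x4 LUTs and latches" concrete, here is a minimal, hypothetical Python sketch of a BitGrid-style cell array: each cell takes one bit from each of its four neighbors and produces four output bits, each driven by its own 16-entry lookup table, with all outputs latched once per tick. The cell layout, names, and single-phase global latch are my own simplifying assumptions for illustration, not an official BitGrid spec.

    import random

    WIDTH, HEIGHT = 8, 8

    # Per-cell program: four LUTs, one per output direction, each a
    # 16-entry truth table indexed by the 4-bit input vector.
    def random_cell():
        return [[random.randint(0, 1) for _ in range(16)] for _ in range(4)]

    grid = [[random_cell() for _ in range(WIDTH)] for _ in range(HEIGHT)]

    # outputs[y][x][d]: latched output bit of cell (x, y) toward
    # direction d (0=N, 1=E, 2=S, 3=W). Off-grid inputs read as 0.
    outputs = [[[0] * 4 for _ in range(WIDTH)] for _ in range(HEIGHT)]

    def step(outputs):
        # One synchronous tick: every cell samples its neighbors'
        # latched outputs, looks up its four LUTs, and latches the
        # four new output bits.
        new = [[[0] * 4 for _ in range(WIDTH)] for _ in range(HEIGHT)]
        for y in range(HEIGHT):
            for x in range(WIDTH):
                n = outputs[y - 1][x][2] if y > 0 else 0           # cell above, its S output
                e = outputs[y][x + 1][3] if x < WIDTH - 1 else 0   # cell right, its W output
                s = outputs[y + 1][x][0] if y < HEIGHT - 1 else 0  # cell below, its N output
                w = outputs[y][x - 1][1] if x > 0 else 0           # cell left, its E output
                index = (n << 3) | (e << 2) | (s << 1) | w
                new[y][x] = [grid[y][x][d][index] for d in range(4)]
        return new

    for tick in range(4):
        outputs = step(outputs)
    print(outputs[0][0])  # the four output bits of the top-left cell

Because every cell latches on every tick, results ripple through the array one cell per tick: any single answer takes many ticks of latency, but once the pipeline is full, a new answer can emerge from the far edge every tick. That's the same latency-for-throughput trade behind the GPT-4 scenario above: a second per token, but enormous aggregate throughput.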