Back in the dawn of time, the machines I worked on had application-specific instruction set architectures. IBM made 'scientific' and 'business' computers; the System/360 line converged these.

With today's microcoded architectures it would be possible to dynamically load a custom, application-specific instruction set into microcode from an FPGA. This could greatly increase the efficiency of certain kinds of computation, and the FPGA could be updated on the fly with new microcode architectures.

For example: a Lisp-machine architecture for running Lisp code, a Prolog machine architecture handling backtracking, an APL architecture for array processing, a SQL architecture for databases, and so on.

These instruction sets could be swapped dynamically. For example, the register bank could be configured to match a particular SQL table structure and manipulated with SQL-specific instructions; register banks configured as content-addressable memory could greatly speed up table searches (a rough software sketch is at the end of this comment).

In particular, with RISC-V one could define a special-case extension instruction set that could be 'swapped in' to the microcode, making it well suited to driving special-purpose hardware such as a GPU mining bitcoin, or to implementing a quantum-computer instruction set that manipulates unitary matrices.

I feel we've reached the limits of what a general-purpose architecture can do.

Intel has an FPGA/CPU pair (which unfortunately I can't get because I'm not a huge corporation), but I don't think the FPGA can modify the CPU's microcode. Perhaps they'll hit on the idea through their marriage with the RISC-V community.

The ability to modify the data paths among a set of general-purpose processor components (register banks, caches, integer ALU, float ALU, vector ALU, pipeline lookahead, etc.) for specific applications by rewriting the microcode would be a real leap forward.
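To make the CAM idea concrete, here is a minimal C sketch of what a content-addressable register bank would do; the bank size, the cam_match() helper, and the match-vector encoding are all hypothetical. The point is that hardware would perform every comparison in parallel in a single cycle, whereas software has to loop:

    #include <stdint.h>
    #include <stdio.h>

    #define CAM_ENTRIES 16   /* hypothetical size of the register bank */

    /* Software model of a content-addressable register bank: hardware would
     * compare the key against every entry in parallel and return a match
     * vector in one cycle; this loop is the serial equivalent. */
    static uint32_t cam_match(const uint64_t bank[CAM_ENTRIES], uint64_t key)
    {
        uint32_t hits = 0;
        for (int i = 0; i < CAM_ENTRIES; i++)
            if (bank[i] == key)
                hits |= 1u << i;      /* one bit per matching entry */
        return hits;
    }

    int main(void)
    {
        uint64_t bank[CAM_ENTRIES] = { [3] = 42, [9] = 42, [12] = 7 };
        printf("match vector for 42: 0x%04x\n", cam_match(bank, 42)); /* 0x0208 */
        return 0;
    }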
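And here is one way a swapped-in RISC-V extension could surface to software. The custom-0 major opcode (0x0b) is reserved for exactly this kind of vendor/application extension, and binutils lets you emit an instruction there with the .insn directive. The cam_lookup name, the funct fields, and the instruction's semantics are made up for illustration; only the opcode space and the .insn syntax are standard, and the instruction would trap as illegal unless the microcode/FPGA had actually loaded it:

    #include <stdint.h>

    /* Hypothetical custom instruction in the RISC-V custom-0 opcode space:
     * rd = result of a CAM lookup of the key in rs1 against bank rs2. */
    static inline uint64_t cam_lookup(uint64_t key, uint64_t bank_id)
    {
        uint64_t hit;
        __asm__ volatile (".insn r 0x0b, 0x0, 0x00, %0, %1, %2"
                          : "=r"(hit)
                          : "r"(key), "r"(bank_id));
        return hit;
    }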