Neural networks largely operate on 32- or 64-bit floating point numbers, many millions of them. If you think about how this works at the hardware level, a GPU has many little cores for operating on matrices of these numbers, and feeding a 32-bit float into one of them requires 32 wires deep in the silicon.

Also note that neural networks are inherently noisy; we often deliberately insert a bit of noise into various parts of the computation graph.

In analog circuitry, only a single wire (or maybe two, if you're using a differential pair) would be needed to represent a noisy float.

If we had some sort of IC that could dynamically configure large analog computations, it might allow NN compute-graph computations to be improved by orders of magnitude: one wire instead of 32, real noise instead of artificial.

Have people ever tried to build something like an analog FPGA?
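
To make the noise-injection point concrete, here is a minimal sketch (assuming PyTorch, which the comment doesn't name) of a layer that explicitly adds Gaussian noise to its activations during training. This is the "artificial" noise that digital hardware has to compute on purpose, whereas an analog signal path would carry it for free:

    import torch
    import torch.nn as nn

    class NoisyLinear(nn.Module):
        """Linear layer that injects Gaussian noise into its output while training,
        mimicking the noise an analog signal path would add for free."""
        def __init__(self, in_features, out_features, noise_std=0.01):
            super().__init__()
            self.linear = nn.Linear(in_features, out_features)
            self.noise_std = noise_std

        def forward(self, x):
            y = self.linear(x)
            if self.training:
                # Artificial noise, explicitly sampled on digital hardware.
                y = y + self.noise_std * torch.randn_like(y)
            return y

    layer = NoisyLinear(128, 64)
    out = layer(torch.randn(8, 128))  # noise is injected while layer.training is True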