Reservoir Computing and Liquid State Machines are deep stuff!<p>"I'll tell you why it's not a scam, in my opinion: Tide goes in, tide goes out, never a miscommunication." -Bill O'Reilly<p>Reservoir computing (wikipedia.org)
99 points by gyre007 on Oct 18, 2018 | 20 comments<p><a href="https://news.ycombinator.com/item?id=18252958">https://news.ycombinator.com/item?id=18252958</a><p><a href="https://en.wikipedia.org/wiki/Reservoir_computing" rel="nofollow">https://en.wikipedia.org/wiki/Reservoir_computing</a><p><a href="https://en.wikipedia.org/wiki/Liquid_state_machine" rel="nofollow">https://en.wikipedia.org/wiki/Liquid_state_machine</a><p><a href="https://news.ycombinator.com/item?id=29654050">https://news.ycombinator.com/item?id=29654050</a><p>DonHopkins on Dec 22, 2021 | on: Analog computers were the most powerful computers ...<p>You should read Stephen Wolfram's "A New Kind of Science", and you'll get a much deeper and wider appreciation for just what a computer is and how Turing completeness can apply to so many situations. Even the simplest systems can be universal computers!<p><a href="https://en.wikipedia.org/wiki/A_New_Kind_of_Science" rel="nofollow">https://en.wikipedia.org/wiki/A_New_Kind_of_Science</a><p>>Generally, simple programs tend to have a very simple abstract framework. Simple cellular automata, Turing machines, and combinators are examples of such frameworks, while more complex cellular automata do not necessarily qualify as simple programs. It is also possible to invent new frameworks, particularly to capture the operation of natural systems. The remarkable feature of simple programs is that a significant percentage of them are capable of producing great complexity. Simply enumerating all possible variations of almost any class of programs quickly leads one to examples that do unexpected and interesting things. This leads to the question: if the program is so simple, where does the complexity come from? In a sense, there is not enough room in the program's definition to directly encode all the things the program can do. Therefore, simple programs can be seen as a minimal example of emergence. A logical deduction from this phenomenon is that if the details of the program's rules have little direct relationship to its behavior, then it is very difficult to directly engineer a simple program to perform a specific behavior. An alternative approach is to try to engineer a simple overall computational framework, and then do a brute-force search through all of the possible components for the best match.<p>Even a reservoir of water (or a non-linear mathematical model of one) can be used to piggyback arbitrary computation on the way liquid naturally behaves.<p>Here's a paper about literally using a bucket of water and some legos and sensors to perform pattern recognition with a "Liquid State Machine" (see Figure 1: The Liquid Brain):<p><a href="https://www.semanticscholar.org/paper/Pattern-Recognition-in-a-Bucket-Fernando-Sojakka/af342af4d0e674aef3bced5fd90875c6f2e04abc" rel="nofollow">https://www.semanticscholar.org/paper/Pattern-Recognition-in...</a><p>>Pattern Recognition in a Bucket. Chrisantha Fernando, Sampsa Sojakka. Published in ECAL 14 September 2003, Computer Science.<p>>This paper demonstrates that the waves produced on the surface of water can be used as the medium for a “Liquid State Machine” that pre-processes inputs so allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Interference between waves allows non-linear parallel computation upon simultaneous sensory inputs. 
Temporal patterns of stimulation are converted to spatial patterns of water waves upon which a linear discrimination can be made. Whereas Wolfgang Maass’ Liquid State Machine requires fine tuning of the spiking neural network parameters, water has inherent self-organising properties such as strong local interactions, time-dependent spread of activation to distant areas, inherent stability to a wide variety of inputs, and high complexity. Water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. An analogy is made between water molecules and neurons in a recurrent neural network.<p>This idea can be applied to digital neural networks, using a model of a liquid reservoir as a "black box", and training another neural network layer to interpret its output in response to inputs. Instead of training the water (which is futile, since water will do what it wants: as the apologetics genius Bill O'Reilly proclaims, "Tide goes in, tide goes out, never a miscommunication."), you just train a water interpreter (a linear output layer)!<p><a href="https://www.youtube.com/watch?v=NUeybwTMeWo" rel="nofollow">https://www.youtube.com/watch?v=NUeybwTMeWo</a><p>Reservoir Computing<p><a href="https://en.wikipedia.org/wiki/Reservoir_computing" rel="nofollow">https://en.wikipedia.org/wiki/Reservoir_computing</a><p>>Reservoir computing is a framework for computation derived from recurrent neural network theory that maps input signals into higher dimensional computational spaces through the dynamics of a fixed, non-linear system called a reservoir.[1] After the input signal is fed into the reservoir, which is treated as a "black box," a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output.[1] The first key benefit of this framework is that training is performed only at the readout stage, as the reservoir dynamics are fixed.[1] The second is that the computational power of naturally available systems, both classical and quantum mechanical, can be used to reduce the effective computational cost.[2]<p>>History: The concept of reservoir computing stems from the use of recursive connections within neural networks to create a complex dynamical system.[3] It is a generalisation of earlier neural network architectures such as recurrent neural networks, liquid-state machines and echo-state networks. Reservoir computing also extends to physical systems that are not networks in the classical sense, but rather continuous systems in space and/or time: e.g. a literal "bucket of water" can serve as a reservoir that performs computations on inputs given as perturbations of the surface.[4] The resultant complexity of such recurrent neural networks was found to be useful in solving a variety of problems including language processing and dynamic system modeling.[3] However, training of recurrent neural networks is challenging and computationally expensive.[3] Reservoir computing reduces those training-related challenges by fixing the dynamics of the reservoir and only training the linear output layer.[3]<p>>A large variety of nonlinear dynamical systems can serve as a reservoir that performs computations. 
In recent years semiconductor lasers have attracted considerable interest as computation can be fast and energy efficient compared to electrical components.<p>>Recent advances in both AI and quantum information theory have given rise to the concept of quantum neural networks.[5] These hold promise in quantum information processing, which is challenging to classical networks, but can also find application in solving classical problems.[5][6] In 2018, a physical realization of a quantum reservoir computing architecture was demonstrated in the form of nuclear spins within a molecular solid.[6] However, the nuclear spin experiments in [6] did not demonstrate quantum reservoir computing per se as they did not involve processing of sequential data. Rather the data were vector inputs, which makes this more accurately a demonstration of quantum implementation of a random kitchen sink[7] algorithm (also going by the name of extreme learning machines in some communities). In 2019, another possible implementation of quantum reservoir processors was proposed in the form of two-dimensional fermionic lattices.[6] In 2020, realization of reservoir computing on gate-based quantum computers was proposed and demonstrated on cloud-based IBM superconducting near-term quantum computers.[8]<p>>Reservoir computers have been used for time-series analysis purposes. In particular, some of their usages involve chaotic time-series prediction,[9][10] separation of chaotic signals,[11] and link inference of networks from their dynamics.[12]<p>Liquid State Machine<p><a href="https://en.wikipedia.org/wiki/Liquid_state_machine" rel="nofollow">https://en.wikipedia.org/wiki/Liquid_state_machine</a><p>>A liquid state machine (LSM) is a type of reservoir computer that uses a spiking neural network. An LSM consists of a large collection of units (called nodes, or neurons). Each node receives time varying input from external sources (the inputs) as well as from other nodes. Nodes are randomly connected to each other. The recurrent nature of the connections turns the time varying input into a spatio-temporal pattern of activations in the network nodes. The spatio-temporal patterns of activation are read out by linear discriminant units.<p>Echo State Network<p><a href="https://en.wikipedia.org/wiki/Echo_state_network" rel="nofollow">https://en.wikipedia.org/wiki/Echo_state_network</a><p>>The echo state network (ESN)[1][2] is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer (with typically 1% connectivity). The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can produce or reproduce specific temporal patterns. The main interest of this network is that although its behaviour is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector and can be differentiated easily to a linear system.
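To make the "training is performed only at the readout stage" idea concrete, here is a minimal echo state network sketch in plain numpy: a fixed, sparse, random reservoir is driven by a signal, and only a linear readout is fit with ridge regression. The reservoir size, spectral radius, and the toy next-step-prediction task are my own illustrative choices, not anything taken from the papers or articles quoted above.

import numpy as np

rng = np.random.default_rng(0)

# Fixed, random "reservoir": none of these weights are ever trained.
n_res = 300                                   # reservoir size (arbitrary choice)
sparsity = 0.05                               # ~5% of recurrent connections are nonzero
W = rng.normal(size=(n_res, n_res))
W *= rng.random((n_res, n_res)) < sparsity    # sparsify the recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius below 1 ("echo state" property)
W_in = rng.uniform(-0.5, 0.5, size=n_res)     # fixed random input weights

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return the state history."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.arange(4000)
u = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)
X = run_reservoir(u[:-1])                     # reservoir states x(t)
y = u[1:]                                     # target u(t+1)

washout = 100                                 # discard the initial transient
X_tr, y_tr = X[washout:3000], y[washout:3000]
X_te, y_te = X[3000:], y[3000:]

# The only training step: ridge regression for the linear readout W_out.
ridge = 1e-4
W_out = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(n_res), X_tr.T @ y_tr)

pred = X_te @ W_out
print("test RMSE:", np.sqrt(np.mean((pred - y_te) ** 2)))

Scaling the recurrent weights so the spectral radius stays below 1 is the usual way of getting the fading-memory behaviour the Wikipedia article describes; everything inside the reservoir stays fixed, and the single least-squares solve is the entire training procedure.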
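And to connect it to the bucket paper's XOR point without the water: a linear readout on the raw input alone cannot learn y(t) = u(t) XOR u(t-1), since that needs both memory and a nonlinearity, but the same kind of linear readout on the reservoir state can, because the state is a nonlinear mixture of the current and recent inputs. Again, this is just an illustrative sketch with made-up parameters, not the paper's actual setup.

import numpy as np

rng = np.random.default_rng(1)

# Fixed random reservoir, as before; only the readout will be fit.
n_res = 200
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # echo-state scaling
W_in = rng.uniform(-1, 1, size=n_res)

def states(u):
    """Reservoir state history for an input sequence u."""
    x = np.zeros(n_res)
    out = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        out.append(x.copy())
    return np.array(out)

# Random bit stream; the target is the XOR of the current and previous bit.
u = rng.integers(0, 2, size=3000)
y = (np.roll(u, 1) ^ u).astype(float)
X = states(2 * u - 1)                         # drive the reservoir with a +/-1 encoding of the bits

tr, te = slice(100, 2000), slice(2000, 3000)  # washout, train, test

def readout_accuracy(F):
    """Fit a least-squares linear readout on features F, threshold at 0.5."""
    w, *_ = np.linalg.lstsq(F[tr], y[tr], rcond=None)
    return np.mean((F[te] @ w > 0.5) == y[te])

# Readout on the raw input alone: no memory, no nonlinearity -> chance level.
raw = np.column_stack([u, np.ones_like(u)]).astype(float)
print("linear readout on raw input :", readout_accuracy(raw))
print("linear readout on reservoir :", readout_accuracy(X))

The reservoir isn't doing anything task-specific here, exactly as in the bucket: it just spreads the input out into a high-dimensional, history-dependent state, and the cheap linear readout does the rest.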