I feel like asking: did they solve the problem?<p>Let me see if I can state the problem: neural networks are non-linear because of their activation functions. You need a differentiable activation so you can take derivatives and back-propagate the error, more or less.<p>The consequence of the non-linearity is that you can't do some kind of shorthand calculation to figure out what the network will do. You have to crank through the whole network to see the result. There is no economy of thought; that is to say, there is no theory.<p>I am excited that they are working on it, but I would love a summary or overview of how they approach what I consider the basic problem.
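<p>A toy sketch of what I mean by "no shorthand" (my own illustration, not from the article): two linear layers always collapse into a single matrix, so a one-step shorthand exists; put a ReLU between them and no single matrix reproduces the map.

```python
import numpy as np

# Two-layer "network" with no activation: it collapses to one matrix,
# so there IS a shorthand for what it computes.
W1 = np.array([[1.0, -1.0],
               [2.0,  0.0]])
W2 = np.array([[1.0, 1.0]])
x = np.array([1.0, 2.0])

collapsed = W2 @ W1                # one matrix stands in for both layers
assert np.allclose(W2 @ (W1 @ x), collapsed @ x)

# Insert a nonlinearity (ReLU) between the layers and the collapse fails:
relu = lambda z: np.maximum(z, 0.0)
y_nonlinear = W2 @ relu(W1 @ x)    # array([2.])
y_linear = collapsed @ x           # array([1.])
assert not np.allclose(y_nonlinear, y_linear)
```

With the activation in place, the only way to know the output is to run the layers in sequence, which is the "crank through the network" problem.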