I believe this refers to work presented in
this journal article.
<a href="https://journals.aps.org/pre/abstract/10.1103/PhysRevE.101.062207" rel="nofollow">https://journals.aps.org/pre/abstract/10.1103/PhysRevE.101.0...</a><p>Abstract: Artificial neural networks are universal function approximators. They can forecast dynamics, but they may need impractically many neurons to do so, especially if the dynamics is chaotic. We use neural networks that incorporate Hamiltonian dynamics to efficiently learn phase space orbits even as nonlinear systems transition from order to chaos. We demonstrate Hamiltonian neural networks on a widely used dynamics benchmark, the Hénon-Heiles potential, and on nonperturbative dynamical billiards. We introspect to elucidate the Hamiltonian neural network forecasting.
Brings to mind this classic from the Jargon File:

http://www.catb.org/~esr/jargon/html/koans.html

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. “What are you doing?”, asked Minsky.

“I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied.

“Why is the net wired randomly?”, asked Minsky.

“I do not want it to have any preconceptions of how to play”, Sussman said. Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher.

“So that the room will be empty.”

At that moment, Sussman was enlightened.
I’ve said this before, but I think that a lack of physical modeling might be the key barrier for AV technology. Human drivers have a mental model of physics that they’ve honed for 17-18 hours a day since they were born.
Why do you need a neural network when you have the Hamiltonian mechanics of the system modeled? I've always understood Lagrangian/Hamiltonian mechanics to be methods of modeling the behavior of a system through the decomposition of the external constraints and forces acting on a body. In other words, you can understand a complex model by doing some calculus on the less complex constituents of the model.

I'm probably misunderstanding what they accomplished, but it sounds like they've increased the accuracy of a neural network model of a system, notably for edge cases, by training it on a complete model of said system.
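To make the "calculus on the less complex constituents" point concrete, here is a toy example of my own (not from the article): write down the kinetic and potential energy of a pendulum and let the Euler-Lagrange equation hand you the equation of motion.

    import sympy as sp

    t, m, l, g = sp.symbols('t m l g', positive=True)
    theta = sp.Function('theta')(t)
    theta_dot = sp.diff(theta, t)

    # Lagrangian = kinetic energy minus potential energy of a simple pendulum
    T = sp.Rational(1, 2) * m * l**2 * theta_dot**2
    V = -m * g * l * sp.cos(theta)
    L = T - V

    # Euler-Lagrange: d/dt(dL/d(theta_dot)) - dL/d(theta) = 0
    eom = sp.diff(sp.diff(L, theta_dot), t) - sp.diff(L, theta)
    print(sp.simplify(eom))   # m*l**2*theta'' + g*l*m*sin(theta)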
> the NAIL team incorporated Hamiltonian structure into neural networks

ML non-expert here. Is this the same as having an extra column in your input data that's a Hamiltonian of the raw input? Or a kind of neuron that can compute a Hamiltonian on an observation? Or something more complicated?

Is this like a specialized 'functional region' in a biological brain (Broca's area, the cerebellum)?
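Non-authoritative answer, but as I understand these models it is none of those: the Hamiltonian is not an extra input column, it is the network's only output. The net learns a scalar H(q, p), and the predicted time derivatives come from differentiating that scalar, so Hamilton's equations are built into the architecture rather than into the data. A rough PyTorch-style sketch, with names of my own invention:

    import torch
    import torch.nn as nn

    class HamiltonianNN(nn.Module):
        def __init__(self, dim=2, hidden=64):
            super().__init__()
            # the network outputs a single scalar: the learned Hamiltonian H(q, p)
            self.h = nn.Sequential(
                nn.Linear(2 * dim, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, 1),
            )

        def time_derivatives(self, z):
            # z is a batch of phase-space points [q, p] with requires_grad=True
            H = self.h(z).sum()
            dH = torch.autograd.grad(H, z, create_graph=True)[0]
            dHdq, dHdp = dH.chunk(2, dim=-1)
            # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq
            return torch.cat([dHdp, -dHdq], dim=-1)

    model = HamiltonianNN(dim=2)
    z = torch.randn(8, 4, requires_grad=True)   # batch of (q, p) states
    z_dot = model.time_derivatives(z)           # compare against observed derivatives

The training loss is then just how far these predicted derivatives are from the ones measured along real trajectories.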
Why not shamelessly plug my work here? I see no reason not to.

So, here it is: https://github.com/thesz/nn/tree/master/series

It is a proof-of-concept implementation of a neural network training process where the loss function is the potential energy in a Lagrangian, and I even incorporated a "speed of light": the "mass" of the particle gets corrected using the Lorentz factor m = m0/sqrt(1 - v^2/c^2).

Everything is done using ideas from a quite interesting paper about the power of lazy semantics: https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.4535

PS
Proof-of-concept here means it is grossly inefficient, mainly due to the amount of symbolic computation. Yet it works. In some cases. ;)
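Roughly, here is how I read the idea, in a few lines of Python (the repo is the authoritative source; the names and constants below are made up for illustration):

    import numpy as np

    def relativistic_step(theta, grad, v, m0=1.0, c=1.0, dt=0.1):
        # the loss gradient acts as a force on a particle of rest mass m0;
        # the Lorentz factor inflates the mass as the parameter velocity
        # approaches the "speed of light" c, capping the effective step size
        speed = np.linalg.norm(v)
        gamma = 1.0 / np.sqrt(max(1.0 - (speed / c) ** 2, 1e-12))
        v = v + dt * (-grad) / (m0 * gamma)
        speed = np.linalg.norm(v)
        if speed >= c:                      # keep the discrete step sub-luminal
            v *= 0.999 * c / speed
        return theta + dt * v, v

    # toy usage: the "potential energy" is V(theta) = ||theta||^2 / 2, so grad = theta
    theta, v = np.array([3.0, -2.0]), np.zeros(2)
    for _ in range(200):
        theta, v = relativistic_step(theta, theta, v)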
This sounds like the opposite of what Richard Sutton seemed to advocate for in his "Bitter Lesson" [0]. I don't know nearly enough to advocate for one thing or the other, but it is fascinating to see that those approaches seem to compete as we venture into the unknown.

[0] http://incompleteideas.net/IncIdeas/BitterLesson.html
Can someone with AI knowledge please clarify: does this mean we can build 'rules-based systems' into AI to synthesise intelligence from both domains?

If so, this would be dramatic, no?

If you could teach a translation service 'grammar' and then also leverage the pattern matching, could this be a fundamentally new idea in AI application?

Or is this just something specific?
So can you teach a NN an equation of motion, and if so, would it execute faster than numerically integrating said equation? It could have an impact on physics simulations, although the accuracy might not be as good.
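For a crude sense of that trade-off: a classic RK4 integrator needs four evaluations of the right-hand side per step, while a trained net is a single forward pass, so which is faster depends on how expensive the right-hand side is relative to the network. A sketch of my own, with a simple pendulum standing in for the real system:

    import numpy as np

    def pendulum_rhs(state, g=9.81):
        # unit-mass, unit-length pendulum: dq/dt = p, dp/dt = -g*sin(q)
        q, p = state
        return np.array([p, -g * np.sin(q)])

    def rk4_step(state, dt=0.01):
        # four right-hand-side evaluations per step
        k1 = pendulum_rhs(state)
        k2 = pendulum_rhs(state + 0.5 * dt * k1)
        k3 = pendulum_rhs(state + 0.5 * dt * k2)
        k4 = pendulum_rhs(state + dt * k3)
        return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    # a trained network would replace this whole step with one forward pass,
    # e.g. state_next = net(state); for a right-hand side this cheap, RK4
    # almost certainly wins, but for expensive force models it might not.

The accuracy caveat is real, too: a plain learned step tends to drift in energy over long runs, which is exactly the failure mode the Hamiltonian structure is meant to address.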