
Teaching physics to neural networks removes 'chaos blindness'

143 points by JacobLinney, almost 5 years ago

10 comments

vajrabum, almost 5 years ago
I believe this refers to work presented in this journal article: https://journals.aps.org/pre/abstract/10.1103/PhysRevE.101.062207

Abstract: Artificial neural networks are universal function approximators. They can forecast dynamics, but they may need impractically many neurons to do so, especially if the dynamics is chaotic. We use neural networks that incorporate Hamiltonian dynamics to efficiently learn phase space orbits even as nonlinear systems transition from order to chaos. We demonstrate Hamiltonian neural networks on a widely used dynamics benchmark, the Hénon-Heiles potential, and on nonperturbative dynamical billiards. We introspect to elucidate the Hamiltonian neural network forecasting.
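A toy sketch of why Hamiltonian structure matters for forecasting (this is an illustration of the general idea, not the paper's code): a structure-agnostic integrator like explicit Euler lets the energy of a harmonic oscillator drift, while symplectic Euler, which respects the Hamiltonian structure, keeps it bounded. A Hamiltonian neural network builds the same kind of structure into its learned dynamics.

```python
import math

# Harmonic oscillator H(q, p) = p^2/2 + q^2/2, so
# dq/dt = dH/dp = p and dp/dt = -dH/dq = -q.

def energy(q, p):
    return 0.5 * (p * p + q * q)

def explicit_euler(q, p, dt, steps):
    # Structure-agnostic: updates q and p from the *old* state.
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

def symplectic_euler(q, p, dt, steps):
    # Structure-respecting: momentum first, then position with new momentum.
    for _ in range(steps):
        p = p - dt * q
        q = q + dt * p
    return q, p

q0, p0, dt, steps = 1.0, 0.0, 0.01, 10_000  # 100 time units
e0 = energy(q0, p0)
qe, pe = explicit_euler(q0, p0, dt, steps)
qs, ps = symplectic_euler(q0, p0, dt, steps)
print(f"initial energy        : {e0:.4f}")
print(f"explicit Euler energy : {energy(qe, pe):.4f}")  # drifts well above 0.5
print(f"symplectic energy     : {energy(qs, ps):.4f}")  # stays near 0.5
```

Over 10,000 steps the explicit-Euler energy grows by roughly (1 + dt^2)^steps, while the symplectic trajectory stays on a nearby closed orbit, which is the kind of long-horizon stability the abstract is after.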
_iyig, almost 5 years ago
Brings to mind this classic from the Jargon File: http://www.catb.org/~esr/jargon/html/koans.html

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. “What are you doing?”, asked Minsky.

“I am training a randomly wired neural net to play Tic-Tac-Toe,” Sussman replied.

“Why is the net wired randomly?”, asked Minsky.

“I do not want it to have any preconceptions of how to play,” Sussman said. Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher.

“So that the room will be empty.”

At that moment, Sussman was enlightened.
keenmaster, almost 5 years ago
I’ve said this before, but I think that a lack of physical modeling might be the key barrier for AV technology. Human drivers have a mental model of physics that they’ve honed for 17-18 hours a day since they were born.
mywittyname, almost 5 years ago
Why do you need a neural network when you have the Hamiltonian mechanics of the system modeled? I've always understood Lagrangian/Hamiltonian mechanics to be methods of modeling the behavior of a system through the decomposition of the external constraints and forces acting on a body. In other words, you can understand a complex model by doing some calculus on its less complex constituents.

I'm probably misunderstanding what they accomplished, but it sounds like they've increased the accuracy of a neural network model of a system, notably for edge cases, by training it on a complete model of said system.
awinter-py, almost 5 years ago
> the NAIL team incorporated Hamiltonian structure into neural networks

ML non-expert here. Is this the same as having an extra column of your input data that's a Hamiltonian of the raw input? Or a kind of neuron that can compute a Hamiltonian on an observation? Or something more complicated?

Is this like a specialized 'functional region' in a biological brain (Broca's area, cerebellum)?
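My reading of how Hamiltonian neural networks typically do this (a sketch, not the paper's code): it is neither an extra input column nor a special neuron. The network outputs a single scalar H_theta(q, p), and the predicted dynamics come from differentiating that output: dq/dt = dH/dp, dp/dt = -dH/dq. Below, a hand-picked parametric function stands in for the network and central differences stand in for autograd.

```python
def H_theta(q, p, a=0.5, b=0.5):
    """Stand-in for a neural net mapping a state (q, p) to one scalar."""
    return a * p * p + b * q * q

def predicted_vector_field(q, p, eps=1e-5):
    # Differentiate the scalar output to get the dynamics.
    dH_dq = (H_theta(q + eps, p) - H_theta(q - eps, p)) / (2 * eps)
    dH_dp = (H_theta(q, p + eps) - H_theta(q, p - eps)) / (2 * eps)
    return dH_dp, -dH_dq  # Hamilton's equations: (dq/dt, dp/dt)

# Training would tune the parameters of H_theta so this field matches
# observed (dq/dt, dp/dt) data; conservation of H is then built in
# rather than something the network must discover on its own.
qdot, pdot = predicted_vector_field(q=1.0, p=2.0)
print(qdot, pdot)
```

So the constraint lives in the architecture: whatever scalar the network learns, the derived vector field automatically conserves it along trajectories.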
thesz, almost 5 years ago
Why not shamelessly plug my work here? I see no reason not to.

So, here it is: https://github.com/thesz/nn/tree/master/series

It is a proof-of-concept implementation of a neural-network training process in which the loss function is the potential energy in a Lagrangian, and I even incorporated a "speed of light": the "mass" of a particle gets corrected using the Lorentz factor m = m0/sqrt(1 - v^2/c^2).

Everything is done using ideas from a quite interesting paper about the power of lazy semantics: https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.4535

PS: Proof-of-concept here means it is grossly inefficient, mainly due to the amount of symbolic computation. Yet it works. In some cases. ;)
cmehdy, almost 5 years ago
This sounds like the opposite of what Richard Sutton seemed to advocate for in his "Bitter Lesson" [0]. I don't know nearly enough to advocate for one thing or the other, but it is fascinating to see that those approaches seem to compete as we venture into the unknown.

[0] http://incompleteideas.net/IncIdeas/BitterLesson.html
jariel, almost 5 years ago
Can someone with AI knowledge please clarify: does this mean we can build 'rules-based systems' into AI to synthesise intelligence from both domains?

If so, this would be dramatic, no?

If you could teach a translation service 'grammar' and then also leverage the pattern matching, could this be a 'fundamental' new idea in AI application?

Or is this just something specific?
castratikron, almost 5 years ago
So can you teach a NN an equation of motion, and if so, would it execute faster than numerically integrating said equation? This could have an impact on physics simulations, although the accuracy might not be as good.
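The potential speedup comes from a trained forecaster acting like a closed-form map from time to state, evaluated in one pass, while an integrator must take t/dt small steps to reach the same time. A toy sketch of that trade-off (assumptions: the "network" is stood in for by the harmonic oscillator's exact solution, which is what a well-trained forecaster would approximate; this is not a benchmark):

```python
import math

def forecast(t):
    """Stand-in for a trained network: exact solution q(t) of the
    harmonic oscillator with q(0) = 1, p(0) = 0. One evaluation."""
    return math.cos(t)

def integrate(t, dt=1e-3):
    """Symplectic Euler from 0 to t: t/dt sequential steps."""
    q, p = 1.0, 0.0
    for _ in range(round(t / dt)):
        p -= dt * q
        q += dt * p
    return q

t = 10.0
print(forecast(t), integrate(t))  # close, but 1 call vs 10,000 steps
```

The accuracy caveat in the comment is the real question: the integrator's error shrinks predictably with dt, while a network's error depends on how well it generalizes, especially far from its training data.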
athesyn, almost 5 years ago
This sounds pretty terrifying.