Some of the older posts are very cute. I enjoyed <a href="https://greydanus.github.io/2020/12/01/scaling-down/" rel="nofollow">https://greydanus.github.io/2020/12/01/scaling-down/</a>, which shows how many high-powered tricks you can pull off with a tiny NN of the sort that trains in seconds.
This corresponds to Section 1.4 of SICM (Structure and Interpretation of Classical Mechanics), although SICM doesn't expose the underlying optimization method in the library interfaces. The path is represented as a polynomial. I'd have to check whether they also do gradient descent.<p><a href="https://groups.csail.mit.edu/mac/users/gjs/6946/sicm-html/book-Z-H-10.html#%_sec_Temp_58" rel="nofollow">https://groups.csail.mit.edu/mac/users/gjs/6946/sicm-html/bo...</a>
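The parametric-path idea can be sketched in a few lines. This is my own reconstruction in Python, not SICM's Scheme code: pick a family of candidate paths that satisfies the endpoint conditions, evaluate the discretized action, and minimize over the free parameter.

```python
import numpy as np

# Free particle from x(0)=0 to x(1)=1; candidate paths x(t) = t + a*t*(1-t)
# satisfy the endpoints for every a. The action S = integral of 0.5*xdot^2 dt
# should be minimized by the straight line, i.e. a = 0.

def action(a, n=1000):
    t = np.linspace(0.0, 1.0, n)
    xdot = 1.0 + a * (1.0 - 2.0 * t)              # d/dt [t + a*t*(1-t)]
    return np.sum(0.5 * xdot**2) * (t[1] - t[0])  # simple Riemann sum

a_grid = np.linspace(-1.0, 1.0, 2001)
best = min(a_grid, key=action)
print(best)  # ~0: the straight line wins
```

A brute-force grid search stands in for whatever minimizer SICM actually uses; the point is only that once the path is parametric, finding it is an ordinary optimization problem.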
While the idea is obviously correct, the paper itself suffers from extremely sloppy writing.<p>They discretize the integral as a discrete sum, but then forget to discretize the variables by substituting x with x(t_i), or at least x_i, and the same for dot x.
They put the objective function x hat = argmin S(x) last, when it is the most important aspect.<p>In the equation where x hat must fulfill the Euler-Lagrange equation for all t, they butchered the application of the derivative at a point.<p>It should look more like this:<p><a href="https://wikimedia.org/api/rest_v1/media/math/render/svg/6efe74342a2cc42ad4fc5e120bf5c46d76777f4e" rel="nofollow">https://wikimedia.org/api/rest_v1/media/math/render/svg/6efe...</a><p>You need to explicitly pass x(t), dot x(t), and t as arguments into the derivative. Their notation implies either that you take the derivative with respect to a constant (not evaluated at a point), which always returns zero (a blatantly banal property), or that the function behind x(t) (= the laws of physics) varies over time (shudder).<p>Overall this was extremely unpleasant to read, even though the approach is neat.
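For what it's worth, the discretization the paper is gesturing at is easy to state precisely. Here is a minimal sketch (my own code, using free fall as an assumed example, not anything from the paper): discretize S as a sum over grid points, then gradient-descend on the interior x_i with the endpoints held fixed.

```python
import numpy as np

# Discretized action for free fall (L = 0.5*m*xdot^2 - m*g*x):
#   S = sum_i [0.5*m*((x_{i+1}-x_i)/dt)^2 - m*g*x_i] * dt
# Gradient-descend on the interior points x_1..x_{N-1}, with x_0, x_N fixed.

m, g = 1.0, 9.8
T, N = 1.0, 50
dt = T / N
t = np.linspace(0.0, T, N + 1)
x = np.zeros(N + 1)              # boundary conditions: x(0) = x(T) = 0

def action_grad(x):
    """Analytic dS/dx_j for the interior points j = 1..N-1."""
    grad = np.zeros_like(x)
    # kinetic term contributes m*(2*x_j - x_{j-1} - x_{j+1})/dt
    grad[1:N] = m * (2 * x[1:N] - x[0:N-1] - x[2:N+1]) / dt
    # potential term contributes -m*g*dt
    grad[1:N] -= m * g * dt
    return grad                  # grad[0], grad[N] stay 0: endpoints fixed

lr = 0.005
for _ in range(20000):
    x -= lr * action_grad(x)

# The stationary path is the parabola x(t) = (g/2)*t*(T - t).
exact = 0.5 * g * t * (T - t)
print(np.max(np.abs(x - exact)))  # tiny
```

For this Lagrangian the discretized action is convex in the path, so plain gradient descent genuinely converges; in general the stationary action can be a saddle point, which is one of the details the paper should have spelled out.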
I don't know if I ever put the code online, but I did something similar a few years back for a bow-deflection solver using Hooke's law and finite elements. I was planning to solve for arrow speed as well but never got around to it. I've kept the technique in my box of tools, though, because it's conceptually simpler for me to set up an optimization problem than an ODE solver. Very cool write-up.
> Nevertheless in a deterministic system you can know a future state without calculating intermediary states.<p>Exactly wrong. See the halting problem: a Turing machine is fully deterministic, yet in general there is no shortcut to its future state other than running it.
>Some, like the double pendulum or the three-body problem, are deterministic but chaotic. In other words, their dynamics are predictable but we can’t know their state at some time in the future without simulating all the intervening states.<p>Literal nonsense. Everything in the second sentence is false.<p>Deterministic means that the state at some point in time fixes the state at all future points in time. Nevertheless, in a deterministic system you can know a future state without calculating intermediary states.<p>Chaotic means that future states depend sensitively on the initial state. Nevertheless, a chaotic system's future states can be known without calculating intermediary states; you can even have an <i>analytic</i> solution to a chaotic system. Furthermore, chaos can mean that you <i>can't</i> calculate future states from initial states: numerical ODE solvers in particular have errors which grow exponentially in time, so simulating intermediate states does not give you the solution to the problem.
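A concrete instance of the analytic-solution point (a standard textbook example, not from the article): the logistic map at r = 4 is chaotic, yet has a closed-form solution, so the state at step n can be computed without visiting steps 1 through n-1.

```python
import math

# Logistic map x_{n+1} = 4*x_n*(1 - x_n). Substituting x_n = sin^2(theta_n)
# gives theta_{n+1} = 2*theta_n, hence the closed form
#   x_n = sin^2(2^n * arcsin(sqrt(x_0))).

def logistic_iterated(x0, n):
    x = x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
    return x

def logistic_closed_form(x0, n):
    theta0 = math.asin(math.sqrt(x0))
    return math.sin((2 ** n) * theta0) ** 2

x0 = 0.3
print(logistic_iterated(x0, 10), logistic_closed_form(x0, 10))  # agree closely
```

For small n the two agree to high precision; for large n both drift apart in float64, since rounding errors are amplified exponentially, which is exactly the parent's point about numerical solvers.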