I'm fond of this old RAND report from Dreyfus, which is worth skimming if you're mathematically inclined: Dynamic Programming and the Calculus of Variations, <a href="https://www.rand.org/content/dam/rand/pubs/reports/2006/R441.pdf" rel="nofollow">https://www.rand.org/content/dam/rand/pubs/reports/2006/R441...</a><p>One important takeaway is that dynamic programming in the Bellman formulation is a discrete analogue of Hamilton-Jacobi theory in how it writes down an equation for the optimal value as a function of a given endpoint rather than writing down an equation for the path as with the Euler-Lagrange equations. (You can reconstruct the path from the value function after the fact by gradient descent.) The relationship between Hamilton-Jacobi and Euler-Lagrange is the classical version of wave-particle duality. A concrete example in geometrical optics is the eikonal equation, a Hamilton-Jacobi type PDE, versus the geodesic equation, an Euler-Lagrange type ODE. Not coincidentally, one common numerical method for the eikonal equation called the fast marching method is a dynamic programming algorithm, very similar to Dijkstra's algorithm for shortest paths.<p>It should be mentioned that any "local" equation like a PDE or ODE cannot describe a globally optimal solution without strong assumptions such as convexity. In fact, satisfying the Euler-Lagrange equation isn't even sufficient for local optimality without further qualifications (Weierstrass conditions). But the Bellman dynamic programming equation, being recursive, can describe globally optimal solutions.
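To make the value-function takeaway concrete, here is a minimal sketch of the Bellman formulation on a graph shortest-path problem (the graph, names, and costs are mine, not from the report): compute the optimal value V from every node to a target with a Dijkstra-style DP, then reconstruct the path after the fact by greedy descent on V, as described above.

```python
import heapq

def value_function(graph, target):
    # Bellman/Dijkstra DP: V[v] = cost of the cheapest path from v to target.
    # graph: {node: [(neighbor, edge_cost), ...]}, assumed undirected here.
    V = {target: 0.0}
    pq = [(0.0, target)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > V.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            if d + w < V.get(v, float("inf")):
                V[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return V

def descend(graph, V, start):
    # Reconstruct the optimal path by "gradient descent" on the value
    # function: from each node, step to the neighbor minimizing cost-to-go.
    path = [start]
    while V[path[-1]] > 0:
        u = path[-1]
        path.append(min(graph[u], key=lambda e: e[1] + V[e[0]])[0])
    return path
```

Note that V is computed once for the endpoint and then yields the optimal path from any start, which is exactly the Hamilton-Jacobi-style "field of values" view rather than the single-trajectory Euler-Lagrange view.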
> Memorisation<p>It is memoization, not memorisation! Although for two weeks in algorithms class I did in fact think our prof was just pronouncing memorize in a cutesy manner.
While dynamic programming is taught in almost all algorithms classes, I think I finally grokked it only after implementing it in a few practice problems. Would strongly recommend giving a few of the exercises listed here a shot: <a href="https://leetcode.com/tag/dynamic-programming/" rel="nofollow">https://leetcode.com/tag/dynamic-programming/</a>
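For a taste of what those exercises look like, here is a minimal bottom-up sketch of one of the classic problems of that type, coin change (fewest coins summing to a target; the function name and signature are mine):

```python
def coin_change(coins, amount):
    # dp[a] = fewest coins needed to make amount a (inf if impossible).
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] < INF else -1
```

The whole trick, once it clicks, is that dp[a] only depends on smaller subproblems dp[a - c], so filling the table left to right is enough.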
Favorite examples for applications of dynamic programming? Mine are sequence alignment/BLAST in bioinformatics, but I'm sure there are many of which I am not aware in other fields.
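The DP core of sequence alignment is Needleman-Wunsch (BLAST itself is a heuristic built around alignment ideas, so take this as the textbook version, with illustrative scoring parameters of my choosing):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    # Needleman-Wunsch: F[i][j] = best score aligning a[:i] with b[:j].
    F = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        F[i][0] = i * gap            # a[:i] aligned against all gaps
    for j in range(1, len(b) + 1):
        F[0][j] = j * gap            # b[:j] aligned against all gaps
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            F[i][j] = max(
                F[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch),
                F[i - 1][j] + gap,   # gap in b
                F[i][j - 1] + gap,   # gap in a
            )
    return F[len(a)][len(b)]
```

Each cell depends only on its three upper-left neighbors, which is what makes the quadratic-time DP work where naive enumeration of alignments would be exponential.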
In O.R. graduate school, Professor Gene Woolsey told us that he'd rise from the grave and stand on our desks screaming 'No! No! No!' if we ever actually used it to solve a practical problem.<p>IIRC, his complaints were about the speed of formulation, the difficulty of understanding and communicating the model to others, and the processing required to regenerate answers when the model changed.<p>I believe Optiant used dynamic programming for supply chain optimization, so people do, or did, use it for practical problem solving... I think.
For the last part about economic optimization, I would not approach it with dynamic programming. As evidenced by Go and many other games, making the "locally" best move does not guarantee the best result in the end. If brute-forcing is intractable, the state of the art is Monte Carlo Tree Search, as evidenced by its dominance in board games, Magic: The Gathering, poker, robotics competitions, RTS games, etc.
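For anyone who hasn't seen MCTS, the select/rollout loop is small enough to sketch; here it is as a toy UCT search on one-pile Nim (take 1-3 stones, taking the last stone wins; the game, constants, and names are all mine, just to show the shape of the algorithm):

```python
import math
import random

def mcts_nim_move(pile, iters=300, c=1.4):
    # Minimal one-level UCT: pick a root move by repeated random playouts.
    stats = {m: [0, 0.0] for m in range(1, min(3, pile) + 1)}  # move -> [visits, wins]

    def rollout(stones):
        # Random playout; returns True if the player about to move wins.
        to_move = True
        while True:
            stones -= random.randint(1, min(3, stones))
            if stones == 0:
                return to_move  # whoever took the last stone wins
            to_move = not to_move

    for n in range(1, iters + 1):
        # Selection: UCB1 balances the empirical win rate (exploitation)
        # against the sqrt exploration bonus for rarely tried moves.
        m = max(stats, key=lambda m: float("inf") if stats[m][0] == 0
                else stats[m][1] / stats[m][0]
                + c * math.sqrt(math.log(n) / stats[m][0]))
        left = pile - m
        # Simulation: we win if the move ends the game, or the opponent
        # (who moves next from `left`) goes on to lose the random playout.
        won = left == 0 or not rollout(left)
        stats[m][0] += 1
        stats[m][1] += won
    return max(stats, key=lambda m: stats[m][0])  # most-visited move
```

A real engine would of course grow a tree below the root and use a learned policy/value instead of uniform rollouts, but the point stands: no Bellman recursion over the full state space is needed.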
I learned it as part of speech processing, first for dynamic time warping and then for the Viterbi and Baum-Welch algorithms. Together with hidden Markov models, it's a thing of beauty how it is used to model speech.<p>All of this is explained very intuitively at speech.zone<p>The Wikipedia entry has a fun (really) explanation of why its creator named it the way he did.
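Viterbi is a nice compact illustration of the DP idea in the HMM setting: the best path to each state at time t extends the best path to some state at time t-1. A minimal sketch (signature and the toy weather model below are the textbook/Wikipedia example, not anything specific to speech):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (probability of best path ending in state s at time t, that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][prev][0] * trans_p[prev][s] * emit_p[s][o],
                 V[-2][prev][1] + [s])
                for prev in states)
            V[-1][s] = (prob, path)
    return max(V[-1].values())  # (probability, most likely state sequence)
```

(A production version would work in log space to avoid underflow on long sequences.)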
Please note, the last name of the author referenced in the article is "Trick", not Tick.<p><a href="http://mat.gsia.cmu.edu/classes/dynamic/dynamic.html" rel="nofollow">http://mat.gsia.cmu.edu/classes/dynamic/dynamic.html</a>
There appears to be a missing return statement in the if block of the Fibonacci number example. It should look like this: <a href="https://repl.it/NLIf/1" rel="nofollow">https://repl.it/NLIf/1</a>
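Since the article's exact snippet isn't reproduced in the thread, here's an assumed reconstruction of the memoized version with the return in place:

```python
def fib(n, memo={}):
    # Memoized Fibonacci. The `return` on the cache hit is the statement
    # that was reportedly missing from the article's if block.
    if n in memo:
        return memo[n]
    memo[n] = n if n < 2 else fib(n - 1) + fib(n - 2)
    return memo[n]
```

Without that return, the cache-hit branch falls through and the function recurses anyway, so the memoization buys nothing (or the function returns None, depending on how the original was written).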
> Tail recursion [3], a variant of traditional recursion implements memoisation, which uses memoisation very economically.<p>I don't understand this part, can anybody explain?