I spent quite a bit of time working through the Kalman filter content in Sebastian Thrun's book "Probabilistic Robotics" [1] a while back. I ended up making some notes [2] along the way that might be of interest if you're trying to get a grasp of everything that's going on. One other person on the Internet that I know of thought they were useful, so I'll post the link here. This was part of a project to build a K-8 robotics curriculum (no, not teaching the kids Kalman filters, but <i>I</i> wanted to know how it all worked before starting on a curriculum). The book is really good if you want to make robots that can navigate uncertain environments.<p>Edit: I wish I'd had access to this article when I was going through that process. This is really well done.<p>1. <a href="http://amzn.com/0262201623" rel="nofollow">http://amzn.com/0262201623</a><p>2. <a href="https://github.com/aethertap/probabilistic-robotics" rel="nofollow">https://github.com/aethertap/probabilistic-robotics</a>
The best presentation that I've ever seen of what a Kalman filter really is comes from a SIAM Review article, "A Fresh Look at the Kalman Filter" by Jeffrey Humpherys, Preston Redd, and Jeremy West.<p><a href="http://epubs.siam.org/doi/abs/10.1137/100799666" rel="nofollow">http://epubs.siam.org/doi/abs/10.1137/100799666</a><p>It sets up the discrete-time linear system and then uses a minimization principle to show what's going on. I can highly recommend it, especially for people coming to Kalman filters from a math or optimization background.
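The rough idea, as I remember it (notation mine, not necessarily the paper's): the state estimates solve a weighted least-squares problem over the whole trajectory,

$$
\min_{x_0,\dots,x_N}\;
\|x_0 - \mu_0\|_{P_0^{-1}}^2
\;+\; \sum_{t=0}^{N-1} \|x_{t+1} - A x_t\|_{Q^{-1}}^2
\;+\; \sum_{t=1}^{N} \|z_t - H x_t\|_{R^{-1}}^2 ,
$$

and the Kalman recursion amounts to solving this incrementally, one time step at a time, rather than all at once.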
I'd like to add Roger Labbe's free book "Kalman and Bayesian Filters in Python" to the mix. Along with Thrun's book (which Labbe's citations pointed me to), this is the resource that finally drove it home for me.<p><a href="https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python" rel="nofollow">https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Pyt...</a><p>For people wondering about applications, one of the popular recent uses is sensor fusion in virtual reality.
For people familiar with Gaussian processes, it may help to think of Kalman filters as a special case of GPs in which you can construct the inverse of the covariance matrix directly, and that inverse has a tridiagonal structure (block-tridiagonal for vector-valued states).<p>The result is a really efficient Bayesian regression algorithm, since tridiagonal systems can be solved in linear time.
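To make that concrete, here's a small numerical sketch (my own toy example, not from any of the references above): for a scalar linear-Gaussian state-space model, the joint covariance of the trajectory is dense, but its inverse comes out tridiagonal, which is what the linear-time Kalman recursions exploit.

```python
import numpy as np

# Scalar linear-Gaussian model: x_t = a*x_{t-1} + w_t, w_t ~ N(0, q), x_0 ~ N(0, p0).
a, q, p0, n = 0.9, 0.5, 1.0, 8

# Marginal variances v_t, then joint covariance Cov(x_i, x_j) = a^(j-i) * v_i for i <= j.
v = np.empty(n)
v[0] = p0
for t in range(1, n):
    v[t] = a**2 * v[t - 1] + q

cov = np.empty((n, n))
for i in range(n):
    for j in range(n):
        lo, hi = min(i, j), max(i, j)
        cov[i, j] = a ** (hi - lo) * v[lo]

# The precision (inverse covariance) is tridiagonal: entries with |i-j| >= 2 vanish.
prec = np.linalg.inv(cov)
mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) >= 2
print(np.max(np.abs(prec[mask])))  # numerically ~0
```

The tridiagonal structure is just the Markov property in Gaussian form: each state is conditionally independent of the rest given its two neighbors in time.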
What's even more interesting is that there is some experimental evidence suggesting the human brain implements something like Kalman filtering for certain estimation tasks. I <i>think</i> this was one of the papers about it: <a href="http://papers.nips.cc/paper/3665-a-neural-implementation-of-the-kalman-filter.pdf" rel="nofollow">http://papers.nips.cc/paper/3665-a-neural-implementation-of-...</a>
Another nice explanation, a bit more visual.<p><a href="http://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/" rel="nofollow">http://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures...</a>
What many don't know about Kalman filters (and what took me a while to realize as well) is that the Kalman filter is just recursive least squares.<p>It's just formulated a bit differently, so that the incremental update cost depends on the dimensionality of the observation rather than the dimensionality of the estimated state. Depending on the dimensions, this can be a lot more efficient.
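A quick sanity check of that equivalence (again my own toy example): filtering a constant state with no process noise is exactly recursive least squares, which for this setup collapses to the running mean of the measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
true_x, r, n = 5.0, 1.0, 200          # constant state, measurement variance, sample count
zs = true_x + rng.normal(0.0, np.sqrt(r), n)

# Scalar Kalman filter with a static state (no process noise): the predict step is a no-op.
x_hat, p = 0.0, 1e12                  # diffuse prior so the initial guess barely matters
for z in zs:
    k = p / (p + r)                   # Kalman gain
    x_hat = x_hat + k * (z - x_hat)   # measurement update
    p = (1.0 - k) * p                 # posterior variance shrinks with each measurement

# Batch least squares on the same data is just the sample mean here.
print(x_hat, zs.mean())               # essentially identical
```

Once you add process noise and a nontrivial transition model, the same recursion becomes the full filter; the least-squares view is still there underneath.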
Adaptive Filter Theory by Simon Haykin is the single best engineering book I've used.
<a href="http://www.amazon.com/Adaptive-Filter-Theory-Simon-Haykin/dp/013267145X" rel="nofollow">http://www.amazon.com/Adaptive-Filter-Theory-Simon-Haykin/dp...</a>
Covers Kalman filters in great detail. Did anyone else use it?