This little project came about because I kept running into the same problem: cleanly differentiating sensor data before doing analysis. There are a ton of ways to solve this problem; I've personally always been a fan of using Kalman filters for the job, as it's easy to get the double whammy of resampling/upsampling to a fixed, consistent rate along with smoothing/outlier rejection. I wrote a little numpy-only Bayesian filtering/smoothing library recently (<a href="https://github.com/hugohadfield/bayesfilter/">https://github.com/hugohadfield/bayesfilter/</a>), so this felt like a fun and very useful first thing to try it out on! If people find kalmangrad useful I would be more than happy to add a few more features, and I would be very grateful if people sent in any bugs they spot. Thanks!
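The "double whammy" described above (resampling irregular data to a fixed rate while simultaneously smoothing and differentiating) can be sketched with a generic constant-acceleration Kalman filter in plain numpy. To be clear, this is not kalmangrad's actual API, just a textbook illustration of the underlying idea; the `q` and `r` tuning values are arbitrary assumptions:

```python
import numpy as np

def kalman_derivative(t, y, dt_out, q=10.0, r=1e-3):
    """Estimate y and dy/dt on a fixed grid from irregularly sampled data.

    Constant-acceleration Kalman filter: state = [y, y', y''].
    q: process noise intensity (white jerk), r: measurement noise variance.
    For brevity, measurements are applied at the nearest output grid time.
    """
    t_out = np.arange(t[0], t[-1], dt_out)
    x = np.array([y[0], 0.0, 0.0])   # initial state guess
    P = np.eye(3)                    # initial covariance
    H = np.array([[1.0, 0.0, 0.0]])  # we observe y only
    states = []
    meas_idx = 0
    for tk in t_out:
        # --- predict forward by one output step ---
        dt = dt_out
        F = np.array([[1.0, dt, 0.5 * dt**2],
                      [0.0, 1.0, dt],
                      [0.0, 0.0, 1.0]])
        Q = q * np.array([[dt**5/20, dt**4/8, dt**3/6],
                          [dt**4/8,  dt**3/3, dt**2/2],
                          [dt**3/6,  dt**2/2, dt]])
        x = F @ x
        P = F @ P @ F.T + Q
        # --- update with any measurements that have arrived ---
        while meas_idx < len(t) and t[meas_idx] <= tk:
            z = y[meas_idx]
            S = H @ P @ H.T + r              # innovation covariance
            K = (P @ H.T) / S                # Kalman gain
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(3) - K @ H) @ P
            meas_idx += 1
        states.append(x.copy())
    return t_out, np.array(states)
```

A Rauch-Tung-Striebel smoothing pass over the stored states (which a filtering/smoothing library provides) would remove most of the lag that a forward-only filter like this one exhibits.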
Looks really cool.<p>I stumbled over this[1] page recently, which has a method that's apparently better than the "traditional" Savitzky-Golay filters.<p>The idea seems to be to start with the desired frequency response, with lower frequencies close to the ideal differentiator and higher frequencies tending smoothly to zero. This is then used to derive the filter coefficients through a set of equations.<p>The author generalizes it to irregularly sampled data near the end, so it would be interesting to compare the approaches.<p>Just thought I'd throw it out there.<p>[1]: <a href="http://www.holoborodko.com/pavel/numerical-methods/numerical-derivative/smooth-low-noise-differentiators/" rel="nofollow">http://www.holoborodko.com/pavel/numerical-methods/numerical...</a>
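For reference, the shortest central filter from that page (the length-5 smooth noise-robust differentiator) is simple to apply on uniformly spaced data; this is a minimal sketch of that one filter, not of the full irregular-sampling generalization:

```python
import numpy as np

def smooth_diff_5(y, h):
    """Holoborodko's length-5 smooth noise-robust differentiator:
    f'[k] ~= (2*(f[k+1] - f[k-1]) + f[k+2] - f[k-2]) / (8*h)
    Valid at interior points only; the two edge samples on each
    side are left as NaN here.
    """
    d = np.full(len(y), np.nan)
    d[2:-2] = (2 * (y[3:-1] - y[1:-3]) + y[4:] - y[:-4]) / (8 * h)
    return d
```

The coefficients sum to zero and are antisymmetric, as required for a differentiator, and the filter is exact for quadratics while strongly attenuating high-frequency noise.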
Great work!<p>I would've needed this recently for some data analysis, to estimate the mass of an object based on position measurements. I tried calculating the 2nd derivative with a Savitzky-Golay filter, but still had some problems and ended up using a different approach (also using a Kalman filter, but with a physics-based model of my setup).<p>My main problem was that I had repeated values in my measurements (the sensor had a lower, non-integer-divisible sampling rate than the acquisition pipeline).
This especially made clear that np.gradient wasn't suitable, because it resulted in erratic switches between zero and the calculated derivative. Applying np.gradient twice made the data look like random noise.<p>I will try using this library when I next get the chance.
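For anyone trying the Savitzky-Golay route mentioned above, scipy supports direct derivative estimation via the `deriv` and `delta` arguments; this is a small sketch with an arbitrary window and polynomial order (the repeated-values problem described above would still need separate handling, e.g. deduplication first):

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0, 2 * np.pi, 200)
dt = t[1] - t[0]
rng = np.random.default_rng(0)
y = np.sin(t) + 0.01 * rng.normal(size=t.size)

# second derivative via Savitzky-Golay; delta must be the sample spacing
a = savgol_filter(y, window_length=31, polyorder=3, deriv=2, delta=dt)
# for sin(t) the true second derivative is -sin(t)
```

Note how sensitive the result is to `window_length`: differentiating twice amplifies noise by roughly 1/dt^2, so the window has to be wide enough to average that back down.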
This is great! I've taken sort of a passive interest in this topic over the years, some papers which come to mind are [1] and [2] but I don't think I've seen a real life example of using the Kalman filter before.<p>[1] <a href="https://www.sciencedirect.com/science/article/abs/pii/0021929081900762" rel="nofollow">https://www.sciencedirect.com/science/article/abs/pii/002192...</a><p>[2] <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9241009" rel="nofollow">https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=924...</a>
Congratulations! Pardon my ignorance, as my understanding of mathematics at this level is beyond rusty, but what are the applications of this kind of functionality?
Working through the Kalman filter helped me understand the mathematical concepts that make it a powerful tool for estimating values from noisy data.<p>To help me understand the idea, I made a simulation that forecasted a greenhouse's temperature and humidity. I began by going over the basics of Gaussians and the normal distribution once more. After that, I used NumPy and SciPy to implement the Kalman filter in Python. To represent the system, I defined noise matrices (Q and R), a state transition matrix (F), and a control input matrix (B).
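The cycle built from those matrices (predict with F, B, and Q; correct with H and R) comes down to a few lines of numpy. This is a generic textbook sketch of one filter step, not the greenhouse simulation itself:

```python
import numpy as np

def kf_step(x, P, z, F, B, u, Q, H, R):
    """One Kalman filter cycle: predict the state with dynamics F,
    control input B @ u, and process noise Q, then update with
    measurement z through observation model H with noise R."""
    # predict
    x_pred = F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    # update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

For a greenhouse model, x might hold temperature and humidity, u the heater/ventilation inputs, with Q encoding how unpredictable the environment is and R the sensor noise.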
Nice work! Just one quick question (maybe it's clear, but I have not looked at it in depth). It says it computes the derivative for non-uniformly sampled time series data, and the example image shows this. Is this also well behaved when the sampled measurements are noisy (which is not the case in the example)? Or should one use a different approach for that? Thanks!
How did you choose the process noise covariance in your `grad` function? It doesn't seem like a single process noise covariance structure should be globally applicable across all possible functions.
That's useful. Can it generate a simple filter for later real-time use, based on the statistics of the noise? That would be useful for self-tuning controllers.
This is a really nice approach. We are doing some nonlinear system ID and are faced with this kind of problem (not irregular spacing, but a low sample rate and noise). We'll definitely check it out.<p>What's your opinion on the ensemble KF? We'd like to use it for parameter estimation. I saw the unscented filter in your bayesfilter, but not the ensemble, so I'm curious. Thanks!