Actually, signal processing is already used for most machine learning on audio signals, including speech recognition. The reason is that ML algorithms, including deep learning, have a hard time learning the information you can get from a discrete Fourier transform.<p>Audio data in the time domain is just too noisy for most machine learning, and doing some signal processing as a preprocessing step often helps a lot.<p>Here it seems like he works with non-audio data, where this is less common.
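To make the preprocessing idea concrete, here is a minimal sketch of the kind of DFT-based feature extraction commonly fed to ML models (a hypothetical example using NumPy; frame length and the log compression are arbitrary choices, not from the article):

```python
import numpy as np

def fft_features(frames):
    """Convert time-domain frames to log-magnitude spectra.

    frames: array of shape (n_frames, frame_len)
    Returns features of shape (n_frames, frame_len // 2 + 1).
    """
    window = np.hanning(frames.shape[1])      # taper each frame to reduce spectral leakage
    spectrum = np.fft.rfft(frames * window)   # real-input DFT of each windowed frame
    return np.log1p(np.abs(spectrum))         # log-compressed magnitudes, spectrogram-style

# Example: 10 frames of 256 samples each
frames = np.random.randn(10, 256)
features = fft_features(frames)               # shape (10, 129)
```

A model trained on these features sees energy per frequency band instead of raw sample values, which is exactly the "easier representation" point made downthread.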
This is an awesome topic, but I'm somewhat annoyed they didn't dive into what kind of DSP they actually used, and instead turned the article into an advertisement.<p>Does anyone have any good further reading on the topic? (Books, articles, classes, anything really.)
The Kalman filter is basically an ML algorithm. The key here is implementing the already well-known linear approximations of it on top of common libraries.
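For reference, the linear Kalman update really is only a few lines. A hypothetical scalar (1-D, constant-state) sketch in NumPy; the noise variances `q` and `r` are made-up tuning values:

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.1):
    """Scalar Kalman filter: estimate a roughly constant value from noisy readings.

    q: process noise variance, r: measurement noise variance (tuning parameters).
    """
    x, p = 0.0, 1.0                 # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                      # predict: variance grows by process noise
        k = p / (p + r)             # Kalman gain: how much to trust the measurement
        x += k * (z - x)            # update the estimate toward the measurement
        p *= (1 - k)                # updated (shrunken) variance
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
noisy = 5.0 + 0.3 * rng.standard_normal(200)  # true value 5.0 plus noise
est = kalman_1d(noisy)                        # converges toward 5.0
```

The "learning" here is the recursive update of the estimate and its uncertainty, which is why it reads so much like an online ML algorithm.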
It always comes down to representation. If you can use a deterministic, efficient algorithm to represent the data in a more amenable manner, then the ML system will have a much easier time "making sense" of the patterns inherent in the data compared to a system that has to learn some abstract transformation from raw data to useful representations.
My concern with a lot of signal processing techniques used in ML is that they sometimes presuppose things that may not be true.<p>That is, signal processing has the Nyquist rate, and typically assumes there is an underlying signal. Does ML have either?
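The Nyquist assumption is easy to demonstrate: sample a tone above half the sampling rate and it becomes numerically indistinguishable from a lower-frequency one (a small NumPy sketch; the 100 Hz rate and 70 Hz tone are arbitrary illustration values):

```python
import numpy as np

fs = 100.0                           # sampling rate (Hz), so Nyquist is 50 Hz
t = np.arange(0, 1, 1 / fs)          # one second of sample times

high = np.sin(2 * np.pi * 70 * t)    # 70 Hz tone, above the Nyquist frequency
alias = np.sin(2 * np.pi * -30 * t)  # aliases to 70 - 100 = -30 Hz (sign-flipped 30 Hz)

# The two sampled sequences are numerically identical:
assert np.allclose(high, alias)
```

Any DSP front end that assumes the input was sampled above the Nyquist rate will silently confuse these two cases, which is exactly the kind of presupposition the comment is worried about.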