I ran into this problem with a codec chip. We had to run the output at 48 kHz, and the input and output clocks had to be the same. We didn't have enough CPU to process the input at 48 kHz, and we only cared about 8 kHz of bandwidth for human speech. Boxcar averaging and then decimating produced way too much aliasing, so much that the ML classifiers wouldn't work. The solution was putting a honking large FIR anti-aliasing filter on the codec, around 100 taps, because -that- chip had oodles of CPU to spare, as it turned out.<p>Why FIR? It had a super sharp cutoff and, of course, linear phase. If I'd known about Bessel filters at the time I'd have tried those out. Live and learn and ship it.
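For anyone who wants to play with the filter-then-decimate idea, here's a rough numpy sketch. I'm assuming a 48 kHz → 16 kHz decimation (enough for the 8 kHz speech band); the 101-tap count, 7 kHz cutoff, and windowed-sinc/Hamming design are illustrative guesses, not the original firmware:

```python
import numpy as np

def design_lowpass_fir(num_taps, cutoff, fs):
    """Windowed-sinc lowpass FIR with a Hamming window."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    fc = cutoff / fs                        # normalized cutoff, cycles/sample
    h = 2 * fc * np.sinc(2 * fc * n)        # ideal (truncated) lowpass
    h *= np.hamming(num_taps)               # taper to suppress sidelobes
    return h / h.sum()                      # unity gain at DC

fs_in, fs_out = 48_000, 16_000
decim = fs_in // fs_out                     # 3

taps = design_lowpass_fir(101, cutoff=7_000, fs=fs_in)

# One second of test signal: a speech-band tone plus a 19 kHz tone that
# would alias to 3 kHz if we decimated without filtering first.
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 1_000 * t) + np.sin(2 * np.pi * 19_000 * t)

y = np.convolve(x, taps, mode="same")[::decim]  # filter, THEN decimate
```

Swapping the FIR for a 3-sample boxcar shows exactly the aliasing described above: the boxcar's first spectral null lands nowhere near the fold frequencies, so out-of-band energy lands right in the speech band.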
Many brilliant points. Particularly about Bessel filters instead of Butterworth, about lowering Fc and increasing Fs rather than increasing filter order, and about considering the time domain as well as the frequency domain.<p>The dense interview question I use to assess this area of knowledge: "How do you choose the stopband attenuation of a filter?" You can learn a lot from the interviewee's response. Stopband attenuation, with respect to the input signal magnitude at the start of the stopband, is the term that most directly determines the magnitude of aliased noise in the sampled signal. And that noise sets the upper bound on the performance of the downstream algorithm.
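To make that concrete, here's a sketch of letting the attenuation spec drive the design rather than picking an order by taste, using the standard Kaiser-window estimates (the 60 dB / 800 Hz numbers are made up for illustration):

```python
import numpy as np

def kaiser_lowpass(fs, cutoff, trans_width, atten_db):
    """Lowpass FIR sized from a stopband-attenuation spec (Kaiser design)."""
    delta_w = 2 * np.pi * trans_width / fs
    # Kaiser's empirical beta formula from the required attenuation
    if atten_db > 50:
        beta = 0.1102 * (atten_db - 8.7)
    elif atten_db > 21:
        beta = 0.5842 * (atten_db - 21) ** 0.4 + 0.07886 * (atten_db - 21)
    else:
        beta = 0.0
    # Kaiser's tap-count estimate: attenuation and transition width,
    # not aesthetics, decide how big the filter is
    num_taps = int(np.ceil((atten_db - 8) / (2.285 * delta_w))) + 1
    num_taps |= 1  # odd length -> symmetric, linear-phase type-I filter
    n = np.arange(num_taps) - (num_taps - 1) / 2
    fc = (cutoff + trans_width / 2) / fs    # -6 dB point mid-transition
    h = 2 * fc * np.sinc(2 * fc * n) * np.kaiser(num_taps, beta)
    return h / h.sum()

# E.g.: if the downstream algorithm needs aliases 60 dB below the
# passband, that spec sets the tap count (~219 here).
taps = kaiser_lowpass(fs=48_000, cutoff=3_400, trans_width=800, atten_db=60)
```

Run the spec the other way and you see the trade the parent mentions: widening the transition band (i.e. lowering Fc or raising Fs) buys attenuation far more cheaply than adding taps at a fixed band edge.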
The biggest one I have run into is that the bandwidth doesn't need to be contiguous. If you know the important bits of the signal are confined to certain frequency bands, then you can get away with much lower sampling rates. Basically why things like L1 reconstruction work.
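The single-band version of this (bandpass or "undersampling", much simpler than full L1 reconstruction) is easy to demo: if the signal lives in one known narrow band that fits inside a single Nyquist zone, you can sample at roughly twice the bandwidth instead of twice the top frequency, and the band folds one-to-one into baseband. A toy numpy sketch with made-up frequencies:

```python
import numpy as np

fs = 8_000           # sample rate: 2x the 2 kHz *bandwidth*,
                     # far below 2x the 22 kHz top frequency
t = np.arange(fs) / fs   # one second of samples

# Two tones confined to a known 20-22 kHz band, which sits entirely
# inside one 4 kHz-wide Nyquist zone of the 8 kHz clock
x = np.sin(2 * np.pi * 20_500 * t) + 0.5 * np.sin(2 * np.pi * 21_500 * t)

X = np.abs(np.fft.rfft(x))
f = np.fft.rfftfreq(len(x), 1 / fs)
# The band aliases intact (spectrally inverted here):
# 20.5 kHz -> 3.5 kHz, 21.5 kHz -> 2.5 kHz
```

The catch is the same as always: everything *outside* the chosen band aliases in too, so you still need a (bandpass) anti-aliasing filter in front.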
This is the most lucid explanation of aliasing I think I've ever read.<p>I mostly deal with this stuff in the realm of audio where the matter of analog vs digital rages regularly. The reason one hears "Nyquist says" so often from people defending digital audio is that the pro-analog people are imagining stair-stepped signals in which the information that is "lost" results in a noticeable degradation in sound quality. This (11-year-old) video is the gold standard for addressing this concern:<p><a href="https://www.youtube.com/watch?v=cIQ9IXSUzuM" rel="nofollow">https://www.youtube.com/watch?v=cIQ9IXSUzuM</a><p>Curiously, no one ever seems to mention the actual problems (like aliasing).
> ...there is no point to distributing music in 24-bit/192kHz format. Its playback fidelity is slightly inferior to 16/44.1 or 16/48....<p>from "24/192 Music Downloads ...and why they make no sense" (2012) [1]<p>[1] <a href="https://people.xiph.org/~xiphmont/demo/neil-young.html" rel="nofollow">https://people.xiph.org/~xiphmont/demo/neil-young.html</a>
If you're thinking "The highest frequency I need my signal to be able to reproduce is X, so I should set my sampling rate to 2X," then you're wrong, and this article gives several reasons why.<p>As far as I can tell, though, it doesn't mention what may be the <i>most</i> important reason (especially to the folks here at hackernews): resampling and processing.<p>This is why professional-grade audio processing operates at a sample rate many multiples higher than human hearing. It's not because of the quality difference between, say, 192 and 96 kHz, but rather that if you're resampling or iterating a process dozens of times at those rates, artifacts eventually build up and make their way down into the range of human hearing (below 20 kHz).
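A toy illustration of that accumulation, using a deliberately crude linear-interpolation resampler as a stand-in (a real production resampler errs far less per pass, but the compounding behavior is the same shape):

```python
import numpy as np

fs_a, fs_b = 48_000, 44_100
t_a = np.arange(fs_a) / fs_a      # one second on the 48 kHz grid
t_b = np.arange(fs_b) / fs_b      # one second on the 44.1 kHz grid

x = np.sin(2 * np.pi * 1_000 * t_a)   # clean 1 kHz tone at 48 kHz

y = x.copy()
errs = []
for _ in range(20):                    # 20 round trips 48k -> 44.1k -> 48k
    y = np.interp(t_b, t_a, y)         # crude linear-interp "resampler"
    y = np.interp(t_a, t_b, y)
    errs.append(np.sqrt(np.mean((y - x) ** 2)))  # RMS error vs. original
```

Each individual pass is nearly inaudible; twenty of them are not. Running the same loop at a higher working rate shrinks the per-pass error dramatically, which is the headroom argument for processing at 96/192 kHz even when the release format is 44.1/48.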
I have referred so many people to this paper over the years. It does a great job of dispelling a lot of misunderstandings people have about Nyquist's theorem.
For anyone interested in an interactive tool for playing with the concepts noted, here’s something I put together a while back for demonstrating to colleagues: <a href="https://www.desmos.com/calculator/pma1jpcuv0" rel="nofollow">https://www.desmos.com/calculator/pma1jpcuv0</a>.
So, we first need to find the highest motion frequency the sensor might experience in the experiment, and then make sure the sampling rate is at least twice that to avoid aliasing?<p>Meaning, if an IMU is mounted on a very slow-moving RC vehicle (say, 2 cm/s), the sampling rate can be very low. But if the sensor is on a fast-moving drone, we need to estimate the highest frequency of the motion and make sure our sampling rate is at least double that?
It’s interesting to me how often people talk about sampling profilers without knowing about Nyquist.<p>A CPU runs at several GHz, but software sampling profilers run at ~1 kHz, or 10 kHz at the very most, so it’s really hard to reason about software that processes events at MHz rates from samples that sparse.
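And it's not just that fast behavior is invisible; above Nyquist it actively lies to you. A toy sketch (hypothetical workload, numpy standing in for the profiler's sample stream):

```python
import numpy as np

fs = 1_000                         # profiler sampling rate, Hz
t = np.arange(fs * 10) / fs        # 10 s of profiler samples
# Hypothetical workload whose CPU busy-fraction oscillates at 1.2 kHz
busy = 0.5 + 0.5 * np.sin(2 * np.pi * 1_200 * t)

X = np.abs(np.fft.rfft(busy - busy.mean()))
f = np.fft.rfftfreq(len(busy), 1 / fs)
# The 1.2 kHz oscillation is above Nyquist (500 Hz), so the profile
# shows a convincing, and entirely fictitious, 200 Hz pattern
```

So a kHz-rate profiler doesn't just fail to resolve MHz-scale structure; periodic behavior above half its rate shows up as plausible-looking slower periodicity instead.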