What you really need to know about sample rate
By and large, discussions of sample rate are like watching paint dry. Do we really have to get into the details? After all, everyone knows that you only need to sample at twice the frequency of your signal of interest to get good results, right? If you answered “right!” to that last statement, perhaps you should read on.
While it’s true that the so-called Nyquist rate of two times the highest signal frequency component is the minimum sample rate required to eliminate alias frequencies, the often overlooked qualifier to this rule is that the signal being digitised must be band-limited so that it contains no frequency components above half the sample rate. What’s important is not just your frequency of interest, but all the frequencies contained in the signal you digitise and how they compare to the sample rate you’ve chosen. Let’s go back to basics.
What’s an alias frequency?
Dictionaries define ‘alias’ as an assumed or additional name. For our purposes in data acquisition, we can more accurately define an alias as an assumed or additional frequency. To explore what that means, let’s go to the movies.
You’ve seen it hundreds of times. While watching a movie of a speeding car you look at the car’s wheels and they seem to be rotating impossibly slowly, or even rotating backwards. What you’re seeing is an alias frequency caused by a mathematical collision between the fast rotational rate of the car’s wheels and the much slower frame rate of the camera used to record the image. You subconsciously filter this anomaly out of your interpretation of the image because from other frames of reference it’s easy to determine that the car is moving forward at a high rate of speed. But what if you were viewing a movie of just the car’s wheels? In this context, if asked to determine the speed and direction the car was moving you might reach an entirely different, erroneous and embarrassing conclusion.
Extending the above example, you can think of the camera as the data acquisition system, and the rotating wheels as the signal it’s digitising. If the sample rate of the data acquisition system is too slow relative to the frequency of the signal, your measurement literally falls apart. You don’t have the convenient frames of reference of the movie. All you have is a conglomeration of changing signal amplitudes versus time. Which are the real ones and which are the aliases? Just like trying to interpret the car’s motion from the movie by watching only the wheels, it’s impossible to know, and there’s too much riding on your measurement to guess.
The mathematics of alias frequencies
We can predict an alias frequency if we know the frequencies of the input signal and the sample rate. The equation below shows that alias frequency is a function of the absolute value of the difference between the input signal frequency and the closest integer multiple of the sample rate.
f_{a}(N) = |f_{in} - N·f_{s}|
where:
f_{a} = alias frequency
f_{in} = input signal frequency
f_{s} = sample rate
N = an integer ≥ 0
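This relationship is easy to encode. The short Python sketch below (the function name is mine, for illustration only) picks N as the nearest integer multiple of the sample rate and returns the alias frequency:

```python
def alias_frequency(f_in, f_s):
    """Return the apparent (alias) frequency f_a(N) = |f_in - N*f_s|,
    where N is the integer that brings N*f_s closest to f_in."""
    n = round(f_in / f_s)   # nearest integer multiple of the sample rate
    return abs(f_in - n * f_s)

# A few values from Table 1 (sample rate 1000 Hz):
print(alias_frequency(900, 1000))   # → 100
print(alias_frequency(1000, 1000))  # → 0 (DC)
print(alias_frequency(2600, 1000))  # → 400
```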
Let’s expand upon this equation with some examples. Table 1 is a compilation of various sinusoidal input signal frequencies (f_{in}) sampled at a fixed rate of 1000 Hz and the resulting alias frequencies calculated using the equation. N is the integer that brings the term Nf_{s} closest to the input signal frequency f_{in}. For example, with a sample rate of 75 Hz, an input frequency of 150 Hz gives N = 2. N moves to 3 if the input signal frequency increases to 188 Hz, because the product of 75 and 3 (225) is closer to 188 than the product of 75 and 2 (150).
As can be seen from Table 1, any input signal frequency less than or equal to the Nyquist value of 500 Hz (half the sample rate) is reproduced accurately. Any frequency greater than this value yields an inaccurate alias frequency, even to the extent of reproducing a DC signal when the input frequency is an exact multiple of the sample rate. Clearly, all bets are off when the frequency content of the input signal exceeds one half the sample rate.
Table 1: Alias frequencies for a fixed 1000 Hz sample rate (f_{s} = 1000 Hz).

Input frequency f_{in} (Hz) | N | Alias frequency f_{a}(N) = |f_{in} - N·f_{s}|
500 and below | 0 | f_{a}(0) = |100 - (0)1000| = 100 Hz; f_{a}(0) = |200 - (0)1000| = 200 Hz; and so on up to and including f_{in} = 500 Hz
501 | 1 | f_{a}(1) = |501 - (1)1000| = 499 Hz
600 | 1 | f_{a}(1) = |600 - (1)1000| = 400 Hz
900 | 1 | f_{a}(1) = |900 - (1)1000| = 100 Hz
1000 | 1 | f_{a}(1) = |1000 - (1)1000| = 0 Hz (DC)
1200 | 1 | f_{a}(1) = |1200 - (1)1000| = 200 Hz
2000 | 2 | f_{a}(2) = |2000 - (2)1000| = 0 Hz (DC)
2600 | 3 | f_{a}(3) = |2600 - (3)1000| = 400 Hz
4125 | 4 | f_{a}(4) = |4125 - (4)1000| = 125 Hz
And what does an alias frequency look like? That’s the insidious thing. It looks just like real data. If we were to acquire data in the manner described in Table 1 when f_{in} is equal to 900 Hz, we’d see the grey 100 Hz alias waveform shown in Figure 1 instead of the black 900 Hz waveform that was actually connected to our data acquisition system. Aside from the lower frequency, can you tell the difference between the real signal and the ghost? To further complicate things, most of us don’t run around acquiring pure sine waves. The typical waveform is a complex assemblage of many frequencies, and a recorded waveform that’s aliased might look perfectly reasonable but lead you to exactly the wrong conclusions.
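You can watch the ghost appear numerically. The sketch below (my own illustration, not from the article) samples a 900 Hz cosine at 1000 Hz and confirms that the resulting points are indistinguishable, sample for sample, from those of a 100 Hz cosine:

```python
import math

f_s = 1000       # sample rate (Hz)
f_real = 900     # frequency actually connected to the instrument
f_alias = 100    # ghost frequency predicted by |900 - (1)1000|

for n in range(20):   # first twenty sample instants
    t = n / f_s
    real = math.cos(2 * math.pi * f_real * t)
    ghost = math.cos(2 * math.pi * f_alias * t)
    assert math.isclose(real, ghost, abs_tol=1e-9)

print("900 Hz sampled at 1000 Hz is sample-for-sample identical to 100 Hz")
```

Once digitised, nothing in the recorded data distinguishes the two waveforms; the information needed to tell them apart was lost at the moment of sampling.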

What’s the solution?
Circling back to where this article began, we can satisfy the Nyquist sample rate criterion of two times the maximum signal frequency of interest only if we ensure that no other frequency components higher than this limit exist in the signal. Unless we have a high degree of confidence in the frequency content of the signal source, the only way to achieve this condition is to apply the input signal to a low-pass anti-aliasing filter before digitising it. An in-depth discussion of anti-alias filters is beyond the scope of this article, but their salient characteristics can be summarised as follows:
- Low-pass design;
- A corner frequency selected at your maximum frequency of interest (ie, at or below half the sample rate);
- A steep transition-band roll-off from the passband to the stopband.
Figure 2 is a graphical representation of the ideal anti-alias filter described above. Note that the ideal perpendicular shape of the transition band is not possible in actual filter design, which instead produces a roll-off with some finite negative slope. This reality forces a compromise in the form of either a lower corner frequency or a higher sample rate. For example, the human ear can respond to frequencies up to 20 kHz. If an anti-alias filter that adhered to the ideal were possible, music could be digitised using a sample rate of 40 kHz. Instead, the standard rate of 44.1 kHz reflects both the reality of less-than-ideal filter implementations and the desire to maintain a full 20 kHz response.
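To put numbers on that compromise, the sketch below (my own illustration; the article doesn’t prescribe a filter type) uses the standard Butterworth magnitude response, |H(f)| = 1/sqrt(1 + (f/f_c)^(2n)), to see how little attenuation even a steep filter with a 20 kHz corner provides at the Nyquist frequency of a 44.1 kHz system:

```python
import math

def butterworth_gain(f, f_c, order):
    """Magnitude response of an ideal n-th order Butterworth low-pass filter."""
    return 1.0 / math.sqrt(1.0 + (f / f_c) ** (2 * order))

f_c = 20_000.0           # corner frequency: top of the audible band
nyquist = 44_100.0 / 2   # 22.05 kHz for the CD sample rate

for order in (2, 4, 8):
    gain_db = 20 * math.log10(butterworth_gain(nyquist, f_c, order))
    print(f"order {order}: {gain_db:.1f} dB at {nyquist / 1000:.2f} kHz")
```

With only 2.05 kHz of transition band to work with, even an eighth-order response manages less than 10 dB of attenuation at Nyquist, which is why practical converters pair the 44.1 kHz rate with very high-order filtering or a lower corner frequency.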

Do you really need an anti-alias filter?
There is a cross-section of pundits in this field who insist that data acquired without an anti-alias filter is useless. These same people would probably insist that you wear your seatbelt just to pull your car into your garage, because ‘seatbelts save lives’. The fact is that much of the data acquired in day-to-day measurements doesn’t require an anti-alias filter to yield perfectly accurate and actionable results, and that’s why the vast majority of data acquisition and data logger instruments don’t build one into each measurement channel. Anyone who disagrees with this statement should ask themselves whether a filter is needed to measure battery voltage, a pure 0 Hz signal. If not, then we’ve at least cracked the door to compromise, and we can open it further to include the measurement of other DC or near-DC signals: temperature, humidity, DC current, flow, pressure, load, torque, spectrograms, GSR, smooth and skeletal muscle baths, etc. Then what about signals whose frequency content is well defined and contained: 50/60/400 Hz voltages and currents, blood pressures and flows, and even some biopotentials like ECG and EMG? We’re starting to cover a lot of measurement territory without the need for a filter. There are many more examples, but filters do have their place.
Accelerometer-based measurements are the best examples of where anti-alias filters are a virtual necessity. Most piezoelectric accelerometers have a frequency response in excess of 15 kHz. Although your frequency of interest may be much lower than that (eg, a 40 Hz motor rumble caused by bearing wear), you cannot ignore the fact that the sensor can and will pass higher frequencies. If you’ve read this far, you should know that just because you’re not interested in frequencies above 40 Hz doesn’t mean you can get away with sampling at only 80 Hz or so when the sensor can pass frequencies orders of magnitude higher.
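Running the numbers for this hypothetical case makes the danger concrete. Applying the alias equation (the values here are my own illustration), a 15 kHz component from the sensor, sampled naively at 80 Hz, aliases to exactly the 40 Hz band you are trying to monitor:

```python
f_s = 80           # naive sample rate: 2x the 40 Hz frequency of interest
f_sensor = 15_000  # high-frequency content the accelerometer can still pass

n = round(f_sensor / f_s)        # nearest integer multiple of f_s: 188
f_a = abs(f_sensor - n * f_s)    # alias frequency per f_a(N) = |f_in - N*f_s|
print(f_a)                       # → 40: indistinguishable from real bearing rumble
```

Without a filter ahead of the digitiser, that aliased energy would masquerade as the very bearing-wear signature you set out to measure.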
You probably need to oversample anyway
We’ve seen thus far that a bandwidth-limited input signal sampled at a rate of at least twice the corner frequency of the anti-alias filter allows the frequency content of the input signal to be reproduced. This is great if you’re only interested in the frequency content of the system under test, and there are applications where this is the extent of the analysis. Going back to the accelerometer example, you really don’t care what the waveform looks like, because it’s the frequency spectrum that conveys the presence and frequency of motor rumble. But for other applications where the waveform shape does convey information, good luck extracting it from two samples per cycle! Referring to the aliased 100 Hz waveform of Figure 1, you can’t even remotely reconstruct that sine wave with only two points. Whenever you need accurate waveform reproduction, your only recourse is to oversample at a rate beyond Nyquist’s 2-times minimum, typically 10 times or more. Assuming that you have accurately accounted for the frequency content of your signal and eliminated the potential for alias frequencies, the actual multiple you choose is determined by personal preference.
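A quick numerical experiment (my own, not from the article) shows why. Sampling a zero-phase, unit-amplitude sine at exactly two samples per cycle can land every sample on a zero crossing, while ten samples per cycle recovers the peak amplitude to within about 5%:

```python
import math

def observed_peak(f_sig, samples_per_cycle, cycles=100):
    """Largest absolute sample value seen for a unit-amplitude, zero-phase sine."""
    f_s = f_sig * samples_per_cycle
    n_samples = samples_per_cycle * cycles
    return max(abs(math.sin(2 * math.pi * f_sig * n / f_s))
               for n in range(n_samples))

print(observed_peak(100, 2))    # ~0.0  -- every sample hits a zero crossing
print(observed_peak(100, 10))   # ~0.95 -- peak captured to within about 5%
print(observed_peak(100, 100))  # ~1.0  -- waveform shape well preserved
```

The worst case depends on where the samples fall relative to the waveform’s phase, but the trend is the point: at the Nyquist minimum the amplitude information can vanish entirely, and it only becomes trustworthy well into oversampled territory.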
Roger W Lockhart, DATAQ Instruments