SAW Filters in Digital Communications

Colin Campbell , in Surface Acoustic Wave Devices and their Signal Processing Applications, 1989

17.3.2 Nyquist Bandwidth Theorem

Now consider the intermingled sinc function [i.e., (sin X)/X] impulse responses of the ideal lowpass filter when it is subjected to a sequence of uniformly spaced delta-function impulse voltages of random polarity, as sketched in Fig. 17.8. From this example, demonstrated for three input impulses uniformly spaced at intervals $T_s$, it can be deduced that it should be possible to detect any one of the sinc function responses without interference from any of the others. (When one sinc function response is maximum, all others have zero amplitude.) This is the result implicit in Nyquist's Bandwidth Theorem [1] for impulses applied at the synchronous rate $f_s = 1/T_s = 2f_N$ and represents the condition for complete freedom from intersymbol interference (ISI). Note that while the above considerations have tacitly assumed zero group delay through the filter for ease of illustration, the inclusion of a linear group delay term τ would merely cause each of the sinc function impulse responses to experience the same delay, allowing the maintenance of ISI-free transmission under ideal conditions.
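A minimal MATLAB sketch of this superposition (not from the text; the cutoff $f_N$, spacing $T_s$, and impulse polarities are assumed example values):

% Three sinc impulse responses of an ideal lowpass filter, excited at the
% synchronous rate fs = 1/Ts = 2*fN.
fN = 1e3;                         % filter cutoff (Nyquist frequency), assumed
Ts = 1/(2*fN);                    % symbol spacing at the synchronous rate
t  = -4*Ts:Ts/100:8*Ts;
d  = [ +1 -1 +1 ];                % random-polarity impulses (example values)
mysinc = @(x) sin(pi*x)./(pi*x + (x==0)) + (x==0);   % sin(pi x)/(pi x), safe at 0
y = zeros(size(t));
for k = 1:numel(d)
    y = y + d(k)*mysinc(2*fN*(t - (k-1)*Ts));
end
% At the sampling instants t = m*Ts every sinc except the m-th is zero,
% so y(m*Ts) = d(m): no intersymbol interference.
plot(t/Ts, y); grid on; xlabel('t / T_s'); ylabel('composite response');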


URL:

https://www.sciencedirect.com/science/article/pii/B9780121573454500217

Signal Sampling and Quantization

Lizhe Tan , Jean Jiang , in Digital Signal Processing (Third Edition), 2019

2.6 Problems

2.1. Given an analog signal

$x(t) = 5\cos(2\pi \times 1500t)$, for $t \ge 0$,

sampled at a rate of 8000   Hz,

(a)

sketch the spectrum of the original signal;

(b)

sketch the spectrum of the sampled signal from 0 up to 20   kHz.
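A minimal MATLAB sketch for Problems 2.1–2.4 (not the book's solution): the sampled spectrum contains images of a single tone at $|k f_s \pm f_0|$.

% Spectral lines of a sampled 1500 Hz cosine, listed up to 20 kHz.
fs = 8000; f0 = 1500; fmax = 20e3;            % Problem 2.1 values
k  = 0:ceil(fmax/fs);
lines = sort(unique([abs(k*fs - f0), k*fs + f0]));
lines = lines(lines <= fmax);
disp(lines)        % 1500, 6500, 9500, 14500, 17500 Hz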

2.2. Given an analog signal

$x(t) = 5\cos(2\pi \times 2500t) + 2\cos(2\pi \times 3200t)$, for $t \ge 0$,

sampled at a rate of 8000   Hz,

(a)

sketch the spectrum of the sampled signal up to 20   kHz;

(b)

sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4  kHz is used to filter the sampled signal in order to recover the original signal.

2.3. Given an analog signal

$x(t) = 3\cos(2\pi \times 1500t) + 2\cos(2\pi \times 2200t)$, for $t \ge 0$,

sampled at a rate of 8000   Hz,

(a)

sketch the spectrum of the sampled signal up to 20   kHz;

(b)

sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4   kHz is used to filter the sampled signal in order to recover the original signal.

2.4. Given an analog signal

$x(t) = 3\cos(2\pi \times 1500t) + 2\cos(2\pi \times 4200t)$, for $t \ge 0$,

sampled at a rate of 8000   Hz,

(a)

sketch the spectrum of the sampled signal up to 20   kHz;

(b)

sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4   kHz is used to filter the sampled signal in order to recover the original signal.

2.5. Given an analog signal

$x(t) = 5\cos(2\pi \times 2500t) + 2\cos(2\pi \times 4500t)$, for $t \ge 0$,

sampled at a rate of 8000   Hz,

(a)

sketch the spectrum of the sampled signal up to 20   kHz;

(b)

sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4   kHz is used to filter the sampled signal in order to recover the original signal;

(c)

determine the frequency/frequencies of aliasing noise.

2.6. Assuming a continuous signal is given as

$x(t) = 10\cos(2\pi \times 5500t) + 5\sin(2\pi \times 7500t)$, for $t \ge 0$,

sampled at a rate of 8000   Hz,

(a)

sketch the spectrum of the sampled signal up to 20   kHz;

(b)

sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4   kHz is used to filter the sampled signal in order to recover the original signal;

(c)

determine the frequency/frequencies of aliasing noise.

2.7. Assuming a continuous signal is given as

$x(t) = 8\cos(2\pi \times 5000t) + 5\sin(2\pi \times 7000t)$, for $t \ge 0$,

sampled at a rate of 8000   Hz,

(a)

sketch the spectrum of the sampled signal up to 20   kHz;

(b)

sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4   kHz is used to filter the sampled signal in order to recover the original signal;

(c)

determine the frequency/frequencies of aliasing noise.

2.8. Assuming a continuous signal is given as

$x(t) = 10\cos(2\pi \times 5000t) + 5\sin(2\pi \times 7500t)$, for $t \ge 0$,

sampled at a rate of 8000   Hz,

(a)

sketch the spectrum of the sampled signal up to 20   kHz;

(b)

sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4   kHz is used to filter the sampled signal in order to recover the original signal;

(c)

determine the frequency/frequencies of aliasing noise.
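A minimal MATLAB sketch for Problems 2.5–2.8 (not the book's solution): fold each component into 0…$f_s/2$ to find what appears after the 4 kHz reconstruction filter.

% Fold signal frequencies into the baseband to locate aliasing noise.
fs = 8000;
f  = [2500 4500];                 % components of Problem 2.5, as an example
fa = mod(f, fs);
fa = min(fa, fs - fa);            % folded (aliased) frequency in 0..fs/2
disp([f; fa])                     % 4500 Hz folds to 3500 Hz -> aliasing noise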

2.9. Given the following second-order anti-aliasing lowpass filter (Fig. 2.37) which is a Butterworth type, determine the values of circuit elements if we want the filter to have a cutoff frequency of 1000   Hz.

Fig. 2.37

Fig. 2.37. Filter Circuit in Problem 2.9.

2.10. From Problem 2.9, determine the percentage of aliasing level at the frequency of 500   Hz, assuming that the sampling rate is 4000   Hz.

2.11. Given the following second-order anti-aliasing lowpass filter (Fig. 2.38) which is a Butterworth type, determine the values of circuit elements if we want the filter to have a cutoff frequency of 800   Hz.

Fig. 2.38

Fig. 2.38. Filter circuit in Problem 2.11.

2.12. From Problem 2.11, determine the percentage of aliasing level at the frequency of 400   Hz, assuming that the sampling rate is 4000   Hz.

2.13. Given a DSP system in which a sampling rate of 8000   Hz is used and the anti-aliasing filter is a second-order Butterworth lowpass filter with a cutoff frequency of 3.2   kHz, determine

(a)

the percentage of aliasing level at the cutoff frequency;

(b)

the percentage of aliasing level at the frequency of 1000   Hz.
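A minimal MATLAB sketch for Problems 2.10–2.16 (not the book's solution). It assumes the aliasing-level measure $|H(f_s - f)|/|H(f)|$ (the folded image relative to the in-band response) together with the Butterworth magnitude $|H(f)| = 1/\sqrt{1 + (f/f_c)^{2n}}$; check against the chapter's own definition.

% Percentage of aliasing level for a second-order Butterworth anti-aliasing filter.
fs = 8000; fc = 3200; n = 2;          % Problem 2.13 values
H  = @(f) 1 ./ sqrt(1 + (f/fc).^(2*n));
f  = [3200 1000];                     % frequencies of interest
aliasLevel = 100 * H(fs - f) ./ H(f); % percentage of aliasing level
disp(aliasLevel)                      % roughly 57% and 21% for these values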

2.14. Given a DSP system in which a sampling rate of 8000   Hz is used and the anti-aliasing filter is a Butterworth lowpass filter with a cutoff frequency 3.2   kHz, determine the order of the Butterworth lowpass filter for the percentage of aliasing level at the cutoff frequency required to be less than 10%.

2.15. Given a DSP system in which a sampling rate of 8000   Hz is used and the anti-aliasing filter is a second-order Butterworth lowpass filter with a cutoff frequency of 3.1   kHz, determine

(a)

the percentage of aliasing level at the cutoff frequency;

(b)

the percentage of aliasing level at the frequency of 900   Hz.

2.16. Given a DSP system in which a sampling rate of 8000   Hz is used and the anti-aliasing filter is a Butterworth lowpass filter with a cutoff frequency 3.1   kHz, determine the order of the Butterworth lowpass filter for the percentage of aliasing level at the cutoff frequency required to be less than 10%.

2.17. Given a DSP system (Fig. 2.39) with a sampling rate of 8000   Hz and assuming that the hold circuit is used after DAC, determine

Fig. 2.39

Fig. 2.39. Analog signal reconstruction in Problem 2.18.

(a)

the percentage of distortion at the frequency of 3200   Hz;

(b)

the percentage of distortion at the frequency of 1500   Hz.
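A minimal MATLAB sketch for Problems 2.17 and 2.19 (not the book's solution). It assumes the zero-order-hold droop $1 - \operatorname{sinc}(f/f_s)$ relative to DC as the distortion measure; check against the chapter's own definition.

% Sample-and-hold (zero-order hold) droop after the DAC.
fs = 8000;
f  = [3200 1500];                               % Problem 2.17 frequencies
zohGain    = sin(pi*f/fs) ./ (pi*f/fs);         % sin(x)/x roll-off of the hold
distortion = 100 * (1 - zohGain);               % percent droop at each f
disp(distortion)                                % roughly 24% and 6%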

2.18. A DSP system is given with the following specifications:

Design requirements:

Sampling rate 20,000   Hz;

Maximum allowable gain variation from 0 to 4000   Hz   =   2   dB;

40   dB rejection at the frequency of 16,000   Hz; and

Butterworth filter assumed.

Determine the cutoff frequency and order for the anti-image filter.
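A minimal MATLAB sketch for Problems 2.18 and 2.20 (not the book's solution), using the standard Butterworth design relations for the order and cutoff:

% Anti-image Butterworth filter: order from the two attenuation requirements,
% cutoff chosen so the passband-edge requirement is met exactly.
Amax = 2; Amin = 40; fp = 4000; fstop = 16000;   % Problem 2.18 values
n  = ceil(log10((10^(Amin/10)-1)/(10^(Amax/10)-1)) / (2*log10(fstop/fp)));
fc = fp / (10^(Amax/10) - 1)^(1/(2*n));
fprintf('order n = %d, cutoff fc = %.0f Hz\n', n, fc);  % n = 4, fc ~ 4277 Hz here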

2.19. Given a DSP system with a sampling rate of 8000   Hz and assuming that the hold circuit is used after DAC, determine

(a)

the percentage of distortion at the frequency of 3000   Hz;

(b)

the percentage of distortion at the frequency of 1600   Hz.

2.20. A DSP system (Fig. 2.40) is given with the following specifications:

Fig. 2.40

Fig. 2.40. Analog signal reconstruction in Problem 2.20.

Design requirements:

Sampling rate 22,000   Hz;

Maximum allowable gain variation from 0 to 4000   Hz   =   2   dB;

40   dB rejection at the frequency of 18,000   Hz; and

Butterworth filter assumed

Determine the cutoff frequency and order for the anti-image filter.

2.21. Given the 2-bit flash ADC unit with an analog sample-and-hold voltage of 2   V shown in Fig. 2.41, determine the output bits.

Fig. 2.41

Fig. 2.41. A 2-Bit flash ADC in Problem 2.21.

2.22. Given the R-2R DAC unit with a 2-bit value as b 1 b 0  =   01 shown in Fig. 2.42, determine the converted voltage.

Fig. 2.42

Fig. 2.42. A 2-Bit R-2R DAC in Problem 2.22.

2.23. Given the 2-bit flash ADC unit with an analog sample-and-hold voltage of 3.5   V shown in Fig. 2.41, determine the output bits.

2.24. Given the R-2R DAC unit with 2-bit values as b 1 b 0  =   11 and b 1 b 0  =   10, respectively, and shown in Fig. 2.42, determine the converted voltages.

2.25. Assuming that a 4-bit ADC channel accepts analog input ranging from 0 to 5   V, determine the following:

(a)

Number of quantization levels;

(b)

Step size of quantizer or resolution;

(c)

Quantization level when the analog voltage is 3.2   V;

(d)

Binary code produced by the ADC;

(e)

Quantization error.
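A minimal MATLAB sketch for Problems 2.25–2.28 (not the book's solution); it assumes a rounding (mid-tread) quantization convention:

% Parameters of an m-bit uniform quantizer over [xmin, xmax].
m = 4; xmin = 0; xmax = 5; x = 3.2;         % Problem 2.25 values
L     = 2^m;                                 % number of quantization levels
Delta = (xmax - xmin) / L;                   % step size (resolution)
code  = min(round((x - xmin)/Delta), L - 1); % index of the nearest level
xq    = xmin + code*Delta;                   % quantization level
err   = xq - x;                              % quantization error
fprintf('L=%d, Delta=%.4f V, code=%s, xq=%.4f V, err=%.4f V\n', ...
        L, Delta, dec2bin(code, m), xq, err);
% L = 16, Delta = 0.3125 V, code 1010, xq = 3.125 V, err = -0.075 V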

2.26. Assuming that a 5-bit ADC channel accepts analog input ranging from 0 to 4   V, determine the following:

(a)

Number of quantization levels;

(b)

Step size of quantizer or resolution;

(c)

Quantization level when the analog voltage is 1.2   V;

(d)

Binary code produced by the ADC;

(e)

Quantization error.

2.27. Assuming that a 3-bit ADC channel accepts analog input ranging from −2.5 to 2.5   V, determine the following:

(a)

Number of quantization levels;

(b)

Step size of quantizer or resolution;

(c)

Quantization level when the analog voltage is −1.2   V;

(d)

Binary code produced by the ADC;

(e)

Quantization error.

2.28. Assuming that an 8-bit ADC channel accepts analog input ranging from −2.5 to 2.5   V, determine the following:

(a)

Number of quantization levels;

(b)

Step size of quantizer or resolution;

(c)

Quantization level when the analog voltage is 1.5   V;

(d)

Binary code produced by the ADC;

(e)

Quantization error.

2.29. If the analog signal to be quantized is a sinusoidal waveform, that is,

$x(t) = 9.5\sin(2000\pi t)$, and if a bipolar quantizer uses 6 bits, determine

(a)

Number of quantization levels;

(b)

Quantization step size or resolution, Δ, assuming the signal range is from −10 to 10   V;

(c)

The signal power to quantization noise power ratio.

2.30. For a speech signal, if the ratio of the RMS value over the absolute maximum value of the signal is given, that is, $x_{\mathrm{rms}}/x_{\max} = 0.25$, and the ADC bipolar quantizer uses 6 bits, determine

(a)

Number of quantization levels;

(b)

Quantization step size or resolution, Δ, if the signal range is 5   V;

(c)

The signal power-to-quantization noise power ratio.
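A minimal MATLAB sketch for Problems 2.29 and 2.30 (not the book's solution), using the uniform-noise model SNR = signal power / (Δ²/12); the assumed $x_{\max}$ for Problem 2.30 is flagged in the code.

m = 6;
% Problem 2.29: x(t) = 9.5 sin(2000*pi*t), range -10..10 V
A = 9.5; Delta = 20 / 2^m;               % step size for the +/-10 V range
SNRdB = 10*log10( (A^2/2) / (Delta^2/12) );
fprintf('sinusoid: %.1f dB\n', SNRdB);               % about 37.4 dB
% Problem 2.30: xrms/xmax = 0.25, signal range 5 V
Delta2 = 5 / 2^m;
xrms   = 0.25 * 2.5;                     % assumes xmax = 2.5 V (half the 5 V range)
SNRdB2 = 10*log10( xrms^2 / (Delta2^2/12) );
fprintf('speech-like signal: %.1f dB\n', SNRdB2);    % about 28.9 dB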

Computer problems with MATLAB: Use the MATLAB programs in Section 2.5 to solve the following problems.

2.31. Given a sinusoidal waveform of 100   Hz,

$x(t) = 4.5\sin(2\pi \times 100t)$, for $t \ge 0$,

sample it at 8000 samples per second and

(a)

Write a MATLAB program to quantize x(t) using a 6-bit bipolar quantizer to obtain the quantized signal $x_q$, assuming the signal range to be from −5 to 5 V;

(b)

Plot the original signal and quantized signal;

(c)

Calculate the SNR due to quantization using the MATLAB program.
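A stand-alone MATLAB sketch for Problem 2.31 (the book's Programs 2.2–2.5 are not reproduced here):

% Sample, quantize with a 6-bit bipolar quantizer over -5..5 V, measure SNR.
fs = 8000; t = 0:1/fs:0.02;              % 20 ms of samples
x  = 4.5*sin(2*pi*100*t);
m  = 6; xmax = 5; Delta = 2*xmax/2^m;    % bipolar quantizer step size
xq = Delta*round(x/Delta);               % mid-tread uniform quantization
xq = max(min(xq, xmax - Delta), -xmax);  % clip to the available codes
SNRdB = 10*log10( sum(x.^2) / sum((x - xq).^2) );
plot(t, x, t, xq); xlabel('t (s)'); legend('x(t)', 'x_q(n)');
fprintf('SNR due to quantization: %.1f dB\n', SNRdB);   % roughly 37 dB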

2.32. Given a signal waveform,

$x(t) = 3.25\sin(2\pi \times 50t) + 1.25\cos(2\pi \times 100t + \pi/4)$, for $t \ge 0$,

sample it at 8000 samples per second and

(a)

Write a MATLAB program to quantize x(t) using a 6-bit bipolar quantizer to obtain the quantized signal $x_q$, assuming the signal range to be from −5 to 5 V;

(b)

Plot the original signal and quantized signal;

(c)

Calculate the SNR due to quantization using the MATLAB program.

2.33. Given a speech signal sampled at 8000   Hz, as shown in Example 2.14,

(a)

Write a MATLAB program to quantize x(t) using a 6-bit bipolar quantizer to obtain the quantized signal $x_q$, assuming that the signal range is from −5 to 5 V;

(b)

Plot the original speech waveform, quantized speech, and quantization error;

(c)

Calculate the SNR due to quantization using the MATLAB program.

MATLAB Projects

2.34. Performance evaluation of speech quantization:

Given an original speech segment "speech.dat" sampled at 8000 Hz with each sample encoded in 16 bits, use Programs 2.3–2.5 and modify Program 2.2 to quantize the speech segment using 3–15 bits, respectively. The SNR in dB must be measured for each quantization. The MATLAB function "sound(x/max(abs(x)),fs)" can be used to evaluate sound quality, where "x" is the speech segment and "fs" is the sampling rate of 8000 Hz. In this project, create a plot of the measured SNR (dB) versus the number of bits and discuss the effect of the number of bits on sound quality. For comparison, plot the original speech and the quantized versions using 3, 8, and 15 bits.

2.35. Performance evaluation of seismic data quantization:

The seismic signal, a measurement of the acceleration of ground motion, is required for applications in the study of geophysics. The seismic signal ("seismic.dat," provided by the USGS Albuquerque Seismological Laboratory) has a sampling rate of 15 Hz with 6700 data samples, and each sample is encoded using 32 bits. Quantizing each 32-bit sample down to a lower number of bits per sample can reduce the memory storage requirement at the cost of reduced signal quality. Use Programs 2.3–2.5 and modify Program 2.2 to quantize the seismic data using 13, 15, 17, …, 31 bits. The SNR in dB must be measured for each quantization. Create a plot of the measured SNR (dB) versus the number of bits. For comparison, plot the seismic data and the quantized versions using 13, 18, 25, and 31 bits.

Advanced Problems

2.36–2.38. Given the following sampling system (see Fig. 2.43), where $x_s(t) = x(t)p(t)$:

Fig. 2.43

Fig. 2.43. Analog signal reconstruction in Problems 2.36–2.38.

2.36. If the pulse train used is depicted in Fig. 2.44

Fig. 2.44

Fig. 2.44. Pulse train in Problem 2.36.

(a)

Determine the Fourier series expansion for $p(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t}$;

(b)

Determine $X_s(f)$ in terms of $X(f)$ using the Fourier transform, that is,

$X_s(f) = \mathrm{FT}\{x_s(t)\} = \mathrm{FT}\{x(t)p(t)\}$;

(c)

Determine the spectral distortion, referring to $X(f)$, for $-f_s/2 < f < f_s/2$.
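The following general relation (a sketch, not the book's solution; the pulse shapes of Figs. 2.44–2.46 are not reproduced here) underlies Problems 2.36–2.38:

\[
  p(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t}, \qquad \omega_0 = 2\pi f_s = \frac{2\pi}{T},
\]
\[
  X_s(f) = \mathrm{FT}\{x(t)p(t)\} = \sum_{k=-\infty}^{\infty} a_k\, X(f - k f_s),
\]
so the baseband copy ($k = 0$) is scaled by $a_0$, and the distortion within $-f_s/2 < f < f_s/2$ comes from this scaling plus any overlap of the shifted copies $a_k X(f - k f_s)$ when $x(t)$ is not bandlimited to $f_s/2$.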

2.37. If the pulse train used is depicted in Fig. 2.45

Fig. 2.45

Fig. 2.45. Pulse train in Problem 2.37.

(a)

Determine the Fourier series expansion for $p(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t}$;

(b)

Determine $X_s(f)$ in terms of $X(f)$ using the Fourier transform, that is,

$X_s(f) = \mathrm{FT}\{x_s(t)\} = \mathrm{FT}\{x(t)p(t)\}$;

(c)

Determine the spectral distortion, referring to $X(f)$, for $-f_s/2 < f < f_s/2$.

2.38. If the pulse train used is depicted in Fig. 2.46

Fig. 2.46

Fig. 2.46. Pulse train in Problem 2.38.

(a)

Determine the Fourier series expansion for $p(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t}$;

(b)

Determine $X_s(f)$ in terms of $X(f)$ using the Fourier transform, that is,

$X_s(f) = \mathrm{FT}\{x_s(t)\} = \mathrm{FT}\{x(t)p(t)\}$;

(c)

Determine the spectral distortion, referring to $X(f)$, for $-f_s/2 < f < f_s/2$.

2.39. In Fig. 2.16, a Chebyshev lowpass filter is chosen to serve as anti-aliasing lowpass filter, where the magnitude frequency response of the Chebyshev filter with an order n is given by

$|H(f)| = \dfrac{1}{\sqrt{1 + \varepsilon^2 C_n^2(f/f_c)}},$

where ɛ is the absolute ripple, and

$C_n(x) = \begin{cases} \cos\!\left(n\cos^{-1}x\right), & |x| \le 1 \\ \cosh\!\left(n\cosh^{-1}x\right), & |x| > 1 \end{cases}$ and $\cosh^{-1}x = \ln\!\left(x + \sqrt{x^2 - 1}\right).$

(a)

Derive the formula for the aliasing level; and

(b)

When the sampling frequency is 8 kHz, the cutoff frequency is 3.4 kHz, the ripple is 1 dB, and the order is n = 4, determine the aliasing level at a frequency of 1 kHz.
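A minimal MATLAB sketch for Problem 2.39(b) (not the book's solution). It assumes the same aliasing-level measure as in the Butterworth problems, $|H(f_s - f)|/|H(f)|$:

% Chebyshev anti-aliasing filter: aliasing level at 1 kHz.
fs = 8000; fc = 3400; rippledB = 1; n = 4; f = 1000;   % Problem 2.39(b) values
eps2 = 10^(rippledB/10) - 1;                  % epsilon^2 from the dB ripple
Cn   = @(x) (abs(x) <= 1).*cos(n*acos(min(max(x,-1),1))) + ...
            (abs(x) >  1).*cosh(n*acosh(max(abs(x),1)));
H    = @(f) 1 ./ sqrt(1 + eps2*Cn(f/fc).^2);
aliasLevel = 100 * H(fs - f) / H(f);          % percentage of aliasing level
fprintf('aliasing level at %d Hz: %.2f %%\n', f, aliasLevel);  % about 1.8%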

2.40. Given the following signal

$x(t) = \sum_{i=1}^{N} A_i \cos(\omega_i t + \phi_i)$

with the signal ranging from $-\sum_{i=1}^{N} A_i$ to $\sum_{i=1}^{N} A_i$, determine the signal-to-quantization-noise power ratio using $m$ bits.

2.41. Given the following modulated signal

$x(t) = A_1\cos(\omega_1 t + \phi_1) \times A_2\cos(\omega_2 t + \phi_2)$

with the signal ranging from $-A_1A_2$ to $A_1A_2$, determine the signal-to-quantization-noise power ratio using $m$ bits.
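One way to set up Problems 2.40 and 2.41 (a sketch, not the book's solution), using the uniform-noise model and assuming distinct frequencies:

\[
  \Delta = \frac{2\sum_{i=1}^{N} A_i}{2^m}\ \text{(Problem 2.40)}, \qquad
  \Delta = \frac{2A_1A_2}{2^m}\ \text{(Problem 2.41)},
\]
\[
  \mathrm{SNR} = \frac{P_x}{\Delta^2/12}, \qquad
  P_x = \sum_{i=1}^{N}\frac{A_i^2}{2}\ \text{(sum of sinusoids)}, \qquad
  P_x = \frac{A_1^2 A_2^2}{4}\ \text{(modulated signal, } \omega_1 \neq \omega_2\text{)}.
\]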

2.42. Assume that truncation of the continuous signal x(n) in Problem 2.40 is defined as:

$x_q(n) = x(n) + e_q(n),$

where $-\Delta < e_q(n) \le 0$ and $\Delta = 2\sum_{i=1}^{N} A_i / 2^m$. The quantization noise has the distribution given in Fig. 2.47; determine the SNR using $m$ bits.

Fig. 2.47

Fig. 2.47. The truncated error distribution in Problem 2.42.

2.43. Assume that truncation of the continuous signal x(n) in Problem 2.41 is defined as

$x_q(n) = x(n) + e_q(n),$

where $0 \le e_q(n) < \Delta$ and $\Delta = 2A_1A_2/2^m$. The quantization noise has the distribution given in Fig. 2.48; determine the SNR using $m$ bits.

Fig. 2.48

Fig. 2.48. The truncated error distribution in Problem 2.43.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128150719000026

Probability, Random Variables, and Random Processes

Ali Grami , in Introduction to Digital Communications, 2016

4.3.10 Sampling Theorem of Random Signals

Sampling is an indispensable signal-processing tool to bridge between continuous-time signals and discrete-time signals and, in turn, paves the way for digital signal processing and transmission. A wide-sense stationary process X(t) is called band-limited if its power spectral density vanishes for frequencies beyond a specific frequency W and it has finite power, as shown below:

(4.59) $S_X(f) = 0 \ \text{for} \ |f| > W, \qquad R_X(0) < \infty$

X(t) can be reconstructed from the sequence of its samples taken at a minimum rate of twice the highest frequency component (i.e., 2 W ) . The sampled version of the process equals the original in the mean-square sense for all time t, and it is as follows:

(4.60) $\hat{X}(t) = \displaystyle\sum_{n=-\infty}^{\infty} X\!\left(\frac{n}{2W}\right)\operatorname{sinc}(2Wt - n), \quad -\infty < t < \infty$

where $X\!\left(\frac{n}{2W}\right)$ is the random variable obtained by sampling the process X(t) at $t = \frac{n}{2W}$. It can be shown that the mean-square error between the original process and the sampled version is zero, i.e., we have the following:

(4.61) $\overline{e^2} = E\left\{\left[X(t) - \hat{X}(t)\right]^2\right\} = 0$
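A minimal MATLAB sketch of the interpolation formula (4.60) (not from the text), applied to a deterministic bandlimited test signal for illustration:

% Reconstruct a bandlimited signal from its samples by sinc interpolation.
W  = 100;  fs = 2*W;  Ts = 1/fs;             % minimum (Nyquist) sampling rate
n  = -50:50;                                  % finite window of samples
xn = cos(2*pi*60*n*Ts + 0.7);                 % a 60 Hz component, |f| < W
t  = linspace(-0.1, 0.1, 2001);
mysinc = @(x) sin(pi*x)./(pi*x + (x==0)) + (x==0);
xhat = zeros(size(t));
for k = 1:numel(n)
    xhat = xhat + xn(k) * mysinc(2*W*t - n(k));   % Eq. (4.60), truncated sum
end
plot(t, xhat, n*Ts, xn, 'o'); xlabel('t (s)');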

Suppose we have $S_X(f) \neq 0$ for $|f| > W$, i.e., X(t) is not band-limited to W Hz. If X(t) is applied to an ideal lowpass filter whose bandwidth is W Hz and the resulting output Y(t) is sampled at a rate equal to 2W, then the resulting mean-square error between the original (unfiltered) signal X(t) and the sampled version Ŷ(t) is as follows:

(4.62) $\overline{e^2} = E\left\{\left[X(t) - \hat{Y}(t)\right]^2\right\} = 2\displaystyle\int_{W}^{\infty} S_X(f)\,df \neq 0$

Example 4.43

The power spectral density of a wide-sense stationary random process is as follows:

$S_X(f) = \begin{cases} 10^{-6} - 10^{-12}|f|, & |f| \le 10^6 \\ 0, & \text{otherwise} \end{cases}$

The signal is passed through an ideal lowpass filter whose bandwidth is W Hz, and its output is then sampled at 2W Hz. Determine the mean-square value of the sampling error for a) W = 1 MHz and b) W = 500 kHz.

Solution

This problem highlights the relationship between the bandwidth of a random signal and the sampling rate.

(a)

Using (4.61), we have $\overline{e^2} = 0$, as the sampling rate is twice the highest frequency component.

(b)

Since the sampling rate is not high enough, we must use (4.62) to find the mean-square error:

$\overline{e^2} = 2\displaystyle\int_{10^6/2}^{10^6}\left(10^{-6} - 10^{-12} f\right) df = 0.25.$
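A quick numerical check of part (b) (a sketch, not from the text):

% Out-of-band power of S_X(f) = 1e-6 - 1e-12*|f| for |f| <= 1e6,
% counted twice for the two tails above |f| = W = 500 kHz.
SX  = @(f) (1e-6 - 1e-12*abs(f)) .* (abs(f) <= 1e6);
W   = 5e5;
err = 2 * integral(SX, W, 1e6);
fprintf('mean-square sampling error = %.4f\n', err);   % 0.2500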


URL:

https://www.sciencedirect.com/science/article/pii/B9780124076822000041

Basic Linear Filtering with Application to Image Enhancement

Alan C. Bovik , Scott T. Acton , in The Essential Guide to Image Processing, 2009

10.3.2 Ideal Lowpass Filter

As an alternative to the average filter, a filter may be designed explicitly with no sidelobes by forcing the frequency response to be zero outside of a given radial cutoff frequency Ω c :

(10.26) $H(U,V) = \begin{cases} 1; & \sqrt{U^2 + V^2} \le \Omega_c \\ 0; & \text{else} \end{cases}$

or outside of a rectangle defined by cutoff frequencies along the U- and V-axes:

(10.27) $H(U,V) = \begin{cases} 1; & |U| \le U_c \ \text{and} \ |V| \le V_c \\ 0; & \text{else.} \end{cases}$

Such a filter is called an ideal lowpass filter (ideal LPF) because of its idealized characteristic. We will study (10.27) rather than (10.26) since it is easier to describe the impulse response of the filter. If the region of frequencies passed by (10.27) is square, then there is little practical difference between the two filters if $U_c = V_c = \Omega_c$.

The impulse response of the ideal lowpass filter (10.27) is given explicitly by

(10.28) $h(m,n) = U_c V_c \operatorname{sinc}(2\pi U_c m)\cdot\operatorname{sinc}(2\pi V_c n),$

where $\operatorname{sinc}(x) = \frac{\sin x}{x}$. Despite the seemingly "ideal" nature of this filter, it has some major drawbacks. First, it cannot be implemented exactly as a linear convolution, since the impulse response (10.28) is infinite in extent (it never decays to zero). Therefore, it must be approximated. One way is to simply truncate the impulse response, which in image processing applications is often satisfactory. However, this has the effect of introducing ripple near the frequency discontinuity, producing unwanted noise leakage. The introduced ripple is a manifestation of the well-known Gibbs phenomenon studied in standard signal processing texts [1]. The ripple can be reduced by using a tapered truncation of the impulse response, e.g., by multiplying (10.28) with a Hamming window [1]. If the response is truncated to image size $M \times N$, then the ripple will be restricted to the vicinity of the locus of cutoff frequencies, which may make little difference in the filter performance. Alternately, the ideal LPF can be approximated by a Butterworth filter or other ideal LPF approximating function. The Butterworth filter has frequency response [2]

(10.29) $H(U,V) = \dfrac{1}{1 + \left(\dfrac{\sqrt{U^2 + V^2}}{\Omega_c}\right)^{2K}}$

and, in principle, can be made to agree with the ideal LPF with arbitrary precision by taking the filter order K large enough. However, (10.29) also has an infinite-extent impulse response with no known closed-form solution. Hence, to be implemented it must also be spatially truncated (approximated), which reduces the approximation effectiveness of the filter [2].

It should be noted that if a filter impulse response is truncated, then it should also be slightly modified by adding a constant level to each coefficient. The constant should be selected such that the filter coefficients sum to unity. This is commonly done since it is generally desirable that the response of the filter to the (0, 0) spatial frequency be unity, and since for any filter

(10.30) $H(0,0) = \displaystyle\sum_{p=-\infty}^{\infty}\sum_{q=-\infty}^{\infty} h(p,q).$
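A minimal MATLAB sketch of this recipe (not from the text): truncate a separable sinc kernel, taper it with a Hamming window, and add a constant so the coefficients sum to one as in (10.30). The cutoffs Uc and Vc are assumed example values, and the 1-D prototypes are scaled to unit DC gain rather than carrying the constant of (10.28).

% Truncated, Hamming-windowed approximation of a separable ideal LPF.
Uc = 0.1; Vc = 0.1;                            % cutoffs in cycles/pixel (assumed)
N  = 21;                                       % odd kernel size
n  = -(N-1)/2:(N-1)/2;
mysinc = @(x) sin(x)./(x + (x==0)) + (x==0);   % sinc(x) = sin(x)/x, as in the text
w  = 0.54 - 0.46*cos(2*pi*(0:N-1)/(N-1));      % Hamming window
h1 = 2*Uc*mysinc(2*pi*Uc*n) .* w;              % 1-D windowed sinc, DC gain ~ 1
h2 = 2*Vc*mysinc(2*pi*Vc*n) .* w;
h  = h1.' * h2;                                % separable 2-D kernel
h  = h + (1 - sum(h(:)))/numel(h);             % coefficients now sum to unity, per (10.30)
% Applying it to an image:  g = conv2(double(img), h, 'same');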

The second major drawback of the ideal LPF is the phenomenon known as ringing. This term arises from the characteristic response of the ideal LPF to highly concentrated bright spots in an image. Such spots are impulse-like, and so the local response has the appearance of the impulse response of the filter. For the circularly-symmetric ideal LPF in (10.26), the response consists of a blurred version of the impulse surrounded by sinc-like spatial sidelobes, which have the appearance of rings surrounding the main lobe.

In practical application, the ringing phenomenon creates more of a problem because of the edge response of the ideal LPF. In the simplest case, the image consists of a single one-dimensional step edge: $s(m,n) = s(n) = 1$ for $n \ge 0$ and $s(n) = 0$ otherwise. Figure 10.4 depicts the response of the ideal LPF with impulse response (10.28) to the step edge. The step response of the ideal LPF oscillates (rings) because the sinc function oscillates about the zero level. In the convolution sum, the impulse response alternately makes positive and negative contributions, creating overshoots and undershoots in the vicinity of the edge profile. Most digital images contain numerous step-like light-to-dark or dark-to-light image transitions; hence, application of the ideal LPF will tend to contribute considerable ringing artifacts to images. Since edges contain much of the significant information about the image, and since the eye tends to be sensitive to ringing artifacts, often the ideal LPF and its derivatives are not a good choice for image smoothing. However, if it is desired to strictly bandlimit the image as closely as possible, then the ideal LPF is a necessary choice.

FIGURE 10.4. Depiction of edge ringing. The step edge is shown as a continuous curve, while the linear convolution response of the ideal LPF (10.28) is shown as a dotted curve.

Once an impulse response for an approximation to the ideal LPF has been decided, the usual approach to implementation again entails zero-padding both the image and the impulse response, using the periodic extension, taking the product of their DFTs (using an FFT algorithm), and taking the inverse DFT of the product as the result. This was done in the example of Fig. 10.5, which depicts application of the ideal LPF using two cutoff frequencies. This was implemented using a truncated ideal LPF without any special windowing. The dominant characteristic of the filtered images is the ringing, manifested as a strong mottling in both images. A very strong oriented ringing can be easily seen near the upper and lower borders of the image.

FIGURE 10.5. Example of application of ideal lowpass filter to noisy image in Fig. 10.3(b). Image is filtered using radial frequency cutoff of (a) 30.72 cycles/image and (b) 17.07 cycles/image. These cutoff frequencies are the same as the half-peak cutoff frequencies used in Fig. 10.3.


URL:

https://www.sciencedirect.com/science/article/pii/B978012374457900010X

Signals, Systems, and Spectral Analysis

Ali Grami , in Introduction to Digital Communications, 2016

3.10.1 Ideal Filters

An ideal filter exactly passes signals at certain sets of frequencies and completely rejects the rest. In order to avoid distortion in the filtering process, a filter should ideally have a flat magnitude characteristic and a linear phase characteristic over the passband of the filter (the frequency range of interest). The most common types of filters are the low-pass filter (LPF), high-pass filter (HPF), band-pass filter (BPF), and band-stop filter (BSF), which pass low, high, intermediate, and all but intermediate frequencies, respectively. Figure 3.37 shows the magnitude and phase responses of ideal LPF, HPF, BPF, and BSF. Most communication filters are of LPF and BPF types.

Figure 3.37. Magnitude and phases responses of ideal filters: (a) lowpass filter (LPF), (b) bandpass filter (BPF), (c) highpass filter (HPF), and (d) bandstop filter (BSF).

For a physically-realizable filter, its impulse response h(t) must be causal. In the frequency domain, this condition is equivalent to the Paley-Wiener criterion, which states that the necessary and sufficient condition for $|H(f)|$ to be the magnitude response of a causal (realizable) filter is as follows:

(3.85) $\displaystyle\int_{-\infty}^{\infty} \frac{\left|\ln|H(f)|\right|}{1 + f^2}\, df < \infty$

This condition is not met if $|H(f)| = 0$ over a finite band of frequencies (i.e., a filter cannot perfectly reject any band of frequencies). However, if $|H(f)| = 0$ at a single frequency (or a set of discrete frequencies), the condition may be met. According to this criterion, ideal filters are clearly noncausal. Some continuous magnitude responses, such as $|H(f)| = \exp(-|f|)$, are not allowable magnitude responses for causal filters because the integral in (3.85) does not give a finite result.

Example 3.50

Determine the frequency responses and impulse responses of an ideal LPF and BPF, and discuss the causality issue.

Solution

The frequency response of an ideal LPF with a bandwidth of W Hz and its impulse response are respectively defined as follows:

$H_{\mathrm{LPF}}(f) = \left[u(f + W) - u(f - W)\right]\exp(-j2\pi f t_0)$

and

$h_{\mathrm{LPF}}(t) = 2W\operatorname{sinc}\!\left(2W(t - t_0)\right)$

The impulse response for $t_0 = 0$ is shown in Figure 3.38a. As reflected in (3.85), $|H(f)|$ does not meet the Paley-Wiener criterion, and h(t) is thus not causal. One practical approach to filter design is to cut off the tail of h(t) for $t < 0$. In order to have the truncated version of h(t) as close as possible to the ideal impulse response, the delay $t_0$ must be as large as possible. Theoretically, a delay of infinite duration is needed to realize the ideal characteristics. But practically, a delay of just a few times $\frac{1}{2W}$ can make the truncated version reasonably close to the ideal one. As an example, for an audio LPF with a bandwidth of 20 kHz, a delay of 0.1 milliseconds would be a reasonable choice.

Figure 3.38. Signals in Example 3.50.

The frequency response of an ideal BPF, with a bandwidth of 2W Hz and centered around the frequency f c , as well as its impulse response are respectively defined as follows:

$H_{\mathrm{BPF}}(f) = \left[u(f + W + f_c) - u(f - W + f_c) + u(f + W - f_c) - u(f - W - f_c)\right]e^{-j2\pi f t_0}$

and

$h_{\mathrm{BPF}}(t) = 4W\operatorname{sinc}\!\left(2W(t - t_0)\right)\cos\!\left(2\pi f_c(t - t_0)\right)$

The impulse response for $t_0 = 0$ is shown in Figure 3.38b. If $f_c \gg 2W$, it is reasonable to view $h_{\mathrm{BPF}}(t)$ as the slowly varying envelope $\operatorname{sinc}(2Wt)$ modulating the high-frequency oscillatory signal $\cos(2\pi f_c t)$, both shifted to the right by $t_0$.
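A minimal MATLAB sketch of the truncation idea in this example (not from the text): delay the ideal LPF impulse response by $t_0$ and keep only $t \ge 0$.

% Causal FIR approximation of the ideal LPF of Example 3.50.
W  = 20e3;                 % bandwidth 20 kHz, as in the audio example
t0 = 4/(2*W);              % delay of a few times 1/(2W)  (0.1 ms here)
fs = 400e3;                % rate used only to tabulate h(t)
t  = 0:1/fs:2*t0;          % keep the causal part only
mysinc = @(x) sin(pi*x)./(pi*x + (x==0)) + (x==0);
h  = 2*W*mysinc(2*W*(t - t0));       % h_LPF(t), truncated to t >= 0
plot(t*1e3, h); xlabel('t (ms)'); ylabel('h_{LPF}(t)');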


URL:

https://www.sciencedirect.com/science/article/pii/B978012407682200003X

QUANTIZATION IN DIGITAL FILTERS

Wing-Kuen Ling , in Nonlinear Digital Filters, 2007

Theorem 3.1

Assume that

$Q(\mu(n)) \approx \mu^T(n)\mathbf{p}$, $\quad H(\omega) = \begin{cases} 1, & |\omega| < \dfrac{\pi}{R} \\ 0, & \text{otherwise,} \end{cases} \quad \omega_0 \ge \dfrac{(2M-1)\pi}{R}$, and $\left(\dfrac{1}{2}\sum_{k=1}^{N_a} a_k^2\right)^2 \displaystyle\int_{-\pi/R}^{\pi/R}\left|\sum_{m=2}^{M} p_m U^{2m-1}(\omega)\right|^2 d\omega > \displaystyle\int_{-\pi/R}^{\pi/R}\left|\sum_{m=2}^{M}\dfrac{p_m a_m U^{2m-1}(\omega)}{2^{2m}}\right|^2 d\omega.$

Then the SNR of the coded system shown in Figure 3.3b will be higher than that of the conventional system shown in Figure 3.3a.

Proof:

If Q (μ (n)) ≈ μ T (n)p, then for the conventional system shown in Figure 3.3a, we have:

(3.3) $Y_1(\omega) = H(\omega)S_1(\omega) \approx H(\omega)\displaystyle\sum_{m=1}^{M} p_m U^{2m-1}(\omega).$

If we regard the first order term (m = 1) in the signal band as the signal component and all higher order terms (m ≥ 2) in the signal band as the quantization noise, then the SNR can be estimated as follows:

(3.4) $\mathrm{SNR} \approx 10\log_{10}\dfrac{\displaystyle\int_{-\pi/R}^{\pi/R}\left|H(\omega)\,p_1 U(\omega)\right|^2 d\omega}{\displaystyle\int_{-\pi/R}^{\pi/R}\left|H(\omega)\sum_{m=2}^{M} p_m U^{2m-1}(\omega)\right|^2 d\omega}.$

Since H(ω) is an ideal lowpass filter, we have:

(3.5) $\mathrm{SNR} \approx 10\log_{10}\dfrac{p_1^2\displaystyle\int_{-\pi/R}^{\pi/R}\left|U(\omega)\right|^2 d\omega}{\displaystyle\int_{-\pi/R}^{\pi/R}\left|\sum_{m=2}^{M} p_m U^{2m-1}(\omega)\right|^2 d\omega}.$

Now consider the coded system shown in Figure 3.3b.

Denote the input to the quantizer as $\tilde{u}(n)$ and $\tilde{U}^{2m-1}(\omega) \equiv \tilde{U}(\omega) * \cdots * \tilde{U}(\omega)$, where there are $2m-1$ terms in $\tilde{U}^{2m-1}(\omega)$. Since $\tilde{U}(\omega) = \dfrac{U(\omega) * \tilde{A}(\omega)}{2}$, we have $\tilde{U}^{2m-1}(\omega) = \dfrac{U^{2m-1}(\omega) * \tilde{A}^{2m-1}(\omega)}{2^{2m-1}}$. As we assume that $Q(\mu(n)) \approx \mu^T(n)\mathbf{p}$, we have $S_2(\omega) \approx \displaystyle\sum_{m=1}^{M}\left(\frac{p_m U^{2m-1}(\omega) * \tilde{A}^{2m-1}(\omega)}{2^{2m-1}}\right)$ and

(3.6) $Y_2(\omega) \approx \displaystyle\sum_{m=1}^{M}\left(\frac{p_m U^{2m-1}(\omega) * \tilde{A}^{2m-1}(\omega)}{2^{2m-1}}\right)$

As a result,

(3.7) $\mathrm{SNR} \approx 10\log_{10}\dfrac{\displaystyle\int_{-\pi/R}^{\pi/R}\left|H(\omega)\,\frac{p_1 U(\omega) * \tilde{A}^{2}(\omega)}{2^{2}}\right|^2 d\omega}{\displaystyle\int_{-\pi/R}^{\pi/R}\left|H(\omega)\sum_{m=2}^{M}\frac{p_m U^{2m-1}(\omega) * \tilde{A}^{2m}(\omega)}{2^{2m}}\right|^2 d\omega}.$

Since $\omega_0 \ge \frac{(2M-1)\pi}{R}$ and $u(n)$ is bandlimited, the terms in $U^{2m-1}(\omega) * \tilde{A}^{2m}(\omega)$ for $m = 1, 2, \ldots, M$ do not overlap each other in the frequency spectrum. As $H(\omega)$ is an ideal lowpass filter, eqn (3.7) can be further simplified as:

(3.8) $\mathrm{SNR} \approx 10\log_{10}\dfrac{\dfrac{p_1^2}{4}\left(\displaystyle\sum_{k=1}^{N_a} a_k^2\right)^2\displaystyle\int_{-\pi/R}^{\pi/R}\left|U(\omega)\right|^2 d\omega}{\displaystyle\int_{-\pi/R}^{\pi/R}\left|H(\omega)\sum_{m=2}^{M}\frac{p_m U^{2m-1}(\omega) * \tilde{A}^{2m}(\omega)}{2^{2m}}\right|^2 d\omega}.$

Denote a vector $\mathbf{a} \equiv [a_1 \ \cdots \ a_{N_a}]^T$ and $\tilde{\mathbf{a}}_m \equiv [\,\mathrm{fliplr}(\mathbf{a}^T)\ \ 0\ \ \mathbf{a}^T\ \ \mathbf{0}_{(4m-2)N_a}\,]^T$, where $\mathrm{fliplr}(\mathbf{a}^T) \equiv [a_{N_a} \ \cdots \ a_1]$ and $\mathbf{0}_{(4m-2)N_a}$ denotes a zero row vector of length $(4m-2)N_a$.

Denote the discrete Fourier transform of $\tilde{\mathbf{a}}_m$ as $\hat{\mathbf{a}}_m$ and its $k$th element as $\hat{a}_m(k)$.

Denote a vector $\bar{\mathbf{a}}_m$ with its $k$th element $\bar{a}_m(k)$ defined as $\bar{a}_m(k) \equiv \left(\hat{a}_m(k)\right)^{2m}$.

Denote the inverse discrete Fourier transform of $\bar{\mathbf{a}}_m$ as $\mathbf{a}_m$ and its $k$th element as $a_m(k)$. Then $a_m(2mN_a + 1) = a_m$. Since $\omega_0 \ge \frac{(2M-1)\pi}{R}$, $u(n)$ is bandlimited and $H(\omega)$ is an ideal lowpass filter,

(3.9) $\left(\dfrac{1}{2}\displaystyle\sum_{k=1}^{N_a} a_k^2\right)^2 \displaystyle\int_{-\pi/R}^{\pi/R}\left|\sum_{m=2}^{M} p_m U^{2m-1}(\omega)\right|^2 d\omega > \displaystyle\int_{-\pi/R}^{\pi/R}\left|\sum_{m=2}^{M}\dfrac{p_m a_m U^{2m-1}(\omega)}{2^{2m}}\right|^2 d\omega.$

If $\left(\dfrac{1}{2}\displaystyle\sum_{k=1}^{N_a} a_k^2\right)^2 \displaystyle\int_{-\pi/R}^{\pi/R}\left|p_m U^{2m-1}(\omega)\right|^2 d\omega > \displaystyle\int_{-\pi/R}^{\pi/R}\left|\dfrac{p_m U^{2m-1}(\omega)}{2^{2m}}\right|^2 d\omega$, then the SNR of the coded system will be larger than that of the conventional system.

This completes the proof.

If the periodic code consists of only a single coefficient, then the coded system becomes a modulated system. Denote the factorial operator by !; then we have the following corollary:


URL:

https://www.sciencedirect.com/science/article/pii/B978012372536350003X

Video Sampling and Interpolation

Eric Dubois , in The Essential Guide to Video Processing, 2009

2.3 SAMPLING AND RECONSTRUCTION OF CONTINUOUS TIME-VARYING IMAGERY

The process for sampling a time-varying image can be approximated by the model shown in Fig. 2.4. In this model, the light arriving on the sensor is collected and weighted in space and time by the sensor aperture a(s) to give the output

FIGURE 2.4. System for sampling a time-varying image.

(2.8) $f_{ca}(\mathbf{s}) = \displaystyle\int_{\mathbb{R}^3} f_c(\mathbf{s} + \mathbf{s}')\, a(\mathbf{s}')\, d\mathbf{s}',$

where it is assumed, here, that the sensor aperture is space and time invariant. The resulting signal fca (s) is then sampled in an ideal fashion on the sampling structure Ψ,

(2.9) $f(\mathbf{s}) = f_{ca}(\mathbf{s}), \quad \mathbf{s} \in \Psi$

By defining $h_a(\mathbf{s}) = a(-\mathbf{s})$, it is seen that the aperture weighting is a linear shift-invariant filtering operation, that is, the convolution of $f_c(\mathbf{s})$ with $h_a(\mathbf{s})$

(2.10) $f_{ca}(\mathbf{s}) = \displaystyle\int_{\mathbb{R}^3} f_c(\mathbf{s} - \mathbf{s}')\, h_a(\mathbf{s}')\, d\mathbf{s}'$

Thus, if fc(s) has a Fourier transform Fc(u), then Fca(u) = Fc(u)Ha(u), where Ha(u) is the Fourier transform of the aperture impulse response. In typical acquisition systems, the sampling aperture can be modeled as a rectangular or Gaussian function.

If the sampling structure is a lattice Λ, then the effect in the frequency domain of the sampling operation is given by [1]

(2.11) $F(\mathbf{u}) = \dfrac{1}{d(\Lambda)}\displaystyle\sum_{\mathbf{k}\in\Lambda^*} F_{ca}(\mathbf{u} + \mathbf{k}),$

in other words, the continuous signal spectrum $F_{ca}(\mathbf{u})$ is replicated on the points of the reciprocal lattice. The terms in the sum of Eq. (2.11), other than for $\mathbf{k} = \mathbf{0}$, are referred to as spectral repeats. There are two main consequences of the sampling process. The first is that these spectral repeats, if not removed by the display/viewer system, may be visible in the form of flicker, line structure, or dot patterns. The second is that, if the regions of support of $F_{ca}(\mathbf{u})$ and $F_{ca}(\mathbf{u} + \mathbf{k})$ have nonzero intersection for some values $\mathbf{k} \in \Lambda^*$, we have aliasing; a frequency $\mathbf{u}_a$ in this intersection can represent both the frequencies $\mathbf{u}_a$ and $\mathbf{u}_a - \mathbf{k}$ in the original signal. Thus, to avoid aliasing, the spectrum $F_{ca}(\mathbf{u})$ should be confined to a unit cell of $\Lambda^*$; this can be accomplished to some extent by the sampling aperture $h_a$. Aliasing is particularly problematic because, once introduced, it is difficult to remove, since there is more than one acceptable interpretation of the observed data. Aliasing is a familiar effect that tends to be localized to those regions of the image with high-frequency details. It can be seen as moiré patterns in such periodic-like patterns as fishnets and venetian blinds, and as staircase-like effects on high-contrast oblique edges. The aliasing is particularly visible and annoying when these patterns are moving. Aliasing is controlled by selecting a sufficiently dense sampling structure and through the prefiltering effect of the sampling aperture.

If the support of Fca (u) is confined to a unit cell P * of Λ*, then it is possible to reconstruct fca exactly from the samples. In this case, we have

(2.12) $F_{ca}(\mathbf{u}) = \begin{cases} d(\Lambda)F(\mathbf{u}) & \text{if } \mathbf{u} \in \mathcal{P}^* \\ 0 & \text{if } \mathbf{u} \notin \mathcal{P}^* \end{cases}$

and it follows that

(2.13) $f_{ca}(\mathbf{s}) = \displaystyle\sum_{\mathbf{s}'\in\Lambda} f(\mathbf{s}')\, t(\mathbf{s} - \mathbf{s}'),$

where

(2.14) $t(\mathbf{s}) = d(\Lambda)\displaystyle\int_{\mathcal{P}^*} \exp(j2\pi\,\mathbf{u}\cdot\mathbf{s})\, d\mathbf{u}$

is the impulse response of an ideal lowpass filter (with sampled input and continuous output) having passband P * . This is the multidimensional version of the familiar Sampling Theorem.

In practical systems, the reconstruction is achieved by

(2.15) $\hat{f}_{ca}(\mathbf{s}) = \displaystyle\sum_{\mathbf{s}'\in\Lambda} f(\mathbf{s}')\, d(\mathbf{s} - \mathbf{s}')$

where d(s) is the display aperture, which generally bears little resemblance to the ideal t(s) of Eq. (2.14). The display aperture is usually separable in space and time, $d(\mathbf{s}) = d_s(x, y)\, d_t(t)$, where $d_s(x, y)$ may be Gaussian or rectangular, and $d_t(t)$ may be exponential or rectangular, depending on the type of display system. In fact, a large part of the reconstruction filtering is often left to the spatiotemporal response of the human visual system. The main requirement is that the first temporal frequency repeat at zero spatial frequency (at 1/T for progressive scanning and 2/T for interlaced scanning, Fig. 2.2) be at least 50 Hz for large-area flicker to be acceptably low.

If the display aperture is the ideal lowpass filter specified by Eq. (2.14), then the optimal sampling aperture is also an ideal lowpass filter with passband $\mathcal{P}^*$; neither of these is realizable in practice. If the actual aperture of a given display device operating on a lattice Λ is given, it is possible to determine the optimal sampling aperture according to a weighted-squared-error criterion [3]. This optimal sampling aperture, which will not be an ideal lowpass filter, is similarly not physically realizable, but it could at least form the design objective rather than the inappropriate ideal lowpass filter.

If sampling is performed in only one or two dimensions, the spectrum is replicated in the corresponding frequency dimensions. For the two cases of temporal and vertical-temporal sampling, respectively, we obtain

(2.16) $F(\mathbf{u}, w) = \dfrac{1}{T}\displaystyle\sum_{l=-\infty}^{\infty} F_{ca}\!\left(\mathbf{u}, w + \frac{l}{T}\right)$

and

(2.17) $F(u, v, w) = \dfrac{1}{d(\Lambda_{yt})}\displaystyle\sum_{\mathbf{k}\in\Lambda_{yt}^*} F_{ca}\big(u, (v, w) + \mathbf{k}\big)$

Consider first the case of pure temporal sampling, as in motion-picture film. The main parameters in this case are the sampling period T and the temporal aperture. As shown in Eq. (2.16), the signal spectrum is replicated in temporal frequency at multiples of 1/T. In analogy with one-dimensional signals, one might think that the time-varying image should be bandlimited in temporal frequency to 1/(2T) before sampling. However, this is not the case. To illustrate, consider the spectrum of an image undergoing translation with constant velocity v. This can model the local behavior in a large class of time-varying imagery. The assumption implies that $f_c(\mathbf{x}, t) = f_{c0}(\mathbf{x} - \mathbf{v}t)$, where $f_{c0}(\mathbf{x}) = f_c(\mathbf{x}, 0)$. A straightforward analysis [4] shows that $F_c(\mathbf{u}, w) = F_{c0}(\mathbf{u})\,\delta(\mathbf{u}\cdot\mathbf{v} + w)$, where δ(·) is the Dirac delta function. Thus, the spectrum of the time-varying image is not spread throughout spatiotemporal frequency space but rather is concentrated around the plane $\mathbf{u}\cdot\mathbf{v} + w = 0$. When this translating image is sampled in the temporal dimension, these planes are parallel to each other and do not intersect, that is, there is no aliasing, even if the temporal bandwidth far exceeds 1/(2T). This is most easily illustrated in two dimensions. Consider the case of vertical motion only. Figure 2.5 shows the vertical-temporal projection of the spectrum of the sampled image for different velocities v. Assume that the image is vertically bandlimited to B c/ph. It follows that, when the vertical velocity reaches 1/(2TB) picture heights per second (ph/s), the spectrum will extend out to the temporal frequency of 1/(2T), as shown in Fig. 2.5(b). At twice that velocity (1/(TB)), it would extend to a temporal frequency of 1/T, which might suggest severe aliasing. However, as seen in Fig. 2.5(c), there is no spectral overlap. To reconstruct the continuous signal correctly, however, a vertical-temporal filtering adapted to the velocity is required. Bandlimiting the signal to a temporal frequency of 1/(2T) before sampling would effectively cut the vertical resolution in half for this velocity. Note that the velocities mentioned above are not really very high. To consider some typical numbers, if T = 1/24 s, as in film, and B = 500 c/ph (corresponding to 1000 scanning lines), the velocity 1/(2TB) is about 1/42 ph/s. It should be noted that, if the viewer is tracking the vertical movement, the spectrum of the image on the retina will be far less tilted, again arguing against sharp temporal bandlimiting. (This is in fact a kind of motion-compensated filtering by the visual system.) The temporal camera aperture can roughly be modeled as the integration of $f_c$ over a period $T_a \le T$. The choice of the value of the parameter $T_a$ is a compromise between motion blur and signal-to-noise ratio.

FIGURE 2.5. Vertical-temporal projection of the spectrum of temporally sampled time-varying image with vertical motion of velocity v. (a)v = 0. (b)v = 1/2TB. (c)v = 1/TB .
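The plane-concentration property quoted above follows in a couple of lines from the stated translation model; a sketch of the derivation (not in the original text):

\begin{align*}
F_c(\mathbf{u}, w)
  &= \int_{\mathbb{R}}\!\int_{\mathbb{R}^2} f_{c0}(\mathbf{x} - \mathbf{v}t)\,
     e^{-j2\pi(\mathbf{u}\cdot\mathbf{x} + wt)}\, d\mathbf{x}\, dt \\
  &= \int_{\mathbb{R}} F_{c0}(\mathbf{u})\,
     e^{-j2\pi(\mathbf{u}\cdot\mathbf{v} + w)t}\, dt
   = F_{c0}(\mathbf{u})\,\delta(\mathbf{u}\cdot\mathbf{v} + w),
\end{align*}

so all of the energy lies on the plane $\mathbf{u}\cdot\mathbf{v} + w = 0$.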

Similar arguments can be made in the case of the two most popular vertical-temporal scanning structures, progressive scanning, and interlaced scanning. Referring to Fig. 2.6, the vertical-temporal spectrum of a vertically translating image at the same three velocities (assuming that 1/Y = 2B) is shown for these two scanning structures. For progressive scanning, there continues to be no spectral overlap, while for interlaced scanning, the spectral overlap can be severe at certain velocities (e.g., 1/TB as in Fig. 2.6(f)). This is a strong advantage for progressive scanning. Another disadvantage of interlaced scanning is that each field is spatially undersampled and pure spatial processing or interpolation is very difficult. An illustration in three dimensions of some of these ideas can be found in [5].

FIGURE 2.6. Vertical-temporal projection of spectrum of vertical-temporal sampled time-varying image with progressive and interlaced scanning. Progressive: (a)v = 0 . (b)v =1/2BT . (c)v = 1/BT . Interlaced: (d)v = 0 . (e)v = 1/2TB . (f)v = 1/TB .


URL:

https://www.sciencedirect.com/science/article/pii/B9780123744562000037

Two-Dimensional Signals and Systems

John W. Woods , in Multidimensional Signal, Image, and Video Processing and Coding (Second Edition), 2012

Some Useful Fourier Transform Pairs

1.

Constant c:

$\mathrm{FT}\{c\} = (2\pi)^2\, c\, \delta(\omega_1, \omega_2)$

in the unit cell $[-\pi, +\pi]^2$
2.

Complex exponential for spatial frequency $(\nu_1, \nu_2) \in [-\pi, +\pi]^2$:

$\mathrm{FT}\{\exp j(\nu_1 n_1 + \nu_2 n_2)\} = (2\pi)^2\,\delta(\omega_1 - \nu_1,\ \omega_2 - \nu_2)$

in the unit cell $[-\pi, +\pi]^2$
3.

Constant in n 2 dimension:

$\mathrm{FT}\{x_1(n_1)\} = 2\pi\, X_1(\omega_1)\,\delta(\omega_2),$

where $X_1(\omega_1)$ is a 1-D Fourier transform and $\delta(\omega_2)$ is a 1-D impulse function
4.

Ideal lowpass filter (rectangular passband):

$H_r(\omega_1, \omega_2) = I_{\omega_c}(\omega_1)\, I_{\omega_c}(\omega_2),$ where $I_{\omega_c}(\omega) \equiv \begin{cases} 1, & |\omega| \le \omega_c, \\ 0, & \text{else,} \end{cases}$

with $\omega_c$ the cutoff frequency of the filter. Necessarily $\omega_c \le \pi$. The function $I_{\omega_c}$ is sometimes called an indicator function, since it indicates the passband. Taking the inverse 2-D Fourier transform of this separable function, we proceed as follows to obtain the ideal impulse response:

$h_r(n_1, n_2) = \dfrac{1}{(2\pi)^2}\displaystyle\int\!\!\int_{[-\pi,+\pi]\times[-\pi,+\pi]} H_r(\omega_1, \omega_2)\exp\!\big({+}j(\omega_1 n_1 + \omega_2 n_2)\big)\, d\omega_1\, d\omega_2 = \left(\dfrac{1}{2\pi}\displaystyle\int_{-\pi}^{+\pi} I_{\omega_c}(\omega_1)\exp({+}j\omega_1 n_1)\, d\omega_1\right)\times\left(\dfrac{1}{2\pi}\displaystyle\int_{-\pi}^{+\pi} I_{\omega_c}(\omega_2)\exp({+}j\omega_2 n_2)\, d\omega_2\right) = \dfrac{\sin\omega_c n_1}{\pi n_1}\cdot\dfrac{\sin\omega_c n_2}{\pi n_2}, \quad -\infty < n_1, n_2 < +\infty.$

5.

Ideal lowpass filter (circular passband), $\omega_c < \pi$:

$H_c(\omega_1, \omega_2) = \begin{cases} 1, & \sqrt{\omega_1^2 + \omega_2^2} \le \omega_c, \\ 0, & \text{else,} \end{cases}$ for $(\omega_1, \omega_2) \in [-\pi, +\pi]\times[-\pi, +\pi].$

The inverse Fourier transform of this circular symmetric frequency response can be represented as the integral

$h_c(n_1, n_2) = \dfrac{1}{(2\pi)^2}\displaystyle\int\!\!\int_{\sqrt{\omega_1^2+\omega_2^2}\le\omega_c} 1\cdot\exp\!\big({+}j(\omega_1 n_1 + \omega_2 n_2)\big)\, d\omega_1\, d\omega_2 = \dfrac{1}{(2\pi)^2}\displaystyle\int_0^{\omega_c}\!\int_{-\pi}^{+\pi} \exp\!\big({+}ju(n_1\cos\theta + n_2\sin\theta)\big)\, u\, d\theta\, du = \dfrac{1}{(2\pi)^2}\displaystyle\int_0^{\omega_c} u\int_{-\pi}^{+\pi}\exp\!\big({+}jur\cos(\theta - \phi)\big)\, d\theta\, du = \dfrac{1}{(2\pi)^2}\displaystyle\int_0^{\omega_c} u\int_{-\pi}^{+\pi}\exp({+}jur\cos\theta)\, d\theta\, du,$

where we have used, first, polar coordinates in frequency, ω 1 = u cos θ and ω 2 = u sin θ , and then in the next line, polar coordinates in space, n 1 = r cos ϕ and n 2 = r sin ϕ . Finally, the last line follows because the integral over θ does not depend on ϕ since it is an integral over the full period 2π. The inner integral over θ can now be recognized in terms of the zeroth-order Bessel function of the first kind J 0(x), with integral representation [4, 5]

$J_0(x) = \dfrac{1}{2\pi}\displaystyle\int_{-\pi}^{+\pi}\cos(x\cos\theta)\, d\theta = \dfrac{1}{2\pi}\displaystyle\int_{-\pi}^{+\pi}\cos(x\sin\theta)\, d\theta.$

To see this, we note the integral

$\displaystyle\int_{-\pi}^{+\pi}\exp({+}jur\cos\theta)\, d\theta = \displaystyle\int_{-\pi}^{+\pi}\cos(ur\cos\theta)\, d\theta,$

since the imaginary part, via Euler's formula, is an odd function integrated over even limits, and hence is zero. So, continuing, we can write

$h_c(n_1, n_2) = \dfrac{1}{(2\pi)^2}\displaystyle\int_0^{\omega_c} u\int_{-\pi}^{+\pi}\exp({+}jur\cos\theta)\, d\theta\, du = \dfrac{1}{2\pi}\displaystyle\int_0^{\omega_c} u\, J_0(ur)\, du = \dfrac{\omega_c}{2\pi r}\, J_1(\omega_c r) = \dfrac{\omega_c}{2\pi\sqrt{n_1^2 + n_2^2}}\, J_1\!\left(\omega_c\sqrt{n_1^2 + n_2^2}\right),$

where J 1 is the first-order Bessel function of the first kind J 1 (x), satisfying the known relation [6, p. 484]

$x\,J_1(x) = \displaystyle\int_0^x u\, J_0(u)\, du.$

Comparing these two ideal lowpass filters, one with a square passband and the other circular with diameter equal to a side of the square, we get the two impulse responses along the n 1 axis:

$h_r(n_1, 0) = \dfrac{\omega_c}{\pi}\cdot\dfrac{\sin\omega_c n_1}{\pi n_1} \quad\text{and}\quad h_c(n_1, 0) = \dfrac{\omega_c}{2\pi n_1}\, J_1(\omega_c n_1).$

These 1-D sequences are plotted via MATLAB in Figure 1.2–3. Note their similarity.

Figure 1.2–3. Two impulse responses of 2-D ideal lowpass filters. The solid curve is "rectangular" and the dashed curve is "circular."
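A short MATLAB sketch (not the book's script) that reproduces the comparison of Figure 1.2–3 for an assumed cutoff $\omega_c$:

% Compare the two ideal-LPF impulse responses along the n1 axis.
wc = pi/4;                                     % cutoff frequency (assumed value)
n1 = -20:20;
hr = (wc/pi) * sin(wc*n1)./(pi*n1);            % rectangular passband
hc = (wc./(2*pi*n1)) .* besselj(1, wc*n1);     % circular passband
hr(n1 == 0) = (wc/pi)^2;                       % fill in the n1 = 0 limits
hc(n1 == 0) = wc^2/(4*pi);
plot(n1, hr, '-', n1, hc, '--'); grid on;
xlabel('n_1'); legend('rectangular', 'circular');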

Example 1.2–4

Fourier Transform of Separable Signal

Let x ( n 1 , n 2 ) = x 1 ( n 1 ) x 2 ( n 2 ) , then when we compute the Fourier transform, the following simplification arises:

$X(\omega_1, \omega_2) = \displaystyle\sum_{n_1}\sum_{n_2} x(n_1, n_2)\exp\!\big({-}j(\omega_1 n_1 + \omega_2 n_2)\big) = \displaystyle\sum_{n_1}\sum_{n_2} x_1(n_1)\, x_2(n_2)\exp\!\big({-}j(\omega_1 n_1 + \omega_2 n_2)\big) = \displaystyle\sum_{n_1} x_1(n_1)\left[\sum_{n_2} x_2(n_2)\exp({-}j\omega_2 n_2)\right]\exp({-}j\omega_1 n_1) = \left[\displaystyle\sum_{n_1} x_1(n_1)\exp({-}j\omega_1 n_1)\right]\left[\displaystyle\sum_{n_2} x_2(n_2)\exp({-}j\omega_2 n_2)\right] = X_1(\omega_1)\, X_2(\omega_2).$

A consequence of this result is that in 2-D signal processing, multiplication in the spatial domain does not always lead to convolution in the frequency domain! Can you reconcile this fact with the basic 2-D Fourier transform property 3? (See problem 9 at the end of this chapter.)
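A quick numerical check of this separability result (a sketch, not from the text):

% The 2-D DFT of a separable signal is the outer product of the 1-D DFTs.
x1 = randn(1, 8);  x2 = randn(1, 8);
x  = x1.' * x2;                      % separable signal x(n1,n2) = x1(n1)x2(n2)
X  = fft2(x);                        % 2-D DFT (samples of X(w1,w2))
Xsep = fft(x1).' * fft(x2);          % X1(w1)*X2(w2) as an outer product
max(abs(X(:) - Xsep(:)))             % ~1e-15: the two agree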


URL:

https://www.sciencedirect.com/science/article/pii/B9780123814203000011

REVIEWS

Wing-Kuen Ling , in Nonlinear Digital Filters, 2007

Based on the impulse and frequency responses

If the impulse response of a digital filter has finite support or finite length, then the digital filter is called a finite impulse response (FIR) filter. Otherwise, it is called an infinite impulse response (IIR) filter. If the transfer function of the digital filter is rational, then the digital filter is called rational. It is worth noting that IIR filters are not necessarily rational. For example, an ideal lowpass filter is not rational but it is IIR. If the impulse response of the digital filter is symmetric, then the digital filter is called symmetric. On the other hand, if the impulse response of the digital filter is antisymmetric, then the digital filter is called antisymmetric. It is worth noting that many digital filters are neither symmetric nor antisymmetric. If the phase response of the digital filter is linear, the digital filter is called linear phase. Otherwise, it is called nonlinear phase. All symmetric and antisymmetric digital filters are linear phase. It is worth noting that not all FIR filters are linear phase and not all IIR filters are nonlinear phase. For example, if an FIR filter is neither symmetric nor antisymmetric, then it is nonlinear phase. Also, if both the numerator and the denominator of a rational IIR filter are linear phase, then the IIR filter is linear phase. Linear phase digital filters are used in many signal processing applications. In particular, they are extensively employed in image and video signal processing because images and videos are phase sensitive. Figure 2.2 shows both the impulse and frequency responses of a symmetric FIR filter. It can be seen from Figure 2.2a that the impulse response is symmetric and of finite length, so the phase response is linear, as shown in Figure 2.2c. Figure 2.3 shows both the impulse and frequency responses of an antisymmetric FIR filter. It can be seen from Figure 2.3a that the impulse response is antisymmetric and of finite length, so the phase response is also linear, as shown in Figure 2.3c. Figure 2.4 shows both the impulse and frequency responses of an IIR filter. It can be seen from Figure 2.4a that the impulse response is of infinite length. Figure 2.4c shows that the phase response is nonlinear.

Figure 2.2. (a) Impulse response; (b) magnitude response; (c) phase response of a symmetric FIR filter.

Figure 2.3. (a) Impulse response; (b) magnitude response; (c) phase response of an antisymmetric FIR filter.

Figure 2.4. (a) Impulse response; (b) magnitude response; (c) phase response of an IIR filter.
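A quick numerical illustration of these linear-phase statements (a sketch, not from the text; freqz is from the Signal Processing Toolbox, and the coefficient values are arbitrary):

% A symmetric FIR filter has linear phase; a one-pole IIR filter does not.
b_fir = [1 2 3 4 3 2 1];                 % symmetric FIR coefficients
[Hf, w] = freqz(b_fir, 1, 512);
plot(w/pi, unwrap(angle(Hf)));           % straight line of slope -(N-1)/2 = -3
hold on;
[Hi, w] = freqz(1, [1 -0.9], 512);       % IIR: y(n) = 0.9*y(n-1) + x(n)
plot(w/pi, unwrap(angle(Hi)));           % curved: nonlinear phase
xlabel('\omega/\pi'); ylabel('phase (rad)'); legend('symmetric FIR', 'IIR');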


URL:

https://www.sciencedirect.com/science/article/pii/B9780123725363500028

The design of IIR filters

Bob Meddins , in Introduction to Digital Signal Processing, 2000

4.2 FILTER BASICS

Before we move on to the design of digital filters, it is probably worth having a very brief recap on filters.

An ideal filter will have a constant gain of at least unity in the passband and a constant gain of zero in the stopband. Also, the gain should increase from the zero of the stopband to the higher gain of the passband at a single frequency, i.e. it should have a 'brick wall' profile. The magnitude responses of ideal lowpass, highpass, bandpass and bandstop filters are as shown in Fig. 4.1(a), (b), (c) and (d).

Figure 4.1.

It is impossible to design a practical filter, either analogue or digital, that will have these profiles. Figure 4.2, for example, shows the magnitude response for a practical lowpass filter. The passband and stopband are not perfectly flat, the 'shoulder' between these two regions is very rounded, and the transition between them, the 'roll-off' region, takes place over a wide frequency range. The closer we require our filter to agree with the ideal characteristics, the more complicated the filter transfer function becomes.

Figure 4.2.

As the gain of a real filter does not drop vertically between the passband and stopband, we need some way of defining the 'cut-off' frequencies of filters, i.e. the effective end of the passband. The point chosen is the '−3 dB' point. This is the frequency at which the gain has fallen by 3 dB, or to 1/√2 of its maximum value (gain in dB = 20 log10 [gain]).
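A one-line check of the −3 dB figure (not from the text):

% The -3 dB point corresponds to a gain of 1/sqrt(2) relative to the maximum.
20*log10(1/sqrt(2))     % returns -3.0103 dB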

If you are rusty on the basic principles of analogue filters, this would be a good time to do some background reading. Some keywords to look for are: lowpass, highpass, bandstop, bandpass, cut-off frequency, roll-off, first, second (etc.) order, passive and active filters, Bode plots and dB. Howatson (1996) is just one of an abundance of circuit theory and analysis texts which will be relevant.

Much work has been carried out into the design of analogue filters and, as a result, standard design equations for analogue filters with very high specifications are available. However, as has been stressed earlier in this book, the characteristics of all analogue systems alter due to temperature changes and ageing. It is also impossible for two analogue systems to perform identically. Digital filters do not have these defects. They are also much more versatile than analogue filters in that they are programmable.

We will now look at various methods of designing digital filters.


URL:

https://www.sciencedirect.com/science/article/pii/B9780750650489500064