Consider a Continuous-Time Ideal Lowpass Filter S Whose Frequency Response Is
SAW Filters in Digital Communications
Colin Campbell , in Surface Acoustic Wave Devices and their Signal Processing Applications, 1989
17.3.2 Nyquist Bandwidth Theorem
Now consider the intermingled sinc function [i.e., (sin X)/X ] impulse responses of the ideal lowpass filter when it is subjected to a sequence of uniformly spaced delta-function impulse voltages of random polarity, as sketched in Fig. 17.8. From this example, demonstrated for three input impulses uniformly spaced at intervals T s, it can be deduced that it should be possible to detect any one of the sinc function responses without interference from any of the others. (When one sinc function response is maximum, all others have zero amplitude.) This is the result implicit in Nyquist's Bandwidth Theorem [1] for impulses applied at the synchronous rate f s = 1/T s = 2f N and represents the condition for complete freedom from intersymbol interference (ISI). Note that while the above considerations have tacitly assumed zero group delay through the filter for ease of illustration, the inclusion of a linear group delay term τ would merely cause each of the sinc function impulse responses to experience the same delay, allowing the maintenance of ISI-free transmission under ideal conditions.
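As a minimal numerical illustration of this zero-ISI property (the symbol period T = 1 and the three pulse polarities below are arbitrary choices, not values from the text), the following Python sketch superimposes sinc pulses launched at the synchronous rate 1/T and samples the sum at the symbol instants; only the pulse centered at each instant contributes.

```python
import numpy as np

T = 1.0                                   # symbol period (arbitrary units), f_s = 1/T = 2*f_N
symbols = np.array([+1.0, -1.0, +1.0])    # random-polarity impulses, as in Fig. 17.8
t = np.arange(-5, 5, 0.001)

# Each impulse excites the ideal lowpass filter's sinc impulse response.
# np.sinc(x) = sin(pi*x)/(pi*x), so np.sinc((t - k*T)/T) is zero at every
# other symbol instant n*T, n != k.
pulses = [a * np.sinc((t - k * T) / T) for k, a in enumerate(symbols)]
received = np.sum(pulses, axis=0)

# Sampling the summed waveform at the symbol instants recovers each symbol
# without interference from its neighbours (zero ISI).
for k, a in enumerate(symbols):
    idx = np.argmin(np.abs(t - k * T))
    print(f"sample at t = {k * T:.1f}: {received[idx]:+.3f}  (transmitted {a:+.1f})")
```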
URL: https://www.sciencedirect.com/science/article/pii/B9780121573454500217
Signal Sampling and Quantization
Lizhe Tan , Jean Jiang , in Digital Signal Processing (Third Edition), 2019
2.6 Problems
2.1. Given an analog signal
sampled at a rate of 8000 Hz,
(a) sketch the spectrum of the original signal;
(b) sketch the spectrum of the sampled signal from 0 up to 20 kHz.
2.2. Given an analog signal
sampled at a rate of 8000 Hz,
(a) sketch the spectrum of the sampled signal up to 20 kHz;
(b) sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4 kHz is used to filter the sampled signal in order to recover the original signal.
2.3. Given an analog signal
sampled at a rate of 8000 Hz,
(a) sketch the spectrum of the sampled signal up to 20 kHz;
(b) sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4 kHz is used to filter the sampled signal in order to recover the original signal.
2.4. Given an analog signal
sampled at a rate of 8000 Hz,
(a) sketch the spectrum of the sampled signal up to 20 kHz;
(b) sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4 kHz is used to filter the sampled signal in order to recover the original signal.
2.5. Given an analog signal
sampled at a rate of 8000 Hz,
(a) sketch the spectrum of the sampled signal up to 20 kHz;
(b) sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4 kHz is used to filter the sampled signal in order to recover the original signal;
(c) determine the frequency/frequencies of aliasing noise.
2.6. Assuming a continuous signal is given as
sampled at a rate of 8000 Hz,
(a) sketch the spectrum of the sampled signal up to 20 kHz;
(b) sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4 kHz is used to filter the sampled signal in order to recover the original signal;
(c) determine the frequency/frequencies of aliasing noise.
2.7. Assuming a continuous signal is given as
sampled at a rate of 8000 Hz,
(a) sketch the spectrum of the sampled signal up to 20 kHz;
(b) sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4 kHz is used to filter the sampled signal in order to recover the original signal;
(c) determine the frequency/frequencies of aliasing noise.
2.8. Assuming a continuous signal is given as
sampled at a rate of 8000 Hz,
(a) sketch the spectrum of the sampled signal up to 20 kHz;
(b) sketch the recovered analog signal spectrum if an ideal lowpass filter with a cutoff frequency of 4 kHz is used to filter the sampled signal in order to recover the original signal;
(c) determine the frequency/frequencies of aliasing noise.
2.9. Given the following second-order anti-aliasing lowpass filter (Fig. 2.37) which is a Butterworth type, determine the values of circuit elements if we want the filter to have a cutoff frequency of 1000 Hz.
2.10. From Problem 2.9, determine the percentage of aliasing level at the frequency of 500 Hz, assuming that the sampling rate is 4000 Hz.
2.11. Given the following second-order anti-aliasing lowpass filter (Fig. 2.38) which is a Butterworth type, determine the values of circuit elements if we want the filter to have a cutoff frequency of 800 Hz.
2.12. From Problem 2.11, determine the percentage of aliasing level at the frequency of 400 Hz, assuming that the sampling rate is 4000 Hz.
2.13. Given a DSP system in which a sampling rate of 8000 Hz is used and the anti-aliasing filter is a second-order Butterworth lowpass filter with a cutoff frequency of 3.2 kHz, determine
(a) the percentage of aliasing level at the cutoff frequency;
(b) the percentage of aliasing level at the frequency of 1000 Hz.
2.14. Given a DSP system in which a sampling rate of 8000 Hz is used and the anti-aliasing filter is a Butterworth lowpass filter with a cutoff frequency of 3.2 kHz, determine the order of the Butterworth lowpass filter required for the percentage of aliasing level at the cutoff frequency to be less than 10%.
2.15. Given a DSP system in which a sampling rate of 8000 Hz is used and the anti-aliasing filter is a second-order Butterworth lowpass filter with a cutoff frequency of 3.1 kHz, determine
(a) the percentage of aliasing level at the cutoff frequency;
(b) the percentage of aliasing level at the frequency of 900 Hz.
2.16. Given a DSP system in which a sampling rate of 8000 Hz is used and the anti-aliasing filter is a Butterworth lowpass filter with a cutoff frequency of 3.1 kHz, determine the order of the Butterworth lowpass filter required for the percentage of aliasing level at the cutoff frequency to be less than 10%.
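As a minimal computational sketch of the aliasing-level calculation asked for in Problems 2.10, 2.12, 2.13, and 2.15, and of the order determination in Problems 2.14 and 2.16: for an n-th order Butterworth anti-aliasing filter with cutoff f_c, the magnitude response is 1/sqrt(1 + (f/f_c)^(2n)), and the aliasing level at a frequency f (sampling rate f_s) is taken here as the ratio |H(f_s − f)|/|H(f)|, the folded-back image gain relative to the in-band gain. This definition is an assumption and should be checked against the textbook's own formula.

```python
import math

def butterworth_mag(f, fc, order):
    """Magnitude response of an n-th order Butterworth lowpass filter."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** (2 * order))

def aliasing_percent(f, fs, fc, order):
    """Aliasing level at frequency f: folded image gain |H(fs - f)|
    relative to the in-band gain |H(f)|, expressed as a percentage."""
    return 100.0 * butterworth_mag(fs - f, fc, order) / butterworth_mag(f, fc, order)

# Problem 2.13-style numbers: fs = 8000 Hz, 2nd-order filter, fc = 3.2 kHz
print(aliasing_percent(3200.0, 8000.0, 3200.0, 2))   # at the cutoff frequency
print(aliasing_percent(1000.0, 8000.0, 3200.0, 2))   # at 1000 Hz

# Problem 2.14-style search: smallest order giving < 10% aliasing at the cutoff
order = 1
while aliasing_percent(3200.0, 8000.0, 3200.0, order) >= 10.0:
    order += 1
print("required order:", order)
```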
2.17. Given a DSP system (Fig. 2.39) with a sampling rate of 8000 Hz and assuming that the hold circuit is used after DAC, determine
(a) the percentage of distortion at the frequency of 3200 Hz;
(b) the percentage of distortion at the frequency of 1500 Hz.
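A small sketch for the hold-circuit distortion in Problems 2.17 and 2.19, under the common assumption that the zero-order hold after the DAC rolls the spectrum off as sinc(pi*f/f_s), so that the distortion percentage at frequency f is (1 − |sinc(pi*f/f_s)|) × 100%; verify this against the definition used in the chapter.

```python
import math

def zoh_distortion_percent(f, fs):
    """Droop introduced by a zero-order hold (sample-and-hold) after the DAC,
    expressed as the percentage deviation from unity gain at frequency f."""
    x = math.pi * f / fs
    gain = 1.0 if x == 0 else abs(math.sin(x) / x)   # |sinc| roll-off of the hold
    return (1.0 - gain) * 100.0

for f in (3200.0, 1500.0):          # Problem 2.17 frequencies, fs = 8000 Hz
    print(f"{f:.0f} Hz: {zoh_distortion_percent(f, 8000.0):.2f} % distortion")
```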
2.18. A DSP system is given with the following specifications:
Design requirements:
- Sampling rate 20,000 Hz;
- Maximum allowable gain variation from 0 to 4000 Hz = 2 dB;
- 40 dB rejection at the frequency of 16,000 Hz; and
- Butterworth filter assumed.
Determine the cutoff frequency and order for the anti-image filter.
2.19. Given a DSP system with a sampling rate of 8000 Hz and assuming that the hold circuit is used after DAC, determine
(a) the percentage of distortion at the frequency of 3000 Hz;
(b) the percentage of distortion at the frequency of 1600 Hz.
2.20. A DSP system (Fig. 2.40) is given with the following specifications:
Design requirements:
- Sampling rate 22,000 Hz;
- Maximum allowable gain variation from 0 to 4000 Hz = 2 dB;
- 40 dB rejection at the frequency of 18,000 Hz; and
- Butterworth filter assumed.
Determine the cutoff frequency and order for the anti-image filter.
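For Problems 2.18 and 2.20, one standard approach (sketched below as an assumption, not necessarily the chapter's exact procedure) is to impose the two Butterworth magnitude constraints, at most A_p dB of attenuation at the passband edge f_p and at least A_r dB at the rejection frequency f_r, solve the pair for the minimum integer order n, and then pick a cutoff f_c that satisfies both.

```python
import math

def butterworth_antiimage_design(fp, ap_db, fr, ar_db):
    """Minimum Butterworth order and a compatible cutoff frequency such that
    the attenuation is at most ap_db at fp and at least ar_db at fr."""
    # |H(f)|^2 = 1 / (1 + (f/fc)^(2n));  attenuation A(f) = 10*log10(1 + (f/fc)^(2n))
    kp = 10.0 ** (ap_db / 10.0) - 1.0      # (fp/fc)^(2n) must be <= kp
    kr = 10.0 ** (ar_db / 10.0) - 1.0      # (fr/fc)^(2n) must be >= kr
    n = math.ceil(0.5 * math.log(kr / kp) / math.log(fr / fp))
    # Choose fc to meet the passband constraint exactly (any fc between the
    # two extreme admissible values would also work).
    fc = fp / kp ** (1.0 / (2.0 * n))
    return n, fc

# Problem 2.18-style numbers: 2 dB variation up to 4 kHz, 40 dB rejection at 16 kHz
n, fc = butterworth_antiimage_design(4000.0, 2.0, 16000.0, 40.0)
print(n, round(fc))
```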
2.21. Given the 2-bit flash ADC unit with an analog sample-and-hold voltage of 2 V shown in Fig. 2.41, determine the output bits.
2.22. Given the R-2R DAC unit with a 2-bit value as b 1 b 0 = 01 shown in Fig. 2.42, determine the converted voltage.
2.23. Given the 2-bit flash ADC unit with an analog sample-and-hold voltage of 3.5 V shown in Fig. 2.41, determine the output bits.
2.24. Given the R-2R DAC unit with 2-bit values as b 1 b 0 = 11 and b 1 b 0 = 10, respectively, and shown in Fig. 2.42, determine the converted voltages.
2.25. Assuming that a 4-bit ADC channel accepts analog input ranging from 0 to 5 V, determine the following:
(a) Number of quantization levels;
(b) Step size of quantizer or resolution;
(c) Quantization level when the analog voltage is 3.2 V;
(d) Binary code produced by the ADC;
(e) Quantization error.
2.26. Assuming that a 5-bit ADC channel accepts analog input ranging from 0 to 4 V, determine the following:
(a) Number of quantization levels;
(b) Step size of quantizer or resolution;
(c) Quantization level when the analog voltage is 1.2 V;
(d) Binary code produced by the ADC;
(e) Quantization error.
2.27. Assuming that a 3-bit ADC channel accepts analog input ranging from −2.5 to 2.5 V, determine the following:
(a) Number of quantization levels;
(b) Step size of quantizer or resolution;
(c) Quantization level when the analog voltage is −1.2 V;
(d) Binary code produced by the ADC;
(e) Quantization error.
2.28. Assuming that an 8-bit ADC channel accepts analog input ranging from −2.5 to 2.5 V, determine the following:
(a) Number of quantization levels;
(b) Step size of quantizer or resolution;
(c) Quantization level when the analog voltage is 1.5 V;
(d) Binary code produced by the ADC;
(e) Quantization error.
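A minimal sketch of the uniform-quantizer bookkeeping used in Problems 2.25 through 2.28 (the rounding convention and code assignment below are common choices and should be checked against the chapter's own definitions): with m bits and full-scale range [x_min, x_max], there are 2^m levels, the step size is Δ = (x_max − x_min)/2^m, the level index of an input x is round((x − x_min)/Δ) clipped to [0, 2^m − 1], and the quantization error is the difference between the reconstructed level and the input.

```python
def quantize(x, x_min, x_max, bits):
    """Uniform quantizer: returns (levels, step, index, binary code, error)."""
    levels = 2 ** bits
    step = (x_max - x_min) / levels                 # resolution (Delta)
    index = round((x - x_min) / step)               # nearest level index
    index = max(0, min(levels - 1, index))          # clip to valid codes
    code = format(index, f"0{bits}b")               # binary code word
    xq = x_min + index * step                       # reconstructed value
    return levels, step, index, code, xq - x        # last item: quantization error

# Problem 2.25-style numbers: 4-bit ADC, 0 to 5 V, input 3.2 V
print(quantize(3.2, 0.0, 5.0, 4))
# Problem 2.27-style numbers: 3-bit ADC, -2.5 to 2.5 V, input -1.2 V
print(quantize(-1.2, -2.5, 2.5, 3))
```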
2.29. If the analog signal to be quantized is a sinusoidal waveform, that is,
x(t) = 9.5 sin(2000 × πt), and if a bipolar quantizer uses 6 bits, determine
(a) Number of quantization levels;
(b) Quantization step size or resolution, Δ, assuming the signal range is from −10 to 10 V;
(c) The signal power to quantization noise power ratio.
2.30. For a speech signal, if the ratio of the RMS value over the absolute maximum value of the signal is given, that is, and the ADC bipolar quantizer uses 6 bits, determine
(a) Number of quantization levels;
(b) Quantization step size or resolution, Δ, if the signal range is 5 V;
(c) The signal power-to-quantization noise power ratio.
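As a sketch of the signal-to-quantization-noise calculation in Problems 2.29 and 2.30, using the standard uniform-quantization noise model (noise power Δ²/12); the sinusoid amplitude, bit count, and range below are taken from Problem 2.29, and the result is only as accurate as that model.

```python
import math

def sqnr_db(signal_power, full_range, bits):
    """Signal-to-quantization-noise ratio for a uniform quantizer,
    assuming the usual noise power of Delta^2 / 12."""
    delta = full_range / 2 ** bits
    noise_power = delta ** 2 / 12.0
    return 10.0 * math.log10(signal_power / noise_power)

# Problem 2.29-style numbers: x(t) = 9.5 sin(2000*pi*t), 6-bit bipolar, range -10..10 V
amplitude = 9.5
print(round(sqnr_db(amplitude ** 2 / 2.0, 20.0, 6), 2), "dB")
```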
Computer problems with MATLAB: Use the MATLAB programs in Section 2.5 to solve the following problems.
2.31. Given a sinusoidal waveform of 100 Hz,
sample it at 8000 samples per second and
(a) Write a MATLAB program to quantize x(t) using a 6-bit bipolar quantizer to obtain the quantized signal x q , assuming that the signal range is from −5 to 5 V;
(b) Plot the original signal and quantized signal;
(c) Calculate the SNR due to quantization using the MATLAB program.
2.32. Given a signal waveform,
sample it at 8000 samples per second and
(a) Write a MATLAB program to quantize x(t) using a 6-bit bipolar quantizer to obtain the quantized signal x q , assuming that the signal range is from −5 to 5 V;
(b) Plot the original signal and quantized signal;
(c) Calculate the SNR due to quantization using the MATLAB program.
2.33. Given a speech signal sampled at 8000 Hz, as shown in Example 2.14,
(a) Write a MATLAB program to quantize x(t) using a 6-bit bipolar quantizer to obtain the quantized signal x q , assuming that the signal range is from −5 to 5 V;
(b) Plot the original speech waveform, quantized speech, and quantization error;
(c) Calculate the SNR due to quantization using the MATLAB program.
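For Problems 2.31 through 2.33, a sketch of the quantize-and-measure-SNR procedure is shown below in Python (the bipolar mid-tread rounding, the test amplitude, and the SNR definition 10·log10(Σx²/Σ(x − x_q)²) are assumptions made for illustration; the chapter's MATLAB Programs 2.2–2.5 should be followed for the actual assignments).

```python
import numpy as np

def bipolar_quantize(x, bits, x_max):
    """Round each sample to the nearest level of a uniform bipolar quantizer
    covering [-x_max, x_max] with 2**bits levels."""
    delta = 2.0 * x_max / 2 ** bits
    codes = np.clip(np.round(x / delta), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return codes * delta

def quantization_snr_db(x, xq):
    """Measured SNR due to quantization: signal power over error power, in dB."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - xq) ** 2))

fs = 8000.0                                   # sampling rate (samples per second)
t = np.arange(0, 0.1, 1.0 / fs)               # 0.1 s of samples
x = 5.0 * np.sin(2 * np.pi * 100.0 * t)       # 100 Hz sinusoid; amplitude chosen arbitrarily
xq = bipolar_quantize(x, bits=6, x_max=5.0)
print(f"SNR due to quantization: {quantization_snr_db(x, xq):.2f} dB")
```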
MATLAB Projects
2.34. Performance evaluation of speech quantization:
Given an original speech segment "speech.dat" sampled at 8000 Hz with each sample encoded in 16 bits, use Programs 2.3–2.5 and modify Program 2.2 to quantize the speech segment using 3–15 bits, respectively. The SNR in dB must be measured for each quantization. The MATLAB function "sound(x/max(abs(x)),fs)" can be used to evaluate sound quality, where "x" is the speech segment and "fs" is the sampling rate of 8000 Hz. In this project, create a plot of the measured SNR (dB) versus the number of bits and discuss the effect on the sound quality. For comparison, plot the original speech and the quantized one using 3, 8, and 15 bits.
2.35. Performance evaluation of seismic data quantization:
The seismic signal, a measurement of the acceleration of ground motion, is required for applications in the study of geophysics. The seismic signal ("seismic.dat" provided by the USGS Albuquerque Seismological Laboratory) has a sampling rate of 15 Hz with 6700 data samples, and each sample is encoded using 32 bits. Quantizing each 32-bit sample down to a lower number of bits per sample can reduce the memory storage requirement at the cost of reduced signal quality. Use Programs 2.3–2.5 and modify Program 2.2 to quantize the seismic data using 13, 15, 17,…, 31 bits. The SNR in dB must be measured for each quantization. Create a plot of the measured SNR (dB) versus the number of bits. For comparison, plot the seismic data and the quantized one using 13, 18, 25, and 31 bits.
Advanced Problems
2.36–2.38. Given the following sampling system (see Fig. 2.43), x s (t) = x(t)p(t)
2.36. If the pulse train used is depicted in Fig. 2.44
(a) Determine the Fourier series expansion for p(t);
(b) Determine X s (f) in terms of X(f) using the Fourier transform, that is,
X s (f) = FT{x s (t)} = FT{x(t)p(t)};
(c) Determine the spectral distortion referring to X(f) for − f s /2 < f < f s /2.
2.37. If the pulse train used is depicted in Fig. 2.45
(a) Determine the Fourier series expansion for p(t);
(b) Determine X s (f) in terms of X(f) using the Fourier transform, that is,
X s (f) = FT{x s (t)} = FT{x(t)p(t)};
(c) Determine the spectral distortion referring to X(f) for − f s /2 < f < f s /2.
2.38. If the pulse train used is depicted in Fig. 2.46
(a) Determine the Fourier series expansion for p(t);
(b) Determine X s (f) in terms of X(f) using the Fourier transform, that is,
X s (f) = FT{x s (t)} = FT{x(t)p(t)};
(c) Determine the spectral distortion referring to X(f) for − f s /2 < f < f s /2.
2.39. In Fig. 2.16, a Chebyshev lowpass filter is chosen to serve as anti-aliasing lowpass filter, where the magnitude frequency response of the Chebyshev filter with an order n is given by
where ɛ is the absolute ripple, and
(a) Derive the formula for the aliasing level; and
(b) When the sampling frequency is 8 kHz, the cutoff frequency is 3.4 kHz, the ripple is 1 dB, and the order is n = 4, determine the aliasing level at the frequency of 1 kHz.
2.40. Given the following signal
with the signal ranging from to , determine the signal to quantization noise power ratio using m bits.
2.41. Given the following modulated signal
with the signal ranging from − A 1 A 2 to A 1 A 2, determine the signal to quantization noise power ratio using m bits.
2.42. Assume that truncation of the continuous signal x(n) in Problem 2.40 is defined as:
where − Δ < e q (n) ≤ 0 and . The quantized noise has the distribution given in Fig. 2.47; determine the SNR using m bits.
2.43. Assume that truncation of the continuous signal x(n) in Problem 2.41 is defined as
where 0 ≤ e q (n) < Δ and Δ = 2 A 1 A 2/2 m . The quantized noise has the distribution given in Fig. 2.48; determine the SNR using m bits.
URL: https://www.sciencedirect.com/science/article/pii/B9780128150719000026
Probability, Random Variables, and Random Processes
Ali Grami , in Introduction to Digital Communications, 2016
4.3.10 Sampling Theorem of Random Signals
Sampling is an indispensable signal-processing tool that bridges continuous-time and discrete-time signals and, in turn, paves the way for digital signal processing and transmission. A wide-sense stationary process X(t) is called band-limited if its power spectral density vanishes for frequencies beyond a specific frequency W and it has finite power, as shown below:
(4.59)
X(t) can be reconstructed from the sequence of its samples taken at a minimum rate of twice the highest frequency component (i.e., at least 2W samples per second). The sampled version of the process equals the original in the mean-square sense for all time t, and it is as follows:
(4.60)
where is the random variable obtained by sampling the process X(t) at . It can be shown that the mean-square error between the original process and the sampled version is zero, i.e., we have the following:
(4.61)
Suppose we have for , i.e., X(t) is not band-limited to W Hz. If X(t ) is applied to an ideal lowpass filter whose bandwidth is W Hz and the resulting output Y(t) is sampled at a rate equal to 2W, then the resulting mean-square error between the original (unfiltered) signal X(t) and the sampled version Ŷ(t) is as follows:
(4.62)
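In the usual statement of these results (an assumed standard form, with S X (f) denoting the power spectral density of X(t) and sinc(x) = sin(πx)/(πx)), the reconstruction of (4.60) and the error expressions of (4.61) and (4.62) read: X̂(t) = Σ_n X(n/2W) sinc(2Wt − n) with the sum over all integers n, E[(X(t) − X̂(t))²] = 0, and E[(X(t) − Ŷ(t))²] = 2∫ from W to ∞ of S X (f) df; that is, the residual mean-square error in the prefiltered case is the power of X(t) lying outside the band |f| ≤ W.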
Example 4.43
The power spectral density of a wide-sense stationary random process is as follows:
The signal is passed through an ideal lowpass filter whose bandwidth is W Hz, and its output is then sampled at 2W Hz. Determine the mean-square value of the sampling error for a) MHz and b) kHz.
Solution
This problem highlights the relationship between the bandwidth of a random signal and the sampling rate.
(a) Using (4.61), the mean-square error is zero, as the sampling rate is twice the highest frequency component.
(b) Since the sampling rate is not high enough, we must use (4.62) to find the mean-square error:
URL: https://www.sciencedirect.com/science/article/pii/B9780124076822000041
Basic Linear Filtering with Application to Image Enhancement
Alan C. Bovik , Scott T. Acton , in The Essential Guide to Image Processing, 2009
10.3.2 Ideal Lowpass Filter
As an alternative to the average filter, a filter may be designed explicitly with no sidelobes by forcing the frequency response to be zero outside of a given radial cutoff frequency Ω c :
10.26
or outside of a rectangle defined by cutoff frequencies along the U- and V-axes:
10.27
Such a filter is called an ideal lowpass filter (ideal LPF) because of its idealized characteristic. We will study (10.27) rather than (10.26) since it is easier to describe the impulse response of the filter. If the region of frequencies passed by (10.26) is square, then there is little practical difference in the two filters if .
The impulse response of the ideal lowpass filter (10.26) is given explicitly by
10.28
where . Despite the seemingly "ideal" nature of this filter, it has some major drawbacks. First, it cannot be implemented exactly as a linear convolution, since the impulse response (10.28) is infinite in extent (it never decays to zero). Therefore, it must be approximated. One way is to simply truncate the impulse response, which in image processing applications is often satisfactory. However, this has the effect of introducing ripple near the frequency discontinuity, producing unwanted noise leakage. The introduced ripple is a manifestation of the well-known Gibbs phenomena studied in standard signal processing texts [1]. The ripple can be reduced by using a tapered truncation of the impulse response, e.g., by multiplying (10.28) with a Hamming window [1]. If the response is truncated to image size , then the ripple will be restricted to the vicinity of the locus of cutoff frequencies, which may make little difference in the filter performance. Alternately, the ideal LPF can be approximated by a Butterworth filter or other ideal LPF approximating function. The Butterworth filter has frequency response [2]
10.29
and, in principle, can be made to agree with the ideal LPF with arbitrary precision by taking the filter order K large enough. However, (10.29) also has an infinite-extent impulse response with no known closed-form solution. Hence, to be implemented it must also be spatially truncated (approximated), which reduces the approximation effectiveness of the filter [2].
It should be noted that if a filter impulse response is truncated, then it should also be slightly modified by adding a constant level to each coefficient. The constant should be selected such that the filter coefficients sum to unity. This is commonly done since it is generally desirable that the response of the filter to the (0, 0) spatial frequency be unity, and since for any filter
10.30
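As a concrete sketch of the truncation, windowing, and renormalization steps described above: the separable kernel below approximates the rectangular-passband ideal LPF of (10.27) by a finite windowed-sinc array whose coefficients are then shifted by a constant so that they sum to unity. The cutoffs, support size, and the choice of a Hamming window are illustrative assumptions, not values from the text.

```python
import numpy as np

def truncated_ideal_lpf(cutoff_u, cutoff_v, half_size, window=True):
    """(2*half_size+1)^2 approximation of the separable ideal LPF with
    (radian) cutoffs cutoff_u, cutoff_v, optionally Hamming-windowed,
    renormalized so the coefficients sum to one (unit DC gain)."""
    n = np.arange(-half_size, half_size + 1)
    # 1-D ideal LPF impulse response sin(w_c*n)/(pi*n); np.sinc handles n = 0.
    h_u = (cutoff_u / np.pi) * np.sinc(cutoff_u * n / np.pi)
    h_v = (cutoff_v / np.pi) * np.sinc(cutoff_v * n / np.pi)
    h = np.outer(h_u, h_v)                      # separable 2-D kernel
    if window:
        w = np.hamming(2 * half_size + 1)       # taper to reduce Gibbs ripple
        h *= np.outer(w, w)
    # Add a constant so the truncated kernel has exactly unit response at (0, 0).
    h += (1.0 - h.sum()) / h.size
    return h

h = truncated_ideal_lpf(np.pi / 4, np.pi / 4, half_size=10)
print(h.shape, round(h.sum(), 6))               # (21, 21) 1.0
```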
The second major drawback of the ideal LPF is the phenomena known as ringing. This term arises from the characteristic response of the ideal LPF to highly concentrated bright spots in an image. Such spots are impulse-like, and so the local response has the appearance of the impulse response of the filter. For the circularly-symmetric ideal LPF in (10.26), the response consists of a blurred version of the impulse surrounded by sinc-like spatial sidelobes, which have the appearances of rings surrounding the main lobe.
In practical application, the ringing phenomena create more of a problem because of the edge response of the ideal LPF. In the simplistic case, the image consists of a single one-dimensional step edge: for , otherwise. Figure 10.4 depicts the response of the ideal LPF with impulse response (10.28) to the step edge. The step response of the ideal LPF oscillates (rings) because the sinc function oscillates about the zero level. In the convolution sum, the impulse response alternately makes positive and negative contribution, creating overshoots and undershoots in the vicinity of the edge profile. Most digital images contain numerous step-like light-to-dark or dark-to-light image transitions; hence, application of the ideal LPF will tend to contribute considerable ringing artifacts to images. Since edges contain much of the significant information about the image, and since the eye tends to be sensitive to ringing artifacts, often the ideal LPF and its derivatives are not a good choice for image smoothing. However, if it is desired to strictly bandlimit the image as closely as possible, then the ideal LPF is a necessary choice.
Once an impulse response for an approximation to the ideal LPF has been decided, the usual approach to implementation again entails zero-padding both the image and the impulse response (so that the periodic extension implied by the DFT does not cause wrap-around), taking the product of their DFTs (using an FFT algorithm), and taking the inverse DFT of the product as the result. This was done in the example of Fig. 10.5, which depicts application of the ideal LPF using two cutoff frequencies. This was implemented using a truncated ideal LPF without any special windowing. The dominant characteristic of the filtered images is the ringing, manifested as a strong mottling in both images. A very strong oriented ringing can be easily seen near the upper and lower borders of the image.
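A minimal sketch of that DFT-based implementation, assuming a grayscale image array and any lowpass kernel (for instance, the windowed-sinc kernel from the previous sketch); both arrays are zero-padded to the full linear-convolution size so that the circular convolution computed by the DFT matches linear convolution.

```python
import numpy as np

def fft_filter(image, kernel):
    """Linear convolution of image with kernel via zero-padded FFTs."""
    rows = image.shape[0] + kernel.shape[0] - 1
    cols = image.shape[1] + kernel.shape[1] - 1
    spectrum = np.fft.fft2(image, (rows, cols)) * np.fft.fft2(kernel, (rows, cols))
    full = np.real(np.fft.ifft2(spectrum))      # inverse DFT of the product
    # Crop the central region so the output aligns with the input image.
    r0 = (kernel.shape[0] - 1) // 2
    c0 = (kernel.shape[1] - 1) // 2
    return full[r0:r0 + image.shape[0], c0:c0 + image.shape[1]]

# Example: smooth a random test image with a simple 11 x 11 averaging kernel
# (any truncated ideal-LPF kernel can be substituted here).
rng = np.random.default_rng(0)
image = rng.random((128, 128))
kernel = np.full((11, 11), 1.0 / 121.0)
print(fft_filter(image, kernel).shape)
```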
URL: https://www.sciencedirect.com/science/article/pii/B978012374457900010X
Signals, Systems, and Spectral Analysis
Ali Grami , in Introduction to Digital Communications, 2016
3.10.1 Ideal Filters
An ideal filter exactly passes signals at certain sets of frequencies and completely rejects the rest. In order to avoid distortion in the filtering process, a filter should ideally have a flat magnitude characteristic and a linear phase characteristic over the passband of the filter (the frequency range of interest). The most common types of filters are the low-pass filter (LPF), high-pass filter (HPF), band-pass filter (BPF), and band-stop filter (BSF), which pass low, high, intermediate, and all but intermediate frequencies, respectively. Figure 3.37 shows the magnitude and phase responses of ideal LPF, HPF, BPF, and BSF. Most communication filters are of LPF and BPF types.
For a physically-realizable filter, its impulse response h(t) must be causal. In the frequency domain, this condition is equivalent to the Paley-Wiener criterion, which states that the necessary and sufficient condition for |H(f)| to be the magnitude response of a causal (realizable) filter is as follows:
(3.85)
This condition is not met if |H(f)| = 0 over a finite band of frequencies (i.e., a filter cannot perfectly reject any band of frequencies). However, if |H(f)| = 0 at a single frequency (or a set of discrete frequencies), the condition may be met. According to this criterion, ideal filters are clearly noncausal. Some continuous magnitude responses, such as , are not allowable magnitude responses for causal filters because the integral in (3.85) does not give a finite result.
Example 3.50
Determine the frequency responses and impulse responses of an ideal LPF and BPF, and discuss the causality issue.
Solution
The frequency response of an ideal LPF with a bandwidth of W Hz and its impulse response are respectively defined as follows:
and
The impulse response for is shown in Figure 3.38a. As reflected in (3.85), the ideal LPF does not meet the Paley-Wiener criterion, and h(t) is thus not causal. One practical approach to filter design is to cut off the tail of h(t) for . In order to have the truncated version of h(t) as close as possible to the ideal impulse response, the delay t 0 must be as large as possible. Theoretically, a delay of infinite duration is needed to realize the ideal characteristics. But practically, a delay of just a few times can make the truncated version reasonably close to the ideal one. As an example, for an audio LPF with a bandwidth of 20 kHz, a delay of 0.1 milliseconds would be a reasonable choice.
The frequency response of an ideal BPF, with a bandwidth of 2W Hz and centered around the frequency f c , as well as its impulse response are respectively defined as follows:
and
The impulse response for is shown in Figure 3.38b. If f c ≫ W, it is reasonable to view h BPF (t) as the slowly-varying envelope sinc(2Wt) modulating the high-frequency oscillatory signal cos(2πf c t) and shifted to the right by t 0.
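A small numerical sketch of these two impulse responses; the bandwidth, center frequency, and delay are arbitrary illustrative values, and the standard closed forms h LPF (t) = 2W sinc(2W(t − t 0)) and h BPF (t) = 4W sinc(2W(t − t 0)) cos(2πf c (t − t 0)) are assumed here and should be checked against Figure 3.38.

```python
import numpy as np

W = 20e3          # lowpass bandwidth in Hz (illustrative)
fc = 100e3        # bandpass center frequency in Hz (illustrative)
t0 = 0.1e-3       # delay used to make the truncated response causal
t = np.arange(0.0, 2 * t0, 1e-7)   # keep only t >= 0: truncated, delayed response

# Standard ideal-filter impulse responses, delayed by t0 (np.sinc(x) = sin(pi*x)/(pi*x)).
h_lpf = 2 * W * np.sinc(2 * W * (t - t0))
h_bpf = 4 * W * np.sinc(2 * W * (t - t0)) * np.cos(2 * np.pi * fc * (t - t0))

print(h_lpf.max(), h_bpf.max())    # peaks occur near t = t0
```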
URL: https://www.sciencedirect.com/science/article/pii/B978012407682200003X
QUANTIZATION IN DIGITAL FILTERS
Wing-Kuen Ling , in Nonlinear Digital Filters, 2007
Theorem 3.1
Assume that
Then the SNR of the coded system shown in Figure 3.3b will be higher than that of the conventional system shown in Figure 3.3a.
Proof:
If Q (μ (n)) ≈ μ T (n)p, then for the conventional system shown in Figure 3.3a, we have:
(3.3)
If we regard the first order term (m = 1) in the signal band as the signal component and all higher order terms (m ≥ 2) in the signal band as the quantization noise, then the SNR can be estimated as follows:
(3.4)
Since H (ω) is an ideal lowpass filter, we have:
(3.5)
Now consider the coded system shown in Figure 3.3b.
Denote the input to the quantizer as and where there are 2m – 1 terms in Since , we have . As we assume that Q (μ(n))≈μ T (n)p, so we have and
(3.6)
As a result,
(3.7)
Since and u (n) is bandlimited, terms in for m = 1, 2,…, M do not overlap each other in the frequency spectrum. As H (ω) is an ideal lowpass filter, eqn (3.7) can be further simplified as:
(3.8)
Denote a vector a ≡ [a 1 · · · aNa ] T and where fliplr (a T) ≡ [aNa … a 1] and 0 (4 m –2) Na denotes a zero row vector with length (4m – 2)Na .
Denote the discrete Fourier transform of as and its k th element as .
Denote a vector with its k th element as
Denote the inverse discrete Fourier transform of as and its k th element as . Then Since is bandlimited and H (ω) is an ideal lowpass filter,
(3.9)
,
If then the SNR of the coded system will be larger than that of the conventional system.
This completes the proof.
If the periodic code only consists of a single coefficient, then the coded system becomes a modulated system. Denote the factorial operator as !; then we have the following corollary:
URL: https://www.sciencedirect.com/science/article/pii/B978012372536350003X
Video Sampling and Interpolation
Eric Dubois , in The Essential Guide to Video Processing, 2009
2.3 SAMPLING AND RECONSTRUCTION OF CONTINUOUS TIME-VARYING IMAGERY
The process for sampling a time-varying image can be approximated by the model shown in Fig. 2.4. In this model, the light arriving on the sensor is collected and weighted in space and time by the sensor aperture a(s) to give the output
(2.8)
where it is assumed, here, that the sensor aperture is space and time invariant. The resulting signal fca (s) is then sampled in an ideal fashion on the sampling structure Ψ,
(2.9)
By defining ha (s) = a(−s), it is seen that the aperture weighting is a linear shift-invariant filtering operation, that is, the convolution of fc (s) with ha (s)
(2.10)
Thus, if fc(s) has a Fourier transform Fc(u), then Fca(u) = Fc(u)Ha(u), where Ha(u) is the Fourier transform of the aperture impulse response. In typical acquisition systems, the sampling aperture can be modeled as a rectangular or Gaussian function.
If the sampling structure is a lattice Λ, then the effect in the frequency domain of the sampling operation is given by [1]
(2.11)
in other words, the continuous signal spectrum Fca (u) is replicated on the points of the reciprocal lattice. The terms in the sum of Eq. (2.11), other than for k = 0, are referred to as spectral repeats. There are two main consequences of the sampling process. The first is that these spectral repeats, if not removed by the display/viewer system, may be visible in the form of flicker, line structure, or dot patterns. The second is that, if the regions of support of Fca (u) and Fca (u+k) have nonzero intersection for some values we have aliasing; a frequency ua in this intersection can represent both the frequencies ua and ua –k in the original signal. Thus, to avoid aliasing, the spectrum Fca (u) should be confined to a unit cell of Λ*; this can be accomplished to some extent by the sampling aperture ha . Aliasing is particularly problematic because once introduced it is difficult to remove, since there is more than one acceptable interpretation of the observed data. Aliasing is a familiar effect that tends to be localized to those regions of the image with high frequency details. It can be seen as moiré patterns in such periodic-like patterns as fishnets and venetian blinds, and as staircase-like effects on high-contrast oblique edges. The aliasing is particularly visible and annoying when these patterns are moving. Aliasing is controlled by selecting a sufficiently dense sampling structure and through the prefiltering effect of the sampling aperture.
If the support of Fca (u) is confined to a unit cell of Λ*, then it is possible to reconstruct fca exactly from the samples. In this case, we have
(2.12)
and it follows that
(2.13)
where
(2.14)
is the impulse response of an ideal lowpass filter (with sampled input and continuous output) whose passband is the unit cell of Λ* mentioned above. This is the multidimensional version of the familiar Sampling Theorem.
In practical systems, the reconstruction is achieved by
(2.15)
where d (s) is the display aperture, which generally bears little resemblance to the ideal t (s) of Eq. (2.14). The display aperture is usually separable in space and time, where ds (x, y) may be Gaussian or rectangular, and dt (t) may be exponential or rectangular, depending on the type of display system. In fact, a large part of the reconstruction filtering is often left to the spatiotemporal response of the human visual system. The main requirement is that the first temporal frequency repeat at zero spatial frequency (at 1/T for progressive scanning and 2/T for interlaced scanning, Fig. 2.2) be at least 50 Hz for large area flicker to be acceptably low.
If the display aperture is the ideal lowpass filter specified by Eq. (2.14), then the optimal sampling aperture is also an ideal lowpass filter with the same passband; neither of these is realizable in practice. If the actual aperture of a given display device operating on a lattice Λ is given, it is possible to determine the optimal sampling aperture according to a weighted-squared-error criterion [3]. This optimal sampling aperture, which will not be an ideal lowpass filter, is similarly not physically realizable, but it could at least form the design objective rather than the inappropriate ideal lowpass filter.
If sampling is performed in only one or two dimensions, the spectrum is replicated in the corresponding frequency dimensions. For the two cases of temporal and vertical-temporal sampling, respectively, we obtain
(2.16)
and
(2.17)
Consider first the case of pure temporal sampling, as in motion-picture film. The main parameters in this case are the sampling period T and the temporal aperture. As shown in Eq. (2.16), the signal spectrum is replicated in temporal frequency at multiples of 1/T. In analogy with one-dimensional signals, one might think that the time-varying image should be bandlimited in temporal frequency to 1/2T before sampling. However, this is not the case. To illustrate, consider the spectrum of an image undergoing translation with constant velocity v. This can model the local behavior in a large class of time-varying imagery. The assumption implies that where A straightforward analysis [4] shows that where δ(·) is the Dirac delta function. Thus, the spectrum of the time-varying image is not spread throughout spatiotemporal frequency space but rather it is concentrated around the plane When this translating image is sampled in the temporal dimension, these planes are parallel to each other and do not intersect, that is, there is no aliasing, even if the temporal bandwidth far exceeds 1/2T. This is most easily illustrated in two dimensions. Consider the case of vertical motion only. Figure 2.5 shows the vertical-temporal projection of the spectrum of the sampled image for different velocities v. Assume that the image is vertically bandlimited to B c/ph. It follows that, when the vertical velocity reaches 1/2TB picture heights per second (ph/s), the spectrum will extend out to the temporal frequency of 1/2T as shown in Fig. 2.5(b). At twice that velocity (1/TB), it would extend to a temporal frequency of 1/ T which might suggest severe aliasing. However, as seen in Fig. 2.5(c), there is no spectral overlap. To reconstruct the continuous signal correctly, however, a vertical-temporal filtering adapted to the velocity is required. Bandlimiting the signal to a temporal frequency of 1/2T before sampling would effectively cut the vertical resolution in half for this velocity. Note that the velocities mentioned above are not really very high. To consider some typical numbers, if T = 1/24 s, as in film, and B = 500 c/ph (corresponding to 1000 scanning lines), the velocity 1/2TB is about 1/42 ph/s. It should be noted that, if the viewer is tracking the vertical movement, the spectrum of the image on the retina will be far less tilted, again arguing against sharp temporal bandlimiting. (This is in fact a kind of motion-compensated filtering by the visual system.) The temporal camera aperture can roughly be modeled as the integration of fc for a period ≤. The choice of the value of the parameter Ta is a compromise between motion blur and signal-to-noise ratio.
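To make the numbers at the end of this paragraph concrete, here is a small arithmetic sketch (using the same assumed values, frame period T = 1/24 s and vertical bandwidth B = 500 c/ph) that computes the critical velocities and the temporal-frequency extent v·B of the tilted spectral plane:

```python
T = 1.0 / 24.0        # temporal sampling period in seconds (film)
B = 500.0             # vertical bandwidth in cycles per picture height (c/ph)

v_half = 1.0 / (2.0 * T * B)   # velocity at which the spectrum reaches 1/(2T)
v_full = 1.0 / (T * B)         # velocity at which it reaches 1/T
print(f"1/(2TB) = {v_half:.4f} ph/s (about 1/{1.0 / v_half:.0f} ph/s)")
print(f"1/(TB)  = {v_full:.4f} ph/s")
print(f"temporal extent at v = 1/(2TB): {v_half * B:.1f} Hz = 1/(2T)")
```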
Similar arguments can be made in the case of the two most popular vertical-temporal scanning structures, progressive scanning, and interlaced scanning. Referring to Fig. 2.6, the vertical-temporal spectrum of a vertically translating image at the same three velocities (assuming that 1/Y = 2B) is shown for these two scanning structures. For progressive scanning, there continues to be no spectral overlap, while for interlaced scanning, the spectral overlap can be severe at certain velocities (e.g., 1/TB as in Fig. 2.6(f)). This is a strong advantage for progressive scanning. Another disadvantage of interlaced scanning is that each field is spatially undersampled and pure spatial processing or interpolation is very difficult. An illustration in three dimensions of some of these ideas can be found in [5].
URL: https://www.sciencedirect.com/science/article/pii/B9780123744562000037
Two-Dimensional Signals and Systems
John W. Woods , in Multidimensional Signal, Image, and Video Processing and Coding (Second Edition), 2012
Some Useful Fourier Transform Pairs
1. Constant c:
2. Complex exponential for spatial frequency :
3. Constant in n 2 dimension:
4. Ideal lowpass filter (rectangular passband):
5. Ideal lowpass filter (circular passband)
The inverse Fourier transform of this circular symmetric frequency response can be represented as the integral
where we have used, first, polar coordinates in frequency, and , and then in the next line, polar coordinates in space, and . Finally, the last line follows because the integral over θ does not depend on ϕ since it is an integral over the full period 2π. The inner integral over θ can now be recognized in terms of the zeroth-order Bessel function of the first kind J 0(x), with integral representation [4, 5]
To see this, we note the integral
since the imaginary part, via Euler's formula, is an odd function integrated over even limits, and hence is zero. So, continuing, we can write
where J 1 is the first-order Bessel function of the first kind J 1 (x), satisfying the known relation [6, p. 484]
Comparing these two ideal lowpass filters, one with a square passband and the other circular with diameter equal to a side of the square, we get the two impulse responses along the n 1 axis:
These 1-D sequences are plotted via MATLAB in Figure 1.2–3. Note their similarity.
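A short numerical rendering of that comparison (in Python rather than MATLAB; the cutoff Ω c = π/4 and the closed forms below, (Ω c /π)·sin(Ω c n 1)/(πn 1) for the square passband and (Ω c /(2πn 1))·J 1(Ω c n 1) for the circular one, are assumptions consistent with the derivation above rather than values read off the figure):

```python
import numpy as np
from scipy.special import j1     # first-order Bessel function of the first kind

omega_c = np.pi / 4              # cutoff in radians per sample (illustrative choice)
n1 = np.arange(1, 21)            # points along the n1 axis (n2 = 0); n1 = 0 handled separately

# Square passband of side 2*omega_c: separable product of 1-D sinc responses.
h_square = (omega_c / np.pi) * np.sin(omega_c * n1) / (np.pi * n1)
h_square0 = (omega_c / np.pi) ** 2                    # value at n1 = 0

# Circular passband of diameter 2*omega_c: jinc-type response.
h_circle = omega_c * j1(omega_c * n1) / (2 * np.pi * n1)
h_circle0 = omega_c ** 2 / (4 * np.pi)                # limit as the radius goes to 0

print(round(h_square0, 5), round(h_circle0, 5))
print(np.round(h_square[:5], 5))
print(np.round(h_circle[:5], 5))
```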
Example 1.2–4
Fourier Transform of Separable Signal
Let , then when we compute the Fourier transform, the following simplification arises:
A consequence of this result is that in 2-D signal processing, multiplication in the spatial domain does not always lead to convolution in the frequency domain! Can you reconcile this fact with the basic 2-D Fourier transform property 3? (See problem 9 at the end of this chapter.)
URL: https://www.sciencedirect.com/science/article/pii/B9780123814203000011
REVIEWS
Wing-Kuen Ling , in Nonlinear Digital Filters, 2007
Based on the impulse and frequency responses
If the impulse response of a digital filter has finite support or finite length, then the digital filter is called a finite impulse response (FIR) filter. Otherwise, it is called an infinite impulse response (IIR) filter. If the transfer function of the digital filter is rational, then the digital filter is called rational. It is worth noting that IIR filters are not necessarily rational. For example, an ideal lowpass filter is not rational but it is IIR. If the impulse response of the digital filter is symmetric, then the digital filter is called symmetric. On the other hand, if the impulse response of the digital filter is antisymmetric, then the digital filter is called antisymmetric. It is worth noting that many digital filters are neither symmetric nor antisymmetric. If the phase response of the digital filter is linear, the digital filter is called linear phase. Otherwise, it is called nonlinear phase. All symmetric and antisymmetric digital filters are linear phase. It is worth noting that not all FIR filters are linear phase and not all IIR filters are nonlinear phase. For example, if an FIR filter is neither symmetric nor antisymmetric, then it is nonlinear phase. Also, if both the numerator and denominator of a rational IIR filter are linear phase, then the IIR filter is linear phase. Linear phase digital filters are used in many signal processing applications. In particular, they are extensively employed in image and video signal processing because images and videos are phase sensitive. Figure 2.2 shows both the impulse and frequency responses of a symmetric FIR filter. It can be seen from Figure 2.2a that the impulse response is symmetric and of finite length, so the phase response is linear, as shown in Figure 2.2c. Figure 2.3 shows both the impulse and frequency responses of an antisymmetric FIR filter. It can be seen from Figure 2.3a that the impulse response is antisymmetric and of finite length, so the phase response is also linear, as shown in Figure 2.3c. Figure 2.4 shows both the impulse and frequency responses of an IIR filter. It can be seen from Figure 2.4a that the impulse response is of infinite length. Figure 2.4c shows that the phase response is nonlinear.
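As a quick numerical check of the symmetric-implies-linear-phase statement, the sketch below builds an arbitrary symmetric FIR filter and an arbitrary one-pole IIR filter and inspects their unwrapped phase responses; the particular coefficients are made up purely for illustration.

```python
import numpy as np

def phase_response(b, a, n_points=512):
    """Unwrapped phase of H(e^jw) = B(e^jw)/A(e^jw) on [0, pi)."""
    w = np.linspace(0, np.pi, n_points, endpoint=False)
    z = np.exp(-1j * np.outer(w, np.arange(max(len(b), len(a)))))
    B = z[:, :len(b)] @ np.asarray(b, dtype=float)
    A = z[:, :len(a)] @ np.asarray(a, dtype=float)
    return w, np.unwrap(np.angle(B / A))

# Symmetric FIR filter: phase is linear in w (constant slope).
w, ph_fir = phase_response([1.0, 2.0, 4.0, 2.0, 1.0], [1.0])
print("FIR phase slope:", np.round((np.diff(ph_fir) / np.diff(w))[:5], 3))

# Simple IIR filter y(n) = x(n) + 0.9*y(n-1): phase is visibly nonlinear.
w, ph_iir = phase_response([1.0], [1.0, -0.9])
print("IIR phase slope:", np.round((np.diff(ph_iir) / np.diff(w))[:5], 3))
```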
URL: https://www.sciencedirect.com/science/article/pii/B9780123725363500028
The design of IIR filters
Bob Meddins , in Introduction to Digital Signal Processing, 2000
4.2 FILTER BASICS
Before we move on to the design of digital filters, it is probably worth having a very brief recap on filters.
An ideal filter will have a constant gain of at least unity in the passband and a constant gain of zero in the stopband. Also, the gain should increase from the zero of the stopband to the higher gain of the passband at a single frequency, i.e. it should have a 'brick wall' profile. The magnitude responses of ideal lowpass, highpass, bandpass and bandstop filters are as shown in Fig. 4.1(a), (b), (c) and (d).
It is impossible to design a practical filter, either analogue or digital, that will have these profiles. Figure 4.2, for example, shows the magnitude response for a practical lowpass filter. The passband and stopband are not perfectly flat, the 'shoulder' between these two regions is very rounded and the transition between them, the 'roll-off' region, takes place over a wide frequency range. The closer we require our filter to agree with the ideal characteristics, the more complicated is the filter transfer function.
As the gain of a real filter does not drop vertically between the passband and stopband, we need some way of defining the 'cut-off' frequencies of filters, i.e. the effective end of the passband. The point chosen is the '−3 dB' point. This is the frequency at which the gain has fallen by 3 dB, or to 1/√2 of its maximum value (gain in dB = 20 log10 [gain]).
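A two-line check of that definition (pure arithmetic, nothing filter-specific): a gain of 1/√2 corresponds to very nearly −3 dB.

```python
import math

gain = 1 / math.sqrt(2)                      # amplitude at the cut-off frequency
print(f"20*log10(1/sqrt(2)) = {20 * math.log10(gain):.4f} dB")   # about -3.01 dB
```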
If you are rusty on the basic principles of analogue filters, this would be a good time to do some background reading. Some keywords to look for are: lowpass, highpass, bandstop, bandpass, cut-off frequency, roll-off, first, second (etc.) order, passive and active filters, Bode plots and dB. Howatson (1996) is just one of an abundance of circuit theory and analysis texts which will be relevant.
Much work has been carried out into the design of analogue filters and, as a result, standard design equations for analogue filters with very high specifications are available. However, as has been stressed earlier in this book, the characteristics of all analogue systems alter due to temperature changes and ageing. It is also impossible for two analogue systems to perform identically. Digital filters do not have these defects. They are also much more versatile than analogue filters in that they are programmable.
We will now look at various methods of designing digital filters.
URL: https://www.sciencedirect.com/science/article/pii/B9780750650489500064
Source: https://www.sciencedirect.com/topics/engineering/ideal-lowpass-filter