
1 4.6 Correlative-Level Coding By adding ISI to the transmitted signal in a controlled manner, it is possible to achieve a signaling rate equal to the Nyquist rate of 2W symbols/sec in a channel of bandwidth W Hz. Correlative-level coding may be regarded as a practical method of achieving the theoretical maximum signaling rate of 2W symbols/sec in a bandwidth of W Hz using realizable and perturbation-tolerant filters.

2 DUOBINARY SIGNALING Duobinary signaling implies doubling the transmission capacity of a straight binary system. This particular form of correlative-level coding is also called class I partial response. Consider a binary input sequence {b_k} applied to a pulse-amplitude modulator to produce a two-level sequence {a_k}:

a_k = +1 if symbol b_k is 1
a_k = -1 if symbol b_k is 0   (4.65)

When this sequence is applied to a duobinary encoder, it is converted into a three-level output, namely, -2, 0, and +2. In Figure 4.11, the sequence {a_k} is first passed through a simple filter involving a single delay element and a summer. For every impulse input, we get two impulses spaced T_b seconds apart at the filter output.

3 We may express the duobinary coder output c_k as the sum of the present input pulse a_k and its previous value a_{k-1}, as shown by

c_k = a_k + a_{k-1}   (4.66)

Equ. (4.66) changes the input sequence {a_k} of uncorrelated two-level pulses into a sequence {c_k} of correlated three-level pulses. This correlation between adjacent pulses may be viewed as introducing ISI into the transmitted signal in an artificial manner. Since a delay of T_b seconds corresponds to the frequency response exp(-j2πfT_b), the frequency response of the delay-line filter in Figure 4.11 is 1 + exp(-j2πfT_b).
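To make the coding operation concrete, the following minimal Python sketch implements Eqs. (4.65) and (4.66); the function name and the assumed initial value a_{-1} = -1 are illustrative only and are not specified in the text.

# Minimal sketch of duobinary coding: a_k per Equ. (4.65), c_k = a_k + a_{k-1} per Equ. (4.66).
# The initial value a_{-1} is assumed to be -1 here; the text does not fix it.
def duobinary_encode(bits, a_prev=-1):
    c = []
    for b in bits:
        a = +1 if b == 1 else -1      # two-level PAM sequence {a_k}
        c.append(a + a_prev)          # three-level duobinary output: -2, 0, or +2
        a_prev = a
    return c

print(duobinary_encode([0, 0, 1, 0, 1, 1, 0]))   # [-2, -2, 0, 0, 0, 2, 0]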

4 Hence, the overall frequency response of this filter connected in cascade with an ideal Nyquist channel is

H_I(f) = H_Nyquist(f) [1 + exp(-j2πfT_b)]
       = H_Nyquist(f) [exp(jπfT_b) + exp(-jπfT_b)] exp(-jπfT_b)
       = 2 H_Nyquist(f) cos(πfT_b) exp(-jπfT_b)   (4.67)

For an ideal Nyquist channel of bandwidth W = 1/(2T_b), we have

H_Nyquist(f) = 1 for |f| < 1/(2T_b)
H_Nyquist(f) = 0 elsewhere   (4.68)

5 Figure 4.11 Duobinary signaling scheme.

6 The overall frequency response of the duobinary signaling scheme has the form of a half-cycle cosine function, as shown by

H_I(f) = 2 cos(πfT_b) exp(-jπfT_b) for |f| < 1/(2T_b)
H_I(f) = 0 otherwise   (4.69)

The magnitude and phase responses are shown in Figures 4.12a and 4.12b. From the first line of Equ. (4.67) and H_Nyquist(f) in Equ. (4.68), the impulse response corresponding to H_I(f) consists of two sinc pulses displaced by T_b seconds with respect to each other:

h_I(t) = sin(πt/T_b)/(πt/T_b) + sin[π(t - T_b)/T_b]/[π(t - T_b)/T_b]
       = sin(πt/T_b)/(πt/T_b) - sin(πt/T_b)/[π(t - T_b)/T_b]
       = T_b² sin(πt/T_b)/[πt(T_b - t)]   (4.70)

7 Figure 4.12 Frequency response of the duobinary conversion filter. (a) Magnitude response. (b) Phase response.

8 The impulse response h_I(t) is plotted in Figure 4.13; it has only two distinguishable values at the sampling instants. The response to an input pulse is spread over more than one signaling interval; the response in any signaling interval is "partial." The tails of h_I(t) decay as 1/|t|², which is a faster decay rate than the 1/|t| of the ideal Nyquist channel. The original two-level sequence {a_k} may be detected from the duobinary-coded sequence {c_k} by invoking Equ. (4.66). Let â_k represent the estimate of the original pulse a_k as conceived by the receiver at time t = kT_b. Subtracting the previous estimate â_{k-1} from c_k, we get

â_k = c_k - â_{k-1}   (4.71)

9 If c_k is received without error and if the previous estimate â_{k-1} corresponds to a correct decision, then the current estimate â_k will be correct too. The technique of using a stored estimate of the previous symbol is called decision feedback. A major drawback of this detection procedure is that once errors are made, they tend to propagate through the output, because a decision on the current symbol a_k depends on the correctness of the decision made on the previous symbol a_{k-1}.
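As an illustration of Equ. (4.71) and of decision feedback, here is a hedged Python sketch of the detector; the initial estimate (taken as -1 to match the encoder sketch above) and the hard decision fed back at each step are assumptions for the noiseless case.

# Sketch of the decision-feedback detector of Equ. (4.71): â_k = c_k - â_{k-1}.
def duobinary_detect(c, a_prev_hat=-1):
    bits = []
    for ck in c:
        a_hat = ck - a_prev_hat               # estimate of a_k
        decision = +1 if a_hat > 0 else -1    # hard decision fed back to the next step
        bits.append(1 if decision == +1 else 0)
        a_prev_hat = decision                 # a wrong decision here corrupts later ones
    return bits

c = [-2, -2, 0, 0, 0, 2, 0]                   # noiseless duobinary output for 0010110
print(duobinary_detect(c))                    # [0, 0, 1, 0, 1, 1, 0]

A single corrupted value of c_k in this sketch can flip subsequent decisions, which is exactly the error propagation described above.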

10 Figure 4.13 Impulse response of the duobinary conversion filter.

11 A means of avoiding error propagation is to use precoding before the duobinary coding, as shown in Figure 4.14. The precoding operation performed on the binary data sequence {b_k} converts it into another binary sequence {d_k} defined by

d_k = b_k ⊕ d_{k-1}   (4.72)

The precoded sequence {d_k} is applied to a pulse-amplitude modulator, producing a corresponding two-level sequence {a_k}, where a_k = ±1. This sequence of short pulses is next applied to the duobinary coder, thereby producing the sequence {c_k} that is related to {a_k} as follows:

c_k = a_k + a_{k-1}   (4.74)

Unlike the linear operation of duobinary coding, the precoding of Equ. (4.72) is a nonlinear operation.

12 The combined use of Eqs. (4.72) and (4.74) yields

c_k = 0 if data symbol b_k is 1
c_k = ±2 if data symbol b_k is 0   (4.75)

From Equ. (4.75), the decision rule for detecting {b_k} from {c_k} is

If |c_k| < 1, say symbol b_k is 1
If |c_k| > 1, say symbol b_k is 0   (4.76)

A block diagram of the detector is shown in Figure 4.15. A useful feature is that no knowledge of any input sample other than the present one is required. Hence, error propagation cannot occur in the detector of Figure 4.15.

13 Figure 4.14 A precoded duobinary scheme; details of the duobinary coder are given in Figure 4.11.

14 Figure 4.15 Detector for recovering original binary sequence from the precoded duobinary coder output.

15 EXAMPLE 4.3 Duobinary Coding with Precoding Consider the data sequence 0010110, with an extra bit 1 added as the initial precoder output to start the precoding. Using Equ. (4.72), the sequence {d_k} at the precoder output is as shown in row 2 of Table 4.1. The polar representation of the precoded sequence {d_k} is shown in row 3 of Table 4.1. Using Equ. (4.74), we find that the duobinary coder output has the amplitude levels given in row 4 of Table 4.1.

16 Applying the decision rule of Equ. (4.76), we detect the original binary sequence given in row 5 of Table 4.1. This latter result shows that, in the absence of noise, the original binary sequence is detected correctly.
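A short Python sketch, given here only as a plausibility check of Example 4.3, chains the precoding of Equ. (4.72), the duobinary coding of Equ. (4.74), and the decision rule of Equ. (4.76); the helper names are hypothetical.

# Sketch reproducing Example 4.3 (extra initial bit d = 1, as in the example).
def precoded_duobinary(bits, d_init=1):
    d_prev = d_init
    a_prev = +1 if d_init == 1 else -1
    d_seq, c_seq = [], []
    for b in bits:
        d = b ^ d_prev                # d_k = b_k XOR d_{k-1}  (Equ. 4.72)
        a = +1 if d == 1 else -1      # polar representation of {d_k}
        c_seq.append(a + a_prev)      # c_k = a_k + a_{k-1}    (Equ. 4.74)
        d_seq.append(d)
        d_prev, a_prev = d, a
    return d_seq, c_seq

def detect(c_seq):
    # |c_k| < 1 -> b_k = 1 ;  |c_k| > 1 -> b_k = 0   (Equ. 4.76)
    return [1 if abs(c) < 1 else 0 for c in c_seq]

bits = [0, 0, 1, 0, 1, 1, 0]
d_seq, c_seq = precoded_duobinary(bits)
print(d_seq, c_seq)            # precoder output and duobinary levels (cf. Table 4.1)
print(detect(c_seq) == bits)   # True: the original sequence is recovered without error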

17 MODIFIED DUOBINARY SIGNALING The class IV partial response or modified duobinary technique involves a correlation span of two binary digits. This correlation is achieved by subtracting amplitude-modulated pulses spaced 2T_b seconds apart, as indicated in Figure 4.16. The output of the modified duobinary conversion filter is related to the input two-level sequence {a_k} at the PAM modulator output by

c_k = a_k - a_{k-2}   (4.77)

We find that a three-level signal is generated. With a_k = ±1, c_k takes on one of three values: +2, 0, and -2.
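A minimal Python sketch of Equ. (4.77) follows; the two initial values a_{-1} and a_{-2} are assumed to be -1, since the text does not specify them.

# Sketch of the modified duobinary (class IV partial response) filter: c_k = a_k - a_{k-2}.
def modified_duobinary_encode(bits, a_m1=-1, a_m2=-1):
    c = []
    for b in bits:
        a = +1 if b == 1 else -1
        c.append(a - a_m2)            # three-level output: -2, 0, or +2
        a_m2, a_m1 = a_m1, a          # shift the 2 T_b delay line
    return c

print(modified_duobinary_encode([0, 0, 1, 0, 1, 1, 0]))   # [0, 0, 2, 0, 0, 2, -2]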

18 The overall frequency response of the delay-line filter connected in cascade with an ideal Nyquist channel, as in Figure 4.16, is given by

H_IV(f) = H_Nyquist(f) [1 - exp(-j4πfT_b)]
        = 2j H_Nyquist(f) sin(2πfT_b) exp(-j2πfT_b)   (4.78)

where H_Nyquist(f) is as defined in Equ. (4.68). The overall frequency response therefore has the form of a half-cycle sine function:

H_IV(f) = 2j sin(2πfT_b) exp(-j2πfT_b) for |f| < 1/(2T_b)
H_IV(f) = 0 elsewhere   (4.79)

19 Figure 4.16 Modified duobinary signaling scheme.

20 The magnitude and phase responses of the modified duobinary coder are shown in Figures 4.17a and 4.17b, respectively. A useful feature is that its output has no DC component. This correlative coding exhibits the same continuity at the band edges as duobinary signaling. From the first line of Equ. (4.78) and H_Nyquist(f) in Equ. (4.68), the impulse response of the modified duobinary coder consists of two sinc pulses displaced by 2T_b seconds with respect to each other:

h_IV(t) = sin(πt/T_b)/(πt/T_b) - sin[π(t - 2T_b)/T_b]/[π(t - 2T_b)/T_b]
        = sin(πt/T_b)/(πt/T_b) - sin(πt/T_b)/[π(t - 2T_b)/T_b]
        = 2T_b² sin(πt/T_b)/[πt(2T_b - t)]   (4.80)

This impulse response is plotted in Figure 4.18; it has three distinguishable levels at the sampling instants. As with duobinary signaling, the tails of h_IV(t) for modified duobinary signaling decay as 1/|t|².

21 Figure 4.17 Frequency response of the modified duobinary conversion filter. (a) Magnitude response. (b) Phase response.

22 To eliminate error propagation in the modified duobinary system, a modulo-2 addition is applied to signals 2T_b seconds apart prior to the generation of the modified duobinary signal, as shown by

d_k = b_k ⊕ d_{k-2}
    = symbol 1, if either b_k or d_{k-2} (but not both) is 1
    = symbol 0, otherwise   (4.81)

where {b_k} is the incoming data sequence and {d_k} is the precoder output. The precoded sequence {d_k} is applied to a PAM modulator and then to the modified duobinary conversion filter.

23 In Figure 4.16, the output digit c_k equals -2, 0, or +2 if the PAM modulator uses a polar representation for the precoded sequence {d_k}. The original data symbol b_k may be extracted from c_k at the receiver output by disregarding the polarity of c_k. We therefore formulate the following decision rule:

If |c_k| > 1, say symbol b_k is 1
If |c_k| < 1, say symbol b_k is 0   (4.82)

As with duobinary signaling, we note the following:
● In the absence of channel noise, the detected sequence {b_k} is exactly the same as the original {b_k} at the transmitter input.
● The use of Equ. (4.81) requires the addition of two extra bits to the precoded sequence {d_k}. The composition of the decoded sequence {b_k} obtained with Equ. (4.82) is invariant to the choice of these two bits.
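The following hedged Python sketch combines the precoding of Equ. (4.81), the modified duobinary filter of Equ. (4.77), and the decision rule of Equ. (4.82); the two extra initial bits are arbitrarily set to 0, which, as noted above, does not affect the decoded output.

# Sketch of precoded modified duobinary signaling and its detector.
def precoded_mod_duobinary(bits, d_m1=0, d_m2=0):
    a_m1 = +1 if d_m1 == 1 else -1
    a_m2 = +1 if d_m2 == 1 else -1
    c = []
    for b in bits:
        d = b ^ d_m2                  # d_k = b_k XOR d_{k-2}  (Equ. 4.81)
        a = +1 if d == 1 else -1      # polar representation of {d_k}
        c.append(a - a_m2)            # c_k = a_k - a_{k-2}    (Equ. 4.77)
        d_m2, d_m1 = d_m1, d
        a_m2, a_m1 = a_m1, a
    return c

def detect(c):
    # |c_k| > 1 -> b_k = 1 ;  |c_k| < 1 -> b_k = 0   (Equ. 4.82)
    return [1 if abs(ck) > 1 else 0 for ck in c]

bits = [0, 0, 1, 0, 1, 1, 0]
print(detect(precoded_mod_duobinary(bits)) == bits)   # True in the absence of noise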

24 Figure 4.18 Impulse response of the modified duobinary conversion filter.

25 4.7 Baseband M-ary PAM Transmission In a baseband M-ary PAM system, the pulse-amplitude modulator produces one of M possible amplitude levels, with M > 2. This form of pulse modulation is illustrated in Figure 4.20a for the case of a quaternary (M = 4) system and the binary data sequence 0010110111. The waveform shown in Figure 4.20a is based on the electrical representation of each of the four possible dibits given in Figure 4.20b. This representation is Gray encoded: any dibit in the quaternary alphabet differs from an adjacent dibit in a single bit position. The signal alphabet of an M-ary PAM system contains M equally likely and statistically independent symbols, with symbol duration T seconds. The signaling rate 1/T is expressed in symbols per second, or bauds.

26 Figure 4.20 Output of a quaternary system. (a) Waveform. (b) Representation of the 4 possible dibits, based on Gray encoding.

27 Binary PAM produces information at the rate of 1/T_b bps. In a quaternary PAM system, the four possible symbols may be identified with the dibits 00, 01, 10, and 11. Each symbol represents 2 bits of information, and 1 baud is equal to 2 bps. In an M-ary PAM system, 1 baud = log_2 M bps, and the symbol duration T is related to the bit duration T_b of the equivalent binary PAM system by

T = T_b log_2 M   (4.84)

By using M-ary PAM, information can be transmitted at a rate log_2 M times faster than with the corresponding binary PAM. For the same average probability of symbol error, however, M-ary PAM requires more transmitted power. For M >> 2 and a symbol error probability small compared to 1, the transmitted power must be increased by the factor M²/log_2 M compared to a binary PAM system.
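To illustrate the rate relation of Equ. (4.84) and the Gray-coded dibit mapping, here is a small Python sketch; the particular dibit-to-level assignment is an assumption chosen only so that adjacent amplitude levels differ in a single bit, and the value of T_b is arbitrary.

import math

# Symbol duration of M-ary PAM relative to the equivalent binary system (Equ. 4.84).
def symbol_duration(Tb, M):
    return Tb * math.log2(M)          # T = T_b log2(M)

# Assumed Gray-coded mapping of dibits to 4 amplitude levels (adjacent levels differ in 1 bit).
gray_map = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

bits = [0, 0, 1, 0, 1, 1, 0, 1, 1, 1]                        # data sequence of Figure 4.20a
symbols = [gray_map[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]
print(symbols)                        # one quaternary symbol per dibit
print(symbol_duration(Tb=1e-6, M=4))  # 2e-6 s: each symbol carries log2(4) = 2 bits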

28 In a baseband M-ary system, the sequence of information symbols is converted into an M-level PAM pulse train by a pulse-amplitude modulator. This pulse train, shaped by a transmit filter and transmitted over the communication channel, is corrupted by both noise and distortion. The received signal is passed through a receive filter and then sampled at an appropriate rate in synchronism with the transmitter. Each sample is compared with preset threshold values, and a decision is made as to which symbol was transmitted. The pulse-amplitude modulator and the decision-making device in an M-ary PAM system are more complex than those in a binary PAM system. ISI, noise, and imperfect synchronization cause errors to appear at the receiver output. The transmit and receive filters are designed to minimize these errors.

29 4.8 Optimum Linear Receiver In Figure 4.7, the two channel conditions are treated separately:
● Channel noise acting alone, which led to the formulation of the matched-filter receiver.
● ISI acting alone, which led to the formulation of the pulse-shaping transmit filter to realize the Nyquist channel.
In one approach to the design of a linear receiver, the receiver is viewed as a zero-forcing equalizer followed by a decision-making device. The objective of this form of equalization is to have the ISI "forced to zero" at the sampling instants t = kT, except for k = 0 where the desired symbol appears. Symbol-by-symbol detection is then optimum in the sense of the Nyquist criterion, provided that the channel noise w(t) is zero.

30 The zero-forcing equalizer ignores the effect of the channel noise w(t). It leads to an overall performance degradation due to noise enhancement, a phenomenon that is an inherent feature of zero-forcing equalization. A more refined approach is to use the mean-square error criterion, which provides a balanced solution that reduces the effects of both channel noise and ISI. Referring to the baseband binary data transmission system of Figure 4.7, the receive filter, characterized by the impulse response c(t), produces the following response to the channel output x(t):

y(t) = ∫ c(τ) x(t - τ) dτ   (4.88)

31 The channel output x(t) is itself defined by

x(t) = Σ_k a_k q(t - kT_b) + w(t)   (4.89)

where a_k is the symbol transmitted at t = kT_b, and w(t) is the channel noise. The time function q(t) is the convolution of two impulse responses: that of the pulse-shaping transmit filter g(t) and that of the channel h(t). Substituting Equ. (4.89) into (4.88) and sampling the output y(t) at t = iT_b, we may write

y(iT_b) = σ_i + n_i   (4.90)

where σ_i is the signal component defined by

σ_i = Σ_k a_k ∫ c(τ) q(iT_b - kT_b - τ) dτ   (4.91)

32 and n_i is the noise component defined by

n_i = ∫ c(τ) w(iT_b - τ) dτ   (4.92)

The condition for perfect operation of the receiver is y(iT_b) = a_i, where a_i is the transmitted symbol. Deviation from this condition results in the error signal

e_i = y(iT_b) - a_i = σ_i + n_i - a_i   (4.93)

Accordingly, we may define the mean-square error as

J = (1/2) E[e_i²]   (4.94)

Substituting Equ. (4.93) into (4.94) and then expanding terms, we get

J = E[σ_i²]/2 + E[n_i²]/2 + E[a_i²]/2 + E[σ_i n_i] - E[n_i a_i] - E[σ_i a_i]   (4.95)

33 1. In a stationary environment, the mean-square term E[σ_i²] is independent of the time instant t = iT_b at which the receive-filter output is sampled. Hence, we may simplify this term by writing

E[σ_i²] = Σ_l Σ_k E[a_l a_k] ∫∫ c(τ_1) c(τ_2) q(lT_b - τ_1) q(kT_b - τ_2) dτ_1 dτ_2

Assuming that the binary symbols take the values a_k = ±1 as in Equ. (4.42) and that the transmitted symbols are statistically independent,

E[a_l a_k] = 1 for l = k
E[a_l a_k] = 0 otherwise   (4.96)

34 We may reduce the mean-square term E[σ_i²] to

E[σ_i²] = ∫∫ R_q(τ_1, τ_2) c(τ_1) c(τ_2) dτ_1 dτ_2   (4.97)

where

R_q(τ_1, τ_2) = Σ_k q(kT_b - τ_1) q(kT_b - τ_2)   (4.98)

The factor R_q(τ_1, τ_2) is the temporal autocorrelation function of the sequence {q(kT_b)}. Stationarity of this sequence means that

R_q(τ_1, τ_2) = R_q(τ_2 - τ_1) = R_q(τ_1 - τ_2)

35 2. Using Equ. (4.92), the mean-square term E[n_i²] due to the channel noise is

E[n_i²] = ∫∫ c(τ_1) c(τ_2) E[w(iT_b - τ_1) w(iT_b - τ_2)] dτ_1 dτ_2
        = ∫∫ c(τ_1) c(τ_2) R_W(τ_2 - τ_1) dτ_1 dτ_2   (4.99)

where R_W(τ_2 - τ_1) is the ensemble-averaged autocorrelation function of the channel noise w(t). With w(t) assumed to be white with power spectral density N_0/2, we have

R_W(τ_2 - τ_1) = (N_0/2) δ(τ_2 - τ_1)   (4.100)

Hence the expression for E[n_i²] simplifies to

E[n_i²] = (N_0/2) ∫∫ c(τ_1) c(τ_2) δ(τ_2 - τ_1) dτ_1 dτ_2   (4.101)

36 3. The mean-square term E[a_i²] due to the transmitted symbol a_i is unity by virtue of Equ. (4.96); that is,

E[a_i²] = 1 for all i   (4.102)

4. The expectation of the cross-product term σ_i n_i is zero for two reasons: σ_i and n_i are statistically independent, and the channel noise w(t), and therefore n_i, has zero mean. Hence,

E[σ_i n_i] = 0 for all i   (4.103)

5. The expectation of the cross-product term n_i a_i is also zero; that is,

E[n_i a_i] = 0 for all i   (4.104)

37 6. The expectation of the cross-product term σ_i a_i is given by

E[σ_i a_i] = Σ_k E[a_k a_i] ∫ c(τ) q(iT_b - kT_b - τ) dτ   (4.105)

By virtue of the statistical independence of the transmitted symbols described in Equ. (4.96), this expectation reduces to

E[σ_i a_i] = ∫ c(τ) q(-τ) dτ   (4.106)

Thus, substituting Eqs. (4.97), (4.101) to (4.104), and (4.106) into (4.95), the mean-square error J for the data transmission system of Figure 4.7 becomes

J = 1/2 + (1/2) ∫∫ [R_q(t - τ) + (N_0/2) δ(t - τ)] c(t) c(τ) dt dτ - ∫ c(t) q(-t) dt   (4.107)

For ease of presentation, τ_1 and τ_2 in the first integral have been replaced by t and τ, and τ has been replaced with t in the second integral.

38 This expression for the mean-square error J is in fact normalized with respect to the variance of the symbol a_k, by virtue of the assumption made in Equ. (4.96). Differentiating Equ. (4.107) with respect to the impulse response c(t) of the receive filter and setting the result equal to zero, we get

∫ [R_q(t - τ) + (N_0/2) δ(t - τ)] c(τ) dτ = q(-t)   (4.108)

Equ. (4.108) is the formula for finding the impulse response c(t) of the equalizer optimized in the mean-square error (MMSE) sense. Taking the Fourier transform of both sides of Equ. (4.108),

[S_q(f) + N_0/2] C(f) = Q*(f)   (4.109)

39 where C(f), Q(f), and S_q(f) are the Fourier transforms of c(t), q(t), and R_q(t), respectively. Solving Equ. (4.109) for C(f), we get

C(f) = Q*(f) / [S_q(f) + N_0/2]   (4.110)

The power spectral density of the sequence {q(kT_b)} can be expressed as

S_q(f) = (1/T_b) Σ_k |Q(f + k/T_b)|²   (4.111)

which means that the frequency response C(f) of the optimum linear receiver is periodic with period 1/T_b.
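The frequency response of Eqs. (4.110) and (4.111) can be evaluated numerically, as in the Python sketch below; the pulse spectrum Q(f), the noise level, and the frequency grid are assumed example values, not quantities given in the text.

import numpy as np

Tb = 1.0                              # bit duration (normalized, assumed)
N0 = 0.2                              # noise spectral density parameter (assumed)
f = np.linspace(-1.0 / Tb, 1.0 / Tb, 2001)

def Q(freq):
    # Example pulse spectrum: ideal Nyquist pulse, flat over |f| < 1/(2 Tb), zero elsewhere.
    return np.where(np.abs(freq) < 1.0 / (2 * Tb), Tb, 0.0)

# Folded spectrum S_q(f) = (1/Tb) sum_k |Q(f + k/Tb)|^2, Equ. (4.111), truncated to a few terms.
S_q = sum(np.abs(Q(f + k / Tb)) ** 2 for k in range(-3, 4)) / Tb

C = np.conj(Q(f)) / (S_q + N0 / 2.0)  # MMSE receive filter, Equ. (4.110)
print(C[len(f) // 2])                 # response at f = 0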

40 Equ. (4.110) suggests the interpretation of the optimum linear receiver as the cascade connection of two basic components:
● A matched filter with impulse response q(-t), where q(t) = g(t)*h(t).
● A transversal (tapped-delay-line) equalizer whose frequency response is the inverse of the periodic function S_q(f) + N_0/2.
To implement Equ. (4.110) exactly, we would need an equalizer of infinite length. We may approximate the optimum solution by using an equalizer with a finite set of coefficients {c_k}, k = -N, ..., N, provided N is large enough. The receiver then takes the form shown in Figure 4.27. The block labeled z⁻¹ in Figure 4.27 introduces a delay equal to T_b, which means that the tap spacing is the same as the bit duration T_b.

41 4.9 Adaptive Equalization Figure 4.28 shows the structure of an adaptive synchronous equalizer, which incorporates the matched-filtering action. Prior to data transmission, the equalizer is adjusted under the guidance of a training sequence transmitted through the channel. A synchronized version of this training sequence is generated at the receiver, where it is applied to the equalizer as the desired response. A training sequence commonly used in practice is a pseudo-noise (PN) sequence, a deterministic periodic sequence with noise-like characteristics. Two identical PN sequence generators are used, one at the transmitter and the other at the receiver. When the training process is completed, the PN sequence generator is switched off, and the adaptive equalizer is ready for normal data transmission.

42 Figure 4.28 Block diagram of adaptive equalizer.

43 LEAST-MEAN-SQUARE ALGORITHM To simplify notational matters, we let

x[n] = x(nT)
y[n] = y(nT)

The output y[n] of the tapped-delay-line equalizer in response to the input sequence {x[n]} is defined by the discrete convolution sum (see Figure 4.28)

y[n] = Σ_{k=0}^{N} w_k x[n - k]   (4.112)

where w_k is the weight at the k-th tap, and N + 1 is the total number of taps.

44 The adaptation may be achieved by observing the error between the desired pulse shape and the actual pulse shape at the filter output, and then using this error to estimate the direction in which the tap weights of the filter should be changed so as to approach an optimum set of values. For the adaptation, we may use a criterion based on minimizing the peak distortion, defined as the worst-case ISI at the output of the equalizer; this approach is valid only when the peak distortion at the equalizer input is less than 100 percent. An adaptive equalizer based on the mean-square error criterion appears to be less sensitive to timing perturbations than one based on the peak distortion criterion.

45 Let a[n] denote the desired response for the n-th transmitted symbol. Let e[n] denote the error signal, defined as the difference between the desired response a[n] and the actual response y[n] of the equalizer:

e[n] = a[n] - y[n]   (4.113)

In the LMS algorithm for adaptive equalization, the error signal e[n] actuates the adjustments applied to the individual tap weights of the equalizer as the algorithm proceeds from one iteration to the next. From Figure 4.28, the input signal applied to the k-th tap weight at time step n is x[n - k]. Using w_k[n] as the old value of the k-th tap weight at time step n, the tap weight at time step n + 1 is, in light of Equ. (4.114),

w_k[n + 1] = w_k[n] + μ x[n - k] e[n],  k = 0, 1, ..., N   (4.115)

where

e[n] = a[n] - Σ_{k=0}^{N} w_k[n] x[n - k]   (4.116)

46 We may simplify the description of the LMS algorithm using matrix notation. Let the (N + 1)-by-1 vector x[n] denote the tap inputs of the equalizer:

x[n] = [x[n], ..., x[n - N + 1], x[n - N]]^T   (4.117)

where the superscript T denotes matrix transposition. Let the (N + 1)-by-1 vector w[n] denote the tap weights of the equalizer:

w[n] = [w_0[n], w_1[n], ..., w_N[n]]^T   (4.118)

Using this matrix notation, the convolution sum of Equ. (4.112) is recast as the inner product of the vectors x[n] and w[n]:

y[n] = x^T[n] w[n]   (4.119)

47 The LMS algorithm for adaptive equalization is as follows:
1. Initialize the algorithm by setting w[1] = 0 (i.e., set all the tap weights of the equalizer to zero at n = 1, which corresponds to time t = T).
2. For n = 1, 2, ..., compute

y[n] = x^T[n] w[n]
e[n] = a[n] - y[n]
w[n + 1] = w[n] + μ e[n] x[n]

where μ is the step-size parameter.
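The steps above translate directly into the Python sketch below; the simulated channel, noise level, step size μ, number of taps, and the decision delay D (needed in practice so the equalizer can approximate a delayed channel inverse, although the summary above omits it) are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N = 10                                     # N + 1 equalizer taps
mu = 0.01                                  # step-size parameter
a = rng.choice([-1.0, 1.0], size=5000)     # training symbols (desired response)
x = np.convolve(a, [0.3, 1.0, 0.3])[:len(a)]            # assumed dispersive channel
x = x + 0.05 * rng.standard_normal(len(a))              # additive channel noise

D = N // 2                                 # assumed decision delay (center tap)
w = np.zeros(N + 1)                        # step 1: all tap weights set to zero
mse = []
for n in range(N, len(a)):
    xn = x[n - N:n + 1][::-1]              # tap-input vector [x[n], ..., x[n-N]]
    y = xn @ w                             # step 2: equalizer output
    e = a[n - D] - y                       # error against the (delayed) training symbol
    w = w + mu * e * xn                    # LMS tap-weight update
    mse.append(e * e)
print(np.mean(mse[-500:]))                 # steady-state mean-square error (step 3)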

48 3. Continue the iterative computation until the equalizer reaches a "steady state," by which we mean that the mean-square error of the equalizer has essentially reached a constant value. The LMS algorithm is an example of a feedback system, as illustrated in Figure 4.29, which pertains to the k-th filter coefficient. Provided that the step-size parameter μ is assigned a small value, after a large number of iterations the behavior of the LMS algorithm is roughly similar to that of the steepest-descent algorithm, which uses the actual gradient rather than a noisy estimate of it for the computation of the tap weights.

49 Figure 4.29 Signal-flow graph representation of the LMS algorithm involving the kth tap weight.

50 OPERATION OF THE EQUALIZER There are two modes of operation for an adaptive equalizer: the training mode and the decision-directed mode, as shown in Figure 4.30. During the training mode, a known PN sequence is transmitted and a synchronized version of it is generated in the receiver, where it is applied to the adaptive equalizer as the desired response; the tap weights of the equalizer are adjusted with the LMS algorithm. When the training process is completed, the adaptive equalizer is switched to the decision-directed mode. In this mode of operation, the error signal is defined by

e[n] = â[n] - y[n]   (4.120)

where y[n] is the equalizer output at time t = nT, and â[n] is the final (not necessarily correct) estimate of the transmitted symbol a[n].
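A hedged sketch of the decision-directed mode is given below as a self-contained Python function: once training ends, the hard decision on the equalizer output (its sign, for binary ±1 signaling) replaces the known training symbol in the error of Equ. (4.120). The function name and step size are illustrative.

import numpy as np

def decision_directed_lms(x, w, mu=0.01):
    # One pass of decision-directed LMS over the received samples x, starting from the
    # tap weights w obtained during training (binary +/-1 signaling assumed).
    w = np.array(w, dtype=float)
    N = len(w) - 1
    for n in range(N, len(x)):
        xn = x[n - N:n + 1][::-1]          # tap-input vector
        y = xn @ w                         # equalizer output
        a_hat = 1.0 if y >= 0 else -1.0    # symbol decision used as the "desired" response
        w = w + mu * (a_hat - y) * xn      # LMS update driven by the decision error
    return w

Called with the trained weights from the previous sketch, this function continues to adapt during normal data transmission.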

51 An adaptive equalizer operating in the decision-directed mode is able to track relatively slow variations in channel characteristics. It turns out that the larger the step-size parameter μ, the faster the tracking capability of the adaptive equalizer. However, a large step size μ may result in a high excess mean-square error. The choice of the step-size parameter μ therefore involves a compromise between fast tracking and a low excess mean-square error.

52 Figure 4.30 Illustrating the two operating modes of an adaptive equalizer: For the training mode, the switch is in position 1; and for the tracking mode, it is moved to position 2.

53 DECISION-FEEDBACK EQUALIZATION Consider a baseband channel with impulse response denoted by the sequence {h[n]}, where h[n] = h(nT). The channel response to the input sequence {x[n]} is given by the discrete convolution sum

y[n] = Σ_k h[k] x[n - k]   (4.121)
     = h[0] x[n] + Σ_{k<0} h[k] x[n - k] + Σ_{k>0} h[k] x[n - k]

The first term of Equ. (4.121) represents the desired data symbol. The second term is due to the precursors of the channel impulse response, which occur before the main sample h[0] associated with the desired data symbol. The third term is due to the postcursors of the channel impulse response, which occur after the main sample h[0].

54 The precursors and postcursors of the channel impulse response are illustrated in Figure 4.31. The idea of decision-feedback equalization is to use data decisions made on the basis of the precursors to take care of the postcursors. A DFE consists of a feedforward section, a feedback section, and a decision device connected together as shown in Figure 4.32. The feedforward section consists of a tapped-delay-line filter whose taps are spaced at the reciprocal of the signaling rate. The data sequence to be equalized is applied to this section.

55 The feedback section consists of another tapped-delay-line filter whose taps are also spaced at the reciprocal of the signaling rate. The input applied to the feedback section consists of the decisions made on previously detected symbols of the input sequence. The function of the feedback section is to subtract from the estimates of future samples that portion of the ISI produced by previously detected symbols. The mean-square error criterion can be used to obtain a mathematically tractable optimization of a decision-feedback equalizer, and the LMS algorithm can be used to jointly adapt both the feedforward and feedback tap weights based on a common error signal. When the frequency response of a linear channel is characterized by severe amplitude distortion or a relatively sharp amplitude cutoff, the DFE offers a significant performance improvement over a linear equalizer with an equal number of taps.
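As an illustrative Python sketch (not the text's design), the structure described above can be simulated with a feedforward and a feedback tapped-delay line adapted jointly by the LMS algorithm; the channel taps, tap counts, and step size are assumptions, and for simplicity the detected symbols are fed back even during training.

import numpy as np

rng = np.random.default_rng(1)
a = rng.choice([-1.0, 1.0], size=5000)             # transmitted binary symbols
x = np.convolve(a, [1.0, 0.6, 0.3])[:len(a)]       # assumed channel with strong postcursors
x = x + 0.05 * rng.standard_normal(len(a))

Nf, Nb, mu = 8, 3, 0.01               # feedforward taps, feedback taps, step size
wf = np.zeros(Nf)                     # feedforward section weights
wb = np.zeros(Nb)                     # feedback section weights (act on past decisions)
past = np.zeros(Nb)                   # previously detected symbols
errors = 0
for n in range(Nf, len(a)):
    xf = x[n - Nf + 1:n + 1][::-1]    # feedforward input: received samples
    y = xf @ wf - past @ wb           # subtract ISI estimated from past decisions
    a_hat = 1.0 if y >= 0 else -1.0   # decision device
    e = a[n] - y                      # common error signal (training symbol known here)
    wf = wf + mu * e * xf             # joint LMS adaptation of both sections
    wb = wb - mu * e * past
    past = np.concatenate(([a_hat], past[:-1]))    # shift the new decision into the feedback line
    errors += int(a_hat != a[n])
print(errors)                         # decision errors, mostly during initial convergence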

56 Unlike a linear equalizer, a DFE suffers from error propagation. Although the DFE is a feedback system, error propagation does not persist indefinitely; decision errors tend to occur in bursts.
► Let L denote the number of taps in the feedback section of a DFE. After a run of L consecutive correct decisions, all decision errors in the feedback section will have been flushed out. This points to error propagation of finite duration.
► When a decision error is made, the probability of the next decision also being erroneous is clearly no worse than 1/2.
► Let K denote the duration of error propagation, that is, the number of symbols needed to obtain L consecutive correct decisions. The average error rate is then (K/2)P_0, where K/2 is the average number of errors produced by a single decision error and P_0 is the probability of making the first error, given that the past L decisions were all correct.

57 The effect of error propagation in a DFE is to increase the average error rate by a factor approximately equal to 2^L, compared to the probability of making the first error. For example, for L = 3 the average error rate is increased by less than an order of magnitude due to error propagation.

58 Figure 4.31 Impulse response of a discrete-time channel, depicting the precursors and postcursors.

59 Figure 4.32 Block diagram of decision-feedback equalizer.

