Real time DSP
Professors: Eng. Diego Barral, Eng. Mariano Llamedo Soria, Julian Bruno
Filters
conventional filters: time-invariant, fixed coefficients
adaptive filters: time-varying, variable coefficients, updated by an adaptive algorithm as a function of the incoming signal
used when the exact filtering operation is unknown or is non-stationary!
Random Processes
random != deterministic
concepts: realization, ensemble, ergodicity
tools: mean, variance, correlation/autocorrelation
stationary processes & WSS
Adaptive Filters
two parts: a digital filter and an adaptive algorithm
the filter can be FIR or IIR (IIR stability problems are difficult to handle)
Adaptive Filters
d(n): desired signal
y(n): output of the filter
x(n): input signal
e(n): error signal, e(n) = d(n) - y(n)
FIR Filter
w_l(n): adaptive filter coefficients
filter output: y(n) = Σ_{l=0}^{L-1} w_l(n) x(n-l)   (eq. 8.2.1)
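A minimal C sketch of this FIR computation (function name and buffer layout are illustrative, not from the course material):

/* y(n) = sum_{l=0}^{L-1} w[l] * x(n - l)
   x_buf[0] holds the newest sample x(n), x_buf[L-1] the oldest. */
float fir_output(const float *w, const float *x_buf, int L)
{
    float y = 0.0f;
    for (int l = 0; l < L; l++)
        y += w[l] * x_buf[l];
    return y;
}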
Performance Function
coefficients are updated to optimize some predetermined performance criterion
mean-square error (MSE): ξ = E[e²(n)]
for an FIR filter: ξ = E[d²(n)] - 2 p^T w + w^T R w   (eq. 8.2.6)
R: input autocorrelation matrix
p: cross-correlation vector between d(n) and x(n)
optimum (Wiener) solution: w_o = R^{-1} p   (eq. 8.2.12)
Performance Function
the MSE surface is a quadratic function of the filter coefficients
one global minimum point! (Fig. 8.4)
Gradient-Based Algorithms
properties of interest: convergence speed, steady-state performance, computational complexity
method of steepest descent: step in the direction of greatest rate of decrease (the negative gradient)
iterative (recursive) update: w(n+1) = w(n) - (μ/2) ∇ξ(n)   (eq. 8.2.14)
LMS Algorithm
the statistics of d(n) and x(n) are unknown, so the MSE is estimated by the instantaneous squared error e²(n)
the resulting gradient estimate is -2 e(n) x(n), which avoids explicit computation of matrix inversion, squaring, averaging or differentiation
coefficient update: w_l(n+1) = w_l(n) + μ e(n) x(n-l)   (eqs. 8.2.15-8.2.18)
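A minimal C sketch of one LMS iteration using this update (names and buffer layout are assumptions, with x_buf[l] = x(n-l)):

/* One LMS iteration: filter, error, coefficient update. */
float lms_step(float *w, const float *x_buf, float d, float mu, int L)
{
    float y = 0.0f;
    for (int l = 0; l < L; l++)
        y += w[l] * x_buf[l];            /* y(n) = sum w_l(n) x(n-l)           */
    float e = d - y;                     /* e(n) = d(n) - y(n)                 */
    for (int l = 0; l < L; l++)
        w[l] += mu * e * x_buf[l];       /* w_l(n+1) = w_l(n) + mu e(n) x(n-l) */
    return e;
}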
Performance Analysis
stability constraint: 0 < μ < 2/λmax, where λmax is the largest eigenvalue of the autocorrelation matrix R   (eq. 8.3.1)
practical bound: 0 < μ < 2/(L·Px), where Px is the input signal power   (eq. 8.3.5)
μ controls the size of the incremental correction
large filters => small μ; strong signals => small μ
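A small C sketch of the practical bound 2/(L·Px), with Px estimated as the mean square of recent input samples (function name is an assumption; x must not be all zero):

/* Returns the practical upper bound on mu for an L-tap LMS filter. */
float mu_upper_bound(const float *x, int N, int L)
{
    float Px = 0.0f;
    for (int n = 0; n < N; n++)
        Px += x[n] * x[n];
    Px /= (float)N;                      /* estimated input power */
    return 2.0f / ((float)L * Px);
}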
Performance Analysis
convergence speed: large μ => fast convergence
the eigenvalues of R (λ) link the stability constraint to the speed of convergence
the convergence time constants can be estimated from eqs. 8.3.6 and 8.3.8
Performance Analysis
excess mean-square error: the gradient estimation prevents w from staying at w_o in steady state; w varies randomly about w_o
trade-off between the excess MSE and the speed of convergence
trade-off between real-time tracking and steady-state performance   (eq. 8.3.9)
Modified LMS Algorithms
normalized LMS algorithm: μ varies with the input signal power
optimizes the speed of convergence while keeping the steady-state performance independent of the reference signal power
μ(n) = α / (L·Px(n) + c), 0 < α < 2, where Px(n) is an estimate of the input signal power   (eq. 8.4.4)
c is a small constant that keeps μ(n) bounded when the input power is small
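A C sketch of one normalized-LMS step along these lines; the power estimate is taken over the L samples in the tap buffer, and all names are assumptions:

/* Normalized LMS: mu(n) = alpha / (sum of x^2 over the L taps + c).
   The sum over the taps equals L times the average input power. */
float nlms_step(float *w, const float *x_buf, float d,
                float alpha, float c, int L)
{
    float Px = 0.0f, y = 0.0f;
    for (int l = 0; l < L; l++) {
        Px += x_buf[l] * x_buf[l];
        y  += w[l] * x_buf[l];
    }
    float mu = alpha / (Px + c);
    float e  = d - y;
    for (int l = 0; l < L; l++)
        w[l] += mu * e * x_buf[l];
    return e;
}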
Modified LMS Algorithms
leaky LMS algorithm: insufficient spectral excitation may result in divergence of the weights and long-term instability
w_l(n+1) = v·w_l(n) + μ e(n) x(n-l), where v is the leakage factor, 0 < v ≤ 1   (eq. 8.4.5)
the leakage is equivalent to adding low-level white noise, so there is some degradation in performance; (1 - v) < μ
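A C sketch of the leaky update (names are assumptions; e is the error already computed for the current sample):

/* Leaky LMS: w_l(n+1) = v * w_l(n) + mu * e(n) * x(n - l), 0 < v <= 1.
   The leakage slowly pulls unexcited coefficients toward zero. */
void leaky_lms_update(float *w, const float *x_buf,
                      float e, float mu, float v, int L)
{
    for (int l = 0; l < L; l++)
        w[l] = v * w[l] + mu * e * x_buf[l];
}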
Applications
adaptive filters operate in an unknown environment and track time variations
identification, inverse modeling, prediction, interference canceling
Applications
adaptive system identification: experimental modeling of a process or a plant (Fig. 8.6)
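A C sketch of the identification setup, with a hypothetical FIR plant standing in for the unknown system (all names and lengths are assumptions; w must be zero-initialized by the caller):

/* The adaptive filter and the unknown "plant" share the excitation x(n);
   d(n) is the plant output, and e(n) is the modeling error. */
void identify_plant(const float *x, int N,          /* excitation          */
                    const float *plant, int Lp,     /* unknown FIR system  */
                    float *w, int L, float mu)      /* adaptive filter     */
{
    for (int n = 0; n < N; n++) {
        float d = 0.0f, y = 0.0f;
        for (int l = 0; l < Lp && l <= n; l++) d += plant[l] * x[n - l];
        for (int l = 0; l < L  && l <= n; l++) y += w[l]     * x[n - l];
        float e = d - y;
        for (int l = 0; l < L && l <= n; l++)
            w[l] += mu * e * x[n - l];
    }
}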
Applications
adaptive linear prediction: provides an estimate of the value of an input process at a future time
y(n) contains the highly correlated components of x(n), e.g. for speech coding and for separating signals from noise
the output of interest is e(n) for spread spectrum corrupted by an additive narrowband interference
Applications
adaptive linear prediction (Fig. 8.7)
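A C sketch of a one-step adaptive predictor in the spirit of Fig. 8.7; the decorrelation delay D and all names are assumptions:

/* The filter sees the delayed input x(n - D - l) and predicts x(n).
   y(n) keeps the correlated (narrowband) part, e(n) the broadband residual. */
void adaptive_predictor(const float *x, int N, int D,
                        float *w, int L, float mu,
                        float *y_out, float *e_out)
{
    for (int n = 0; n < N; n++) {
        float y = 0.0f;
        for (int l = 0; l < L; l++)
            if (n - D - l >= 0) y += w[l] * x[n - D - l];
        float e = x[n] - y;
        for (int l = 0; l < L; l++)
            if (n - D - l >= 0) w[l] += mu * e * x[n - D - l];
        y_out[n] = y;
        e_out[n] = e;
    }
}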
Applications
adaptive noise cancellation (ANC): most signal processing techniques are developed under noise-free assumptions
the reference sensor is placed close to the noise source to sense only the noise, because the noise picked up by the primary sensor and by the reference sensor must be correlated
the reference sensor can be placed far from the primary sensor to reduce crosstalk, but this requires a large-order filter
P(z) represents the transfer function between the noise source and the primary sensor
the adaptive filter uses x(n) to estimate x'(n)
Applications adaptive noise cancellation (ANC)
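A C sketch of the ANC loop; array names are assumptions, and the effect of P(z) is assumed to be already contained in the primary-sensor samples d[]:

/* d[]: primary sensor (signal + noise through P(z)), x[]: reference noise.
   The filter output y(n) approximates x'(n), so e(n) = d(n) - y(n) is the
   cleaned signal. */
void anc_lms(const float *d, const float *x, int N,
             float *w, int L, float mu, float *e_out)
{
    for (int n = 0; n < N; n++) {
        float y = 0.0f;
        for (int l = 0; l < L && l <= n; l++)
            y += w[l] * x[n - l];
        float e = d[n] - y;
        for (int l = 0; l < L && l <= n; l++)
            w[l] += mu * e * x[n - l];
        e_out[n] = e;
    }
}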
Applications
adaptive channel equalization: transmission of data is limited by distortion in the transmission channel, with transfer function C(z)
an equalizer is designed in the receiver to counteract the channel distortion
training of the equalizer uses a sequence agreed on by the transmitter and the receiver
a decision device follows the equalizer
Applications adaptive channel equalization
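A C sketch of training-mode equalization with an agreed training sequence; the decision delay D and all names are assumptions:

/* r[]: received (distorted) samples, t[]: known training sequence.
   The equalizer output is compared against the training symbol delayed by D. */
void train_equalizer(const float *r, const float *t, int N, int D,
                     float *w, int L, float mu)
{
    for (int n = D; n < N; n++) {
        float y = 0.0f;
        for (int l = 0; l < L && l <= n; l++)
            y += w[l] * r[n - l];
        float e = t[n - D] - y;
        for (int l = 0; l < L && l <= n; l++)
            w[l] += mu * e * r[n - l];
    }
}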
Implementation considerations
finite-precision effects
prevent overflow: scaling of the coefficients (or of the signal)
quantization & round-off noise => excess MSE and possible stalling of convergence; both effects depend on μ
convergence can stall when e(n) drops below a threshold of one LSB in the update term
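A Q15 fixed-point sketch of the LMS update showing where rounding and saturation enter; formats, names, and the arithmetic right shift are assumptions:

#include <stdint.h>

/* Saturate a 32-bit accumulator to the 16-bit Q15 range. */
static int16_t sat16(int32_t a)
{
    if (a >  32767) return  32767;
    if (a < -32768) return -32768;
    return (int16_t)a;
}

/* Q15 LMS update: products are kept in 32 bits, rounded back to Q15,
   and saturated so the coefficients cannot overflow. Products smaller
   than half an LSB round to zero, which is how convergence stalls. */
void lms_update_q15(int16_t *w, const int16_t *x_buf,
                    int16_t e, int16_t mu, int L)
{
    int32_t mue = ((int32_t)mu * e + (1 << 14)) >> 15;       /* mu*e in Q15 */
    for (int l = 0; l < L; l++) {
        int32_t delta = (mue * x_buf[l] + (1 << 14)) >> 15;  /* Q15 product */
        w[l] = sat16((int32_t)w[l] + delta);
    }
}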