ECE 8443 – Pattern Recognition
ECE 8423 – Adaptive Signal Processing
LECTURE 10: IIR AND LATTICE ADAPTIVE FILTERS
Objectives: Derivation; Computational Simplifications; Stability; Lattice Structures; Properties; The Gradient Adaptive Lattice; Joint Process Estimator
Resources: CNX: IIR Adaptive Filters; KP: Bibliography; MH: Adaptive Filter Applet
URL: .../publications/courses/ece_8423/lectures/current/lecture_10.ppt
MP3: .../publications/courses/ece_8423/lectures/current/lecture_10.mp3

ECE 8423: Lecture 10, Slide 1: Motivation
Thus far we have concentrated on FIR filters and transversal implementations (e.g., direct form or tapped delay lines). An FIR filter may need many taps to model a smooth frequency response that is naturally described by an all-pole filter (e.g., voice or music), which leads to a very high computational cost. A recursive structure can achieve the same performance at a significantly reduced computational cost, but we must worry about stability (e.g., poles outside the unit circle). Further, the performance surface associated with an IIR adaptive filter is not guaranteed to be unimodal, so adaptation can converge to a local minimum of the MSE.
The basic form of a recursive filter is:
  y(n) = Σ_{i=0}^{N-1} h(i) x(n-i) + Σ_{j=1}^{L} f(j) y(n-j)
We can define an extended coefficient vector:
  θ = [h(0), ..., h(N-1), f(1), ..., f(L)]ᵀ
and an extended data vector:
  φ(n) = [x(n), ..., x(n-N+1), y(n-1), ..., y(n-L)]ᵀ
so that y(n) = θᵀφ(n).
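A minimal NumPy sketch of this recursive form (the function name, argument conventions, and zero initial conditions are illustrative assumptions, not from the lecture):

    import numpy as np

    def recursive_filter(x, h, f):
        """y(n) = sum_i h(i) x(n-i) + sum_j f(j) y(n-j), zero initial conditions."""
        N, L = len(h), len(f)
        y = np.zeros(len(x))
        for n in range(len(x)):
            y[n] = sum(h[i] * x[n - i] for i in range(N) if n >= i) \
                 + sum(f[j - 1] * y[n - j] for j in range(1, L + 1) if n >= j)
        return y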

ECE 8423: Lecture 10, Slide 2: Minimization of the Error
As usual, we can set up a minimization of the mean-squared error:
  J = E[e²(n)], where e(n) = d(n) − y(n)
and differentiate the error with respect to the filter coefficients:
  ∂J/∂θ = 2 E[e(n) ∂e(n)/∂θ]
Since e(n) depends on θ only through y(n):
  ∂e(n)/∂θ = −∂y(n)/∂θ
This minimization is more complex than in the FIR case because the past outputs y(n−j) also depend on θ, so we need to separate the terms that depend on h() and f():
  ∂y(n)/∂h(i) = x(n−i) + Σ_{j=1}^{L} f(j) ∂y(n−j)/∂h(i)

ECE 8423: Lecture 10, Slide 3: Minimization of the Error (Cont.)
Note that this follows because h() and f() are independent. Continuing for f():
  ∂y(n)/∂f(j) = y(n−j) + Σ_{k=1}^{L} f(k) ∂y(n−k)/∂f(j)
Combining these two equations:
  ∂y(n)/∂θ = φ(n) + Σ_{j=1}^{L} f(j) ∂y(n−j)/∂θ
which is a recursive update for the gradient. Substituting into the derivative of the MSE gives:
  E[e(n) ∂y(n)/∂θ] = 0
which are recursive forms of the normal equations. Direct solution of these equations produces a nonlinear set of equations because of the recursive nature of the derivatives. An alternate approach is to try an iterative solution.

ECE 8423: Lecture 10, Slide 4: Iterative Solution Using Steepest Descent
We seek an update of the form:
  θ(n+1) = θ(n) + μ e(n) ψ(n), where ψ(n) ≜ ∂y(n)/∂θ
We replace the expectation by the instantaneous version of the gradient. We can summarize the Recursive LMS Algorithm (IIRLMS): compute y(n) = θᵀ(n)φ(n) and e(n) = d(n) − y(n), update the gradient recursion ψ(n) = φ(n) + Σ_{j=1}^{L} f(j) ψ(n−j), and then update θ(n). Note that in addition to the approximation for the error, we have an approximation for the derivative that is a function of time, since the recursion is run with the current, time-varying coefficients.
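A sketch of one IIRLMS pass in NumPy under the definitions above; the function name, default sizes, and zero initializations are assumptions for illustration:

    import numpy as np

    def iir_lms(x, d, N=4, L=2, mu=0.01):
        """IIRLMS: theta = [h(0..N-1), f(1..L)], phi(n) = [x(n..n-N+1), y(n-1..n-L)]."""
        theta = np.zeros(N + L)
        y = np.zeros(len(x))
        e = np.zeros(len(x))
        psi_hist = np.zeros((L, N + L))   # psi(n-1), ..., psi(n-L), newest first
        for n in range(len(x)):
            phi = np.array([x[n - i] if n >= i else 0.0 for i in range(N)] +
                           [y[n - j] if n >= j else 0.0 for j in range(1, L + 1)])
            y[n] = theta @ phi
            e[n] = d[n] - y[n]
            psi = phi + theta[N:] @ psi_hist        # gradient recursion
            theta += mu * e[n] * psi                # steepest-descent update
            psi_hist = np.vstack([psi, psi_hist[:-1]])
        return theta, y, e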

ECE 8423: Lecture 10, Slide 5: Computational Simplification
The update for the derivative of y(n) requires N+L separate recursions, one per coefficient. If we assume the coefficients are slowly varying, we can further reduce computations. Separating components once again for f() and h(), we approximate the derivatives using two auxiliary signals, ψ₁(n) and ψ₂(n):
  ψ₁(n) = x(n) + Σ_{j=1}^{L} f(j) ψ₁(n−j), with ∂y(n)/∂h(i) ≈ ψ₁(n−i)
  ψ₂(n) = y(n) + Σ_{j=1}^{L} f(j) ψ₂(n−j), with ∂y(n)/∂f(j) ≈ ψ₂(n−j)
We reduce N+L recursions to just 2 recursions to compute the derivatives. Simpler approximations exist in which the gradient is approximated using the instantaneous error and the data vector directly, i.e., the update uses e(n) φ(n) in place of e(n) ψ(n).
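The two-recursion simplification amounts to filtering x(n) and y(n) through the same all-pole section; a sketch of one step (the names psi1/psi2 and the newest-first history convention are assumptions carried over from the sketch above):

    import numpy as np

    def filtered_gradient_step(xn, yn, f, psi1_hist, psi2_hist):
        """psi1(n) = x(n) + sum_j f(j) psi1(n-j); psi2(n) = y(n) + sum_j f(j) psi2(n-j).
        Delayed samples psi1(n-i), psi2(n-j) then stand in for dy/dh(i), dy/df(j)."""
        psi1 = xn + f @ psi1_hist
        psi2 = yn + f @ psi2_hist
        return psi1, psi2

where psi1_hist and psi2_hist hold the last L values of each signal, newest first.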

ECE 8423: Lecture 10, Slide 6: Stability Considerations
To determine the stability of the filter, we can use the Levinson-Durbin recursion discussed previously: convert the feedback coefficients to reflection coefficients and verify that each has magnitude less than one. If the adaptation step size is small, we need not perform this check every sample, which reduces its computational burden. Alternately, we introduce a scale factor 0 < γ < 1 to push the poles away from the unit circle and closer to the origin:
  f(j) → γʲ f(j), which moves a pole at p to γp.
This factor broadens the bandwidth of the poles. Of course, such heuristics introduce bias and slow convergence.
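A sketch of both ideas; the step-down (reverse Levinson-Durbin) recursion tests the denominator A(z) = 1 − Σⱼ f(j) z⁻ʲ, and the function names and γ default are illustrative assumptions:

    import numpy as np

    def is_stable(f):
        """True iff every reflection coefficient of A(z) = 1 - sum_j f[j-1] z^-j
        has magnitude < 1 (step-down recursion)."""
        a = -np.asarray(f, dtype=float)       # a[i] = coefficient of z^-(i+1)
        for m in range(len(a), 0, -1):
            k = a[m - 1]                      # order-m reflection coefficient
            if abs(k) >= 1.0:
                return False
            a = (a[:m - 1] - k * a[:m - 1][::-1]) / (1.0 - k * k)
        return True

    def contract_poles(f, gamma=0.99):
        """Scale f(j) by gamma^j so a pole at p moves to gamma*p."""
        return np.asarray(f) * gamma ** np.arange(1, len(f) + 1)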

ECE 8423: Lecture 10, Slide 7: Lattice Structure Adaptive Filters
We can implement the IIR filter using a lattice filter structure. This has some interesting benefits, including a way to directly estimate the filter coefficients. We can form a prediction of x(n) from its m most recent past values:
  x̂(n) = Σ_{i=1}^{m} a_m(i) x(n−i)
The (forward) prediction error is defined as:
  f_m(n) = x(n) − x̂(n)
It is also possible to define a "backwards predictor" that estimates x(n−m) from the m samples that follow it, with error b_m(n). We can show that minimization of the forward and backward errors produces identical coefficients. The errors can be computed in an iterative, order-recursive fashion:
  f_m(n) = f_{m−1}(n) − k_m b_{m−1}(n−1)
  b_m(n) = b_{m−1}(n−1) − k_m f_{m−1}(n)
with f₀(n) = b₀(n) = x(n).
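A sketch of the order recursion applied to a whole signal at once, given reflection coefficients k (the function name and vectorized layout are assumptions):

    import numpy as np

    def lattice_errors(x, k):
        """Propagate f_m, b_m through len(k) lattice stages; f_0 = b_0 = x."""
        f = np.asarray(x, dtype=float).copy()
        b = f.copy()
        for km in k:
            b_del = np.concatenate(([0.0], b[:-1]))   # b_{m-1}(n-1)
            f, b = f - km * b_del, b_del - km * f
        return f, b                                    # final-order errors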

ECE 8423: Lecture 10, Slide 8: Lattice Structure Adaptive Filters (Cont.)
Updates for the filter coefficients can be derived using the orthogonality principle: each k_m is chosen so that the stage's output errors are orthogonal to its inputs, i.e., it minimizes the mean-squared forward and backward errors of stage m. The iterative procedure can be summarized as: compute the stage m−1 errors, solve for k_m, and propagate the errors to the next stage. For stationary inputs the optimal coefficients are constants: these are the reflection coefficients we previously discussed. We also previously discussed ways to convert between reflection and prediction coefficients.
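One common sample estimator for the stage coefficient is Burg's harmonic-mean formula, which minimizes the summed forward and backward error energies; the slide's exact expression did not survive transcription, so treat this particular choice as an assumption:

    import numpy as np

    def burg_reflection(f_prev, b_prev):
        """k_m = 2 sum_n f_{m-1}(n) b_{m-1}(n-1) / sum_n [f_{m-1}^2(n) + b_{m-1}^2(n-1)]."""
        b_del = np.concatenate(([0.0], b_prev[:-1]))
        den = np.dot(f_prev, f_prev) + np.dot(b_del, b_del)
        return 2.0 * np.dot(f_prev, b_del) / den if den > 0 else 0.0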

ECE 8423: Lecture 10, Slide 9: The Gradient Adaptive Lattice (GAL) Filter
We can derive an adaptive algorithm for estimating the reflection coefficients by considering the following minimization:
  J_m = E[f_m²(n) + b_m²(n)]
We can invoke a steepest-descent update:
  k_m(n+1) = k_m(n) − μ ∂J_m/∂k_m
We can compute the gradient of the instantaneous error by invoking our update equations for the error terms:
  f_m(n) = f_{m−1}(n) − k_m b_{m−1}(n−1), b_m(n) = b_{m−1}(n−1) − k_m f_{m−1}(n)
and deriving expressions for the derivatives:
  ∂f_m(n)/∂k_m = −b_{m−1}(n−1), ∂b_m(n)/∂k_m = −f_{m−1}(n)

ECE 8423: Lecture 10, Slide 10: The Gradient Adaptive Lattice (GAL) Filter (Cont.)
We can substitute these into our expression for the gradient. Combining results:
  k_m(n+1) = k_m(n) + μ [f_m(n) b_{m−1}(n−1) + b_m(n) f_{m−1}(n)]
Notes:
1) We initially set the reflection coefficients to zero, which means the errors start out as delayed copies of the input: f_m(n) = x(n) and b_m(n) = x(n−m).
2) We can use a time-varying adaptation constant, e.g., μ_m(n) = ᾱ / E_m(n), where E_m(n) = λ E_m(n−1) + (1−λ)[f_{m−1}²(n) + b_{m−1}²(n−1)] is a running estimate of the stage input energy.
3) GAL generally converges faster than LMS, at a rate that is independent of the eigenvalue spread of the input, but has slightly higher bias.
4) There are several other popular approaches for updating the coefficients based on second derivatives.
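A compact GAL sketch combining the stage recursion with the normalized update; all parameter names and defaults are illustrative assumptions:

    import numpy as np

    def gal(x, M=4, alpha=0.05, lam=0.9, eps=1e-8):
        """Adapt reflection coefficients k_1..k_M sample by sample."""
        k = np.zeros(M)
        E = np.full(M, eps)              # running stage-energy estimates
        b_prev = np.zeros(M + 1)         # b_m(n-1), m = 0..M
        for xn in x:
            f = np.zeros(M + 1); b = np.zeros(M + 1)
            f[0] = b[0] = xn
            for m in range(1, M + 1):
                bd = b_prev[m - 1]                       # b_{m-1}(n-1)
                f[m] = f[m - 1] - k[m - 1] * bd
                b[m] = bd - k[m - 1] * f[m - 1]
                E[m - 1] = lam * E[m - 1] + (1 - lam) * (f[m - 1]**2 + bd**2)
                k[m - 1] += (alpha / E[m - 1]) * (f[m] * bd + b[m] * f[m - 1])
            b_prev = b
        return k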

ECE 8423: Lecture 10, Slide 11: Joint Process Estimator
The lattice structure we described is equivalent to an all-pole, or AR, filter. We can combine the FIR and all-pole structures to produce an ARMA filter. We can think about the estimation process as a two-step process: the coefficients of the lattice structure are adapted as previously described, and the coefficients of the forward (ladder) structure, which forms the output as a weighted combination of the backward prediction errors, are also adapted as before:
  y(n) = Σ_{m=0}^{M} c_m(n) b_m(n), e(n) = d(n) − y(n), c_m(n+1) = c_m(n) + μ e(n) b_m(n)
These types of filters have been successfully applied to a variety of channel equalization problems, such as high-speed modems.
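A sketch of the joint (lattice-ladder) estimator built on the GAL loop above; the LMS ladder update and all names are assumptions consistent with the equations just given:

    import numpy as np

    def joint_process(x, d, M=4, mu=0.05, alpha=0.05, lam=0.9, eps=1e-8):
        """GAL adapts the lattice; LMS on the backward errors adapts the ladder."""
        k = np.zeros(M)
        c = np.zeros(M + 1)              # ladder weights on b_0..b_M
        E = np.full(M, eps)
        b_prev = np.zeros(M + 1)
        e = np.zeros(len(x))
        for n in range(len(x)):
            f = np.zeros(M + 1); b = np.zeros(M + 1)
            f[0] = b[0] = x[n]
            for m in range(1, M + 1):    # lattice stage + GAL update, as before
                bd = b_prev[m - 1]
                f[m] = f[m - 1] - k[m - 1] * bd
                b[m] = bd - k[m - 1] * f[m - 1]
                E[m - 1] = lam * E[m - 1] + (1 - lam) * (f[m - 1]**2 + bd**2)
                k[m - 1] += (alpha / E[m - 1]) * (f[m] * bd + b[m] * f[m - 1])
            e[n] = d[n] - c @ b          # ladder output error
            c += mu * e[n] * b           # LMS update of the ladder weights
            b_prev = b
        return k, c, e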

ECE 8423: Lecture 10, Slide 12: Summary
Introduced an IIR adaptive filter (IIRLMS).
Derived update equations for the case of an all-pole filter.
Discussed a simplified version of this filter for stationary inputs (a linear prediction-based lattice filter).
Generalized this to a joint process estimator, or ladder filter, that is capable of learning a filter with poles and zeros.