LECTURE 04: LINEAR PREDICTION

ECE 8443 – Pattern Recognition
ECE 8423 – Adaptive Signal Processing

Objectives: The Linear Prediction Model; The Autocorrelation Method; Levinson and Durbin Recursions; Spectral Modeling; Inverse Filtering and Deconvolution

Resources:
• ECE 4773: Intro to DSP
• ECE 8463: Fund. of Speech
• WIKI: Minimum Phase
• Markel and Gray: Linear Prediction
• Deller: DT Processing of Speech
• AJR: LP Modeling of Speech
• MC: MATLAB Demo

URL: .../publications/courses/ece_8423/lectures/current/lecture_04.ppt
MP3: .../publications/courses/ece_8423/lectures/current/lecture_04.mp3

ECE 8423: Lecture 04, Slide 1: The Linear Prediction (LP) Model

Consider a pth-order linear prediction model:
\[ \hat{s}(n) = \sum_{l=1}^{p} a_l\, s(n - n_0 - l) \]
Without loss of generality, assume \( n_0 = 0 \). The prediction error is defined as:
\[ e(n) = s(n) - \hat{s}(n) = s(n) - \sum_{l=1}^{p} a_l\, s(n-l) \]
(In the block diagram on the slide, \( e(n) \) is formed by subtracting the predictor output from the signal at a summing junction.) We can define an objective function:
\[ J = E\!\left[ e^2(n) \right] \]
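As a concrete illustration, here is a minimal Python sketch of the prediction-error computation (our own example, not from the slides; the function name and coefficient values are hypothetical):

```python
import numpy as np

def prediction_error(s, a):
    """Compute e(n) = s(n) - sum_{l=1}^{p} a_l s(n-l) for n = p..N-1."""
    s = np.asarray(s, dtype=float)
    a = np.asarray(a, dtype=float)
    p = len(a)
    # s[n-p:n][::-1] lines up s(n-1), ..., s(n-p) with a_1, ..., a_p.
    return np.array([s[n] - np.dot(a, s[n - p:n][::-1])
                     for n in range(p, len(s))])

# Hypothetical example: a 2nd-order predictor applied to a damped cosine.
n = np.arange(200)
s = np.cos(0.2 * np.pi * n) * 0.99 ** n
print(prediction_error(s, [1.8, -0.95])[:5])
```

A good predictor drives e(n) toward low-variance white noise, which is exactly what minimizing J rewards.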

ECE 8423: Lecture 04, Slide 2: Minimization of the Objective Function

Differentiate \( J \) with respect to \( a_l \) and set the result to zero:
\[ \frac{\partial J}{\partial a_l} = -2\, E\!\left[ e(n)\, s(n-l) \right] = 0, \quad l = 1, \ldots, p \]
Rearranging terms:
\[ E\!\left[ \sum_{k=1}^{p} a_k\, s(n-k)\, s(n-l) \right] = E\!\left[ s(n)\, s(n-l) \right] \]
Interchanging the order of summation and expectation on the left (why? expectation is a linear operator):
\[ \sum_{k=1}^{p} a_k\, E\!\left[ s(n-k)\, s(n-l) \right] = E\!\left[ s(n)\, s(n-l) \right] \]
Define a covariance function:
\[ c(k,l) = E\!\left[ s(n-k)\, s(n-l) \right] \]

ECE 8423: Lecture 04, Slide 3: The Yule-Walker Equations (aka Normal Equations)

We can rewrite our prediction equation as:
\[ \sum_{k=1}^{p} a_k\, c(k,l) = c(0,l), \quad l = 1, \ldots, p \]
This is known as the Yule-Walker equation. Its solution produces what we refer to as the Covariance Method for linear prediction. We can write this set of p equations in matrix form:
\[ \mathbf{C}\,\mathbf{a} = \mathbf{c} \]
and can easily solve for the prediction coefficients:
\[ \mathbf{a} = \mathbf{C}^{-1}\,\mathbf{c} \]
where:
\[ \mathbf{C} = \big[ c(k,l) \big]_{k,l=1}^{p}, \qquad \mathbf{a} = [a_1, \ldots, a_p]^T, \qquad \mathbf{c} = [c(0,1), \ldots, c(0,p)]^T \]
Note that the covariance matrix is symmetric: \( c(k,l) = c(l,k) \).
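A minimal numpy sketch of the Covariance Method as just described; time averages replace the expectations, and the function name is ours:

```python
import numpy as np

def lp_covariance(s, p):
    """Covariance Method: solve C a = c, with c(k,l) estimated as a
    time average of s(n-k) s(n-l) over the analysis region n = p..N-1."""
    s = np.asarray(s, dtype=float)
    N = len(s)

    def c(k, l):
        return np.dot(s[p - k : N - k], s[p - l : N - l]) / (N - p)

    C = np.array([[c(k, l) for l in range(1, p + 1)]
                  for k in range(1, p + 1)])
    rhs = np.array([c(0, l) for l in range(1, p + 1)])
    return np.linalg.solve(C, rhs)
```

Note that no window is applied and no stationarity is assumed; that is what distinguishes this method from the Autocorrelation Method on the next slide.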

ECE 8423: Lecture 04, Slide 4: Autocorrelation Method

C is a covariance matrix, which means it has some special properties:
• Symmetric: under what conditions does its inverse exist?
• Fast inversion: we can factor this matrix into upper and lower triangular matrices and derive a fast algorithm for inversion known as the Cholesky decomposition.
If we assume stationary inputs, we can convert covariances to correlations:
\[ c(k,l) = r(|k-l|) \quad \Longrightarrow \quad \sum_{k=1}^{p} a_k\, r(|k-l|) = r(l), \quad l = 1, \ldots, p \]
This is known as the Autocorrelation Method. This matrix is symmetric, but it is also Toeplitz, which means the inverse can be computed efficiently using an iterative algorithm we will introduce shortly. Note that the Covariance Method requires p(p+1)/2 unique values for the matrix and p values for the associated vector; a fast algorithm, known as the Factored Covariance Algorithm, exists to compute C. The Autocorrelation Method requires only the p+1 values r(0), ..., r(p) to produce p LP coefficients.
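A corresponding sketch of the Autocorrelation Method, using scipy's Toeplitz solver to exploit exactly the structure noted above (biased autocorrelation estimates are assumed; the function name is ours):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lp_autocorrelation(s, p):
    """Autocorrelation Method: solve sum_k a_k r(|k-l|) = r(l), l = 1..p,
    with biased estimates r(k) = (1/N) sum_n s(n) s(n+k)."""
    s = np.asarray(s, dtype=float)
    N = len(s)
    r = np.array([np.dot(s[: N - k], s[k:]) / N for k in range(p + 1)])
    # First column and first row of the symmetric Toeplitz matrix.
    a = solve_toeplitz((r[:p], r[:p]), r[1 : p + 1])
    return a, r
```

solve_toeplitz uses a Levinson-type recursion internally, so the solve costs O(p^2) rather than the O(p^3) of a general matrix inverse.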

ECE 8423: Lecture 04, Slide 5: Linear Prediction Error

Recall our expression for J, the prediction error energy:
\[ J = E\!\left[ e^2(n) \right] = E\!\left[ \left( s(n) - \sum_{l=1}^{p} a_l\, s(n-l) \right)^{\!2}\, \right] \]
We can substitute our expression for the predictor coefficients, and show:
\[ J_{\min} = r(0) - \sum_{l=1}^{p} a_l\, r(l), \qquad r(l) = \sum_{k=1}^{p} a_k\, r(l-k) \;\; \text{for } l > 0 \]
These relations are significant because they show the error obeys the same linear prediction equation that we applied to the signal. This result has two interesting implications:
• Missing values of the autocorrelation function can be calculated using this relation under certain assumptions (e.g., maximum entropy).
• The autocorrelation function shares many properties with the linear prediction model (e.g., minimum phase). In fact, the two representations are interchangeable.
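To make the first implication concrete, here is a small sketch (ours, under the maximum-entropy assumption mentioned above) that extends an autocorrelation sequence beyond lag p using the LP recursion:

```python
import numpy as np

def extend_autocorrelation(r, a, num_extra):
    """Extrapolate r(l) for l > p via r(l) = sum_{k=1}^{p} a_k r(l-k).

    r : autocorrelations r(0)..r(p); a : LP coefficients a_1..a_p.
    This is the maximum-entropy extension of the autocorrelation sequence."""
    r = list(r)
    p = len(a)
    for _ in range(num_extra):
        l = len(r)
        r.append(sum(a[k] * r[l - 1 - k] for k in range(p)))
    return np.array(r)
```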

ECE 8423: Lecture 04, Slide 6: Linear Filter Interpretation of Linear Prediction

Recall our expression for the error signal:
\[ e(n) = s(n) - \sum_{l=1}^{p} a_l\, s(n-l) \]
We can rewrite this using the z-Transform, which implies we can view the computation of the error as a filtering process:
\[ E(z) = A(z)\, S(z), \qquad A(z) = 1 - \sum_{l=1}^{p} a_l\, z^{-l} \]
This, of course, implies we can invert the process and generate the original signal from the error signal:
\[ S(z) = \frac{1}{A(z)}\, E(z) \]
This rather remarkable view of the process exposes some important questions about the nature of this filter:
• A(z) is an FIR filter. Under what conditions is it minimum phase?
• Under what conditions is the inverse, 1/A(z), stable?
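The filtering view translates directly into code; a sketch using scipy.signal.lfilter with hypothetical coefficients:

```python
import numpy as np
from scipy.signal import lfilter

a = np.array([1.8, -0.95])                # hypothetical a_1, a_2
A = np.concatenate(([1.0], -a))           # A(z) = 1 - sum_l a_l z^{-l}

s = np.random.default_rng(0).standard_normal(1000)
e = lfilter(A, [1.0], s)                  # analysis:  E(z) = A(z) S(z)
s_hat = lfilter([1.0], A, e)              # synthesis: S(z) = E(z) / A(z)
print(np.allclose(s, s_hat))              # True: the process inverts exactly
```

The synthesis step is stable only if the zeros of A(z) lie inside the unit circle, which is precisely the minimum-phase question raised above.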

ECE 8423: Lecture 04, Slide 7: Residual Error

The figure on this slide shows examples of the linear prediction error for voiced speech signals. The points where the prediction error peaks are the points where the signal is least predictable by a linear prediction model. In the case of voiced speech, this relates to the manner in which the signal is produced (e.g., the instants of glottal excitation). Speech compression and synthesis systems exploit the linear prediction model as a first-order attempt to remove redundancy from the signal. The LP model is independent of the energy of the input signal. It is also independent of the phase of the input signal, because the LP filter is a minimum-phase filter.

ECE 8423: Lecture 04, Slide 8: Durbin Recursion

There are several efficient algorithms to compute the LP coefficients without doing a matrix inverse. One of the most popular and insightful is known as the Durbin recursion:
\[ E_0 = r(0) \]
\[ k_i = \left[ r(i) - \sum_{j=1}^{i-1} a_j^{(i-1)}\, r(i-j) \right] / E_{i-1} \]
\[ a_i^{(i)} = k_i; \qquad a_j^{(i)} = a_j^{(i-1)} - k_i\, a_{i-j}^{(i-1)}, \quad 1 \le j \le i-1 \]
\[ E_i = \left( 1 - k_i^2 \right) E_{i-1} \]
The intermediate coefficients, \( \{k_i\} \), are referred to as reflection coefficients. To compute a pth-order model, all orders from 1 to p are computed. This recursion is significant for several reasons:
• The error energy decreases as the LP order increases, indicating the model continually improves.
• There is a one-to-one mapping between \( \{r_i\} \), \( \{k_i\} \), and \( \{a_i\} \).
• For the LP filter to be stable, \( |k_i| < 1 \). Note that the Autocorrelation Method guarantees the filter to be stable; the Covariance Method does not.
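A direct transcription of the recursion into Python (a sketch; variable names follow the equations above):

```python
import numpy as np

def levinson_durbin(r, p):
    """Durbin recursion: a_1..a_p, k_1..k_p, and E_p from r(0)..r(p)."""
    a = np.zeros(p)
    k = np.zeros(p)
    E = r[0]
    for i in range(1, p + 1):
        # k_i = [ r(i) - sum_{j=1}^{i-1} a_j r(i-j) ] / E_{i-1}
        k[i - 1] = (r[i] - np.dot(a[: i - 1], r[i - 1 : 0 : -1])) / E
        a_new = a.copy()
        a_new[i - 1] = k[i - 1]
        for j in range(i - 1):                  # a_j <- a_j - k_i a_{i-j}
            a_new[j] = a[j] - k[i - 1] * a[i - 2 - j]
        a = a_new
        E *= 1.0 - k[i - 1] ** 2                # E_i = (1 - k_i^2) E_{i-1}
    return a, k, E
```

Feeding in the r produced by the Autocorrelation Method sketch gives the same coefficients as the Toeplitz solve, and the returned E exposes the error-energy decrease order by order.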

ECE 8423: Lecture 04, Slide 9: The Burg Algorithm

Digital filters can be implemented using many different forms. One very important and popular form is the lattice filter shown on this slide, in which stage i produces forward and backward prediction errors, \( f_i(n) \) and \( b_i(n) \). Itakura showed the \( \{k_i\} \)'s can be computed directly as a normalized cross-correlation of these errors:
\[ k_i = \frac{E\!\left[ f_{i-1}(n)\, b_{i-1}(n-1) \right]}{\sqrt{ E\!\left[ f_{i-1}^2(n) \right] E\!\left[ b_{i-1}^2(n-1) \right] }} \]
Burg demonstrated that the LP approach can be viewed as a maximum entropy spectral estimate, and derived an expression for the reflection coefficients that guarantees \( |k_i| \le 1 \):
\[ k_i = \frac{ 2 \sum_n f_{i-1}(n)\, b_{i-1}(n-1) }{ \sum_n f_{i-1}^2(n) + \sum_n b_{i-1}^2(n-1) } \]
Makhoul showed that a family of lattice-based formulations exists. Most importantly, the filter coefficients can be updated in real-time in O(n).
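A sketch of Burg's recursion (sign conventions vary across texts; this follows the equations above):

```python
import numpy as np

def burg(s, p):
    """Burg's method: reflection coefficients k_1..k_p from the data.

    The LP coefficients can be recovered from k via the step-up
    (Levinson) recursion if needed."""
    f = np.asarray(s, dtype=float).copy()    # forward error,  f_0(n) = s(n)
    b = f.copy()                             # backward error, b_0(n) = s(n)
    k = np.zeros(p)
    for i in range(p):
        fi, bi = f[1:], b[:-1]               # align f_{i}(n) with b_{i}(n-1)
        k[i] = 2.0 * np.dot(fi, bi) / (np.dot(fi, fi) + np.dot(bi, bi))
        f, b = fi - k[i] * bi, bi - k[i] * fi    # lattice update
    return k
```

Because the denominator is the sum of the two error energies, |k_i| <= 1 holds by construction, which is the stability guarantee cited above.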

ECE 8423: Lecture 04, Slide 10: The Autoregressive Model

Suppose we model our signal as the output of a linear filter with a white noise input, u(n):
\[ s(n) = \sum_{k=1}^{p} a_k\, s(n-k) + u(n) \]
The inverse LP filter can be thought of as an all-pole (IIR) filter:
\[ H(z) = \frac{S(z)}{U(z)} = \frac{1}{A(z)} = \frac{1}{1 - \sum_{k=1}^{p} a_k\, z^{-k}} \]
This is referred to as an autoregressive (AR) model. If the system is actually a mixed model, referred to as an autoregressive moving average (ARMA) model:
\[ H(z) = \frac{B(z)}{A(z)} = \frac{\sum_{l=0}^{q} b_l\, z^{-l}}{1 - \sum_{k=1}^{p} a_k\, z^{-k}} \]
The LP model can still approximate such a system, because a zero can be represented by an infinite number of poles:
\[ 1 - b\, z^{-1} = \frac{1}{\sum_{k=0}^{\infty} b^k z^{-k}}, \quad |b| < 1 \]
Hence, even if the system has poles and zeroes, the LP model is capable of approximating the system's overall impulse or frequency response.
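A quick sanity-check sketch: generate an AR(2) process and recover its coefficients with the Autocorrelation Method (reusing lp_autocorrelation from the earlier sketch; the coefficient values are hypothetical):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
a_true = np.array([1.8, -0.95])              # stable AR(2): poles inside |z| = 1
u = rng.standard_normal(10_000)              # white-noise input u(n)
s = lfilter([1.0], np.concatenate(([1.0], -a_true)), u)   # s(n) = sum a_k s(n-k) + u(n)

a_est, _ = lp_autocorrelation(s, p=2)
print(a_true, a_est)                         # estimates should be close
```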

ECE 8423: Lecture 04, Slide 11: Spectral Matching and Blind Deconvolution

Recall our expression for the error energy; by Parseval's relation it can be written in the frequency domain as:
\[ J = E\!\left[ e^2(n) \right] = \frac{1}{2\pi} \int_{-\pi}^{\pi} P_s(\omega)\, \left| A(e^{j\omega}) \right|^2 d\omega \]
where \( P_s(\omega) \) is the power spectrum of the input signal. The LP filter becomes increasingly more accurate as the order of the model increases. We can interpret this as a spectral matching process, illustrated by the figure on this slide: as the order increases, the LP model better models the envelope of the spectrum of the original signal. The LP model attempts to minimize the error equally across the entire spectrum. If the spectrum of the input signal has a systematic variation, such as a bandpass filter shape or a spectral tilt, the LP model will attempt to model this. Therefore, we typically pre-whiten the signal before LP analysis. The process by which the LP filter learns the spectrum of the input signal is often referred to as blind deconvolution.
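A closing sketch of the spectral-matching interpretation: fit an LP model to the AR(2) signal and compare its envelope, G^2 / |A(e^{jw})|^2, with the periodogram (self-contained; frequency-grid details are approximate):

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import freqz, lfilter, periodogram

rng = np.random.default_rng(0)
s = lfilter([1.0], [1.0, -1.8, 0.95], rng.standard_normal(10_000))

p, N = 2, len(s)
r = np.array([np.dot(s[: N - k], s[k:]) / N for k in range(p + 1)])
a = solve_toeplitz((r[:p], r[:p]), r[1:])
G2 = r[0] - np.dot(a, r[1:])                 # minimum error energy J_min

f, Pxx = periodogram(s)
_, H = freqz([1.0], np.concatenate(([1.0], -a)), worN=len(f), fs=1.0)
envelope = G2 * np.abs(H) ** 2               # traces the smooth envelope of Pxx
```

Plotting envelope against Pxx on the same axes shows the LP spectrum hugging the peaks of the periodogram, which is the envelope-matching behavior described above.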

ECE 8423: Lecture 04, Slide 12: Summary

• There are many interpretations and motivations for linear prediction, ranging from minimum mean-square error estimation to maximum entropy spectral estimation.
• There are many implementations of the filter, including the direct form and the lattice representation, and many representations for the coefficients, including predictor and reflection coefficients.
• The LP approach can be extended to estimate the parameters of most digital filters, and can also be applied to the problem of digital filter design.
• The filter can be estimated in batch mode using a frame-based analysis, or it can be updated on a sample basis using a sequential or iterative estimator. Hence, the LP model is our first adaptive filter. Such a filter can be viewed as a time-varying digital filter that tracks a signal in real-time.
• Under appropriate Gaussian assumptions, LP analysis can be shown to be a maximum likelihood estimate of the model parameters. Further, two models can be compared using a metric called the log likelihood ratio. Many other metrics exist to compare such models, including cepstral and principal components approaches.