Linear Prediction Problem: Forward Prediction and Backward Prediction


Linear Prediction Problem: Forward Prediction and Backward Prediction. Forward prediction: observing u(n-1), u(n-2), ..., u(n-M), predict u(n). Backward prediction: observing u(n), u(n-1), ..., u(n-M+1), predict u(n-M).

Forward Linear Prediction. Problem: observing the past samples u(n-1), u(n-2), ..., u(n-M), predict the future value u(n); i.e., find the predictor filter taps w_{f,1}, w_{f,2}, ..., w_{f,M}.

Forward Linear Prediction. Use Wiener filter theory to calculate the taps w_{f,k}. The desired signal is d(n) = u(n), and the predictor output is û(n) = Σ_{k=1}^{M} w_{f,k} u(n-k). Then the forward prediction error (for predictor order M) is f_M(n) = u(n) - û(n). Let the minimum mean-square prediction error be P_M = E[|f_M(n)|²].

One-step predictor and prediction-error filter (block diagrams): the two structures differ only in the output taken; the one-step predictor outputs û(n), while the prediction-error filter outputs f_M(n) = u(n) - û(n).

Forward Linear Prediction. The structure is similar to a Wiener filter, so the same approach can be used. The input vector is u(n-1) = [u(n-1), u(n-2), ..., u(n-M)]^T with autocorrelation matrix R = E[u(n-1) u^H(n-1)]. Find the filter taps w_f = [w_{f,1}, ..., w_{f,M}]^T, where the cross-correlation between the filter input and the desired response u(n) is r = E[u(n-1) u*(n)].

Forward Linear Prediction. Solving the Wiener-Hopf equations R w_f = r, we obtain w_f = R^{-1} r, and the minimum forward prediction-error power becomes P_M = r(0) - r^H w_f. In summary, the autocorrelation sequence {r(0), r(1), ..., r(M)} completely determines both the predictor and its error power.
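As a concrete illustration, here is a minimal Python sketch (the function name forward_predictor and the example autocorrelation are mine, not from the slides) that solves the forward Wiener-Hopf equations for a real-valued autocorrelation sequence, exploiting the Toeplitz structure of R via scipy.linalg.solve_toeplitz:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def forward_predictor(r):
    """Solve R w_f = r_vec for the order-M forward predictor taps.

    r : autocorrelations [r(0), r(1), ..., r(M)] (real-valued case).
    Returns (w_f, P_M): the predictor taps and the minimum error power.
    """
    r = np.asarray(r, dtype=float)
    M = len(r) - 1
    # R is Toeplitz with first column [r(0), ..., r(M-1)];
    # the right-hand side is the cross-correlation vector [r(1), ..., r(M)].
    w_f = solve_toeplitz(r[:M], r[1:])
    P_M = r[0] - r[1:] @ w_f          # P_M = r(0) - r^T w_f
    return w_f, P_M

# Example: AR(1)-like autocorrelation r(k) = 0.9^k
r = 0.9 ** np.arange(4)
w_f, P = forward_predictor(r)
print(w_f, P)   # expect w_f ≈ [0.9, 0, 0] and P ≈ 1 - 0.81 = 0.19
```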

Relation between Linear Prediction and AR Modelling. Note that the Wiener-Hopf equations for a linear predictor are mathematically identical to the Yule-Walker equations for an AR-process model. If the AR model order M is known, the model parameters can be found by using a forward linear predictor of order M. If the process is not AR, the predictor provides an AR-model approximation of order M to the process.
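To see the predictor/AR equivalence numerically, the following sketch (an illustration of mine, assuming a real-valued AR(2) process) simulates the process and recovers its parameters from the sample autocorrelation with the same Toeplitz solve:

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(0)
# AR(2) process: u(n) = 0.75 u(n-1) - 0.5 u(n-2) + v(n), v white
a_true = [0.75, -0.5]
u = lfilter([1.0], [1.0, -a_true[0], -a_true[1]],
            rng.standard_normal(200_000))

# Biased sample autocorrelation r(0), r(1), r(2)
r = np.array([u[:len(u) - k] @ u[k:] for k in range(3)]) / len(u)

# Yule-Walker / forward-predictor solve: R w = [r(1), r(2)]
w = solve_toeplitz(r[:2], r[1:])
print(w)   # expect ≈ [0.75, -0.5], the true AR parameters
```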

Forward Prediction-Error Filter. We wrote that f_M(n) = u(n) - Σ_{k=1}^{M} w_{f,k} u(n-k). Let a_{M,0} = 1 and a_{M,k} = -w_{f,k} for k = 1, ..., M. Then f_M(n) = Σ_{k=0}^{M} a_{M,k} u(n-k).

Augmented Wiener-Hopf Eqn.s for Forward Prediction. Let us combine the forward prediction-filter and forward prediction-error-power equations in a single matrix expression, i.e. R w_f = r and P_M = r(0) - r^H w_f. Define the forward prediction-error filter vector a_M = [1, -w_f^T]^T = [a_{M,0}, a_{M,1}, ..., a_{M,M}]^T. Then, partitioning R_{M+1} = [r(0), r^H; r, R], we get R_{M+1} a_M = [P_M, 0_M^T]^T: the augmented Wiener-Hopf eqn.s of a forward prediction-error filter of order M.
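A quick numerical check of the augmented form, under the same real-valued assumptions as the earlier sketch:

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

r = 0.9 ** np.arange(4)                 # r(0)..r(3), so M = 3
w_f = solve_toeplitz(r[:3], r[1:])      # forward predictor taps
a = np.concatenate(([1.0], -w_f))       # prediction-error filter a_M
P = r[0] - r[1:] @ w_f                  # minimum error power

R4 = toeplitz(r)                        # (M+1) x (M+1) correlation matrix
print(R4 @ a)                           # expect ≈ [P, 0, 0, 0]
print(P)
```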

Example: Forward Predictor of Order M=1. For a forward predictor of order M = 1, the Wiener-Hopf equations reduce to the scalar equation r(0) w_{f,1} = r(1), so w_{f,1} = r(1)/r(0). But a_{1,0} = 1, so the prediction-error filter is a_1 = [1, -r(1)/r(0)]^T, with error power P_1 = r(0) - |r(1)|²/r(0).

Backward Linear Prediction. Problem: observing the future samples u(n), u(n-1), ..., u(n-M+1), predict the past value u(n-M); i.e., find the predictor filter taps w_{b,1}, w_{b,2}, ..., w_{b,M}.

Backward Linear Prediction. The desired signal is d(n) = u(n-M), and the predictor output is û(n-M) = Σ_{k=1}^{M} w_{b,k} u(n-k+1). Then the backward prediction error (for predictor order M) is b_M(n) = u(n-M) - û(n-M). Let the minimum mean-square prediction error be P_M = E[|b_M(n)|²].

Backward Linear Prediction. Problem: the input vector is u(n) = [u(n), u(n-1), ..., u(n-M+1)]^T with autocorrelation matrix R = E[u(n) u^H(n)]. Find the filter taps w_b = [w_{b,1}, ..., w_{b,M}]^T, where the cross-correlation between the filter input and the desired response u(n-M) is E[u(n) u*(n-M)] = r^{B*}: the vector r with its elements in reversed order, complex-conjugated.

Backward Linear Prediction. Solving the Wiener-Hopf equations R w_b = r^{B*}, we obtain w_b = R^{-1} r^{B*}, and the minimum backward prediction-error power becomes P_M = r(0) - r^{BT} w_b. In summary, for stationary inputs the minimum backward prediction-error power equals the forward one.

Relations between Forward and Backward Predictors. Compare the Wiener-Hopf eqn.s for the two cases (R and r are the same): R w_f = r versus R w_b = r^{B*}. Conjugating one set of equations and reversing its order shows that w_{b,k} = w*_{f,M-k+1}: the backward predictor taps are the forward taps in reversed order, complex-conjugated.
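Numerically, the relation can be verified by solving both sets of Wiener-Hopf equations (a real-valued sketch; for complex data the backward right-hand side and solution are additionally conjugated):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

r = np.array([1.0, 0.6, 0.3, 0.1])       # r(0)..r(3), M = 3
w_f = solve_toeplitz(r[:3], r[1:])        # forward:  R w_f = [r(1), r(2), r(3)]
w_b = solve_toeplitz(r[:3], r[1:][::-1])  # backward: R w_b = [r(3), r(2), r(1)]
print(w_b, np.flip(w_f))                  # identical: w_{b,k} = w_{f,M-k+1}
```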

Backward Prediction-Error Filter. We wrote that b_M(n) = u(n-M) - Σ_{k=1}^{M} w_{b,k} u(n-k+1). Let the taps be re-indexed so that b_M(n) becomes a filter over u(n), u(n-1), ..., u(n-M); then, using the relation w_{b,k} = w*_{f,M-k+1} found above, b_M(n) = Σ_{k=0}^{M} a*_{M,M-k} u(n-k).

Backward Prediction-Error Filter. Forward prediction-error filter: f_M(n) = Σ_{k=0}^{M} a_{M,k} u(n-k). Backward prediction-error filter: b_M(n) = Σ_{k=0}^{M} a*_{M,M-k} u(n-k). For stationary inputs, we may change a forward prediction-error filter into the corresponding backward prediction-error filter by reversing the order of its tap sequence and complex-conjugating the taps.

Augmented Wiener-Hopf Eqn.s for Backward Prediction. Let us combine the backward prediction-filter and backward prediction-error-power equations in a single matrix expression, i.e. R w_b = r^{B*} and P_M = r(0) - r^{BT} w_b. With the definition a_M^{B*} = [a*_{M,M}, ..., a*_{M,1}, a*_{M,0}]^T (the reversed, conjugated forward prediction-error filter), we get R_{M+1} a_M^{B*} = [0_M^T, P_M]^T: the augmented Wiener-Hopf eqn.s of a backward prediction-error filter of order M.

Levinson-Durbin Algorithm. We must solve the Wiener-Hopf eqn.s R w_f = r to find the predictor coefficients. A one-shot solution (direct matrix inversion) has high computational complexity, O(M³). Instead, use an order-recursive algorithm: the Levinson-Durbin algorithm. Start with a first-order predictor (m = 1) and at each iteration increase the order of the predictor by one, up to m = M. This yields huge savings in computational complexity (O(M²)) and storage (O(M)).

Levinson-Durbin Recursion. The order update is a_m = [a_{m-1}^T, 0]^T + κ_m [0, a_{m-1}^{B*T}]^T. How do we calculate a_m and κ_m? Start with the relation between the correlation matrix R_{m+1} and the forward prediction-error filter a_m. We have seen how to partition the correlation matrix: R_{m+1} = [R_m, r_m^{B*}; r_m^{BT}, r(0)]. (Notation: the subscript of a indicates the order; the subscript of R indicates the dimension of the matrix/vector.)

Levinson-Durbin Recursion. Multiply the order-update eqn. by R_{m+1} from the left: R_{m+1} a_m = R_{m+1} [a_{m-1}^T, 0]^T (term 1) + κ_m R_{m+1} [0, a_{m-1}^{B*T}]^T (term 2). Term 1: R_{m+1} [a_{m-1}^T, 0]^T = [R_m a_{m-1}; r_m^{BT} a_{m-1}], but we know that R_m a_{m-1} = [P_{m-1}, 0_{m-1}^T]^T (augmented Wiener-Hopf eqn.s). Then term 1 equals [P_{m-1}, 0_{m-1}^T, Δ_{m-1}]^T, where Δ_{m-1} = r_m^{BT} a_{m-1} = Σ_{k=0}^{m-1} a_{m-1,k} r(m-k) (real-valued case; conjugates apply for complex data).

Levinson-Durbin Recursion. Term 2: R_{m+1} [0, a_{m-1}^{B*T}]^T = [r_m^H a_{m-1}^{B*}; R_m a_{m-1}^{B*}], but we know that R_m a_{m-1}^{B*} = [0_{m-1}^T, P_{m-1}]^T (augmented Wiener-Hopf eqn.s for backward prediction). Then term 2 equals [Δ*_{m-1}, 0_{m-1}^T, P_{m-1}]^T.

Levinson-Durbin Recursion. Then we have R_{m+1} a_m = [P_{m-1} + κ_m Δ*_{m-1}, 0_{m-1}^T, Δ_{m-1} + κ_m P_{m-1}]^T, which must equal [P_m, 0_m^T]^T. From the first line: P_m = P_{m-1} + κ_m Δ*_{m-1}. From the last line: Δ_{m-1} + κ_m P_{m-1} = 0, i.e. κ_m = -Δ_{m-1}/P_{m-1}. Combining the two, P_m = P_{m-1}(1 - |κ_m|²): as the iterations increase, P_m decreases (it is non-increasing).
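The whole recursion fits in a few lines. The sketch below (real-valued case; the function name is mine) returns the final prediction-error filter, the reflection coefficients, and the sequence of error powers:

```python
import numpy as np

def levinson_durbin(r):
    """Levinson-Durbin recursion for real-valued autocorrelations.

    r : [r(0), r(1), ..., r(M)]
    Returns (a, kappa, P): prediction-error filter a = [1, a_{M,1}, ..., a_{M,M}],
    reflection coefficients kappa[1..M], and error powers P[0..M].
    """
    r = np.asarray(r, dtype=float)
    M = len(r) - 1
    a = np.array([1.0])
    P = [r[0]]
    kappa = []
    for m in range(1, M + 1):
        # Delta_{m-1} = sum_k a_{m-1,k} r(m-k); r[m:0:-1] = [r(m), ..., r(1)]
        delta = a @ r[m:0:-1]
        k = -delta / P[-1]
        kappa.append(k)
        # Order update: a_m = [a_{m-1}, 0] + k * [0, reversed(a_{m-1})]
        a = np.concatenate((a, [0.0])) + k * np.concatenate(([0.0], a[::-1]))
        P.append(P[-1] * (1.0 - k * k))
    return a, np.array(kappa), np.array(P)
```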

Levinson-Durbin Recursion: Interpretations. The final value of the prediction-error power is P_M = P_0 Π_{m=1}^{M} (1 - |κ_m|²). The κ_m are called reflection coefficients, due to the analogy with the reflection coefficients corresponding to the boundary between two sections of a transmission line. The parameter Δ_{m-1} represents the cross-correlation between the forward prediction error and the delayed backward prediction error, Δ_{m-1} = E[f_{m-1}(n) b*_{m-1}(n-1)]. Since f_0(n) = b_0(n) = u(n), for m = 1 this reduces to the lag-1 autocorrelation. HW: Prove this!

Application of the Levinson-Durbin Algorithm. Find the forward prediction-error filter coefficients a_{m,k}, given the autocorrelation sequence {r(0), r(1), r(2)}. m = 0: P_0 = r(0). m = 1: Δ_0 = r(1), κ_1 = -r(1)/r(0), a_{1,0} = 1, a_{1,1} = κ_1, P_1 = P_0 (1 - |κ_1|²). m = M = 2: Δ_1 = a_{1,0} r(2) + a_{1,1} r(1), κ_2 = -Δ_1/P_1, a_{2,0} = 1, a_{2,1} = a_{1,1} + κ_2 a*_{1,1}, a_{2,2} = κ_2, P_2 = P_1 (1 - |κ_2|²).
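Running the levinson_durbin sketch from above on concrete numbers, say {r(0), r(1), r(2)} = {1, 0.5, 0.25} (values of my choosing, corresponding exactly to an AR(1) process):

```python
# Uses the levinson_durbin sketch defined above.
a, kappa, P = levinson_durbin([1.0, 0.5, 0.25])
print(a)      # [ 1.   -0.5   0.  ]
print(kappa)  # [-0.5   0.  ]
print(P)      # [ 1.    0.75  0.75]
```

That κ_2 = 0 is expected: r(k) = 0.5^k is the autocorrelation of an AR(1) process, so increasing the predictor order beyond 1 cannot reduce the error power further.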

Properties of the prediction error filters. Property 1: there is a one-to-one correspondence between the two sets of quantities {P_0, κ_1, κ_2, ..., κ_M} and {r(0), r(1), ..., r(M)}. If one set is known, the other can be computed directly: the Levinson-Durbin recursion maps the autocorrelations to the reflection coefficients, and the recursion can be inverted to map the reflection coefficients back to the autocorrelations.
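The reverse direction is a direct inversion of the same recursion. A sketch for the real-valued case (function name mine), verified against the numbers used above:

```python
import numpy as np

def reflection_to_autocorr(P0, kappa):
    """Inverse mapping {P_0, kappa_1..kappa_M} -> {r(0), ..., r(M)} (real case)."""
    r = [P0]
    a = np.array([1.0])
    P = P0
    for k in kappa:
        # From kappa_m = -Delta_{m-1}/P_{m-1} and
        # Delta_{m-1} = r(m) + sum_{j=1}^{m-1} a_{m-1,j} r(m-j):
        r.append(-k * P - a[1:] @ np.array(r[1:])[::-1])
        a = np.concatenate((a, [0.0])) + k * np.concatenate(([0.0], a[::-1]))
        P *= 1.0 - k * k
    return np.array(r)

print(reflection_to_autocorr(1.0, [-0.5, 0.0]))   # expect [1.0, 0.5, 0.25]
```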

Properties of the prediction error filters. Property 2a: the transfer function of a forward prediction-error filter of order m is H_{f,m}(z) = Σ_{k=0}^{m} a_{m,k} z^{-k}. Utilizing the Levinson-Durbin recursion for the coefficients, and noting that the backward filter coefficients are the reversed conjugates of the forward ones, we obtain the order recursion H_{f,m}(z) = H_{f,m-1}(z) + κ_m z^{-1} H_{b,m-1}(z).

Properties of the prediction error filters. Property 2b: the transfer function of a backward prediction-error filter of order m is H_{b,m}(z) = Σ_{k=0}^{m} a*_{m,m-k} z^{-k}, and the Levinson-Durbin recursion gives H_{b,m}(z) = z^{-1} H_{b,m-1}(z) + κ*_m H_{f,m-1}(z). Given the reflection coefficients κ_m and the transfer functions of the forward and backward prediction-error filters of order m-1, we can uniquely calculate the corresponding transfer functions for the forward and backward prediction-error filters of order m.
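Both transfer-function recursions can be run jointly on coefficient arrays. The sketch below (real-valued case; names are mine) builds A_m(z) and B_m(z) from the reflection coefficients and confirms that B_M is the coefficient-reversed A_M:

```python
import numpy as np

def lattice_to_tf(kappa):
    """Coefficient arrays (in powers of z^{-1}) of the forward and backward
    prediction-error filters A_M(z), B_M(z), from reflection coefficients
    (real-valued case)."""
    A = np.array([1.0])                      # A_0(z) = 1
    B = np.array([1.0])                      # B_0(z) = 1
    for k in kappa:
        zB = np.concatenate(([0.0], B))      # z^{-1} B_{m-1}(z)
        Apad = np.concatenate((A, [0.0]))    # A_{m-1}(z), padded one order
        A = Apad + k * zB                    # A_m = A_{m-1} + k z^{-1} B_{m-1}
        B = zB + k * Apad                    # B_m = z^{-1} B_{m-1} + k A_{m-1}
    return A, B

A, B = lattice_to_tf([-0.5, 0.25])
print(A)        # [ 1.    -0.625  0.25 ]
print(B[::-1])  # same: B_M(z) is A_M(z) with coefficients reversed
```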

Properties of the prediction error filters. Property 3: both the forward and backward prediction-error filters have the same magnitude response. Property 4: the forward prediction-error filter is minimum-phase (all zeros inside the unit circle); it is causal and has a stable causal inverse. Property 5: the backward prediction-error filter is maximum-phase (all zeros outside the unit circle); its stable inverse is non-causal, i.e. a causal inverse would be unstable.

Properties of the prediction error filters. Property 6: the forward prediction-error filter is a whitening filter. We have seen that a forward prediction-error filter can estimate an AR model: driving it with the process u(n) recovers the white excitation (analysis filter), while the inverse all-pole filter regenerates the process from white noise (synthesis filter).
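A short demonstration of the whitening property (a sketch with an AR(2) process of my choosing): passing the process through its own prediction-error filter should return a sequence whose sample autocorrelation is essentially a unit impulse.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
A = [1.0, -0.75, 0.5]              # prediction-error (analysis) filter
# Synthesis: AR(2) process driven by unit-variance white noise
u = lfilter([1.0], A, rng.standard_normal(100_000))

f = lfilter(A, [1.0], u)           # analysis: recovers the excitation
acf = np.array([f[:len(f) - k] @ f[k:] for k in range(4)]) / len(f)
print(acf)                         # ≈ [1, 0, 0, 0]: the error is white
```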

Properties of the prediction error filters. Property 7: the backward prediction errors are orthogonal to each other: E[b_m(n) b*_i(n)] = P_m for m = i and 0 otherwise (the sequence b_0(n), b_1(n), ..., b_M(n) is "white" across orders). Proof hint: this follows from the principle of orthogonality, i.e. the estimation error is orthogonal to the observations used to form the estimate.

Lattice Predictors. A very efficient structure to implement the forward/backward predictors. Rewrite the prediction-error filters in terms of their coefficients: the input samples {u(n), u(n-1), ..., u(n-M)} can be stacked into a vector u_{M+1}(n). Then the outputs of the predictors are f_M(n) = Σ_{k=0}^{M} a_{M,k} u(n-k) (forward) and b_M(n) = Σ_{k=0}^{M} a*_{M,M-k} u(n-k) (backward).

Lattice Predictors. Forward prediction-error filter: apply the Levinson-Durbin order update to f_m(n). The first term gives the order-(m-1) forward error f_{m-1}(n); the second term gives the delayed order-(m-1) backward error scaled by the reflection coefficient. Combining both terms: f_m(n) = f_{m-1}(n) + κ_m b_{m-1}(n-1).

Lattice Predictors. Similarly, for the backward prediction-error filter: the first term gives the delayed backward error b_{m-1}(n-1); the second term gives the scaled forward error. Combining both terms: b_m(n) = b_{m-1}(n-1) + κ*_m f_{m-1}(n).

Lattice Predictors. Forward and backward prediction-error filters in matrix form: [f_m(n); b_m(n)] = [1, κ_m; κ*_m, 1] [f_{m-1}(n); b_{m-1}(n-1)], for m = 1, 2, ..., M. These last two equations define the m-th stage of the lattice predictor.

Lattice Predictors. For m = 0 we have f_0(n) = b_0(n) = u(n); hence, for M stages, cascading the lattice sections from m = 1 to m = M produces the final errors f_M(n) and b_M(n).
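Putting the stages together, the following sketch (real-valued case; names are mine) runs an M-stage lattice prediction-error filter sample by sample and checks it against direct FIR filtering with A_M(z) obtained from the same reflection coefficients:

```python
import numpy as np
from scipy.signal import lfilter

def lattice_filter(kappa, u):
    """M-stage lattice prediction-error filter (real-valued case).
    Returns the forward prediction-error sequence f_M(n)."""
    M = len(kappa)
    d = np.zeros(M)                        # d[m] stores b_m(n-1)
    f_out = np.empty(len(u))
    for n, x in enumerate(u):
        f = b = x                          # stage 0: f_0(n) = b_0(n) = u(n)
        for m in range(M):
            f_next = f + kappa[m] * d[m]   # f_{m+1}(n)
            b_next = d[m] + kappa[m] * f   # b_{m+1}(n)
            d[m] = b                       # save b_m(n) for the next sample
            f, b = f_next, b_next
        f_out[n] = f
    return f_out

kappa = [-0.5, 0.25]
u = np.random.default_rng(2).standard_normal(10)
# A_2(z) = [1, -0.625, 0.25] follows from these kappas (see lattice_to_tf above)
direct = lfilter([1.0, -0.625, 0.25], [1.0], u)
print(np.allclose(lattice_filter(kappa, u), direct))   # True
```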

Homework, Code & Report. Due: 9 May. https://engineering.purdue.edu/VISE/ee438L/lab9/pdf/lab9b.pdf