
Slide 1: A Prediction Problem
Problem: given a sample set of a stationary process, predict the value of the process some time into the future. The prediction function may be linear or non-linear; we concentrate only on linear prediction functions.

Slide 2: A Prediction Problem
Linear prediction dates back to Gauss in the 18th century. It is used extensively in DSP theory and applications (spectrum analysis, speech processing, radar, sonar, seismology, mobile telephony, financial systems, etc.). The difference between the predicted and the actual value at a specific point in time is called the prediction error.

Slide 3: A Prediction Problem
The objective of prediction is, given the data, to select the linear function that minimises the prediction error. The Wiener approach examined earlier may be cast into a predictive form in which the desired signal to be followed is the next sample of the given process.

Slide 4: Forward & Backward Prediction
If the next sample is predicted from the current and past samples, we have a one-step forward prediction. If the earliest sample in the data window is predicted from the samples that follow it, we have a one-step backward prediction.
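The defining equations on this slide were images and are missing from the transcript. In a commonly used notation (assumed here, not necessarily the author's), with predictor order p, the two cases can be written as:

```latex
\hat{x}(n)   = \sum_{k=1}^{p} w_k\, x(n-k)        \quad\text{(one-step forward prediction)}
\qquad
\hat{x}(n-p) = \sum_{k=1}^{p} g_k\, x(n-p+k)      \quad\text{(one-step backward prediction)}
```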

Slide 5: Forward Prediction Problem
The forward prediction error is then the difference between the actual sample and its predicted value. Writing the prediction equation in vector form, we minimise, as in the Wiener case, the second-order norm of the prediction error.

Slide 6: Forward Prediction Problem
Thus the solution accrues from minimising this cost. Expanding the cost and differentiating with respect to the weight vector, we obtain its gradient.

Slide 7: Forward Prediction Problem
However, the expectations that appear in the gradient are autocorrelation terms of the process, and hence the gradient can be expressed in terms of the autocorrelation sequence.

Slide 8: Forward Prediction Problem
On substituting the corresponding correlation sequences we obtain the gradient in terms of the autocorrelation quantities. Setting this expression to zero for minimisation yields the optimum predictor weights.
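The equations themselves did not survive the transcript; a standard form of this derivation, in the vector notation assumed above, is:

```latex
J(\mathbf{w}) = E\{e_f^2(n)\} = r(0) - 2\,\mathbf{w}^{T}\mathbf{r} + \mathbf{w}^{T}\mathbf{R}\,\mathbf{w},
\qquad
\frac{\partial J}{\partial \mathbf{w}} = -2\,\mathbf{r} + 2\,\mathbf{R}\,\mathbf{w} = \mathbf{0}
\;\Longrightarrow\;
\mathbf{R}\,\mathbf{w} = \mathbf{r},
```

where $\mathbf{R}$ is the $p \times p$ autocorrelation matrix with entries $r(i-j)$ and $\mathbf{r} = [\,r(1),\dots,r(p)\,]^{T}$.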

Slide 9: Forward Prediction Problem
These are the Normal Equations, or Wiener-Hopf, or Yule-Walker equations, structured for the one-step forward predictor. In this specific case it is clear that we need only know the autocorrelation properties of the given process to determine the predictor coefficients.
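As an illustration of this point, a minimal Python sketch (the function name and the biased autocorrelation estimator are my own choices, not taken from the slides) that estimates the autocorrelation from data and solves the normal equations directly:

```python
import numpy as np
from scipy.linalg import toeplitz

def forward_predictor(x, p):
    """Estimate the order-p forward predictor weights by forming biased
    autocorrelation estimates r(0..p) and solving R w = r directly."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = np.array([x[:N - m] @ x[m:] for m in range(p + 1)]) / N
    R = toeplitz(r[:p])              # p x p symmetric Toeplitz autocorrelation matrix
    w = np.linalg.solve(R, r[1:])    # normal equations R w = r
    return w, r
```

The Levinson-Durbin recursion introduced later exploits the Toeplitz structure of the autocorrelation matrix to obtain the same solution in O(p^2) operations rather than O(p^3).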

Slide 10: Forward Prediction Filter
Combining the minimum prediction-error power with the normal equations and rewriting the earlier expression in a single partitioned form, we obtain what are sometimes known as the augmented forward prediction normal equations.
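In the notation assumed above, the augmented form can be stated as:

```latex
\begin{bmatrix} r(0) & \mathbf{r}^{T} \\ \mathbf{r} & \mathbf{R} \end{bmatrix}
\begin{bmatrix} 1 \\ -\mathbf{w} \end{bmatrix}
=
\begin{bmatrix} P_f \\ \mathbf{0} \end{bmatrix},
\qquad
P_f = r(0) - \mathbf{w}^{T}\mathbf{r},
```

where $P_f$ is the minimum forward prediction-error power.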

Slide 11: Forward Prediction Filter
The prediction error is then obtained by passing the signal through an FIR filter, known as the prediction-error filter.
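In the same assumed notation, the error sequence and the corresponding FIR transfer function are:

```latex
e_f(n) = x(n) - \sum_{k=1}^{p} w_k\, x(n-k)
\quad\Longleftrightarrow\quad
A(z) = 1 - \sum_{k=1}^{p} w_k\, z^{-k}.
```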

Slide 12: Backward Prediction Problem
In a similar manner, for the backward prediction case we write the backward predictor and its error, where we assume for the moment that the backward predictor filter weights are different from those of the forward case.

Slide 13: Backward Prediction Problem
Thus, on comparing the forward and backward formulations with the Wiener least-squares conditions, we see that the desired signal is now the earliest sample in the data window. Hence the normal equations for the backward case can be written accordingly.

Slide 14: Backward Prediction Problem
This can be slightly rearranged. On comparing the result with the corresponding forward case, it is seen that the two have the same mathematical form, with the backward weight vector equal to the forward weight vector in reversed order.

Slide 15: Backward Prediction Filter
That is, the backward prediction filter has the same weights as the forward one, but in reversed order. This is a significant result from which many properties of efficient predictors ensue. Observe that the ratio of the backward prediction-error filter to the forward prediction-error filter is allpass. This yields the lattice predictor structures; more on this later.
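In the notation assumed above (with $A(z)$ the forward prediction-error filter), these two statements can be written as:

```latex
g_k = w_{p+1-k}, \qquad
B(z) = z^{-p} A(z^{-1}), \qquad
\left|\frac{B(e^{j\omega})}{A(e^{j\omega})}\right| = 1 \;\;\text{for all } \omega,
```

so that $B(z)/A(z)$ is an allpass function of order $p$.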

Slide 16: Levinson-Durbin Solution of the Normal Equations
The Durbin algorithm solves the normal equations recursively in the predictor order. The right-hand side is a column of autocorrelation values, exactly as in the normal equations, and we assume that the solution of the order-k problem is already available.

Slide 17: Levinson-Durbin
For the next iteration the normal equations can be written in partitioned form, using the k-order counteridentity (exchange) matrix, which reverses the order of the elements of any vector it multiplies.

Slide 18: Levinson-Durbin
Multiplying out the partitioned equations, and noting the Toeplitz symmetry of the autocorrelation matrix, we find that the first k elements of the new solution are adjusted versions of the previous solution.

Slide 19: Levinson-Durbin
The last element of the new solution then follows from the second of the partitioned equations.
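The update equations were not preserved in the transcript; a standard statement of the Durbin order update, consistent with the notation assumed earlier, is:

```latex
\begin{aligned}
k_m &= \frac{r(m) - \sum_{i=1}^{m-1} w_{m-1,i}\, r(m-i)}{P_{m-1}},\\
w_{m,i} &= w_{m-1,i} - k_m\, w_{m-1,\,m-i}, \qquad i = 1,\dots,m-1,\\
w_{m,m} &= k_m, \qquad P_m = P_{m-1}\,(1 - k_m^{2}), \qquad P_0 = r(0).
\end{aligned}
```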

Slide 20: Levinson-Durbin
The parameters introduced by each order update are known as the reflection coefficients. These are crucial from the signal processing point of view.
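A minimal Python sketch of the recursion above, for illustration only (function and variable names are my own; the slides give no code):

```python
import numpy as np

def levinson_durbin(r, p):
    """Durbin recursion: solve the forward-prediction normal equations
    for orders 1..p from autocorrelation lags r[0..p].  Returns the
    predictor weights w, the reflection coefficients k and the final
    prediction-error power."""
    r = np.asarray(r, dtype=float)
    w = np.zeros(p)
    k = np.zeros(p)
    err = r[0]
    for m in range(1, p + 1):
        # reflection coefficient for the order-m update
        k[m - 1] = (r[m] - w[:m - 1] @ r[m - 1:0:-1]) / err
        # Levinson order update: adjust the previous weights, append the new one
        w_prev = w[:m - 1].copy()
        w[:m - 1] = w_prev - k[m - 1] * w_prev[::-1]
        w[m - 1] = k[m - 1]
        # the prediction-error power shrinks by the factor (1 - k^2)
        err *= 1.0 - k[m - 1] ** 2
    return w, k, err

# quick check on a synthetic AR(2) process (illustrative only)
rng = np.random.default_rng(0)
x = np.zeros(20000)
v = rng.standard_normal(20000)
for n in range(2, len(x)):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + v[n]
r = np.array([x[:len(x) - m] @ x[m:] for m in range(3)]) / len(x)
w, k, err = levinson_durbin(r, 2)   # w comes out close to [0.75, -0.5]
```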

Slide 21: Levinson-Durbin
The Levinson algorithm solves the same kind of Toeplitz system but with a general right-hand side. In the same way as for Durbin, we keep track of the solutions to the lower-order problems.

Slide 22: Levinson-Durbin
Thus, assuming the order-k solutions to be known, at the next step we solve the corresponding problem of order k+1.

Slide 23: Levinson-Durbin
The required quantities again follow from the partitioned equations, in the same manner as in the Durbin recursion.

Slide 24: Lattice Predictors
Return to the lattice case. We write the ratio of the backward to the forward prediction-error filter as a transfer function.

Slide 25: Lattice Predictors
The above transfer function is allpass of order M. It can be thought of as the reflection coefficient of a cascade of lossless transmission lines, or acoustic tubes. In this sense it can furnish a simple algorithm for the estimation of the reflection coefficients. We start with the observation that the transfer function can be written in terms of another allpass filter embedded in a first-order allpass structure.

Slide 26: Lattice Predictors
This takes the form of a first-order allpass section containing an embedded allpass filter, whose coefficient is to be chosen so as to make the embedded filter of degree (M-1). From the above we can solve for the embedded filter.

Slide 27: Lattice Predictors
Solving for the embedded allpass, we find that for a reduction in the order the constant term in the numerator, which is also equal to the coefficient of the highest term in the denominator, must be zero.

Slide 28: Lattice Predictors
This requirement determines the embedding coefficient, which is the reflection coefficient, and leads to the lattice realisation structure.
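With the weight convention $A_M(z) = 1 - \sum_{k=1}^{M} w_{M,k}\, z^{-k}$ assumed in the earlier sketches (the slides' own sign convention is not recoverable), the embedding and the resulting order reduction take the form:

```latex
H_M(z) = \frac{z^{-M} A_M(z^{-1})}{A_M(z)}
       = \frac{z^{-1} H_{M-1}(z) - k_M}{\,1 - k_M\, z^{-1} H_{M-1}(z)\,},
\qquad k_M = w_{M,M},
```

where $H_{M-1}(z)$ is allpass of degree $M-1$ and the lower-order coefficients follow from the step-down relation $w_{M-1,i} = \dfrac{w_{M,i} + k_M\, w_{M,M-i}}{1 - k_M^{2}}$.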

Slide 29: Lattice Predictors
There are many rearrangements that can be made of this structure through the use of signal flow graphs. One such rearrangement is to reverse the direction of signal flow in the lower path; this yields the standard lattice structure found in several textbooks (viz. the inverse lattice). The lattice structure and the above development are intimately related to the Levinson-Durbin algorithm.

Slide 30: Lattice Predictors
The form of lattice presented here is not the usual approach to the Levinson algorithm, in that we have developed the inverse filter. Since the denominator of the allpass is also the denominator of the AR process, the procedure can be seen as a mapping from AR coefficients to a lattice structure. For the lattice-to-AR-coefficient mapping we follow the opposite route, i.e. we construct the allpass and read off its denominator.
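The two mappings can be sketched in Python as follows, an illustration under the same assumed weight convention (the function names are mine, not the slides'):

```python
import numpy as np

def reflection_to_weights(k):
    """Step-up (lattice to AR): build the predictor weights w, so that
    A(z) = 1 - sum_i w_i z^{-i}, from the reflection coefficients k."""
    w = np.array([], dtype=float)
    for m, km in enumerate(k, start=1):
        w_new = np.empty(m)
        w_new[:m - 1] = w - km * w[::-1]   # Levinson order update
        w_new[m - 1] = km
        w = w_new
    return w

def weights_to_reflection(w):
    """Step-down (AR to lattice): recover the reflection coefficients
    by running the Levinson order update in reverse."""
    w = np.asarray(w, dtype=float).copy()
    k = np.zeros(len(w))
    for m in range(len(w), 0, -1):
        km = w[m - 1]
        k[m - 1] = km
        if m > 1:
            w = (w[:m - 1] + km * w[m - 2::-1]) / (1.0 - km ** 2)
    return k

# round trip: reflection coefficients -> AR weights -> reflection coefficients
k = np.array([0.5, -0.3, 0.2])
print(weights_to_reflection(reflection_to_weights(k)))   # recovers [0.5, -0.3, 0.2]
```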

Slide 31: PSD Estimation
It is evident that if the prediction error is white, i.e. its PSD is flat, then the squared magnitude response of the prediction-error filter multiplied by the input PSD yields a constant, and therefore the input PSD is determined. Moreover, the inverse of the prediction-error filter gives us a means to generate the process as its output when the input is white noise.
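A minimal sketch of the resulting AR spectral estimate, under the same assumed notation, with the predictor weights and the final prediction-error power taken from the Levinson-Durbin sketch above:

```python
import numpy as np

def ar_psd(w, err_power, nfft=1024):
    """AR spectral estimate: the prediction-error filter A(z) = 1 - sum_k w_k z^{-k}
    whitens the process, so the input PSD is modelled as err_power / |A(e^{jw})|^2."""
    a = np.concatenate(([1.0], -np.asarray(w, dtype=float)))   # coefficients of A(z)
    A = np.fft.rfft(a, nfft)                                   # A evaluated on a frequency grid
    return err_power / np.abs(A) ** 2                          # one-sided PSD model (up to scaling)
```

Conversely, driving the all-pole filter 1/A(z) with white noise of power err_power regenerates a process with this spectrum, which is the synthesis interpretation given on the slide.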