
Digital Audio Signal Processing
Lecture-7: Kalman Filters for AEC/AFC/ANC/Noise Reduction
Marc Moonen
Dept. E.E./ESAT-STADIUS, KU Leuven
marc.moonen@kuleuven.be
www.esat.kuleuven.be/stadius/

Overview
Kalman Filters
- Introduction: Least Squares Parameter Estimation
- Standard Kalman Filter
- Square-Root Kalman Filter
DASP Applications
- AEC (and AFC/ANC/...)
- Noise Reduction (Lecture-3)

Introduction: Least Squares Parameter Estimation
In DSP-CIS, 'Least Squares' estimation was introduced as an alternative (based on observed data/signal samples) to optimal filter design (based on statistical information)...

Introduction: Least Squares Parameter Estimation
The 'Least Squares' approach is also used for parameter estimation in a linear regression model, where the problem statement is as follows...
Given...
- L vectors of input variables (='regressors') u_k, k=1..L
- L corresponding observations of a dependent variable d_k, k=1..L
and assume a linear regression/observation model
d_k = u_k^T w_0 + e_k
where w_0 is an unknown parameter vector (='regression coefficients') and e_k is unknown additive noise.
Then... the aim is to estimate w_0.
The Least Squares (LS) estimate is
w_LS = (U^T U)^{-1} U^T d
where U is the L x N matrix with rows u_k^T and d is the L x 1 vector of observations d_k.
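
As a quick illustration (a minimal sketch with made-up data, not part of the course material), the LS estimate can be computed as follows:

    import numpy as np

    # Hypothetical example: L = 100 observations, N = 4 regression coefficients
    rng = np.random.default_rng(0)
    L, N = 100, 4
    U = rng.standard_normal((L, N))       # rows are the regressor vectors u_k^T
    w0 = rng.standard_normal(N)           # 'true' (unknown) parameter vector
    d = U @ w0 + rng.standard_normal(L)   # observations with additive noise e

    # LS estimate w_LS = (U^T U)^{-1} U^T d; a least-squares solver is used
    # instead of an explicit matrix inverse for numerical robustness
    w_ls, *_ = np.linalg.lstsq(U, d, rcond=None)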

Introduction: Least Squares Parameter Estimation
If the input variables u_k are given/fixed (*) and the additive noise e is a random vector with zero mean, then the LS estimate is 'unbiased', i.e.
E{ w_LS } = w_0
If in addition the noise e has unit covariance matrix (E{ e e^T } = I), then the (estimation) error covariance matrix is
E{ (w_LS - w_0)(w_LS - w_0)^T } = (U^T U)^{-1}
(*) Input variables can also be random variables, possibly correlated with the additive noise, etc... Also the regression coefficients can be random variables, etc... etc... All this is not considered here.
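
These two properties can be checked numerically; a hypothetical Monte Carlo sketch (not from the slides):

    import numpy as np

    rng = np.random.default_rng(1)
    L, N, trials = 50, 3, 20000
    U = rng.standard_normal((L, N))            # fixed/given input variables
    w0 = np.array([1.0, -2.0, 0.5])            # true parameter vector
    pinvU = np.linalg.inv(U.T @ U) @ U.T       # (U^T U)^{-1} U^T

    W = np.empty((trials, N))
    for t in range(trials):
        e = rng.standard_normal(L)             # zero-mean, unit-covariance noise
        W[t] = pinvU @ (U @ w0 + e)            # LS estimate for this noise draw

    print(np.max(np.abs(W.mean(axis=0) - w0)))                   # ~0: unbiased
    print(np.max(np.abs(np.cov(W.T) - np.linalg.inv(U.T @ U))))  # ~0: error covariance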

Introduction: Least Squares Parameter Estimation
The Mean Square Error (MSE) of the estimation is
MSE = E{ ||w^ - w_0||^2 }
PS: This MSE is different from the one in Wiener filtering, check formulas.
Under the given assumptions, it can be shown that amongst all linear estimators, i.e. estimators of the form w^ = Z d + z (= linear function of d), the LS estimator (with Z = (U^T U)^{-1} U^T and z = 0) minimizes the MSE, i.e. it is the Linear Minimum MSE (MMSE) estimator.
Under the given assumptions, if furthermore e is a Gaussian distributed random vector, it can be shown that the LS estimator is also the ('general', i.e. not restricted to 'linear') MMSE estimator.
Optional reading: https://en.wikipedia.org/wiki/Minimum_mean_square_error

Introduction: Least Squares Parameter Estimation
PS-1: If the noise e is zero-mean with non-unit covariance matrix
E{ e e^T } = V = V^{1/2} V^{T/2}
where V^{1/2} is the lower triangular Cholesky factor ('square root'), the Linear MMSE estimator & error covariance matrix are
w^ = (U^T V^{-1} U)^{-1} U^T V^{-1} d,   E{ (w^ - w_0)(w^ - w_0)^T } = (U^T V^{-1} U)^{-1}
which corresponds to the LS estimator for the so-called pre-whitened observation model
V^{-1/2} d = V^{-1/2} U w_0 + V^{-1/2} e
where the additive noise V^{-1/2} e is indeed white.
Example: If V = σ^2 I then w^ = (U^T U)^{-1} U^T d with error covariance matrix σ^2 (U^T U)^{-1}.
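
A small numerical sketch (hypothetical V and data) of the pre-whitening interpretation; solving the whitened LS problem reproduces the weighted estimator:

    import numpy as np

    rng = np.random.default_rng(2)
    L, N = 40, 3
    U = rng.standard_normal((L, N))
    M = rng.standard_normal((L, L))
    V = M @ M.T + L * np.eye(L)              # some non-unit noise covariance
    Vc = np.linalg.cholesky(V)               # V = Vc Vc^T (lower triangular)
    d = U @ rng.standard_normal(N) + Vc @ rng.standard_normal(L)

    # Weighted (Linear MMSE) estimator (U^T V^-1 U)^-1 U^T V^-1 d
    Vi = np.linalg.inv(V)
    w_wls = np.linalg.solve(U.T @ Vi @ U, U.T @ Vi @ d)

    # Same estimate from the pre-whitened model V^-1/2 d = V^-1/2 U w0 + white noise
    Uw = np.linalg.solve(Vc, U)              # V^{-1/2} U
    dw = np.linalg.solve(Vc, d)              # V^{-1/2} d
    w_white, *_ = np.linalg.lstsq(Uw, dw, rcond=None)

    print(np.allclose(w_wls, w_white))       # True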

Introduction: Least Squares Parameter Estimation
PS-2: If an initial estimate w_init is available (e.g. from previous observations) with error covariance matrix
E{ (w_init - w_0)(w_init - w_0)^T } = P = P^{1/2} P^{T/2}
where P^{1/2} is the lower triangular Cholesky factor ('square root'), the Linear MMSE estimator & error covariance matrix are
w^ = (P^{-1} + U^T U)^{-1} (P^{-1} w_init + U^T d),   error covariance = (P^{-1} + U^T U)^{-1}
which corresponds to the LS estimator for the augmented/stacked model
[ d ; P^{-1/2} w_init ] = [ U ; P^{-1/2} ] w_0 + [ e ; P^{-1/2} (w_init - w_0) ]
Example: P^{-1} = 0 corresponds to ∞ variance of the initial estimate, i.e. back to p.7.
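
Similarly, a sketch (hypothetical prior and data) showing that the closed-form 'LS with prior' estimator equals plain LS on the stacked/pre-whitened model:

    import numpy as np

    rng = np.random.default_rng(3)
    L, N = 30, 3
    U = rng.standard_normal((L, N))
    d = U @ rng.standard_normal(N) + rng.standard_normal(L)
    w_init = rng.standard_normal(N)              # initial estimate
    P = 0.5 * np.eye(N)                          # its error covariance
    Pc = np.linalg.cholesky(P)                   # P^{1/2}

    # Closed form: (P^-1 + U^T U)^-1 (P^-1 w_init + U^T d)
    Pi = np.linalg.inv(P)
    w1 = np.linalg.solve(Pi + U.T @ U, Pi @ w_init + U.T @ d)

    # Plain LS on the stacked model [d ; P^-1/2 w_init] = [U ; P^-1/2] w0 + white noise
    Pw = np.linalg.inv(Pc)                       # P^{-1/2}
    Ua = np.vstack([U, Pw])
    da = np.concatenate([d, Pw @ w_init])
    w2, *_ = np.linalg.lstsq(Ua, da, rcond=None)

    print(np.allclose(w1, w2))                   # True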

Introduction: Least Squares Parameter Estimation
A Kalman Filter also solves a parameter estimation problem, but now the parameter vector is dynamic instead of static, i.e. it changes over time.
The time-evolution of the parameter vector is described by the 'state equation' of a state-space model, and the linear regression model of p.4 then corresponds to the 'output equation' of the state-space model (details in next slides...).
In the next slides, the general formulation of the (Standard) Kalman Filter is given. In p.16 it is seen how this relates to Least Squares estimation.
Kalman Filters are used everywhere! (aerospace, economics, manufacturing, instrumentation, weather forecasting, navigation, ...)
Rudolf Emil Kálmán (1930-2016)

Standard Kalman Filter
State-space model of a time-varying discrete-time system (PS: can also have multiple outputs):
x[k+1] = A[k] x[k] + B[k] u[k] + v[k]   (state equation)
y[k]   = C[k] x[k] + D[k] u[k] + w[k]   (output equation)
with u[k] the input signal vector, x[k] the state vector and y[k] the output signal, and where v[k] ('process noise') and w[k] ('measurement noise') are mutually uncorrelated, zero mean, white noises.
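
A minimal simulation sketch of this state-space model (all matrices, dimensions and noise levels below are hypothetical placeholders):

    import numpy as np

    rng = np.random.default_rng(4)
    n, m, K = 2, 1, 100                        # state dim, input dim, #samples
    A = np.array([[0.9, 0.1], [0.0, 0.8]])     # taken time-invariant for simplicity
    B = np.array([[1.0], [0.5]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.0]])
    Vcov = 0.01 * np.eye(n)                    # process noise covariance V
    Wcov = 0.1 * np.eye(1)                     # measurement noise covariance W

    x = np.zeros(n)
    for k in range(K):
        u = rng.standard_normal(m)                       # input signal vector
        w = rng.multivariate_normal(np.zeros(1), Wcov)   # measurement noise
        v = rng.multivariate_normal(np.zeros(n), Vcov)   # process noise
        y = C @ x + D @ u + w                            # output equation
        x = A @ x + B @ u + v                            # state equation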

Standard Kalman Filter
Example: IIR filter (direct-form realization with coefficients b0..b4 and a1..a4, states x1[k]..x4[k], input u[k] and output y[k]). There is no process/measurement noise here. In controller canonical form the state-space model reads
x[k+1] = [ -a1 -a2 -a3 -a4 ; 1 0 0 0 ; 0 1 0 0 ; 0 0 1 0 ] x[k] + [ 1 ; 0 ; 0 ; 0 ] u[k]
y[k]   = [ b1-b0*a1   b2-b0*a2   b3-b0*a3   b4-b0*a4 ] x[k] + b0 u[k]
[Figure: block diagram of the direct-form IIR filter]
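
For reference, such a state-space realization can be generated and verified as follows (a sketch with made-up coefficients; scipy's tf2ss returns the controller canonical form, which may differ from the block diagram only in the ordering/scaling of the states):

    import numpy as np
    from scipy.signal import tf2ss, lfilter

    b = np.array([0.2, 0.1, 0.05, 0.02, 0.01])   # b0..b4 (hypothetical)
    a = np.array([1.0, -0.5, 0.2, -0.1, 0.05])   # 1, a1..a4 (hypothetical)

    A, B, C, D = tf2ss(b, a)                     # controller canonical form

    # Check: simulating the state-space model matches direct-form filtering
    rng = np.random.default_rng(5)
    u = rng.standard_normal(50)
    x = np.zeros(4)
    y_ss = np.empty_like(u)
    for k in range(u.size):
        y_ss[k] = (C @ x + D.ravel() * u[k]).item()
        x = A @ x + B.ravel() * u[k]
    print(np.allclose(y_ss, lfilter(b, a, u)))   # True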

Standard Kalman Filter
State estimation problem:
Given... A[k], B[k], C[k], D[k], V[k], W[k], k=0,1,2,... and input/output observations u[k], y[k], k=0,1,2,...
Then... estimate the internal state vectors x[k], k=0,1,2,...

Standard Kalman Filter
Definition: x^_{k|l} = Linear MMSE estimate of x_k using all available data up until time l
'FILTERING' = estimate x^_{k|k}
'PREDICTION' = estimate x^_{k|k-1} (more generally x^_{k|l} with l < k)
'SMOOTHING' = estimate x^_{k|l} with l > k
PS: will use shorthand notation x_k, y_k, ... instead of x[k], y[k], ... from now on.

Standard Kalman Filter
The 'Standard Kalman Filter' (or 'Conventional Kalman Filter') operation @ time k (k=0,1,2,...) is as follows:
Given a prediction x^_{k|k-1} of the state vector @ time k based on previous observations (up to time k-1), with corresponding error covariance matrix P_{k|k-1}...
Step-1: Measurement Update = compute an improved (filtered) estimate x^_{k|k} based on the 'output equation' @ time k (= observation y[k]).
Step-2: Time Update = compute a prediction x^_{k+1|k} of the next state vector based on the 'state equation'.

Standard Kalman Filter
The 'Standard Kalman Filter' formulas are as follows (without proof):
Initialization: x^_{0|-1} = initial estimate, P_{0|-1} = its error covariance matrix P_0
For k=0,1,2,...
Step-1: Measurement Update
K_k = P_{k|k-1} C_k^T (C_k P_{k|k-1} C_k^T + W_k)^{-1}   (Kalman gain)
x^_{k|k} = x^_{k|k-1} + K_k (y_k - C_k x^_{k|k-1} - D_k u_k)   <- compare to standard RLS!
P_{k|k} = P_{k|k-1} - K_k C_k P_{k|k-1}
Step-2: Time Update
x^_{k+1|k} = A_k x^_{k|k} + B_k u_k   <- try to derive this from the state equation
P_{k+1|k} = A_k P_{k|k} A_k^T + V_k
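
These formulas translate almost line by line into code; a generic sketch (not the course's implementation):

    import numpy as np

    def kf_step(xhat, P, y, u, A, B, C, D, V, W):
        """One Standard Kalman Filter recursion.
        (xhat, P) = predicted estimate x^_{k|k-1} and its error covariance
        P_{k|k-1}; returns the next prediction and the filtered estimate."""
        # Step-1: Measurement Update
        S = C @ P @ C.T + W                       # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)            # Kalman gain K_k
        xf = xhat + K @ (y - C @ xhat - D @ u)    # filtered estimate x^_{k|k}
        Pf = P - K @ C @ P                        # P_{k|k}
        # Step-2: Time Update
        xnew = A @ xf + B @ u                     # prediction x^_{k+1|k}
        Pnew = A @ Pf @ A.T + V                   # P_{k+1|k}
        return xnew, Pnew, xf, Pf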

Standard Kalman Filter
PS: 'Standard RLS' is a special case of 'Standard KF': the internal state vector is the FIR filter coefficients vector, which is assumed to be time-invariant, i.e.
w_{k+1} = w_k   (state equation, with A_k = I and v_k = 0)
d_k = u_k^T w_k + e_k   (output equation)
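
With the kf_step sketch above, this special case looks as follows (hypothetical FIR length and signals); the resulting recursion is exactly standard RLS:

    import numpy as np

    N = 4                                    # FIR filter length (assumption)
    A, V = np.eye(N), np.zeros((N, N))       # time-invariant state: w_{k+1} = w_k
    B, D = np.zeros((N, 1)), np.zeros((1, 1))
    W = np.eye(1)                            # measurement noise variance
    what, P = np.zeros(N), 1e3 * np.eye(N)   # 'uninformative' initialization

    rng = np.random.default_rng(6)
    w_true = rng.standard_normal(N)          # unknown FIR coefficients
    x = rng.standard_normal(200)
    for k in range(N - 1, x.size):
        uk = x[k - N + 1:k + 1][::-1]        # FIR regressor vector u_k
        C = uk.reshape(1, N)                 # output equation: d_k = u_k^T w + e_k
        dk = np.array([uk @ w_true])         # noiseless desired signal here
        what, P, wf, Pf = kf_step(what, P, dk, np.zeros(1), A, B, C, D, V, W)
    print(np.allclose(what, w_true, atol=1e-3))   # converged to the FIR weights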

Standard Kalman Filter
PS: 'Standard RLS' is a special case of 'Standard KF'.
Standard RLS is not numerically stable (see DSP-CIS); hence (similarly) the Standard KF is not numerically stable (i.e. a finite precision implementation diverges from the infinite precision implementation).
We will therefore again derive an alternative Square-Root Algorithm, which can be shown to be numerically stable (i.e. the distance between the finite precision implementation and the infinite precision implementation remains bounded).

Square-Root Kalman Filter
The state estimation/prediction @ time k corresponds to a parameter estimation problem in a linear regression model (p.4), where the parameter vector contains all previous state vectors... (compare to p.7)

Square-Root Kalman Filter
Linear MMSE estimate of this stacked parameter vector [formula on the slide not recoverable here; lecturer note: 'explain subscript']

Square-Root Kalman Filter
The triangular factor & right-hand side are propagated from time k-1 to time k; hence only the lower-right/lower part is required! (explain)
This is then developed as follows...

Square-Root Kalman Filter
A recursive algorithm based on QR-updating can then be derived, which is given as follows: the triangular factor & right-hand side are propagated from time k to time k+1, and the state estimate follows by backsubstitution. [Slide shows the QR-updating scheme]
PS: QRD-RLS can be shown to be a special case of this.
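
The exact QR-updating scheme from the slide is not reproduced here; as an illustration of the idea, below is a sketch of a classical square-root ('array form') Kalman Filter recursion, in which one QR factorization per time step propagates a Cholesky factor of the error covariance instead of the covariance itself (notation is generic, not the course's):

    import numpy as np

    def sqrt_kf_step(xhat, S, y, u, A, B, C, D, Vc, Wc):
        """One square-root Kalman Filter recursion (array form).
        S = Cholesky factor of P_{k|k-1} (P = S S^T); Vc, Wc = Cholesky
        factors of the noise covariances V, W. Returns x^_{k+1|k} and the
        Cholesky factor of P_{k+1|k}."""
        n, p = xhat.size, y.size
        # Pre-array; an orthogonal transformation applied from the right
        # lower-triangularizes it (computed here via QR of the transpose)
        pre = np.block([[Wc,               C @ S, np.zeros((p, n))],
                        [np.zeros((n, p)), A @ S, Vc]])
        L = np.linalg.qr(pre.T, mode='r').T      # pre @ Theta = [L 0], L lower tri.
        Re_c = L[:p, :p]                         # sqrt of innovation covariance
        Kbar = L[p:, :p]                         # 'normalized' Kalman gain
        Snew = L[p:, p:]                         # sqrt of P_{k+1|k}
        xnew = A @ xhat + B @ u + Kbar @ np.linalg.solve(Re_c, y - C @ xhat - D @ u)
        return xnew, Snew

Because the factor S is propagated directly, P = S S^T remains symmetric and positive semi-definite by construction in finite precision, which is the source of the improved numerical behaviour.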

Overview
Kalman Filters
- Introduction: Least Squares Parameter Estimation
- Standard Kalman Filter
- Square-Root Kalman Filter
DASP Applications
- AEC (and AFC/ANC/...)
- Noise Reduction (Lecture-3)

Kalman Filter for AEC (and AFC/ANC/...)
Time-invariant echo path model:
w_{k+1} = w_k   (echo path assumed time-invariant)
y_k = x_k^T w_k + e_k
The echo path w_k is the regression/state vector, the far-end signal vector x_k takes the place of C[k] (!), and e[k] is near-end speech, noise, modeling error, ...
The Kalman Filter then reduces to (standard/QRD) RLS.

Kalman Filter for AEC (and AFC/ANC/...)
Time-varying echo path models:
- Random walk model: w_{k+1} = w_k + v_k
- 'Leaky' random walk model: w_{k+1} = α w_k + v_k (0 < α < 1)
- Frequency-domain version [slide formula not recoverable]
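
Plugging, e.g., the leaky random walk model into the kf_step sketch above gives a tracking echo canceller (the signals, leakage factor and noise levels below are hypothetical):

    import numpy as np

    N = 8                                        # echo path model length (assumption)
    alpha = 0.999                                # leakage factor (assumption)
    A = alpha * np.eye(N)                        # 'leaky' random walk: w_{k+1} = α w_k + v_k
    V = 1e-5 * np.eye(N)                         # process noise: echo path variation
    B, D = np.zeros((N, 1)), np.zeros((1, 1))
    W = 1e-2 * np.eye(1)                         # near-end noise variance
    what, P = np.zeros(N), np.eye(N)

    rng = np.random.default_rng(7)
    w_echo = rng.standard_normal(N)              # unknown echo path
    x = rng.standard_normal(1000)                # far-end (loudspeaker) signal
    for k in range(N - 1, x.size):
        xk = x[k - N + 1:k + 1][::-1]            # far-end regressor vector
        C = xk.reshape(1, N)                     # takes the place of C[k]
        y = np.array([xk @ w_echo + 0.1 * rng.standard_normal()])  # mic = echo + near-end
        what, P, wf, Pf = kf_step(what, P, y, np.zeros(1), A, B, C, D, V, W)
        e = y - C @ wf                           # echo-cancelled (error) signal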

Kalman Filter for AEC (and AFC/ANC/...)
Brain teaser: For which state-space model does the Kalman Filter reduce to exponentially weighted RLS?

Kalman Filter for Noise Reduction
Assume AR models of the speech and the noise:
s[k] = a_1 s[k-1] + ... + a_p s[k-p] + u[k]
n[k] = b_1 n[k-1] + ... + b_q n[k-q] + w[k]
y[k] = s[k] + n[k]   (y = microphone signal)
u[k], w[k] = zero mean, unit variance, white noises
The equivalent state-space model is... (next slide)

Kalman Filter for Noise Reduction
with the state transition matrix built from the AR coefficients (block-diagonal companion form) [slide formulas not recoverable]. s[k] and n[k] are included in the state vector, hence can be estimated by the Kalman Filter.
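
A sketch of how such a stacked state-space model can be assembled (companion-form blocks per AR model; the AR orders and coefficients below are placeholders). Running kf_step with these matrices then yields an estimate of s[k] (the first state) from the microphone signal y[k]:

    import numpy as np

    def ar_companion(c):
        """Companion-form state matrix for s[k] = c[0] s[k-1] + ... + c[p-1] s[k-p] + noise."""
        p = len(c)
        A = np.zeros((p, p))
        A[0, :] = c
        A[1:, :-1] = np.eye(p - 1)
        return A

    a = [1.2, -0.5]            # hypothetical speech AR(2) coefficients
    b = [0.8]                  # hypothetical noise AR(1) coefficient
    p, q = len(a), len(b)

    # State vector x[k] = [ s[k] .. s[k-p+1] | n[k] .. n[k-q+1] ]^T
    A_ss = np.block([[ar_companion(a), np.zeros((p, q))],
                     [np.zeros((q, p)), ar_companion(b)]])
    G = np.zeros((p + q, 2))   # driving noises u[k], w[k] enter the top of each block
    G[0, 0] = 1.0
    G[p, 1] = 1.0
    V_ss = G @ G.T             # process noise covariance (u, w unit variance, uncorrelated)
    C_ss = np.zeros((1, p + q))
    C_ss[0, 0] = 1.0           # y[k] = s[k] + n[k]
    C_ss[0, p] = 1.0
    W_ss = np.zeros((1, 1))    # no separate measurement noise in this model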

Kalman Filter for Noise Reduction
Iterative algorithm (details omitted): split the signal y[k] in frames; per frame, estimate the (AR) parameters, run the Kalman Filter and reconstruct the signal; iterate.
Disadvantages of the iterative approach: complexity, delay.

Kalman Filter for Noise Reduction
Sequential algorithm (details omitted): the parameter estimator and the state estimator (Kalman Filter) operate jointly over the time index, without iterations. [Figure: parameter estimator feeding the state estimator, with delay D in the feedback loop]