CHAPTER 3 RECURSIVE ESTIMATION FOR LINEAR MODELS


CHAPTER 3 RECURSIVE ESTIMATION FOR LINEAR MODELS
Slides for Introduction to Stochastic Search and Optimization (ISSO) by J. C. Spall

Organization of chapter in ISSO:
- Linear models
- Relationship between least-squares and mean-square estimation
- LMS and RLS estimation
- Applications in adaptive control
- LMS, RLS, and Kalman filter for a time-varying solution
- Case study: oboe reed data

Basic Linear Model
- Consider estimation of a vector θ in a model that is linear in θ
- Model has the classical linear form z_k = h_k^T θ + v_k, where z_k is the kth measurement, h_k is the corresponding "design vector," and v_k is an unknown noise value
- Model is used extensively in control, statistics, signal processing, etc.
- Many estimation/optimization criteria are based on "squared-error"-type loss functions
- Such criteria are quadratic in θ and lead to a unique (global) estimate of θ

Least-Squares Estimation
- Most common method for estimating θ in the linear model is the method of least squares
- Criterion (loss function) is the sum of squared residuals, L(θ) = (Z_n - H_n θ)^T (Z_n - H_n θ), where Z_n = [z_1, z_2, ..., z_n]^T and H_n is the n × p matrix formed by stacking the h_k^T row vectors
- Classical batch least-squares estimate is θ̂ = (H_n^T H_n)^{-1} H_n^T Z_n
- Popular recursive estimates (LMS, RLS, Kalman filter) may be derived from the batch estimate
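A minimal sketch of the batch least-squares estimate above on synthetic data follows; the dimensions, noise level, and variable names (theta_true, H, Z, etc.) are illustrative choices, not values taken from ISSO.

```python
# Batch least-squares on synthetic data from the linear model z_k = h_k^T theta + v_k.
import numpy as np

rng = np.random.default_rng(0)
n, p = 160, 7                       # number of measurements and parameter dimension
theta_true = rng.normal(size=p)     # "true" parameter vector (for this demo only)

H = rng.normal(size=(n, p))         # n x p matrix whose rows are the h_k^T design vectors
v = 0.1 * rng.normal(size=n)        # unknown noise values v_k
Z = H @ theta_true + v              # stacked measurements Z_n

# Classical batch estimate theta_hat = (H^T H)^{-1} H^T Z; lstsq is the
# numerically stable way to compute the same quantity.
theta_batch, *_ = np.linalg.lstsq(H, Z, rcond=None)
print("batch estimation error:", np.linalg.norm(theta_batch - theta_true))
```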

Geometric Interpretation of Least-Squares Estimate when p = 2 and n = 3

Recursive Estimation
- Batch form is not convenient in many applications (e.g., data arrive over time and we want an "easy" way to update the estimate at time k into the estimate at time k+1)
- Least-mean-squares (LMS) method is a very popular recursive method; it is the stochastic analogue of the steepest descent algorithm
- LMS recursion: θ̂_{k+1} = θ̂_k + a h_{k+1} (z_{k+1} - h_{k+1}^T θ̂_k), where a > 0 is a (small) step-size gain
- Convergence theory is based on stochastic approximation (e.g., Ljung et al., 1992; Gerencsér, 1995)
- Less rigorous theory is based on connections to steepest descent, ignoring the noise (Widrow and Stearns, 1985; Haykin, 1996)
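A minimal LMS sketch for the same model follows, reusing the synthetic (H, Z) data from the batch example; the constant step size a = 0.01 is a hand-picked illustrative value, not one prescribed in ISSO.

```python
# LMS: a stochastic (noisy) analogue of steepest descent for the linear model.
import numpy as np

def lms(H, Z, a=0.01, theta0=None):
    """Run theta_{k+1} = theta_k + a * h_{k+1} * (z_{k+1} - h_{k+1}^T theta_k) over the data."""
    n, p = H.shape
    theta = np.zeros(p) if theta0 is None else np.asarray(theta0, dtype=float).copy()
    for k in range(n):
        h, z = H[k], Z[k]
        theta = theta + a * h * (z - h @ theta)   # correction proportional to the prediction error
    return theta

# Example (H, Z as in the batch least-squares sketch):
# theta_lms = lms(H, Z)
```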

LMS in Closed-Loop Control
- Suppose the process is modeled according to an autoregressive (AR) form, where x_k represents the state, β and the α_i are unknown parameters, u_k is the control, and w_k is the noise
- Let the target ("desired") value for x_k be d_k
- The optimal control law is known; it minimizes the mean-square tracking error
- The certainty equivalence principle justifies substituting parameter estimates for the unknown true parameters
- LMS is used to estimate β and the α_i in closed-loop mode
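Below is a hedged sketch of LMS estimation inside a certainty-equivalence control loop for a first-order AR process of the assumed form x_{k+1} = beta + alpha*x_k + u_k + w_k; this specific model form, the target value, the step size, and the noise level are illustrative assumptions and may differ in detail from the setup in ISSO.

```python
# Certainty-equivalence control with LMS parameter estimation (first-order AR sketch).
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.7, 0.5              # true parameters, unknown to the controller
a = 0.05                            # LMS step size (assumed)
theta_hat = np.zeros(2)             # running estimates of [beta, alpha]
x, d = 0.0, 1.0                     # state and constant target ("desired") value

for k in range(2000):
    beta_hat, alpha_hat = theta_hat
    # Certainty equivalence: plug the current estimates into the optimal control law,
    # which would give E[x_{k+1}] = d if the estimates were exact.
    u = d - beta_hat - alpha_hat * x
    x_next = beta + alpha * x + u + 0.1 * rng.normal()
    # LMS update: the "measurement" is x_{k+1} - u_k and the regressor is [1, x_k].
    h = np.array([1.0, x])
    theta_hat = theta_hat + a * h * ((x_next - u) - h @ theta_hat)
    x = x_next

print("estimated [beta, alpha]:", theta_hat)
```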

LMS in Closed-Loop Control for First-Order AR Model

Recursive Least Squares (RLS)
- An alternative to LMS is RLS
- Recall LMS is the stochastic analogue of steepest descent (a "first-order" method)
- RLS is the stochastic analogue of Newton-Raphson (a "second-order" method), so it typically converges faster than LMS in practice
- RLS algorithm (two coupled recursions):
  P_{k+1} = P_k - P_k h_{k+1} h_{k+1}^T P_k / (1 + h_{k+1}^T P_k h_{k+1})
  θ̂_{k+1} = θ̂_k + P_{k+1} h_{k+1} (z_{k+1} - h_{k+1}^T θ̂_k)
- Need P_0 and θ̂_0 to initialize the RLS recursions
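A minimal sketch of the two RLS recursions follows, again reusing the synthetic (H, Z) data from the batch example; the initialization P_0 = c*I with a large c and theta_0 = 0 is a common but not uniquely specified choice.

```python
# RLS: a stochastic analogue of Newton-Raphson for the linear model.
import numpy as np

def rls(H, Z, c=1000.0, theta0=None):
    """Run the coupled recursions for P_{k+1} and theta_{k+1} over the data."""
    n, p = H.shape
    theta = np.zeros(p) if theta0 is None else np.asarray(theta0, dtype=float).copy()
    P = c * np.eye(p)                              # P_0: "large" to reflect a vague prior
    for k in range(n):
        h, z = H[k], Z[k]
        Ph = P @ h
        P = P - np.outer(Ph, Ph) / (1.0 + h @ Ph)  # P_{k+1} recursion
        theta = theta + P @ h * (z - h @ theta)    # theta_{k+1} recursion
    return theta

# Example (H, Z as in the batch least-squares sketch):
# theta_rls = rls(H, Z)
```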

Recursive Methods for Estimation of Time-Varying Parameters
- It is common for the underlying true θ to evolve in time (e.g., target tracking, adaptive control, sequential experimental design, etc.)
- Time-varying parameters mean θ is replaced with θ_k
- Consider the modified linear model z_k = h_k^T θ_k + v_k
- The prototype recursive form for estimating θ_k updates the previous estimate with a gain-weighted correction; the choice of A_k and the other gain term depends on the specific algorithm

Three Important Algorithms for Estimation of Time-Varying Parameters
- LMS
  - Goal is to minimize an instantaneous squared-error criterion across iterations
  - General form for the evolution of the true parameters θ_k
- RLS
  - Goal is to minimize a weighted sum of squared errors
  - The sum criterion creates "inertia" not present in LMS
  - General form for the evolution of θ_k
- Kalman filter
  - Minimizes an instantaneous squared-error criterion
  - Requires a precise statistical description of the evolution of θ_k via a state-space model
- Details for the above algorithms in terms of the prototype algorithm (previous slide) are in Section 3.3 of ISSO
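As an illustration of the Kalman-filter option, the sketch below tracks a time-varying theta_k under an assumed random-walk state-space model theta_{k+1} = theta_k + w_k with scalar measurements z_k = h_k^T theta_k + v_k; the noise covariances Q and R are taken as known here, and the general treatment (including the LMS and RLS variants of the prototype form) is in Section 3.3 of ISSO.

```python
# Kalman filter tracking of a time-varying parameter vector theta_k
# under a random-walk evolution model (an assumption for this sketch).
import numpy as np

def kalman_track(H, Z, Q, R, theta0, P0):
    theta = np.asarray(theta0, dtype=float).copy()
    P = np.asarray(P0, dtype=float).copy()
    estimates = []
    for h, z in zip(H, Z):
        P = P + Q                              # time update for theta_{k+1} = theta_k + w_k
        K = P @ h / (h @ P @ h + R)            # Kalman gain for the scalar measurement
        theta = theta + K * (z - h @ theta)    # measurement update
        P = P - np.outer(K, h) @ P
        estimates.append(theta.copy())
    return np.array(estimates)

# Small demo with a slowly drifting 2-dimensional theta_k (illustrative values).
rng = np.random.default_rng(2)
p, n, q, r = 2, 500, 1e-4, 0.01
theta_k, H, Z = np.zeros(p), [], []
for _ in range(n):
    theta_k = theta_k + np.sqrt(q) * rng.normal(size=p)    # random-walk evolution
    h = rng.normal(size=p)
    H.append(h)
    Z.append(h @ theta_k + np.sqrt(r) * rng.normal())
est = kalman_track(np.array(H), np.array(Z), q * np.eye(p), r, np.zeros(p), np.eye(p))
```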

Case Study: LMS and RLS with Oboe Reed Data
"…an ill wind that nobody blows good." (comedian Danny Kaye, speaking of the oboe in "The Secret Life of Walter Mitty," 1947)
- Section 3.4 of ISSO reports on linear and curvilinear models for predicting the quality of oboe reeds
- The linear model has 7 parameters; the curvilinear model has 4 parameters
- This study compares LMS and RLS with batch least-squares estimates
- 160 data points are used for fitting the models (reeddata-fit); 80 independent data points are used for testing the models (reeddata-test)
- The reeddata-fit and reeddata-test data sets are available from the ISSO Web site

Oboe with Attached Reed

Comparison of Fitting Results for reeddata-fit and reeddata-test
- To test the similarity of the fit and test data sets, model fitting was also performed using the test data set
- This comparison checks the consistency of the two data sets; it is not a check of the accuracy of the LMS or RLS estimates
- Model fits were compared for the parameters in:
  - the basic linear model (eqn. (3.25) in ISSO, p = 7)
  - the curvilinear model (eqn. (3.26) in ISSO, p = 4)
- Results on the next slide are for the basic linear model

Comparison of Batch Parameter Estimates for Basic Linear Model (Approximate 95% Confidence Intervals Shown in [·, ·])

Comparison of Batch and RLS with Oboe Reed Data
- Batch and RLS estimates were compared using the 160 data points in reeddata-fit for fitting and the 80 data points in reeddata-test for testing the models
- Two slides follow with the results:
  - the first compares parameter estimates in the pure linear model
  - the second compares prediction errors for the linear and curvilinear models
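The sketch below shows one way such a batch vs. RLS comparison could be run, assuming (purely for illustration) that reeddata-fit and reeddata-test are whitespace-delimited text files whose last column is the reed-quality response and whose remaining columns are regressors, and reusing the rls routine from the earlier sketch; the actual file layout on the ISSO Web site may differ.

```python
# Hedged sketch: fit by batch least squares and by RLS on reeddata-fit,
# then compare mean/median absolute prediction errors on reeddata-test.
import numpy as np

def load(fname):
    data = np.loadtxt(fname)                           # assumed whitespace-delimited layout
    X, z = data[:, :-1], data[:, -1]                   # assumed: last column is the response
    return np.column_stack([np.ones(len(z)), X]), z    # prepend an intercept column

H_fit, z_fit = load("reeddata-fit")
H_test, z_test = load("reeddata-test")

theta_batch, *_ = np.linalg.lstsq(H_fit, z_fit, rcond=None)
theta_rls = rls(H_fit, z_fit)                          # rls() from the RLS sketch above

for name, th in [("batch", theta_batch), ("RLS", theta_rls)]:
    err = np.abs(z_test - H_test @ th)
    print(f"{name}: mean |error| = {err.mean():.3f}, median |error| = {np.median(err):.3f}")
```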

Batch and RLS Parameter Estimates for Basic Linear Model (Data from reeddata-fit)

Mean and Median Absolute Prediction Errors for the Linear and Curvilinear Models (Model fits from reeddata-fit; Prediction Errors from reeddata-test)