ELG5377 Adaptive Signal Processing Lecture 13: Method of Least Squares.

Introduction
Given a sequence of observations x(1), x(2), …, x(N) occurring at times t_1, t_2, …, t_N:
–The requirement is to construct a curve that fits these points in some optimum fashion.
–Denote this curve by f(t_i).
–The objective is to minimize the sum of squared differences between f(t_i) and x(i): J = Σ_i (f(t_i) − x(i))².
The method of least squares can be viewed as an alternative to Wiener filters:
–Wiener filters are based on ensemble averages.
–The method of least squares (MLS) is deterministic in approach and is based on time averages.
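
As a concrete illustration of this deterministic approach (not part of the original slides; the data values are invented), the following Python sketch fits a straight line f(t) = a·t + b by minimizing J:

import numpy as np

# Hypothetical data: observations x(i) taken at times t_1, ..., t_N.
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

# Fit f(t) = a*t + b by minimizing J = sum_i (f(t_i) - x(i))^2.
A = np.column_stack([t, np.ones_like(t)])      # design matrix, rows [t_i, 1]
(a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
J = np.sum((a * t + b - x) ** 2)               # minimized sum of squared errors
print(f"a = {a:.3f}, b = {b:.3f}, J = {J:.4f}")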

Statement of the Linear Least-Squares Estimation Problem
Consider a physical phenomenon that is characterized by two sets of variables, d(i) and x(i). The variable d(i) is observed at time t_i in response to the subset of variables x(i), x(i−1), …, x(i−M+1). The response d(i) is modelled by
d(i) = Σ_{k=0}^{M−1} w_ok* x(i−k) + e_o(i),   (1)
–where the w_ok are unknown parameters of the model and e_o(i) represents a measurement error.
–The measurement error is an unobservable random variable that is introduced into the model to account for its inaccuracy.
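
A short sketch (illustrative only; the parameter values are assumed, not from the lecture) that generates d(i) according to model (1) with real-valued data:

import numpy as np

rng = np.random.default_rng(0)
M = 3
w_o = np.array([0.8, -0.4, 0.2])   # assumed model parameters w_ok (real-valued)
N = 100
x = rng.standard_normal(N)         # input time series x(1), ..., x(N)
sigma = 0.1                        # standard deviation of the measurement error

# d(i) = sum_k w_ok x(i-k) + e_o(i); np.convolve implements the sum over k,
# implicitly taking x(i) = 0 for i < 1 in the first M-1 output samples.
d = np.convolve(x, w_o)[:N] + sigma * rng.standard_normal(N)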

Statement of the Linear Least-Squares Estimation Problem 2
It is customary to assume that the measurement error is white with zero mean and variance σ². The implication of this assumption is that
E[d(i)] = Σ_{k=0}^{M−1} w_ok* x(i−k),
–where the values of x(i), x(i−1), …, x(i−M+1) are all known.
–Hence the mean of the response d(i) is uniquely determined by the model.

Statement of the Linear Least-Squares Estimation Problem 3
The problem we have to solve is to estimate the unknown parameters w_ok of the multiple linear regression model of (1), given the two observable sets of variables x(i) and d(i) for i = 1, 2, …, N. To do this, we use the linear transversal filter shown below as the model of interest, whose output is y(i), and we use d(i) as the desired response.
[Figure: transversal filter with tap inputs x(i), x(i−1), …, x(i−M+1), tap weights w_0*, w_1*, …, w_{M−1}*, output y(i), and error e(i) = d(i) − y(i).]
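
In code form, the output and error of the filter in the figure can be written as below (a minimal sketch; the array indexing conventions are my own):

import numpy as np

def filter_error(w, x, d, i):
    """Error e(i) = d(i) - y(i), with y(i) = sum_k w_k^* x(i-k).
    Assumes i >= M; arrays are 0-based, so x[i-1] holds x(i) and
    d[i-1] holds d(i)."""
    M = len(w)
    taps = x[i - M: i][::-1]              # tap-input vector [x(i), ..., x(i-M+1)]
    return d[i - 1] - np.vdot(w, taps)    # vdot conjugates its first argument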

Statement of the Linear Least-Squares Estimation Problem 4
With the estimation error defined as e(i) = d(i) − y(i) = d(i) − Σ_{k=0}^{M−1} w_k* x(i−k), the cost function is
J = Σ_{i=i_1}^{i_2} |e(i)|².
We must choose the tap weights so as to minimize this cost function.

Data Windowing
Typically, we are given data for i = 1 to N, where N > M. It is only from time i = M onward that e(i) is a function of known data alone; for i > N, the expression for e(i) involves unknown data.
Covariance method:
–No assumptions are made about the unknown data; therefore i_1 = M and i_2 = N.
Autocorrelation method:
–i_1 = 1 and i_2 = N+M−1. We assume that x(i) = 0 for i < 1 and for i > N (prewindowing and postwindowing).
We will use the covariance method, as in the helper sketched below.
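
The two windowing conventions can be made concrete by building the matrix whose rows are the tap-input vectors (a sketch, not from the slides):

import numpy as np

def data_matrix(x, M, method="covariance"):
    """Rows are the tap-input vectors [x(i), x(i-1), ..., x(i-M+1)].
    covariance:      i = M, ..., N (no assumption about unknown data)
    autocorrelation: i = 1, ..., N+M-1, taking x(i) = 0 for i < 1 and
                     for i > N (prewindowing and postwindowing)."""
    x = np.asarray(x, dtype=float)
    if method == "autocorrelation":
        x = np.concatenate([np.zeros(M - 1), x, np.zeros(M - 1)])
    return np.array([x[j - M + 1: j + 1][::-1] for j in range(M - 1, len(x))])

# data_matrix([2, 1, -0.5, 1.2], M=2) -> rows [1, 2], [-0.5, 1], [1.2, -0.5]
# (the tap-input vectors for i = 2, 3, 4 under the covariance method).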

Orthogonality Principle Revisited
Setting the derivative of the cost function J with respect to each tap weight to zero yields the condition
Σ_{i=i_1}^{i_2} x(i−k) e*(i) = 0,  k = 0, 1, …, M−1.   (2)

Orthogonality Principle Revisited 2
Let e(i) = e_min(i) when we select w_0, w_1, …, w_{M−1} such that J is minimized. Then, from (2),
Σ_{i=i_1}^{i_2} x(i−k) e_min*(i) = 0,  k = 0, 1, …, M−1.
–The minimum-error time series e_min(i) is orthogonal to the time series x(i−k) applied to tap k of a transversal filter of length M, for k = 0, 1, …, M−1, when the filter is operating in its least-squares condition.
Let y_min(i) be the output of the filter when it is operating in its least-squares condition. Then d(i) = y_min(i) + e_min(i).
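
A quick numerical check of the orthogonality principle (random data used purely for illustration):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 4))           # rows: tap-input vectors, M = 4
d = rng.standard_normal(50)                # desired response
w, *_ = np.linalg.lstsq(A, d, rcond=None)  # least-squares tap weights
e_min = d - A @ w                          # minimum-error time series
print(A.T @ e_min)                         # ~ zero vector (up to round-off)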

Energy of the Desired Response
Since e_min(i) is orthogonal to x(i−k) for k = 0, 1, …, M−1, it is also orthogonal to y_min(i), which is a linear combination of those tap inputs. The energy of the desired response therefore decomposes as
Σ_{i=i_1}^{i_2} |d(i)|² = Σ_{i=i_1}^{i_2} |y_min(i)|² + Σ_{i=i_1}^{i_2} |e_min(i)|².
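
The decomposition can likewise be checked numerically (again with illustrative random data):

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 4))
d = rng.standard_normal(50)
w, *_ = np.linalg.lstsq(A, d, rcond=None)
y_min = A @ w
e_min = d - y_min
# The two printed energies agree to round-off error.
print(np.sum(d**2), np.sum(y_min**2) + np.sum(e_min**2))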

Normal Equations and Linear Least-Squares Filters
Let the filter described by the tap weights w_0, w_1, …, w_{M−1} be operating in its least-squares condition. Therefore
Σ_{i=i_1}^{i_2} x(i−k) e_min*(i) = 0,  k = 0, 1, …, M−1.

Normal Equations and Linear Least-Squares Filters 2
Substituting e_min(i) = d(i) − Σ_{t=0}^{M−1} w_t* x(i−t) into this condition and rearranging gives the normal equations
Σ_{t=0}^{M−1} w_t φ(t, k) = z(k),  k = 0, 1, …, M−1,
–where φ(t, k) = Σ_{i=i_1}^{i_2} x(i−k) x*(i−t) is the time-averaged autocorrelation of the tap inputs, and
–z(k) = Σ_{i=i_1}^{i_2} x(i−k) d*(i) is the time-averaged cross-correlation between the tap inputs and the desired response.

Normal Equations and Linear Least-Squares Filters: Matrix Formulation
Let Φ be the M×M matrix with elements φ(t, k) and z the M×1 vector with elements z(k). In matrix form the normal equations read
Φw = z,
so that, when Φ is nonsingular, the least-squares tap-weight vector is w = Φ⁻¹z.
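
Putting the pieces together, a sketch of a covariance-method least-squares filter solver for real-valued data (the function name and interface are my own):

import numpy as np

def ls_filter(x, d, M):
    """Solve the normal equations Phi w = z (covariance method, real data).
    x holds x(1), ..., x(N); d holds d(M), ..., d(N)."""
    x = np.asarray(x, dtype=float)
    A = np.array([x[j - M + 1: j + 1][::-1] for j in range(M - 1, len(x))])
    Phi = A.T @ A                          # elements phi(t, k)
    z = A.T @ np.asarray(d, dtype=float)   # elements z(k)
    return np.linalg.solve(Phi, z)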

Example
x(1) = 2, x(2) = 1, x(3) = −0.5, x(4) = 1.2
d(2) = 0.5, d(3) = 1, d(4) = 0
Find the two-tap LS filter.
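
With the covariance method and M = 2, the error is defined for i = 2, 3, 4. A sketch of the computation (the intermediate values in the comments follow directly from the given data):

import numpy as np

# Rows of the data matrix are [x(i), x(i-1)] for i = 2, 3, 4.
A = np.array([[ 1.0,  2.0],
              [-0.5,  1.0],
              [ 1.2, -0.5]])
d = np.array([0.5, 1.0, 0.0])        # d(2), d(3), d(4)

Phi = A.T @ A                        # [[2.69, 0.90], [0.90, 5.25]]
z = A.T @ d                          # [0.0, 2.0]
w = np.linalg.solve(Phi, z)
print(w)                             # approximately [-0.135, 0.404]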