Slide 1: Method of Least Squares
ELE 774 - Adaptive Signal Processing
Slide 2: Method of Least Squares, a Deterministic Approach
- The inputs u(1), u(2), ..., u(N) are applied to the system.
- The outputs y(1), y(2), ..., y(N) are observed.
- Find a model f(n, u(n)) that fits the input-output relation to a (linear?) curve.
- The 'best' fit is obtained by minimising the sum of the squares of the differences f - y.
Slide 3: Curve Fitting
The curve fitting problem can be formulated as follows. Given the observations y(i) and a model f(i, u(i)) in the variable u(i):
- Error: $e(i) = y(i) - f(i, u(i))$
- Sum of error squares: $\mathcal{E} = \sum_{i=1}^{N} |e(i)|^2$
- The minimum (least squares of the error) is achieved when the gradient of $\mathcal{E}$ with respect to the model parameters is zero.
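As a concrete illustration (not from the slides), here is a minimal NumPy sketch that fits a line f(u) = a*u + b to noisy observations by minimising the sum of error squares; the data and coefficients are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.linspace(0.0, 1.0, 50)                            # inputs u(1..N)
y = 2.0 * u + 0.5 + 0.1 * rng.standard_normal(u.size)   # noisy observations

# Linear model f(u) = a*u + b written as F @ [a, b]
F = np.column_stack([u, np.ones_like(u)])

# lstsq minimises ||F @ params - y||^2, the sum of error squares
params, residual, rank, sv = np.linalg.lstsq(F, y, rcond=None)
a, b = params
print(f"a = {a:.3f}, b = {b:.3f}, sum of squared errors = {residual[0]:.4f}")
```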
Slide 4: Problem Statement
- For the inputs to the system u(i), the observed desired response is d(i).
- The relation is assumed to be linear: $d(i) = \sum_{k=0}^{M-1} w_k^* u(i-k) + e_o(i)$.
- $e_o(i)$ is an unobservable measurement error, assumed zero mean and white.
- Since u(i) and d(i) are given, measured time series (no ensemble statistics are invoked), the approach is deterministic.
Slide 5: Problem Statement (cont.)
Design a transversal filter which finds the least-squares solution. The filter output and the estimation error are
$\hat d(i) = \sum_{k=0}^{M-1} w_k^* u(i-k), \qquad e(i) = d(i) - \hat d(i).$
Then the sum of error squares is
$\mathcal{E}(w_0, \dots, w_{M-1}) = \sum_{i=i_1}^{i_2} |e(i)|^2.$
Slide 6: Data Windowing
We express the input in matrix form. Depending on the limits $i_1$ and $i_2$, this data matrix changes:
- Covariance method: $i_1 = M$, $i_2 = N$
- Prewindowing method: $i_1 = 1$, $i_2 = N$
- Postwindowing method: $i_1 = M$, $i_2 = N + M - 1$
- Autocorrelation method: $i_1 = 1$, $i_2 = N + M - 1$
The pre/postwindowing and autocorrelation methods implicitly assume the data are zero outside the observation interval 1 ≤ i ≤ N; the covariance method makes no such assumption. A sketch of how the data matrix is built is given below.
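A minimal sketch (a hypothetical helper, not from the slides) that builds the covariance-method data matrix for an M-tap filter; each row holds the tap-input vector [u(i), u(i-1), ..., u(i-M+1)] for i = M, ..., N:

```python
import numpy as np

def data_matrix_covariance(u, M):
    """Covariance-method data matrix: rows are the tap-input vectors
    [u(i), u(i-1), ..., u(i-M+1)] for i = M, ..., N (1-based time index),
    so only fully observed samples are used (no zero padding)."""
    u = np.asarray(u)
    N = u.size
    rows = []
    for i in range(M, N + 1):            # 1-based time index i
        rows.append(u[i - M : i][::-1])  # u(i) down to u(i-M+1)
    return np.array(rows)

u = np.arange(1.0, 7.0)                  # u(1..6) = 1, 2, ..., 6
A = data_matrix_covariance(u, M=2)       # shape (N-M+1, M) = (5, 2)
print(A)                                 # rows: [2,1], [3,2], [4,3], [5,4], [6,5]
```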
Slide 7: Principle of Orthogonality
Error signal: $e(i) = d(i) - \sum_{k=0}^{M-1} w_k^* u(i-k)$.
Least squares (the minimum of the sum of squares) is achieved when the gradient is zero, i.e., when
$\sum_{i=i_1}^{i_2} u(i-k)\, e_{\min}^*(i) = 0, \qquad k = 0, 1, \dots, M-1.$
Principle of orthogonality: the minimum-error time series $e_{\min}(i)$ is orthogonal to the time series of the input u(i-k) applied to tap k of a transversal filter of length M, for k = 0, 1, ..., M-1, when the filter is operating in its least-squares condition.
Note the time averaging: for Wiener filtering, the corresponding orthogonality condition was an ensemble average.
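A quick numerical check of the principle on illustrative data: after solving the LS problem, each column of the data matrix A (each tap-input time series) is orthogonal to the minimum error vector:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))   # data matrix (rows: tap-input vectors)
d = rng.standard_normal(20)        # desired response

w, *_ = np.linalg.lstsq(A, d, rcond=None)
e_min = d - A @ w                  # minimum estimation error time series

# Principle of orthogonality: A^H e_min = 0 (up to round-off)
print(A.T @ e_min)                 # ~[0, 0, 0]
print(np.allclose(A.T @ e_min, 0.0, atol=1e-10))
```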
Slide 8: Corollary of the Principle of Orthogonality
The LS estimate of the desired response is $\hat d(i) = \sum_{k=0}^{M-1} \hat w_k^* u(i-k)$.
Multiply the principle of orthogonality by $\hat w_k^*$ and take the summation over k; then
$\sum_{i=i_1}^{i_2} \hat d(i)\, e_{\min}^*(i) = 0.$
When a transversal filter operates in its least-squares condition, the least-squares estimate of the desired response (produced at the output of the filter) and the minimum estimation error time series are orthogonal to each other over time i.
Slide 9: Energy of the Minimum Error
Decompose $d(i) = \hat d(i) + e_{\min}(i)$ and expand $\mathcal{E}_d = \sum_i |d(i)|^2$. Due to the principle of orthogonality, the second and third (cross) terms vanish, hence
$\mathcal{E}_{\min} = \mathcal{E}_d - \mathcal{E}_{\hat d}, \qquad 0 \le \mathcal{E}_{\min} \le \mathcal{E}_d,$
where $\mathcal{E}_d = \sum_i |d(i)|^2$ and $\mathcal{E}_{\hat d} = \sum_i |\hat d(i)|^2$.
- $\mathcal{E}_{\min} = 0$ when $e_o(i) = 0$ for all i: impossible in practice.
- $\mathcal{E}_{\min} = 0$ also when the problem is underdetermined: fewer data points than parameters, hence infinitely many solutions (no unique solution!).
Slide 10: Normal Equations
Hence, via the principle of orthogonality, we obtain the expanded system of the normal equations for linear least-squares filters:
$\sum_{t=0}^{M-1} \hat w_t\, \phi(t,k) = z(-k), \qquad k = 0, 1, \dots, M-1,$
where
- $\phi(t,k) = \sum_{i=i_1}^{i_2} u(i-k)\, u^*(i-t)$, $0 \le t, k \le M-1$, is the time-average autocorrelation function of the input,
- $z(-k) = \sum_{i=i_1}^{i_2} u(i-k)\, d^*(i)$, $0 \le k \le M-1$, is the time-average cross-correlation between the desired response and the input.
Minimum error: $\mathcal{E}_{\min} = \mathcal{E}_d - \sum_{k=0}^{M-1} z^*(-k)\, \hat w_k$.
Slide 11: Normal Equations (Matrix Formulation)
Matrix form of the normal equations for linear least-squares filters:
$\boldsymbol{\Phi} \hat{\mathbf w} = \mathbf z \quad \Rightarrow \quad \hat{\mathbf w} = \boldsymbol{\Phi}^{-1} \mathbf z$ (if $\boldsymbol{\Phi}^{-1}$ exists!)
This is the linear least-squares counterpart of the Wiener-Hopf equations. Here $\boldsymbol{\Phi}$ and $\mathbf z$ are time averages, whereas in the Wiener-Hopf equations they were ensemble averages.
Slide 12: Minimum Sum of Error Squares
The energy contained in the time series $\hat d(i)$ is
$\mathcal{E}_{\hat d} = \hat{\mathbf w}^H \boldsymbol{\Phi} \hat{\mathbf w}$, or, using the normal equations, $\mathcal{E}_{\hat d} = \mathbf z^H \hat{\mathbf w} = \mathbf z^H \boldsymbol{\Phi}^{-1} \mathbf z$.
Then the minimum sum of error squares is
$\mathcal{E}_{\min} = \mathcal{E}_d - \mathbf z^H \hat{\mathbf w} = \mathcal{E}_d - \mathbf z^H \boldsymbol{\Phi}^{-1} \mathbf z.$
A numerical sketch follows.
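A minimal sketch (illustrative data) that forms the time-average correlation matrix $\boldsymbol{\Phi} = \mathbf A^H \mathbf A$ and cross-correlation vector $\mathbf z = \mathbf A^H \mathbf d$ from a data matrix, solves the normal equations, and evaluates the minimum sum of error squares:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 4))   # data matrix (rows: tap-input vectors)
d = rng.standard_normal(30)        # desired response

Phi = A.conj().T @ A               # time-average autocorrelation matrix
z = A.conj().T @ d                 # time-average cross-correlation vector

w_hat = np.linalg.solve(Phi, z)    # normal equations: Phi w = z

E_d = d.conj() @ d                 # energy of the desired response
E_min = E_d - z.conj() @ w_hat     # minimum sum of error squares
print(w_hat, E_min)
print(np.isclose(E_min, np.sum(np.abs(d - A @ w_hat) ** 2)))  # same value
```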
Slide 13: Properties of the Time-Average Correlation Matrix
- Property I: The correlation matrix $\boldsymbol{\Phi}$ is Hermitian, $\boldsymbol{\Phi}^H = \boldsymbol{\Phi}$.
- Property II: The correlation matrix $\boldsymbol{\Phi}$ is nonnegative definite, $\mathbf x^H \boldsymbol{\Phi} \mathbf x \ge 0$ for every $\mathbf x$.
- Property III: The correlation matrix $\boldsymbol{\Phi}$ is nonsingular if and only if $\det(\boldsymbol{\Phi})$ is nonzero.
- Property IV: The eigenvalues of the correlation matrix $\boldsymbol{\Phi}$ are all real and nonnegative.
These properties are verified numerically in the sketch below.
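A quick numerical verification of Properties I, II, and IV on a random complex data matrix (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 3)) + 1j * rng.standard_normal((10, 3))
Phi = A.conj().T @ A                   # time-average correlation matrix

print(np.allclose(Phi, Phi.conj().T))  # Property I: Hermitian
lam = np.linalg.eigvalsh(Phi)          # eigenvalues of a Hermitian matrix
print(np.all(lam >= -1e-12))           # Properties II & IV: real and >= 0
```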
Slide 14: Properties of the Time-Average Correlation Matrix (cont.)
- Property V: The correlation matrix is the product of two rectangular Toeplitz matrices that are the Hermitian transpose of each other: $\boldsymbol{\Phi} = \mathbf A^H \mathbf A$.
Slide 15: Normal Equations (Reformulation)
But we know that $\boldsymbol{\Phi} = \mathbf A^H \mathbf A$ and $\mathbf z = \mathbf A^H \mathbf d$, which yields
$\mathbf A^H \mathbf A\, \hat{\mathbf w} = \mathbf A^H \mathbf d \quad \Rightarrow \quad \hat{\mathbf w} = (\mathbf A^H \mathbf A)^{-1} \mathbf A^H \mathbf d = \mathbf A^+ \mathbf d.$
Substituting into the minimum sum of error squares expression gives
$\mathcal{E}_{\min} = \mathcal{E}_d - \mathbf d^H \mathbf A (\mathbf A^H \mathbf A)^{-1} \mathbf A^H \mathbf d.$
Here $\mathbf A^+ = (\mathbf A^H \mathbf A)^{-1} \mathbf A^H$ is the pseudo-inverse!
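A sketch comparing the normal-equation form with NumPy's built-in pseudo-inverse (illustrative data):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((25, 4))
d = rng.standard_normal(25)

w_normal = np.linalg.solve(A.T @ A, A.T @ d)  # (A^H A)^{-1} A^H d
w_pinv = np.linalg.pinv(A) @ d                # A^+ d, computed via SVD

print(np.allclose(w_normal, w_pinv))          # identical for full-rank A
```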
Slide 16: Projection
The LS estimate of $\mathbf d$ is given by
$\hat{\mathbf d} = \mathbf A \hat{\mathbf w} = \mathbf A (\mathbf A^H \mathbf A)^{-1} \mathbf A^H \mathbf d = \mathbf P \mathbf d.$
The matrix $\mathbf P = \mathbf A (\mathbf A^H \mathbf A)^{-1} \mathbf A^H$ is a projection operator onto the linear space spanned by the columns of the data matrix $\mathbf A$, i.e., the space $\mathcal U_i$. The orthogonal complement projector is $\mathbf I - \mathbf P$, so that $\mathbf e_{\min} = (\mathbf I - \mathbf P)\, \mathbf d$.
Slide 17: Projection, an Example
M = 2 tap filter, N = 4 → N - M + 1 = 3. Let a 3x2 data matrix $\mathbf A$ and a 3x1 desired vector $\mathbf d$ be given (the slide's numerical values are not reproduced here). Then $\hat{\mathbf d} = \mathbf P \mathbf d$ and $\mathbf e_{\min} = (\mathbf I - \mathbf P)\, \mathbf d$, and the two are orthogonal.
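A minimal sketch with made-up numbers (the slide's own values are not recoverable here) reproduces the computation:

```python
import numpy as np

A = np.array([[1.0, 0.0],      # hypothetical 3x2 data matrix (M=2, N=4)
              [2.0, 1.0],
              [3.0, 2.0]])
d = np.array([1.0, 2.0, 4.0])  # hypothetical desired vector

P = A @ np.linalg.inv(A.T @ A) @ A.T   # projector onto the column space of A
d_hat = P @ d                          # LS estimate of d
e_min = (np.eye(3) - P) @ d            # minimum error (orthogonal complement)

print(d_hat, e_min)
print(np.isclose(d_hat @ e_min, 0.0))  # orthogonal, as the slide states
```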
Slide 18: Projection, an Example (cont.)
(Figure: geometric illustration of the projection of d onto the column space of A.)
Slide 19: Uniqueness of the LS Solution
LS always has a solution; is that solution unique?
The least-squares estimate $\hat{\mathbf w}$ is unique if and only if the nullity (the dimension of the null space) of the data matrix $\mathbf A$ (K x M, K = N - M + 1) equals zero.
- The solution is unique when $\mathbf A$ has full column rank (which requires K ≥ M): all columns of $\mathbf A$ are linearly independent; the system is overdetermined (more equations than variables/taps); $\mathbf A^H \mathbf A$ is nonsingular, so $(\mathbf A^H \mathbf A)^{-1}$ exists and the solution is unique.
- There are infinitely many solutions when $\mathbf A$ has linearly dependent columns (e.g., when K < M): $\mathbf A^H \mathbf A$ is singular and $(\mathbf A^H \mathbf A)^{-1}$ does not exist.
The rank/nullity check is sketched below.
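A sketch of the rank/nullity check (illustrative matrices):

```python
import numpy as np

A_full = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, 2.0]])       # K=3 >= M=2
A_deficient = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # dependent columns

for A in (A_full, A_deficient):
    M = A.shape[1]
    nullity = M - np.linalg.matrix_rank(A)
    print("nullity =", nullity,
          "-> unique LS solution" if nullity == 0
          else "-> infinitely many LS solutions")
```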
Slide 20: Properties of the LS Estimates
- Property I: The least-squares estimate $\hat{\mathbf w}$ is unbiased, provided that the measurement error process $e_o(i)$ has zero mean.
- Property II: When the measurement error process $e_o(i)$ is white with zero mean and variance $\sigma^2$, the covariance matrix of the least-squares estimate equals $\sigma^2 \boldsymbol{\Phi}^{-1}$.
- Property III: When the measurement error process $e_o(i)$ is white with zero mean, the least-squares estimate is the best linear unbiased estimate (BLUE).
- Property IV: When the measurement error process $e_o(i)$ is white and Gaussian with zero mean, the least-squares estimate achieves the Cramér-Rao lower bound for unbiased estimates.
Slide 21: Computation of the LS Estimates
The rank W of a K x N (K ≥ N or K < N) matrix $\mathbf A$ gives
- the number of linearly independent columns/rows,
- the number of nonzero eigenvalues/singular values.
The matrix is said to be full rank (full column or row rank) if W = min(K, N); otherwise, it is said to be rank-deficient. Rank is an important parameter for matrix inversion:
- If K = N (square matrix) and the matrix is full rank, W = K = N (nonsingular), the inverse of the matrix can be calculated as $\mathbf A^{-1} = \operatorname{adj}(\mathbf A)/\det(\mathbf A)$.
- If the matrix is not square (K ≠ N), and/or it is rank-deficient (singular), $\mathbf A^{-1}$ does not exist; instead we can use the pseudo-inverse (a generalization of the inverse), $\mathbf A^+$.
Slide 22: SVD
We can calculate the pseudo-inverse using the SVD. Any K x N matrix (K ≥ N or K < N) can be decomposed using the Singular Value Decomposition (SVD) as
$\mathbf A = \mathbf U \boldsymbol{\Sigma} \mathbf V^H,$
where $\mathbf U$ (K x K) and $\mathbf V$ (N x N) are unitary and $\boldsymbol{\Sigma}$ (K x N) carries the singular values $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_W > 0$ on its diagonal, with W = rank(A).
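A minimal sketch of the decomposition with NumPy (illustrative matrix):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 3))        # K=5, N=3

U, s, Vh = np.linalg.svd(A)            # full SVD: A = U @ Sigma @ V^H
Sigma = np.zeros((5, 3))
Sigma[:3, :3] = np.diag(s)             # K x N with sigma_i on the diagonal

print(np.allclose(A, U @ Sigma @ Vh))  # the decomposition reconstructs A
print("singular values:", s)           # sigma_1 >= sigma_2 >= sigma_3 > 0
```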
Slide 23: SVD (cont.)
The system of equations $\mathbf A \mathbf w = \mathbf d$
- is overdetermined if K > N: more equations than unknowns; the solution is unique if $\mathbf A$ is full rank, and non-unique (infinitely many solutions) if $\mathbf A$ is rank-deficient;
- is underdetermined if K < N: more unknowns than equations; non-unique, infinitely many solutions.
In either case the solution(s) is (are) given by
$\mathbf w = \mathbf A^+ \mathbf d, \qquad \text{where} \quad \mathbf A^+ = \mathbf V \boldsymbol{\Sigma}^+ \mathbf U^H$
and $\boldsymbol{\Sigma}^+$ is obtained by inverting the nonzero singular values ($\sigma_i \to 1/\sigma_i$) and transposing.
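A sketch contrasting the two cases (illustrative data); np.linalg.pinv implements $\mathbf A^+ = \mathbf V \boldsymbol{\Sigma}^+ \mathbf U^H$:

```python
import numpy as np

rng = np.random.default_rng(6)

# Overdetermined: K=6 > N=3, full rank -> unique LS solution
A_over = rng.standard_normal((6, 3))
d_over = rng.standard_normal(6)
print(np.linalg.pinv(A_over) @ d_over)

# Underdetermined: K=2 < N=4 -> infinitely many solutions;
# A^+ d picks the minimum-norm one
A_under = rng.standard_normal((2, 4))
d_under = rng.standard_normal(2)
w = np.linalg.pinv(A_under) @ d_under
print(w, np.allclose(A_under @ w, d_under))  # exact solution, minimum norm
```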
Slide 24: Computation of the LS Estimates (cont.)
Find the solution of $\mathbf A \hat{\mathbf w} = \mathbf d$ ($\mathbf A$: K x M).
- If K > M and rank(A) = M (nullity zero), the unique solution is $\hat{\mathbf w} = (\mathbf A^H \mathbf A)^{-1} \mathbf A^H \mathbf d = \mathbf A^+ \mathbf d$.
- Otherwise, there are infinitely many solutions, but the pseudo-inverse gives the minimum-norm solution to the least-squares problem: the shortest length possible in the Euclidean-norm sense.
Slide 25: Minimum-Norm Solution
We know that $\mathbf A = \mathbf U \boldsymbol{\Sigma} \mathbf V^H$. Transform the coordinates, $\mathbf b = \mathbf V^H \mathbf w$ and $\mathbf c = \mathbf U^H \mathbf d$, and partition $\mathbf b = [\mathbf b_1; \mathbf b_2]$ and $\mathbf c = [\mathbf c_1; \mathbf c_2]$ at the rank W. Then
$\mathcal{E} = \sum_{i=1}^{W} |c_i - \sigma_i b_i|^2 + \sum_{i=W+1}^{K} |c_i|^2,$
so $\mathcal{E}_{\min}$ is achieved when $b_i = c_i/\sigma_i$ for $i = 1, \dots, W$, where $\mathcal{E}_{\min}$ is determined by $\mathbf c_2$ (fixed by the desired response, hence uncontrollable).
$\mathcal{E}_{\min}$ is independent of $\mathbf b_2$!
Slide 26: Minimum-Norm Solution (cont.)
Then the optimum filter coefficients become
$\hat{\mathbf w} = \mathbf V \mathbf b, \qquad b_i = c_i/\sigma_i, \; i = 1, \dots, W.$
The norm of the filter coefficients is ($\mathbf V^H \mathbf V = \mathbf I$)
$\|\hat{\mathbf w}\|^2 = \|\mathbf b_1\|^2 + \|\mathbf b_2\|^2 \ge \|\mathbf b_1\|^2 \ge 0,$
which is minimum when $\mathbf b_2 = \mathbf 0$; then
$\hat{\mathbf w} = \sum_{i=1}^{W} \frac{\mathbf u_i^H \mathbf d}{\sigma_i}\, \mathbf v_i = \mathbf A^+ \mathbf d.$
Even when the LS solution is not unique, this vector is unique in the sense that it is the only tap-weight vector that simultaneously satisfies
- the minimum sum of error squares (the LS solution), and
- the smallest Euclidean norm possible.
Hence, $\hat{\mathbf w} = \mathbf A^+ \mathbf d$ is called the minimum-norm LS solution.
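A sketch constructing the minimum-norm solution explicitly from the SVD (illustrative rank-deficient data) and checking it against np.linalg.pinv:

```python
import numpy as np

rng = np.random.default_rng(7)
B = rng.standard_normal((6, 2))
A = B @ rng.standard_normal((2, 4))  # 6x4 data matrix with rank W = 2
d = rng.standard_normal(6)

U, s, Vh = np.linalg.svd(A)
W = np.sum(s > 1e-10)                # numerical rank

# b1 = c1 / sigma for the first W components, b2 = 0 (minimum norm)
c1 = U[:, :W].T @ d
w_min = Vh[:W].T @ (c1 / s[:W])      # sum_i (u_i^H d / sigma_i) v_i

print(np.allclose(w_min, np.linalg.pinv(A) @ d))  # matches A^+ d
```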