Scientific Computing: Linear Least Squares

Interpolation vs Approximation. Recall: Given a set of (x,y) data points, Interpolation is the process of finding a function (usually a polynomial) that passes through these data points.

Interpolation vs Approximation. Given a set of (x,y) data points, Approximation is the process of finding a function (usually a line or a polynomial) that comes the “closest” to the data points. Because the data has “noise,” we generally cannot find a single line that interpolates all of the points.
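The difference is easy to see in a few lines of MATLAB (an illustrative sketch, not from the original slides, using the built-ins polyfit and polyval):

% Contrast interpolation and approximation on noisy data
x = 0:0.5:3;                           % 7 sample points
y = 2*x + 1 + 0.3*randn(size(x));      % noisy linear data
p_interp = polyfit(x, y, length(x)-1); % degree-6 interpolant: passes through every point
                                       % (polyfit may warn the high-degree fit is badly conditioned)
p_approx = polyfit(x, y, 1);           % least squares line: close to, but not through, the points
xx = 0:0.01:3;
plot(x, y, 'o', xx, polyval(p_interp, xx), '--', xx, polyval(p_approx, xx), '-')
legend('data', 'interpolant', 'least squares line')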

General Least Squares Idea: Given data and a class of functions $F$, the goal is to find the “best” function $f$ in $F$ that approximates the data. Consider the data to be two vectors of length $n+1$. That is, $x = [x_0\ x_1\ \cdots\ x_n]^t$ and $y = [y_0\ y_1\ \cdots\ y_n]^t$.

General Least Squares Idea. Definition: The error, or residual, of a given function $f$ with respect to this data is the vector $r = y - f(x)$. That is, $r = [r_0\ r_1\ \cdots\ r_n]^t$ where $r_i = y_i - f(x_i)$. We want to find a function $f$ such that the error is made as small as possible. How do we measure the size of the error? With vector norms.

Vector Norms: A vector norm is a quantity that measures how large a vector is (the magnitude of the vector). Our previous examples for vectors in $\mathbb{R}^n$: Manhattan (1-norm), $\|v\|_1 = \sum_i |v_i|$; Euclidean (2-norm), $\|v\|_2 = \sqrt{\sum_i v_i^2}$; and Chebyshev ($\infty$-norm), $\|v\|_\infty = \max_i |v_i|$. For Least Squares, the best norm to use will be the 2-norm.
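A small MATLAB sketch (illustrative, not from the original slides) evaluating all three norms with the built-in norm function:

% The three vector norms of an example residual vector
r = [1 -2 3];
n1   = norm(r, 1);    % Manhattan: |1| + |-2| + |3| = 6
n2   = norm(r, 2);    % Euclidean: sqrt(1 + 4 + 9) = sqrt(14)
ninf = norm(r, Inf);  % Chebyshev: max(|1|, |-2|, |3|) = 3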

Ordinary Least Squares. Definition: The least-squares best approximating function to a set of data $x, y$ from a class of functions $F$ is the function $f^*$ in $F$ that minimizes the 2-norm of the error. That is, if $f^*$ is the least squares best approximating function, then $\|y - f^*(x)\|_2 \le \|y - f(x)\|_2$ for every $f$ in $F$. This is often called the ordinary least squares method of approximating data.

Linear Least Squares: We assume that the class of functions is the class of all possible lines. By the method on the last slide, we then want to find a linear function $f(x) = ax + b$ that minimizes the 2-norm of the vector $y - f(x)$, i.e. that minimizes $$\|y - f(x)\|_2 = \sqrt{\sum_{i=0}^{n} \big(y_i - (a x_i + b)\big)^2}.$$

Linear Least Squares: To minimize $\|y - f(x)\|_2$, it is enough to minimize the term inside the square root. So, if $f(x) = ax + b$, we need to minimize $$E(a,b) = \sum_{i=0}^{n} (y_i - a x_i - b)^2$$ over all possible values of $a$ and $b$. From calculus, we know that the minimum will occur where the partial derivatives with respect to $a$ and $b$ are zero.
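Setting those partial derivatives to zero gives two conditions (a standard calculus step, reconstructed here since the slide's formula images did not survive extraction): $$\frac{\partial E}{\partial a} = -2\sum_{i=0}^{n} x_i\,(y_i - a x_i - b) = 0, \qquad \frac{\partial E}{\partial b} = -2\sum_{i=0}^{n} (y_i - a x_i - b) = 0.$$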

Linear Least Squares: These equations can be written as: $$a \sum_{i=0}^{n} x_i^2 + b \sum_{i=0}^{n} x_i = \sum_{i=0}^{n} x_i y_i, \qquad a \sum_{i=0}^{n} x_i + b\,(n+1) = \sum_{i=0}^{n} y_i.$$ These last two equations are called the Normal Equations for the best line fit to the data.

Linear Least Squares: Note that these are two equations in the unknowns $a$ and $b$. Let $$d_{11} = \sum_{i=0}^{n} x_i^2, \quad d_{12} = d_{21} = \sum_{i=0}^{n} x_i, \quad d_{22} = n+1, \quad e_1 = \sum_{i=0}^{n} x_i y_i, \quad e_2 = \sum_{i=0}^{n} y_i.$$ Then, by Cramer's rule, the solution is $$a = \frac{d_{22}\,e_1 - d_{12}\,e_2}{d_{11} d_{22} - d_{12} d_{21}}, \qquad b = \frac{d_{11}\,e_2 - d_{21}\,e_1}{d_{11} d_{22} - d_{12} d_{21}}.$$
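A quick numerical check of these formulas (an illustrative sketch, not from the original slides; polyfit is MATLAB's built-in polynomial fitter, and for degree 1 it returns the coefficients [a b]):

% Verify the closed-form solution against MATLAB's built-in fit
x = [1 2 3 4];  y = [2.1 3.9 6.2 7.8];
d11 = sum(x.^2);  d12 = sum(x);  d22 = length(x);
e1  = sum(x.*y);  e2  = sum(y);
detD = d11*d22 - d12^2;
a = (d22*e1 - d12*e2)/detD;
b = (d11*e2 - d12*e1)/detD;
[a b]             % closed-form solution...
polyfit(x, y, 1)  % ...should match polyfit's [a b]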

Matrix Formulation of Linear Least Squares: Want to minimize $\sum_{i=0}^{n} (y_i - (a x_i + b))^2$. Let $$A = \begin{bmatrix} x_0 & 1 \\ x_1 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix}, \qquad c = \begin{bmatrix} a \\ b \end{bmatrix}.$$ Then, we want to find a vector $c$ that minimizes the length squared of the error vector $Ac - y$ (or $y - Ac$). That is, we want to minimize $\|y - Ac\|_2^2$. This is equivalent to minimizing the Euclidean distance from $y$ to $Ac$.

Matrix Formulation of Linear Least Squares: Find a vector $c$ to minimize the Euclidean distance from $y$ to $Ac$. Equivalently, minimize $\|y - Ac\|_2$ or $(y - Ac)^t (y - Ac)$. If we take all partials of this expression (with respect to $c_0$, $c_1$) and set these equal to zero, we get $A^t A c = A^t y$.
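In matrix form, the same calculation collapses to one line (a standard derivation, added here for completeness): $$\nabla_c\,(y - Ac)^t (y - Ac) = -2\,A^t (y - Ac) = 0 \;\Longrightarrow\; A^t A\,c = A^t y.$$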

Matrix Formulation of Linear Least Squares: The equation $A^t A c = A^t y$ is also called the Normal Equation for the linear best fit. The equations one gets from $A^t A c = A^t y$ are exactly the same equations we had before: $$A^t A = \begin{bmatrix} \sum x_i^2 & \sum x_i \\ \sum x_i & n+1 \end{bmatrix}, \qquad A^t y = \begin{bmatrix} \sum x_i y_i \\ \sum y_i \end{bmatrix}.$$ The solution $c = [a\ b]^t$ of $A^t A c = A^t y$ gives the constants for the line $ax + b$.

Matlab Example

% Example of how to find the linear least squares fit to noisy data
x = 1:.1:6;                        % x values
y = .1*x + 1;                      % linear y-values
ynoisy = y + .1*randn(size(x));    % add noise to y values
plot(x, ynoisy, '.')               % Plot the noisy data
hold on                            % So we can plot more data later

% Find d11, d12, d21, d22, e1, e2
D = [sum(x.^2), sum(x); sum(x), length(x)];
e1 = x*ynoisy';
e2 = sum(ynoisy);

% Solve for a and b by Cramer's rule
% (detD, rather than det, avoids shadowing MATLAB's built-in det function)
detD = D(1,1)*D(2,2) - D(1,2)*D(2,1);
a = (D(2,2)*e1 - D(1,2)*e2)/detD;
b = (D(1,1)*e2 - D(2,1)*e1)/detD;

% Create a vector of y-values for the linear best fit
fit_y = a.*x + b;
plot(x, fit_y, '-')                % Plot the best fit line

Matlab Example: $A^t A c = A^t y$

% Example of how to find the linear least squares fit to noisy data
x = 1:.1:6;                        % x values
y = .1*x + 1;                      % linear y-values
ynoisy = y + .1*randn(size(x));    % add noise to y values
plot(x, ynoisy, '.')               % Plot the noisy data
hold on                            % So we can plot more data later

% Create matrix A
A = zeros(length(x), 2);
A(:,1) = x;
A(:,2) = ones(length(x), 1);
D = A'*A;
e = A'*ynoisy';

% Solve the normal equations for constants a and b
c = D \ e;
fit_y = c(1).*x + c(2);
plot(x, fit_y, 'O')                % Plot the best fit points
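A side note, not on the original slides: for a rectangular (overdetermined) system, MATLAB's backslash operator solves the same least squares problem directly via a QR factorization, avoiding the explicitly formed and more poorly conditioned product A'*A:

% Continuing from the example above: backslash on the rectangular
% system computes the least squares solution via QR
c = A \ ynoisy';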