The topic: The least squares method (Numeriska beräkningar i Naturvetenskap och Teknik).

An example. (Slide figure: measured data points together with a model curve.) Why do the measured values deviate from the model if the measurement is correct?

How do we determine the 'best' straight line? Model: the straight line $f(x) = c_1 + c_2 x$ through the measured points.

The distance between the line and the measurement points...

How should the distance between the line and the measurement points be defined? Two choices of norm:

- Largest deviation as small as possible: approximation in the maximum norm, $\min_{c} \max_i |f(x_i) - y_i|$.
- Sum of the squared deviations as small as possible: approximation in the Euclidean norm, $\min_{c} \sum_i (f(x_i) - y_i)^2$. Easier to calculate!
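A minimal sketch (not from the slides; the data and the candidate line are hypothetical) of how the two objectives are evaluated:

```python
import numpy as np

# Hypothetical measurements and a candidate line f(x) = c1 + c2*x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
c1, c2 = 0.2, 0.95

r = (c1 + c2 * x) - y                      # deviations at the measurement points
print("maximum norm:  ", np.max(np.abs(r)))  # largest deviation
print("sum of squares:", np.sum(r**2))       # squared Euclidean norm
```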

Matrix formulation: an example. Each measurement point $(x_i, y_i)$ gives one equation $c_1 + c_2 x_i = y_i$, so $m$ points give the system $A\mathbf{c} = \mathbf{y}$ with more equations than unknowns!

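A short sketch of the overdetermined system in code (hypothetical data; `numpy.linalg.lstsq` solves $A\mathbf{c} \approx \mathbf{y}$ in the least-squares sense):

```python
import numpy as np

# Hypothetical data: m = 5 equations, n = 2 unknowns (c1, c2).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

A = np.column_stack([np.ones_like(x), x])   # columns: 1 and x
c, res, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(A.shape, "->", c)                     # (5, 2): overdetermined
```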

General statement of the problem: Depending on the model, the measurement data can of course be described by other expressions than the straight line. In general terms one seeks a function $f^*$ that approximates $f$'s given values as well as possible in the Euclidean norm. Specifically, above we looked for a solution expressed as $f^*(x) = c_1 + c_2 x$, but we could just as well have looked for a solution given by another function (possibly then for different data), for example $f^*(x) = c_1 + c_2 x + c_3 x^2$, etc...

Generally one can thus write $f^*(x) = c_1\varphi_1(x) + c_2\varphi_2(x) + \dots + c_n\varphi_n(x)$. In other words, $f^*(x)$ is a linear combination of given functions $\varphi_1, \dots, \varphi_n$, where the coefficients $c_1, \dots, c_n$ are sought. In analogy with a vector space, one can view it so that $\{\varphi_1, \dots, \varphi_n\}$ spans a function space. (A space of this kind which fulfills certain conditions is called a Hilbert space; compare quantum mechanics.)

In the case of the straight line we have $\varphi_1(x) = 1$ and $\varphi_2(x) = x$. In a geometrical comparison these two functions, which can be seen as two vectors in the function space, span a plane $U$. (Slide figure: the two basis "vectors" spanning $U$, the sought function $f$ outside the plane, and the approximating function $f^*$ in the plane.) The smallest distance from the plane is given by a normal: the deviation between $f^*$ and $f$ is smallest when $f^* - f$ is orthogonal to the plane $U$!
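The geometric statement in formulas (a reconstruction in inner-product notation; the slide's own symbols did not survive in the transcript):

```latex
f^* = c_1 \varphi_1 + c_2 \varphi_2 \in U,
\qquad
(f^* - f) \perp U
\iff
\langle f^* - f,\ \varphi_j \rangle = 0, \quad j = 1, 2.
```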

Normal equations: Since we are interested in fitting $m$ measured values, we leave the picture of the continuous function space and view $f$ as an $m$-dimensional vector with values $\mathbf{f} = (f(x_1), \dots, f(x_m))^T$, to be expressed by the vectors $\boldsymbol{\varphi}_j = (\varphi_j(x_1), \dots, \varphi_j(x_m))^T$ and the coefficients $c_j$. For the straight line: $\boldsymbol{\varphi}_1 = (1, \dots, 1)^T$ and $\boldsymbol{\varphi}_2 = (x_1, \dots, x_m)^T$.

The orthogonality condition now gives the equations $\langle c_1\boldsymbol{\varphi}_1 + \dots + c_n\boldsymbol{\varphi}_n - \mathbf{f},\ \boldsymbol{\varphi}_j \rangle = 0$ for $j = 1, \dots, n$, which gives the equations for the normal: $\sum_{k=1}^{n} \langle \boldsymbol{\varphi}_j, \boldsymbol{\varphi}_k \rangle\, c_k = \langle \boldsymbol{\varphi}_j, \mathbf{f} \rangle$, $j = 1, \dots, n$.

The equations for the normal, written out for the straight line:

$\begin{pmatrix} m & \sum_i x_i \\ \sum_i x_i & \sum_i x_i^2 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} \sum_i y_i \\ \sum_i x_i y_i \end{pmatrix}$
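A sketch of solving this 2-by-2 system directly (the data here is hypothetical, since the slide's numbers are not in the transcript):

```python
import numpy as np

# Hypothetical data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

m = len(x)
# Normal equations for f(x) = c1 + c2*x.
M = np.array([[m, x.sum()],
              [x.sum(), (x**2).sum()]])
b = np.array([y.sum(), (x * y).sum()])
c1, c2 = np.linalg.solve(M, b)
print(c1, c2)
```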

Back to the example. (Slide figure: the straight-line model and the table of measured data; the numbers are not preserved in the transcript.)

Conclusion: Assuming the model $f^*(x) = c_1\varphi_1(x) + \dots + c_n\varphi_n(x)$ and given data $(x_i, y_i)$, $i = 1, \dots, m$, the minimum of $\|\mathbf{f}^* - \mathbf{f}\|_2$ is obtained when $\mathbf{f}^* - \mathbf{f}$ is orthogonal to the basis vectors. The coefficients $c_1, c_2, \dots, c_n$ are determined from the normal equations.

The equations $\langle \boldsymbol{\varphi}_j,\ \mathbf{f}^* - \mathbf{f} \rangle = 0$, $j = 1, \dots, n$, or in matrix form $A^T A\,\mathbf{c} = A^T\mathbf{f}$, where the columns in $A$ are the basis vectors: $A = (\boldsymbol{\varphi}_1\ \boldsymbol{\varphi}_2\ \cdots\ \boldsymbol{\varphi}_n)$, i.e. $A_{ij} = \varphi_j(x_i)$.
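A general version in code: build $A$ column by column from any set of basis functions and solve the normal equations. (A sketch; the basis and data are illustrative. In practice `numpy.linalg.lstsq`, which uses an orthogonal factorization, is numerically safer than forming $A^T A$.)

```python
import numpy as np

def lsq_fit(basis, x, y):
    """Coefficients c minimizing ||A c - y||_2, with A[i, j] = basis[j](x[i])."""
    A = np.column_stack([phi(x) for phi in basis])
    return np.linalg.solve(A.T @ A, A.T @ y)   # normal equations

# Example: fit c1 + c2*x + c3*x^2 to illustrative data.
basis = [lambda t: np.ones_like(t), lambda t: t, lambda t: t**2]
x = np.linspace(0.0, 4.0, 9)
y = np.array([1.0, 1.4, 1.9, 2.7, 3.6, 4.8, 6.1, 7.7, 9.4])
print(lsq_fit(basis, x, y))
```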

Note 1: The functions $\varphi_1, \dots, \varphi_n$ have to be linearly independent (compare vectors in a vector space). Note 2: Assume our problem had instead been given with large, nearly equal x-coordinates (x-coordinate $-$ 996 recovers the data above). Compared to the original problem, the columns of $A$ are then nearly parallel and the normal equations become ill-conditioned; shifting the x-coordinates before fitting restores good conditioning.
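A quick illustration of Note 2 (hypothetical x-values near 1000; subtracting 996 gives small, well-separated values):

```python
import numpy as np

x_big = np.array([997.0, 998.0, 999.0, 1000.0])   # hypothetical
for x in (x_big, x_big - 996.0):
    A = np.column_stack([np.ones_like(x), x])
    print("x =", x, " cond(A^T A) = %.2e" % np.linalg.cond(A.T @ A))
```

For the large x-values the two columns of $A$ are nearly parallel, so $\operatorname{cond}(A^T A)$ is enormous; after the shift it is modest.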