Section 8 Numerical Analysis CSCI E-24 José Luis Ramírez Herrán October 20, 2015

Section 8 Outline
1. Midterm Exam: we will go over the exam later tonight
2. Assignment 4: we haven't seen the material from Assignment 4 yet, though many have already started on it
3. Background for Assignment 4
4. Problems related to Assignment 4

Homework 4 – Total: 69 points. Due 5:00 PM EST November 3rd.
1. Exercise 4.1.8: Find the best line and RMSE (6 points)
2. Exercise 4.2.2: Fit data to periodic models (6 points)
3. Exercise 4.3.2: QR with Gram-Schmidt (6 points)
4. Exercise 4.3.4: QR with Householder reflectors (6 points)
5. Exercise 4.3.6: Least squares with QR (6 points)
6. Plotting Linear Fit (8 points)
7. Gram-Schmidt (8 points)
8. Graduate Work - Orthogonal Polynomials (8 points)
9. Regression Equation (10 points)
10. Economizing Power Series (5 points)

Calendar

Background for Homework 4 – Least Squares

Least Squares – Part 1

4.1 LEAST SQUARES AND THE NORMAL EQUATIONS
The need for least squares methods grows out of our earlier work: before the midterm, we learned how to find the solution of Ax = b when a solution exists. In the next two lectures, we find out what to do when there is no solution. When the equations are inconsistent, which is likely if the number of equations exceeds the number of unknowns, the answer is to find the next best thing: the least squares approximation.

4.1.1 Inconsistent systems of equations

What is the meaning of a system with no solutions? Perhaps the coefficients are slightly inaccurate. In many cases, the number of equations is greater than the number of unknown variables, making it unlikely that a solution can satisfy all the equations. In fact, m equations in n unknowns typically have no solution when m > n. Even though Gaussian elimination will not give us a solution to an inconsistent system Ax = b, we should not completely give up. An alternative in this situation is to find a vector x that comes the closest to being a solution.

If we choose this "closeness" to mean closeness in Euclidean distance, there is a straightforward algorithm for finding the vector x that comes closest to being a solution. This special x is called the least squares solution.

We can get a better picture of the failure of system (4.1) to have a solution by writing it in a different way. The matrix form of the system is Ax = b. The alternative view of matrix/vector multiplication is to write the equivalent vector equation, in which b is expressed as a combination of the columns of A.

Vector Equation: in fact, any m x n system Ax = b can be viewed as the vector equation x1·a1 + x2·a2 + ... + xn·an = b, where a1, ..., an are the columns of A. Solving the system means finding a combination of the columns of A that equals b.

Figure 4.1(b) shows the direction to take when a solution does not exist.

There is no pair x1, x2 that solves (4.1), but there is a point in the plane of all possible candidates Ax that is closest to b.

Finding a formula for the least squares “solution”

Perpendicularity: to work with perpendicularity, recall that two vectors are at right angles to one another if their dot product is zero. For two m-dimensional column vectors u and v, we can write the dot product solely in terms of matrix multiplication as u^T v.

The Fact

Orthogonality Least squares is based on orthogonality. The shortest distance from a point to a plane is carried by a line segment orthogonal to the plane. The normal equations are a computational way to locate the line segment, which represents the least squares error.

Means?

The system of equations A^T A x = A^T b that defines the least squares solution x is known as the normal equations.

The Normal Equations

EXAMPLE 4.1 Find the least squares solution x that minimizes the Euclidean length of the residual r = b - Ax. Use the normal equations to find the least squares solution of the inconsistent system (4.1).

The components of the normal equations are the matrix A^T A and the vector A^T b, and the normal equations themselves read A^T A x = A^T b.

The normal equations can now be solved by Gaussian elimination. Writing them in tableau form and eliminating gives the least squares solution x. Substituting this solution into the original problem yields Ax, and to measure our success at fitting the data we calculate the residual of the least squares solution.

If the residual is the zero vector, then we have solved the original system Ax = b exactly. If not, the Euclidean length of the residual vector is a backward error measure of how far x is from being a solution.
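
To make this concrete, here is a minimal MATLAB sketch of solving the normal equations and measuring the residual. The 3-by-2 system below is an illustrative example chosen for this sketch, not necessarily the system (4.1) shown on the slides.

% Illustrative inconsistent system: three equations, two unknowns
A = [1 1; 1 -1; 1 1];
b = [2; 1; 3];

% Normal equations: (A'*A) x = A'*b
x = (A'*A) \ (A'*b);

% Residual and its Euclidean length (the backward error of the fit)
r = b - A*x;
disp(x)          % least squares solution
disp(norm(r))    % how far x is from solving Ax = b exactly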

Residual

Example

Assignment 4 – section 4.1.1

Exercise 4.1.1

Review: Least Squares – Part 2, Fitting Models to Data

4.1.2 Fitting models to data. Let (t1, y1), ..., (tm, ym) be a set of points in the plane, which we will often refer to as the "data points." Given a fixed class of models, such as all lines y = c1 + c2·t, we can seek to locate the specific instance of the model that best fits the data points in the 2-norm. The core of the least squares idea consists of measuring the residual of the fit by the squared errors of the model at the data points and finding the model parameters that minimize this quantity.

Least squares fitting of a line to data. The best line is the one for which the squared error is as small as possible among all lines
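
For the RMSE asked for in Exercise 4.1.8, the error measures used here are (assuming the textbook's usual definitions, since they are not spelled out in this transcript):

SE = r1^2 + r2^2 + ... + rm^2, where ri = yi - (c1 + c2·ti) is the residual of the model at the i-th data point
RMSE = sqrt(SE / m), where m is the number of data points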

Find the line that best fits the three data points. One of the data points lies above the best line, one lies on it, and one lies below it.

The model is y = c1 + c2·t, and the goal is to find the best c1 and c2. Substitution of the data points into the model yields an over-determined system of equations in c1 and c2.

The previous example suggests a three-step program for solving least squares data-fitting problems: (1) choose a model with unknown coefficients, (2) force the model to fit the data by substituting the data points into the model, producing an over-determined system of equations, and (3) solve the normal equations for the coefficients.

EXAMPLE – part 1. Find the best line and best parabola for the four data points (-1, 1), (0, 0), (1, 0), (2, -2). Following the three steps for the line, solving for the coefficients c1 and c2 results in the best line y = c1 + c2·t = 0.2 - 0.9t.

EXAMPLE – part 2

EXAMPLE – part 3. Problem: find the best parabola y = c1 + c2·t + c3·t^2 for the four data points (-1, 1), (0, 0), (1, 0), (2, -2), following the same three steps.

EXAMPLE – part 4

MatLab
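
The MATLAB for this example is not captured in the transcript; the following is a minimal sketch of how the line and parabola fits could be computed for the four data points above, using the normal equations and, alternatively, the built-in polyfit. The commented coefficient values come from a hand computation.

% Data points from the example
t = [-1; 0; 1; 2];
y = [ 1; 0; 0; -2];

% Best line y = c1 + c2*t via the normal equations
A = [ones(4,1) t];
c = (A'*A) \ (A'*y);              % approximately [0.2; -0.9]
rmse_line = norm(y - A*c) / sqrt(4);

% Best parabola y = c1 + c2*t + c3*t^2
B = [ones(4,1) t t.^2];
d = (B'*B) \ (B'*y);              % approximately [0.45; -0.65; -0.25]
rmse_parabola = norm(y - B*d) / sqrt(4);

% Built-in alternative (coefficients returned highest degree first)
p = polyfit(t, y, 1);             % line
q = polyfit(t, y, 2);             % parabola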

Example 4.4 shows that least squares modeling need not be restricted to finding best lines. By expanding the definition of the model, we can fit coefficients for any model as long as the coefficients enter the model in a linear way

Least Squares – Part 3 Conditioning

Conditioning of least squares: we have seen that the least squares problem reduces to solving the normal equations. How accurately can the least squares solution x be determined? This is a question about the forward error of the normal equations. We carry out a double-precision numerical experiment to investigate this question, solving the normal equations in a case where the correct answer is known.

MATLAB can be used to solve the normal equations for a Vandermonde matrix, as sketched below.
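
The exact experiment from the lecture is not reproduced in the transcript; the following is a minimal sketch of that kind of experiment, assuming made-up sample points, a rectangular Vandermonde-type matrix, and a known coefficient vector of all ones, so the forward error of the computed solution can be measured directly.

% Conditioning experiment: the true coefficients are known in advance
m = 12;  n = 8;                      % assumed sizes for this sketch
x = linspace(2, 4, m)';              % sample points (illustrative choice)
A = zeros(m, n);
for j = 1:n
    A(:, j) = x.^(j-1);              % Vandermonde-type columns 1, x, ..., x^(n-1)
end
c_exact = ones(n, 1);                % pretend the true coefficients are all 1
b = A * c_exact;                     % right-hand side consistent with c_exact

c_normal = (A'*A) \ (A'*b);          % least squares via the normal equations
c_direct = A \ b;                    % MATLAB's backslash on the rectangular system

fprintf('forward error, normal equations: %e\n', norm(c_normal - c_exact));
fprintf('forward error, backslash:        %e\n', norm(c_direct  - c_exact));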

4.1 Problems

Continuation Least Squares – Part 4 Periodic Data

Periodic data calls for periodic models. Outside air temperatures, for example, obey cycles on numerous timescales, including daily and yearly cycles governed by the rotation of the earth and the revolution of the earth around the sun. As a first example, hourly temperature data are fit to sines and cosines.

Temperature is roughly periodic with a period of 24 hours, at least in the absence of longer-term temperature movements. The model uses this information by fixing the period to be exactly one day, where we are using days for the t units; the variable t is listed in these units in the table. Substituting the data into the model results in an over-determined system of linear equations for the coefficients; a sketch of the computation follows.
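
The temperature table on the slide is not reproduced here, so the sketch below uses hypothetical hourly readings; the model form y = c1 + c2·cos(2πt) + c3·sin(2πt), with t in days and the period fixed at one day, is the kind of periodic model the slide describes.

% Hypothetical hourly temperatures over one day, with t measured in days
t = (0:23)' / 24;
y = 60 + 10*sin(2*pi*(t - 0.3)) + randn(24, 1);   % made-up data for illustration

% Model: y = c1 + c2*cos(2*pi*t) + c3*sin(2*pi*t), period fixed at one day
A = [ones(size(t)) cos(2*pi*t) sin(2*pi*t)];
c = (A'*A) \ (A'*y);                              % least squares coefficients

rmse = norm(y - A*c) / sqrt(length(t));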

Vs

Continuation Least Squares – Part 5 Data Linearization

Data Linearization. Exponential growth of a population is implied when its rate of change is proportional to its size. Under perfect conditions, when the growth environment is unchanging and when the population is well below the carrying capacity of the environment, the exponential model y = c1·e^(c2·t) is a good representation. This model cannot be directly fit by least squares because c2 does not appear linearly in the model equation. Once the data points are substituted into the model, the difficulty is clear: the set of equations to solve for the coefficients is nonlinear and cannot be expressed as a linear system Ax = b. Therefore, our derivation of the normal equations is irrelevant; the remedy is to linearize the model, as sketched below.
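
A minimal sketch of the linearization idea, assuming the exponential model y = c1·e^(c2·t) and made-up data: taking logarithms turns the model into one that is linear in the unknowns log(c1) and c2, which can then be fit by the normal equations.

% Hypothetical population data (y > 0 is required for the log transform)
t = (0:5)';
y = [2.1; 2.9; 4.2; 6.1; 8.7; 12.5];

% Model y = c1*exp(c2*t).  Taking logs: log(y) = log(c1) + c2*t,
% which is linear in the unknowns k = log(c1) and c2.
A = [ones(size(t)) t];
z = log(y);
k = (A'*A) \ (A'*z);      % k(1) = log(c1), k(2) = c2

c1 = exp(k(1));
c2 = k(2);

% Note: this minimizes the squared error in log(y), not in y itself.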

Continuation Least Squares – Part 6 QR Factorization

QR Factorization
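
The QR slides themselves are images that did not survive in the transcript. For Exercise 4.3.2 and the Gram-Schmidt programming problem, a minimal sketch of classical Gram-Schmidt producing a reduced QR factorization might look like the following (as a MATLAB function file, say cgs_qr.m; the function name is made up for this sketch).

function [Q, R] = cgs_qr(A)
% Reduced QR factorization A = Q*R of an m x n matrix (m >= n, full column rank)
% by classical Gram-Schmidt: Q has orthonormal columns, R is upper triangular.
    [m, n] = size(A);
    Q = zeros(m, n);
    R = zeros(n, n);
    for j = 1:n
        v = A(:, j);
        for i = 1:j-1
            R(i, j) = Q(:, i)' * A(:, j);   % coefficient of the projection onto q_i
            v = v - R(i, j) * Q(:, i);      % remove that component from column j
        end
        R(j, j) = norm(v);
        Q(:, j) = v / R(j, j);              % normalize what is left
    end
end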

EXAMPLE

Example

MatLab

Applications of the QR Factorization
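
One application relevant to Exercise 4.3.6: once A = QR is available, the least squares problem can be solved without forming A^T A. A minimal sketch, reusing the small illustrative system from earlier:

A = [1 1; 1 -1; 1 1];
b = [2; 1; 3];

[Q, R] = qr(A, 0);        % reduced ("economy-size") QR factorization
x = R \ (Q'*b);           % solve R*x = Q'*b by back substitution

r = b - A*x;              % same least squares solution and residual as before
disp(x)
disp(norm(r))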

Assignment 4

Homework 4 related problems

4.2.1 part 1

4.2.1 part 2

4.3.1

4.3.3 part 1

4.3.3 part 2

4.3.5 part 1