Slide 1 (cs542g, term 1, 2007): Notes

 Simpler right-looking derivation (sorry):

Slide 2: From Last Time

 Solving linear least squares: minimize ||Ax - b||_2.
 Normal equations: A^T A x = A^T b. Potentially unreliable if A is "ill-conditioned" (columns of A are close to being linearly dependent).
 Can we solve the problem more reliably?
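A standard way to quantify that unreliability (a known fact about the normal equations, not spelled out on the slide): forming A^T A squares the condition number,

\[
\kappa_2(A^\top A) = \kappa_2(A)^2,
\]

so solving A^T A x = A^T b can lose roughly twice as many digits of accuracy as the conditioning of A alone would suggest.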

Slide 3: The Best A

 Start by asking: what is the best A possible?
 A^T A = I (the identity matrix), i.e. the columns of A are orthonormal.
 Then the solution is x = A^T b: no system to solve (and the relative error behaves well).
 What if A is not orthonormal?
 Change basis to make it so…
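The one-line reason, implicit on the slide: substituting A^T A = I into the normal equations,

\[
A^\top A\, x = A^\top b \;\Longrightarrow\; x = A^\top b .
\]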

Slide 4: Orthonormalizing A

 Goal: find R so that A = QR, where
   Q is orthonormal,
   R is easy to solve with.
 Classic answer: apply Gram-Schmidt to the columns of A (R encodes the sequence of elementary matrix operations used in GS).

Slide 5: Gram-Schmidt

 Classic formula: q_i = a_i - sum_{j<i} (q_j^T a_i) q_j, normalized to unit length.
 In-depth numerical analysis shows the error (loss of orthogonality) can be bad.
 Use Modified Gram-Schmidt instead:
     q_i = A_{*,i}
     for j = 1:i-1
         q_i = q_i - Q_{*,j} (Q_{*,j}^T q_i)
     q_i = q_i / ||q_i||
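A runnable sketch of the Modified Gram-Schmidt loop above (Python/NumPy chosen for illustration; `mgs_qr` is a hypothetical helper name, not from the slides):

```python
import numpy as np

def mgs_qr(A):
    """Economy QR via Modified Gram-Schmidt; assumes A has full column rank."""
    m, n = A.shape
    Q = np.array(A, dtype=float)    # start with the columns of A
    R = np.zeros((n, n))
    for i in range(n):
        # Subtract components along already-orthonormalized columns one at a
        # time, using the *updated* q_i each pass (this ordering is what makes
        # it "modified" rather than classical Gram-Schmidt).
        for j in range(i):
            R[j, i] = Q[:, j] @ Q[:, i]
            Q[:, i] -= R[j, i] * Q[:, j]
        R[i, i] = np.linalg.norm(Q[:, i])
        Q[:, i] /= R[i, i]
    return Q, R
```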

Slide 6: What is R?

 Since A = QR, we find R = Q^T A.
 Upper triangular, and containing exactly the dot products from Gram-Schmidt.
 Triangular matrices are easy to solve with: good!
 In fact, this gives an alternative to solving regular linear systems: A = QR instead of A = LU.
   Potentially more accurate, but typically slower.
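A minimal sketch of solving least squares this way, with library routines standing in for the factorization (the function name is an assumption):

```python
import numpy as np
from scipy.linalg import solve_triangular

def lstsq_qr(A, b):
    """Solve min ||Ax - b||_2 via an economy QR: A = QR  =>  R x = Q^T b."""
    Q, R = np.linalg.qr(A, mode='reduced')
    return solve_triangular(R, Q.T @ b)   # R is upper triangular: back substitution
```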

Slide 7: Another look at R

 Since A = QR, we have A^T A = R^T Q^T Q R = R^T R.
 That is, R^T is the Cholesky factor of A^T A.
 But Cholesky factorization is not a good way to compute it!
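A small sketch of the comparison the slide is gesturing at: the same R can be obtained from a QR factorization of A or from a Cholesky factorization of A^T A, but the latter starts from a matrix whose condition number is squared. The Vandermonde test matrix is an assumption for illustration, not from the slides:

```python
import numpy as np

# Hypothetical ill-conditioned test matrix: a Vandermonde matrix on
# closely spaced nodes.
t = np.linspace(0.0, 1.0, 12)
A = np.vander(t, 8, increasing=True)

R_qr = np.linalg.qr(A, mode='r')            # R via orthogonal transformations
R_chol = np.linalg.cholesky(A.T @ A).T      # R via Cholesky: A^T A = R^T R

# Both satisfy A^T A = R^T R (up to row signs), but the Cholesky route
# works with A^T A, whose condition number is the square of A's:
print(np.linalg.cond(A), np.linalg.cond(A.T @ A))
```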

Slide 8: Yet another look at R

 There is an even better way to compute R (than Modified Gram-Schmidt): orthogonal transformations.
 Idea: instead of upper-triangular elementary matrices turning A into Q, use orthogonal elementary matrices to turn A into R.
 Two main choices:
   Givens rotations: rotate in selected two dimensions.
   Householder reflections: reflect across a plane.

Slide 9: Givens rotations

 For c^2 + s^2 = 1, a rotation acting only in dimensions j and i (identity elsewhere):

     [  c  s ]
     [ -s  c ]

 Say we want QA to be zero at (i,j): one standard choice (sign conventions vary) pivots on a_jj:
     r = sqrt(a_jj^2 + a_ij^2),   c = a_jj / r,   s = a_ij / r.
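A sketch of building one such rotation explicitly (in practice the rotation is applied to the two affected rows directly rather than formed as a full matrix; `givens_zero` is a hypothetical name):

```python
import numpy as np

def givens_zero(A, i, j):
    """Build a rotation of rows j and i that zeros entry (i, j) of A.

    Uses the diagonal entry A[j, j] as the pivot; assumes the two
    entries are not both zero.
    """
    a, b = A[j, j], A[i, j]
    r = np.hypot(a, b)              # sqrt(a^2 + b^2), computed robustly
    c, s = a / r, b / r             # c^2 + s^2 = 1 by construction
    G = np.eye(A.shape[0])
    G[j, j], G[j, i] = c, s
    G[i, j], G[i, i] = -s, c
    return G                        # (G @ A)[i, j] == 0, up to roundoff
```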

Slide 10: Householder reflections

 For a unit vector v (normal to the plane), the reflection is Q = I - 2 v v^T.
 Choose v to zero out the entries below the diagonal in a column.
 Note: can store the Householder vectors and R in place of A.
   Don't directly form Q; just multiply by the Householder factors when computing Q^T b.
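A minimal sketch of Householder QR in this style, assuming A has full column rank; the sign choice in v follows the usual cancellation-avoiding convention, and the function names are assumptions:

```python
import numpy as np

def householder_qr(A):
    """Return the Householder vectors and R; Q is never formed explicitly."""
    R = np.array(A, dtype=float)
    m, n = R.shape
    vs = []
    for k in range(n):
        x = R[k:, k].copy()
        # Reflect x onto a multiple of e1; sign chosen to avoid cancellation.
        x[0] += np.copysign(np.linalg.norm(x), x[0])
        v = x / np.linalg.norm(x)
        # Apply (I - 2 v v^T) to the trailing block, zeroing column k below
        # the diagonal.
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        vs.append(v)
    return vs, np.triu(R[:n, :n])

def apply_Qt(vs, b):
    """Compute Q^T b by applying the stored reflections in order."""
    y = np.array(b, dtype=float)
    for k, v in enumerate(vs):
        y[k:] -= 2.0 * v * (v @ y[k:])
    return y
```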

Slide 11: Full and Economy QR

 Even if A is rectangular, Givens and Householder implicitly give a big square Q (and rectangular R): called the "full QR".
   But you don't have to form the big Q.
   If you do: the extra columns are an orthonormal basis of the null space of A^T.
 Modified Gram-Schmidt computes only the first k columns of Q (rectangular Q) and gives only a square R: called the "economy QR".
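A short illustration using NumPy's qr, whose mode argument exposes exactly this distinction (the random test matrix is an assumption):

```python
import numpy as np

A = np.random.default_rng(0).standard_normal((6, 3))

Qf, Rf = np.linalg.qr(A, mode='complete')   # full QR: Q is 6x6, R is 6x3
Qe, Re = np.linalg.qr(A, mode='reduced')    # economy QR: Q is 6x3, R is 3x3

# The extra columns of the full Q are an orthonormal basis for the
# null space of A^T, as the slide notes:
print(np.allclose(Qf[:, 3:].T @ A, 0))
```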

Slide 12: Weighted Least Squares

 What if we introduce nonnegative weights (some data points count more than others)?

     min_x  sum_i w_i (Ax - b)_i^2

 Weighted normal equations (W = diag(w_1, …, w_n)):

     A^T W A x = A^T W b

 Can also solve with a QR factorization of W^{1/2} A, since the objective is ||W^{1/2}(Ax - b)||^2.
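A minimal sketch of the QR route, scaling the rows by sqrt(w_i) (function name assumed):

```python
import numpy as np
from scipy.linalg import solve_triangular

def weighted_lstsq(A, b, w):
    """Solve min sum_i w_i (Ax - b)_i^2 via QR of W^(1/2) A; w >= 0 elementwise."""
    sw = np.sqrt(w)
    Q, R = np.linalg.qr(sw[:, None] * A, mode='reduced')  # rows scaled by sqrt(w_i)
    return solve_triangular(R, Q.T @ (sw * b))
```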

Slide 13: Moving Least Squares (MLS)

 Idea: estimate f(x) by fitting a low-degree polynomial to the data points, but weight nearby points more than others.
 Use a weighting kernel W(r): should be big at r = 0 and decay to zero further away.
 At each point x, we have a (small) weighted linear least squares problem:

     min over polynomials p  of  sum_i W(||x - x_i||) (p(x_i) - f_i)^2,    then take f(x) = p(x).
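A 1D sketch of one MLS evaluation; the Gaussian kernel and its width h are assumed choices (the slides only require W to be large at r = 0 and to decay with r), and `mls_eval` is a hypothetical name:

```python
import numpy as np

def mls_eval(x, xi, fi, degree=1, h=1.0):
    """Moving least squares estimate of f at a scalar point x."""
    w = np.exp(-((x - xi) / h) ** 2)                # W(|x - x_i|) for each datum
    V = np.vander(xi, degree + 1, increasing=True)  # rows [1, x_i, x_i^2, ...]
    sw = np.sqrt(w)
    # Weighted least squares fit of the polynomial coefficients:
    c, *_ = np.linalg.lstsq(sw[:, None] * V, sw * fi, rcond=None)
    return np.polynomial.polynomial.polyval(x, c)   # evaluate the local fit at x
```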

Slide 14: Constant Fit MLS

 Instructive to work out the case of zero-degree polynomials (constants).
 Sometimes called Franke interpolation.
 Illustrates the effect of the weighting function:
   How do we force it to interpolate?
   What if we want a local calculation?
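Working the constant case out (a standard derivation; the slide leaves it as an exercise): minimizing \(\sum_i W(\|x - x_i\|)(c - f_i)^2\) over the constant \(c\) gives

\[
f(x) = \frac{\sum_i W(\|x - x_i\|)\, f_i}{\sum_i W(\|x - x_i\|)},
\]

a weighted average of the data. One standard answer to the slide's questions: a kernel with \(W(r) \to \infty\) as \(r \to 0\) (e.g. \(W(r) = 1/r^2\)) forces interpolation at the data points, and a compactly supported kernel makes the calculation local.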

Slide 15: MLS Basis

 Going back to the general least squares problem: the solution x depends linearly on the data b.
 Add or scale data, and the solutions add and scale.
 The same is true for MLS.
 Can solve MLS for data (0,…,0,1,0,…,0) etc. to get a set of basis functions.
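A direct, unoptimized sketch of that recipe, reusing the `mls_eval` helper from the Slide 13 sketch (both names are assumptions):

```python
import numpy as np

def mls_basis(x, xi, degree=1, h=1.0):
    """Evaluate every MLS basis function phi_i at x by solving the MLS
    problem once per unit data vector e_i = (0, ..., 0, 1, 0, ..., 0)."""
    n = len(xi)
    I = np.eye(n)
    return np.array([mls_eval(x, xi, I[i], degree, h) for i in range(n)])

# By linearity in the data, the full fit is then f(x) = sum_i phi_i(x) * f_i.
```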