cs542g-term1-2007: Presentation transcript

Slide 1: Notes
• Assignment 1 is out (questions?)

Slide 2: Linear Algebra
• Last class: we reduced the problem of “optimally” interpolating scattered data to solving a system of linear equations
• This is typical: often almost all of the computational work in a scientific computing code is in linear algebra operations

Slide 3: Basic Definitions
• Matrix/vector notation
• Dot product, outer product
• Vector norms
• Matrix norms
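
As a concrete illustration (NumPy is my choice here; the slides don't prescribe a language), these definitions map one-to-one onto library calls:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])
    A = np.array([[2.0, 1.0], [1.0, 3.0]])

    print(x @ y)                      # dot product: sum of x_i * y_i
    print(np.outer(x, y))             # outer product: the 3x3 matrix x y^T
    print(np.linalg.norm(x))          # vector 2-norm
    print(np.linalg.norm(x, np.inf))  # vector max-norm
    print(np.linalg.norm(A, 2))       # matrix 2-norm (largest singular value)
    print(np.linalg.norm(A, 'fro'))   # Frobenius norm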

Slide 4: BLAS
• Many common matrix/vector operations have been standardized into an API called the BLAS (Basic Linear Algebra Subprograms):
  - Level 1: vector operations (copy, scale, dot, add, norms, …)
  - Level 2: matrix-vector operations (multiply, triangular solve, …)
  - Level 3: matrix-matrix operations (multiply, triangular solve, …)
• FORTRAN bias, but callable from other languages
• Goals: as fast as possible, but still safe/accurate
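
The BLAS itself is a Fortran/C API, but thin wrappers exist in most languages. A sketch using SciPy's scipy.linalg.blas bindings, one call per level:

    import numpy as np
    from scipy.linalg import blas

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])
    A = np.array([[4.0, 3.0], [6.0, 3.0]])
    B = np.eye(2)

    d = blas.ddot(x, y)                           # Level 1: dot product
    z = blas.daxpy(x, y, a=2.0)                   # Level 1: 2*x + y
    v = blas.dgemv(1.0, A, np.array([1.0, 2.0]))  # Level 2: matrix-vector product
    C = blas.dgemm(1.0, A, B)                     # Level 3: matrix-matrix product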

Slide 5: Speed in BLAS
• In each level: multithreading, prefetching, vectorization, loop unrolling, etc.
• In level 2, and especially in level 3: blocking
  - Operate on sub-blocks of the matrix that fit the memory architecture well
• General goal: if it's easy to phrase an operation in terms of BLAS, you get speed and safety for free; the higher the level, the better
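
A toy blocked matrix multiply shows the idea; a real Level 3 BLAS tunes the block size to the cache hierarchy and does far more, so this is a sketch of the concept only:

    import numpy as np

    def blocked_matmul(A, B, bs=64):
        """Compute C = A @ B one bs-by-bs block at a time."""
        n = A.shape[0]
        C = np.zeros((n, n))
        for i in range(0, n, bs):
            for j in range(0, n, bs):
                for k in range(0, n, bs):
                    # Each update touches only sub-blocks that can stay in cache
                    C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
        return C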

Slide 6: LAPACK
• The BLAS only solves triangular systems
  - Forward or backward substitution
• LAPACK is a higher-level API for matrix operations:
  - Solving linear systems
  - Solving linear least squares problems
  - Solving eigenvalue problems
• Built on the BLAS, with blocking in mind to keep performance high
• Biggest advantage: safety
  - Designed to handle difficult problems gracefully
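
From Python, the LAPACK drivers are reachable through scipy.linalg (a sketch; the routine names in the comments are the underlying double-precision LAPACK drivers):

    import numpy as np
    from scipy import linalg

    A = np.array([[4.0, 2.0], [2.0, 3.0]])
    b = np.array([1.0, 2.0])

    x = linalg.solve(A, b)                     # linear system (*gesv)

    M = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
    rhs = np.array([1.0, 2.0, 2.0])
    sol, res, rank, sv = linalg.lstsq(M, rhs)  # least squares (*gelsd)

    w, V = linalg.eigh(A)                      # symmetric eigenproblem (*syev family)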

Slide 7: Specializations
• When solving a linear system, the first question to ask: what sort of system is it?
• Many properties to consider:
  - Single precision or double?
  - Real or complex?
  - Invertible or (nearly) singular?
  - Symmetric/Hermitian?
  - Definite or indefinite?
  - Dense or sparse or specially structured?
  - Multiple right-hand sides?
• LAPACK/BLAS take advantage of many of these (sparse matrices are the big exception…)
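
scipy.linalg.solve exposes a few of these distinctions through its assume_a flag, routing to the appropriate LAPACK driver (sketch):

    import numpy as np
    from scipy import linalg

    A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
    b = np.array([1.0, 2.0])

    x_gen = linalg.solve(A, b)                   # general: LU with pivoting (*gesv)
    x_sym = linalg.solve(A, b, assume_a='sym')   # symmetric indefinite: LDL^T (*sysv)
    x_pos = linalg.solve(A, b, assume_a='pos')   # SPD: Cholesky (*posv)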

Slide 8: Accuracy
• Before jumping into algorithms: how accurate can we hope to be in solving a linear system?
• Key idea: backward error analysis
  - Assume the calculated answer is the exact solution of a perturbed problem
• The condition number of a problem: how much errors in the data get amplified in the solution
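
A small experiment (a sketch, with a random test matrix) makes the distinction concrete: the backward error of a stable solver is near machine epsilon, while the forward error can be larger by roughly a factor of the condition number:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 100))
    x_true = rng.standard_normal(100)
    b = A @ x_true

    x_hat = np.linalg.solve(A, b)
    # Backward error: how big a perturbation of the data would make x_hat exact
    backward = np.linalg.norm(b - A @ x_hat) / (np.linalg.norm(A) * np.linalg.norm(x_hat))
    # Forward error: how far x_hat is from the true solution
    forward = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(backward, forward, np.linalg.cond(A))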

Slide 9: Condition Number
• Sometimes we can estimate the condition number of a matrix a priori
• Special case: for a symmetric matrix, the 2-norm condition number is the ratio of the extreme eigenvalue magnitudes: k2(A) = max_i |lambda_i| / min_i |lambda_i|
• LAPACK also provides cheap estimates
  - Try to construct a vector x with ||x|| = 1 that comes close to maximizing ||A^{-1} x||
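
For a symmetric example the eigenvalue-ratio formula and the 2-norm condition number agree (NumPy sketch):

    import numpy as np

    A = np.array([[2.0, 0.0], [0.0, 1e-8]])  # symmetric, eigenvalues 2 and 1e-8
    lam = np.linalg.eigvalsh(A)
    print(abs(lam).max() / abs(lam).min())   # eigenvalue magnitude ratio: 2e8
    print(np.linalg.cond(A, 2))              # 2-norm condition number: also 2e8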

Slide 10: Gaussian Elimination
• Let's start with the simplest unspecialized algorithm: Gaussian Elimination (GE)
• Assume the matrix is invertible, but otherwise nothing special is known about it
• GE is simply row reduction to upper triangular form, followed by backward substitution
  - Permuting rows if we run into a zero pivot
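
A minimal sketch of GE, using partial pivoting (always swapping in the largest remaining pivot, which is numerically safer than only permuting when a pivot is exactly zero):

    import numpy as np

    def gaussian_elimination(A, b):
        """Solve Ax = b by row reduction with partial pivoting,
        then backward substitution. Illustration only."""
        A = A.astype(float).copy()
        b = b.astype(float).copy()
        n = len(b)
        for k in range(n - 1):
            # Partial pivoting: swap in the row with the largest pivot
            p = k + np.argmax(abs(A[k:, k]))
            A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        # Backward substitution on the upper triangular system
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x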

Slide 11: LU Factorization
• Each step of row reduction is multiplication by an elementary matrix
• Gathering these together, we find GE is essentially a matrix factorization: A = LU, where L is lower triangular (with unit diagonal) and U is upper triangular
• Solving Ax = b by GE is then two triangular solves:
  - Ly = b (forward substitution)
  - Ux = y (backward substitution)
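
In practice one factors once and then reuses the factors. A sketch with SciPy, showing both the packaged interface and the two triangular solves spelled out:

    import numpy as np
    from scipy import linalg

    A = np.array([[4.0, 3.0], [6.0, 3.0]])
    b = np.array([10.0, 12.0])

    # Factor once with partial pivoting...
    lu, piv = linalg.lu_factor(A)
    # ...then solve cheaply, possibly for many right-hand sides
    x = linalg.lu_solve((lu, piv), b)

    # Or spell out the two triangular solves explicitly (A = P L U):
    P, L, U = linalg.lu(A)
    y = linalg.solve_triangular(L, P.T @ b, lower=True)  # Ly = P^T b
    x2 = linalg.solve_triangular(U, y)                   # Ux = y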

Slide 12: Block Approach to LU
• Rather than get bogged down in the details of GE (hard to see the forest for the trees), partition the equation A = LU into 2x2 blocks, with the diagonal blocks of L and U triangular:
  - A11 = L11 U11;  A12 = L11 U12;  A21 = L21 U11;  A22 = L21 U12 + L22 U22
• These give natural formulas for the algorithms
• Extends to block algorithms
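
A sketch of the resulting right-looking block LU (no pivoting, for clarity; real implementations pivot within each panel):

    import numpy as np
    from scipy.linalg import solve_triangular

    def block_lu(A, bs=2):
        """Right-looking block LU without pivoting (illustration only).
        On return, the strict lower triangle holds L (unit diagonal
        implied) and the upper triangle holds U."""
        A = A.astype(float).copy()
        n = A.shape[0]
        for k in range(0, n, bs):
            e = min(k + bs, n)
            # A11 = L11 U11: unblocked LU of the diagonal block
            for j in range(k, e - 1):
                A[j+1:e, j] /= A[j, j]
                A[j+1:e, j+1:e] -= np.outer(A[j+1:e, j], A[j, j+1:e])
            if e < n:
                # U12 = L11^{-1} A12 and L21 = A21 U11^{-1}: triangular solves
                A[k:e, e:] = solve_triangular(A[k:e, k:e], A[k:e, e:],
                                              lower=True, unit_diagonal=True)
                A[e:, k:e] = solve_triangular(A[k:e, k:e], A[e:, k:e].T,
                                              trans='T').T
                # A22 := A22 - L21 U12: the Schur complement update
                A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
        return A

Almost all the flops land in the final Schur complement update, which is a single Level 3 matrix-matrix product; that is why blocking recovers near-peak performance.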

Slide 13: Cholesky Factorization
• If A is symmetric positive definite, we can cut the work in half: A = L L^T
  - L is lower triangular
• If A is symmetric but indefinite, we may still have the modified Cholesky factorization: A = L D L^T
  - L is unit lower triangular
  - D is diagonal
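
Both factorizations are available in SciPy; a sketch (note that in the indefinite case the returned D may contain 2x2 blocks, from Bunch-Kaufman-style pivoting):

    import numpy as np
    from scipy import linalg

    A = np.array([[4.0, 2.0], [2.0, 3.0]])  # symmetric positive definite
    L = linalg.cholesky(A, lower=True)       # A = L L^T
    print(np.allclose(L @ L.T, A))           # True

    B = np.array([[0.0, 1.0], [1.0, 0.0]])   # symmetric but indefinite
    lu, d, perm = linalg.ldl(B)              # B = L D L^T; D may have 2x2 blocks
    print(np.allclose(lu @ d @ lu.T, B))     # True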