Network Systems Lab. Korea Advanced Institute of Science and Technology No.1 Maximum Norms & Nonnegative Matrices

 Weighted maximum norm. Given a vector w = (w_1, …, w_n) with w_i > 0 for every i, the weighted maximum norm is defined by ‖x‖_w = max_i |x_i| / w_i. For w = (1, …, 1) it reduces to the ordinary maximum norm ‖x‖_∞ = max_i |x_i|.

e.g.) [Figure: in the (x_1, x_2)-plane, the unit ball w.r.t. ‖·‖_∞ is the square with corners (±1, ±1); the unit ball w.r.t. ‖·‖_w is the rectangle with corners (±w_1, ±w_2).]
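A minimal numerical sketch of the definition (NumPy assumed throughout these sketches; the helper name weighted_max_norm is ours, not from the slides):

```python
import numpy as np

def weighted_max_norm(x, w):
    """Weighted maximum norm ||x||_w = max_i |x_i| / w_i, for w > 0."""
    return np.max(np.abs(x) / w)

x = np.array([3.0, -4.0])
w = np.array([2.0, 5.0])
print(weighted_max_norm(x, np.ones(2)))  # ordinary max norm: 4.0
print(weighted_max_norm(x, w))           # max(3/2, 4/5) = 1.5
```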

Network Systems Lab. Korea Advanced Institute of Science and Technology No.2

 The induced matrix norm: ‖M‖_w = max_{x ≠ 0} ‖Mx‖_w / ‖x‖_w = max_i (1/w_i) Σ_j |m_ij| w_j.

 Proposition 6.2
(a) M ≥ 0 iff M maps nonnegative vectors into nonnegative vectors.
(b) If M ≥ 0, then ‖M‖_w = ‖Mw‖_w; that is, the maximum in the definition is attained at the weight vector w itself.
(c) ‖M‖_w = ‖ |M| ‖_w, where |M| is the matrix with entries |m_ij|.
(d) Let M ≥ 0. Then, for any λ > 0, ‖M‖_w ≤ λ iff Mw ≤ λw.
(e) ρ(M) ≤ ‖M‖_w.
(f) If 0 ≤ A ≤ M entrywise, then ‖A‖_w ≤ ‖M‖_w.

[Figure: for M ≥ 0, the unit ball with corners ±w in the (x_1, x_2)-plane, its image with corners ±Mw, and the scaled ball with corners ±λw, illustrating (d).]
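A short sketch (helper names are ours) computing the induced norm from the row formula and checking parts (b) and (d) on a small nonnegative matrix:

```python
import numpy as np

def induced_max_norm(M, w):
    """Induced weighted max norm: ||M||_w = max_i (1/w_i) * sum_j |m_ij| w_j."""
    return np.max((np.abs(M) @ w) / w)

M = np.array([[0.2, 0.3],
              [0.1, 0.4]])   # a nonnegative matrix
w = np.array([1.0, 2.0])

norm = induced_max_norm(M, w)
# (b): for M >= 0 the norm is attained at w itself.
print(np.isclose(norm, np.max(M @ w / w)))    # True
# (d): ||M||_w <= lam  iff  M w <= lam w  (checked at lam = norm).
print(np.all(M @ w <= norm * w + 1e-12))      # True
```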

Network Systems Lab. Korea Advanced Institute of Science and Technology No.3

 From an n x n matrix M to a graph G = (N, A):
 N = {1, …, n}
 A = {(i, j) | i ≠ j and m_ij ≠ 0}

 Definition 6.1 An n x n matrix M (n ≥ 2) is called irreducible if for every i, j ∈ N there exists a positive path from i to j in the graph G.

e.g.) [Figure: the graph on nodes 1 and 2 with arcs (1, 2) and (2, 1); the corresponding matrix is irreducible.]
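A sketch of the definition as a reachability test (the name is_irreducible is ours): M is irreducible iff every node of G can reach every other node.

```python
import numpy as np

def is_irreducible(M):
    """M is irreducible iff, in the graph with arcs (i, j) for m_ij != 0,
    every node can reach every other node. Diagonal entries create no arcs
    (the arc set requires i != j), and adding self-loops below does not
    change reachability between distinct nodes."""
    n = M.shape[0]
    reach = ((M != 0) | np.eye(n, dtype=bool)).astype(int)
    for _ in range(n):                        # repeated squaring: transitive closure
        reach = np.minimum(reach @ reach, 1)
    return bool((reach > 0).all())

print(is_irreducible(np.array([[0, 1], [1, 0]])))  # True: arcs 1->2 and 2->1
print(is_irreducible(np.array([[1, 1], [0, 1]])))  # False: no path from 2 to 1
```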

Network Systems Lab. Korea Advanced Institute of Science and Technology No.4

 Proposition 6.5 (Brouwer Fixed Point Theorem) Consider the unit simplex S = {x ∈ R^n | x ≥ 0 and Σ_i x_i = 1}. If f : S → S is a continuous function, then there exists some w ∈ S such that f(w) = w.

e.g.) [Figure: for n = 2, the unit simplex S is the segment joining (1, 0) and (0, 1).]

Network Systems Lab. Korea Advanced Institute of Science and Technology No.5

 Proposition 6.6 (Perron-Frobenius Theorem) Let M ≥ 0.
(a) If M is irreducible, then ρ(M) is an eigenvalue of M and there exists some w > 0 such that Mw = ρ(M)w. Furthermore, such a w is unique up to a scalar multiple, i.e., if some v satisfies Mv = ρ(M)v, then v = αw for some scalar α. Finally, ρ(M) > 0.
(b) ρ(M) is an eigenvalue of M and there exists some w ≥ 0, w ≠ 0, such that Mw = ρ(M)w.
(c) For every ε > 0, there exists some w > 0 such that ‖M‖_w ≤ ρ(M) + ε.

Proof) left as an exercise.
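A hedged sketch of how the Perron pair can be approximated numerically; power iteration on the simplex is our choice here, not a method named on the slides. Iterating with M + I (same eigenvectors, eigenvalues shifted by 1) avoids oscillation when M is periodic, as in the example below.

```python
import numpy as np

def perron_pair(M, iters=1000, tol=1e-12):
    """Approximate (rho(M), w) with M w = rho(M) w and w > 0, for an
    irreducible nonnegative M, by power iteration normalized onto the
    unit simplex (cf. the Brouwer-based view: x -> Bx / sum(Bx) maps S to S)."""
    n = M.shape[0]
    B = M + np.eye(n)                 # shift: rho(B) = rho(M) + 1, same eigenvectors
    w = np.full(n, 1.0 / n)           # start inside the unit simplex
    for _ in range(iters):
        v = B @ w
        v = v / v.sum()               # renormalize onto the simplex
        done = np.max(np.abs(v - w)) < tol
        w = v
        if done:
            break
    rho = (M @ w).sum() / w.sum()     # since M w ~ rho(M) w and w > 0
    return rho, w

M = np.array([[0.0, 2.0],
              [3.0, 0.0]])            # irreducible, nonnegative, periodic
rho, w = perron_pair(M)
print(rho, w)                         # rho ~ sqrt(6) ~ 2.449, w > 0
```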

Network Systems Lab. Korea Advanced Institute of Science and Technology No.6

e.g.) [Figure: a worked two-node example (nodes 1 and 2) illustrating Proposition 6.6.]

Network Systems Lab. Korea Advanced Institute of Science and Technology No.7 Corollaries

 Corollary 6.1
 Let M ≥ 0. The following are equivalent:
 (a) ρ(M) < 1;
 (b) there exists some w > 0 such that Mw < w;
 (c) there exists some w > 0 such that ‖M‖_w < 1.

 Corollary 6.2
 Given any square matrix M, there exists some w > 0 such that ‖M‖_w < 1 iff ρ(|M|) < 1.

 Corollary 6.3
 Given any square matrix M, inf_{w > 0} ‖M‖_w = ρ(|M|).
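A small numerical check of Corollary 6.3 (a sketch; the crude grid search over weight ratios is ours): minimizing ‖M‖_w over w > 0 approaches ρ(|M|) and never falls below it.

```python
import numpy as np

M = np.array([[0.3, -0.5],
              [0.2,  0.1]])
rho_abs = np.max(np.abs(np.linalg.eigvals(np.abs(M))))

def induced_max_norm(M, w):
    return np.max((np.abs(M) @ w) / w)

# crude search over weight vectors of the form w = (1, r), r > 0
ratios = np.linspace(0.1, 10.0, 2000)
best = min(induced_max_norm(M, np.array([1.0, r])) for r in ratios)
print(rho_abs, best)   # best is close to, and never below, rho(|M|)
```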

Network Systems Lab. Korea Advanced Institute of Science and Technology No.8 Convergence analysis using maximum norms

 Def 6.2
 A square matrix A with entries a_ij is (row) diagonally dominant if Σ_{j ≠ i} |a_ij| < |a_ii| for every i.

 Prop 6.7
 If A is row diagonally dominant, then the Jacobi method for solving Ax = b converges.

proof) The Jacobi iteration matrix M has entries m_ij = −a_ij / a_ii for j ≠ i and m_ii = 0. Therefore, for each i, Σ_j |m_ij| = Σ_{j ≠ i} |a_ij| / |a_ii| < 1 by diagonal dominance. Therefore ‖M‖_∞ < 1, so ρ(M) < 1 and the Jacobi iteration converges. Q.E.D
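A runnable sketch of the Jacobi method under the hypothesis of Prop 6.7 (the name jacobi and the example system are ours):

```python
import numpy as np

def jacobi(A, b, iters=200):
    """Jacobi iteration x := x - D^{-1}(A x - b); converges whenever A is
    row diagonally dominant (Prop 6.7), since then ||M||_inf < 1."""
    d = np.diag(A)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = x + (b - A @ x) / d       # all components updated simultaneously
    return x

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])       # row diagonally dominant
b = np.array([6.0, 8.0, 4.0])
x = jacobi(A, b)
print(np.allclose(A @ x, b))          # True
```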

Network Systems Lab. Korea Advanced Institute of Science and Technology No.9

 Prop. 6.8 Consider an n x n matrix M ≥ 0 associated with an iteration x := Mx + b. Let M_GS be the corresponding Gauss-Seidel iteration matrix, that is, the iteration matrix obtained if the components in the original iteration are updated one at a time. Suppose that ρ(M) < 1. Then ρ(M_GS) ≤ ρ(M) < 1.

Proof) Assume that ρ(M) < 1. Let us fix some λ such that ρ(M) < λ < 1. By Prop 6.6(c) & Prop 6.2(b), there exists some w > 0 such that ‖M‖_w ≤ λ. Therefore Mw ≤ λw (by Prop. 6.2 (d)). Equivalently, for all i,

Σ_j m_ij w_j ≤ λ w_i.   — (*)

Consider now some x(0) such that ‖x(0)‖_w ≤ 1, i.e., |x_j(0)| ≤ w_j for all j, and let x(1) be the result of one Gauss-Seidel sweep starting from x(0). (Note that M_GS is not necessarily nonnegative.)

Network Systems Lab. Korea Advanced Institute of Science and Technology No.10

We will prove by induction on i that |x_i(1)| ≤ λ w_i. Assuming that |x_j(1)| ≤ λ w_j for j < i, we have

|x_i(1)| ≤ Σ_{j<i} m_ij |x_j(1)| + Σ_{j≥i} m_ij |x_j(0)| ≤ Σ_{j<i} m_ij λ w_j + Σ_{j≥i} m_ij w_j ≤ Σ_j m_ij w_j ≤ λ w_i,

using λ < 1, M ≥ 0, and (*). Therefore, ‖x(1)‖_w ≤ λ for every x(0) satisfying ‖x(0)‖_w ≤ 1. This implies that ‖M_GS‖_w ≤ λ, hence ρ(M_GS) ≤ λ; since λ can be taken arbitrarily close to ρ(M), we get ρ(M_GS) ≤ ρ(M). Q.E.D.

Prop. 6.8 implies that if the Jacobi iteration with a nonnegative iteration matrix converges, then the corresponding Gauss-Seidel iteration converges at least as fast.
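A numerical illustration of Prop 6.8 (a sketch; the formula M_GS = (I − L)⁻¹U, with L the strictly lower triangular part of M and U the rest, is the standard closed form of the Gauss-Seidel iteration matrix for x := Mx + b):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.uniform(0.0, 0.2, size=(4, 4))   # nonnegative; row sums <= 0.8, so rho(M) < 1
L = np.tril(M, k=-1)                     # components j < i use updated values
U = M - L                                # components j >= i use old values
M_gs = np.linalg.solve(np.eye(4) - L, U)

rho = lambda X: np.max(np.abs(np.linalg.eigvals(X)))
print(rho(M), rho(M_gs))                 # rho(M_gs) <= rho(M) < 1
```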

Network Systems Lab. Korea Advanced Institute of Science and Technology No.11

 Prop. 6.9 (Stein-Rosenberg Theorem)
 Consider the system Ax = b, where a_ii > 0 for every i and a_ij ≤ 0 for all i ≠ j. (This implies that the Jacobi iteration matrix M is given by m_ij = −a_ij / a_ii ≥ 0 for i ≠ j and m_ii = 0 for all i. That is, M ≥ 0.)
 (a) If ρ(M) < 1, then ρ(M_GS) ≤ ρ(M) < 1. — restatement of Prop. 6.8
 (b) If ρ(M) ≥ 1, then ρ(M_GS) ≥ ρ(M) ≥ 1.

Proof) left as an exercise.

Network Systems Lab. Korea Advanced Institute of Science and Technology No.12

 Prop. 6.8 implies that for nonnegative iteration matrices, if a Jacobi algorithm converges, then the corresponding Gauss-Seidel iteration also converges, and its convergence rate is no worse than that of the Jacobi algorithm.

 Notice that the proofs of Prop. 6.8 and Prop. 6.9 remain valid when different updating orders of the components are considered. Nonnegative matrices possess some intrinsic robustness w.r.t. the order of updates! This is the key to asynchronous algorithms.

Network Systems Lab. Korea Advanced Institute of Science and Technology No.13 Convergence Analysis Using a Quadratic Cost Function

 Consider Ax = b, where A is a symmetric positive definite matrix.

 Solve Ax = b (it has a unique solution since A is invertible): find x* satisfying Ax* = b.

 Define a cost function F(x) = (1/2) x'Ax − x'b.

 F is a strictly convex function (A is positive definite, by Prop. A.40 (d)). x* minimizes F iff ∇F(x*) = Ax* − b = 0, i.e., Ax* = b.
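A sketch verifying the equivalence numerically (the 2 x 2 example is ours): the stationary point of F is exactly the solution of Ax = b, and it is a strict minimum.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])            # symmetric positive definite
b = np.array([1.0, 1.0])

F = lambda x: 0.5 * x @ A @ x - x @ b
grad = lambda x: A @ x - b            # gradient of the quadratic cost

x_star = np.linalg.solve(A, b)
print(np.allclose(grad(x_star), 0))                    # True: stationary point
print(F(x_star) < F(x_star + np.array([0.1, -0.2])))   # True: a strict minimum
```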

Network Systems Lab. Korea Advanced Institute of Science and Technology No.14

 Assume that A is a symmetric positive definite matrix.

 Def. A.11 An n x n square matrix A is called positive definite if x'Ax is real and positive for every x ≠ 0. It is called nonnegative definite if x'Ax is real and nonnegative for every x.

 Prop. A.26
 (a) For any real matrix A, the matrix A'A is symmetric and nonnegative definite. It is positive definite if and only if A is nonsingular.
 (b) A square symmetric real matrix is nonnegative definite (positive definite) iff all of its eigenvalues are nonnegative (positive).
 (c) The inverse of a symmetric positive definite matrix is symmetric and positive definite.
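A quick numerical check of Prop. A.26 (a sketch; the example matrices are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
G = A.T @ A                                     # Prop A.26(a)
print(np.allclose(G, G.T))                      # True: symmetric
print(np.all(np.linalg.eigvalsh(G) >= -1e-12))  # True: nonnegative definite, via (b)

B = np.array([[1.0, 2.0],
              [2.0, 4.0]])                      # singular
print(np.linalg.eigvalsh(B.T @ B))              # one zero eigenvalue:
                                                # nonnegative but not positive definite
```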

Network Systems Lab. Korea Advanced Institute of Science and Technology No.15

 The meaning of the Gauss-Seidel method (and SOR) in terms of the cost function F: Gauss-Seidel can be viewed as a coordinate descent method minimizing F, updating one coordinate x_i at a time by minimizing F with respect to x_i while the remaining coordinates are held fixed.
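A sketch of one Gauss-Seidel sweep written explicitly as coordinate descent on F (helper names are ours); setting dF/dx_i = (Ax)_i − b_i = 0 for x_i recovers the Gauss-Seidel update, and each sweep can only decrease the cost:

```python
import numpy as np

def gauss_seidel_sweep(A, b, x):
    """One Gauss-Seidel sweep as coordinate descent on F(x) = (1/2)x'Ax - x'b:
    x_i is set to the minimizer of F over the i-th coordinate, with later
    coordinates seeing the freshly updated values (in-place update)."""
    for i in range(len(b)):
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])    # symmetric positive definite
b = np.array([1.0, 1.0])
F = lambda x: 0.5 * x @ A @ x - x @ b

x = np.zeros(2)
for _ in range(25):
    f_before = F(x)
    x = gauss_seidel_sweep(A, b, x)
    assert F(x) <= f_before + 1e-15       # each sweep decreases the cost
print(np.allclose(A @ x, b))              # True
```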

Network Systems Lab. Korea Advanced Institute of Science and Technology No.16

 Prop. 6.10 Let A be symmetric and positive definite, and let x* be the solution of Ax = b.
 (a) If 0 < γ < 2, then the sequence {x(t)} generated by the SOR algorithm converges to x*.
 (b) If γ ≤ 0 or γ ≥ 2, then for every choice of x(0) different from x*, the sequence generated by the SOR algorithm does not converge to x*.

 Prop. 6.11 If A is symmetric and positive definite and if the stepsize γ is sufficiently small, then the JOR and Richardson's algorithms converge to the solution of Ax = b.

 Both are special cases of Prop. 2.1 and Prop. 2.2 of Section 3.2.
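A sketch of SOR with relaxation parameter γ (the name sor and the example are ours), illustrating convergence for γ inside (0, 2):

```python
import numpy as np

def sor(A, b, gamma, iters=200):
    """SOR for symmetric positive definite A: each Gauss-Seidel coordinate
    update is relaxed by the factor gamma; by Prop 6.10 the iteration
    converges exactly when 0 < gamma < 2."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        for i in range(len(b)):
            gs_i = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
            x[i] = (1 - gamma) * x[i] + gamma * gs_i
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(np.allclose(A @ sor(A, b, gamma=0.8), b))   # True
print(np.allclose(A @ sor(A, b, gamma=1.5), b))   # True
# with gamma outside (0, 2), e.g. gamma = 2.5, the iterates diverge (Prop 6.10(b))
```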

Network Systems Lab. Korea Advanced Institute of Science and Technology No.17 Conjugate Gradient Method

 Goal: to accelerate the convergence of the classical iterative methods.

 Consider Ax = b.
- Assume that A is n x n, symmetric, and positive definite.
- If A is not, consider the equivalent problem A'Ax = A'b. Then A'A is symmetric and positive definite (by Prop. A.26 (a), provided A is nonsingular).

 For convenience, assume b = 0, i.e., the solution is x* = 0.

Network Systems Lab. Korea Advanced Institute of Science and Technology No.18

 The cost function: F(x) = (1/2) x'Ax.

 An iteration of the method has the general form x(t+1) = x(t) + γ(t) s(t), where
- s(t) is a direction of update;
- γ(t) is a scalar step size defined by the line minimization γ(t) = arg min_γ F(x(t) + γ s(t)), which gives γ(t) = −g(t)'s(t) / (s(t)'As(t)).

 Let g(t) = ∇F(x(t)) = Ax(t).

Network Systems Lab. Korea Advanced Institute of Science and Technology No.19

 Steepest Descent Method: s(t) = −g(t).

 Conjugate Gradient Method: s(0) = −g(0), and s(t) = −g(t) + β(t) s(t−1), where β(t) = ‖g(t)‖² / ‖g(t−1)‖².

 Prop. For the conjugate gradient method, the following hold: the algorithm terminates after at most n steps; that is, there exists some t ≤ n such that g(t) = 0 and x(t) = 0.
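A self-contained sketch of the conjugate gradient iteration as described above (the name conjugate_gradient is ours). The slides take b = 0 for convenience; the sketch keeps a general b, so g(t) = Ax(t) − b and the method terminates at the solution of Ax = b rather than at 0:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12):
    """Conjugate gradient for symmetric positive definite A. Directions s(t)
    are built from gradients g(t) = Ax(t) - b and are mutually A-conjugate;
    in exact arithmetic the method terminates in at most n steps."""
    x = np.zeros_like(b, dtype=float)
    g = A @ x - b                      # gradient of F(x) = (1/2)x'Ax - x'b
    s = -g
    for _ in range(len(b)):
        if np.linalg.norm(g) < tol:
            break
        As = A @ s
        gamma = (g @ g) / (s @ As)     # exact line minimization along s
        x = x + gamma * s
        g_new = g + gamma * As
        beta = (g_new @ g_new) / (g @ g)
        s = -g_new + beta * s          # next direction, A-conjugate to the old ones
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))           # True after at most n = 2 steps
```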

Network Systems Lab. Korea Advanced Institute of Science and Technology No.20

 Geometric interpretation
- {s(t)} is mutually A-conjugate, that is, s(t)'As(r) = 0 if t ≠ r.
- If A = I, then s(t)'s(r) = 0 if t ≠ r.

[Figure: trajectories of steepest descent (zig-zagging) and conjugate gradient from x(0), shown both for A = I and for a general positive definite symmetric A.]
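A sketch checking the A-conjugacy claim numerically (the 3 x 3 example is ours), recording the directions s(t) generated by the CG recursion from the previous sketch:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])       # symmetric positive definite
b = np.array([1.0, 2.0, 3.0])

x = np.zeros(3)
g = A @ x - b
s = -g
dirs = []
for _ in range(3):                    # n = 3 CG steps
    dirs.append(s)
    As = A @ s
    gamma = -(g @ s) / (s @ As)       # line-minimization step size
    x = x + gamma * s
    g_new = g + gamma * As
    s = -g_new + ((g_new @ g_new) / (g @ g)) * s
    g = g_new

off = max(abs(dirs[t] @ A @ dirs[r]) for t in range(3) for r in range(3) if t != r)
print(off)                            # ~ 1e-15: the directions are mutually A-conjugate
```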