The Sinkhorn-Knopp Algorithm and Fixed Point Problem

Presentation transcript:

• The Sinkhorn-Knopp Algorithm and Fixed Point Problem
• Solutions for 2 × 2 and special n × n cases
• Circulant matrices for 3 × 3 case
• Ongoing work

• In the 1960s Sinkhorn and Knopp developed an algorithm which transforms any positive matrix A into a doubly stochastic matrix D1·A·D2 by pre- and post-multiplication by diagonal matrices D1 = diag((Ax)^(-1)) and D2 = diag(x), where x is a solution to the fixed point equation x = (A^T(Ax)^(-1))^(-1) and (-1) denotes the entry-wise inverse.
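
One way to realize the algorithm numerically is to iterate the fixed point map directly. The sketch below is a minimal illustration, assuming the fixed point equation in the form written above; the positive 3 × 3 matrix is an arbitrary example, not one taken from the slides.

```python
import numpy as np

def sinkhorn_knopp(A, iters=1000, tol=1e-12):
    """Iterate x <- (A^T (A x)^(-1))^(-1) starting from the all-ones vector.

    For a strictly positive matrix A the iteration converges, and
    D1 @ A @ D2 with D1 = diag((A x)^(-1)), D2 = diag(x) is doubly stochastic.
    """
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x_new = 1.0 / (A.T @ (1.0 / (A @ x)))
        done = np.max(np.abs(x_new - x)) < tol
        x = x_new
        if done:
            break
    return np.diag(1.0 / (A @ x)), np.diag(x), x

# Arbitrary positive example (not from the slides).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 1.0]])
D1, D2, x = sinkhorn_knopp(A)
S = D1 @ A @ D2
print(np.round(S.sum(axis=1), 6))  # row sums    -> all ~1
print(np.round(S.sum(axis=0), 6))  # column sums -> all ~1
```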

• Solutions to the fixed point equation for a first 2 × 2 example:

or any other non-zero multiple thereof.
• Solutions to the fixed point equation for a second 2 × 2 example:
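
As a stand-in for the specific examples (which are not reproduced here), the sketch below uses a generic positive 2 × 2 matrix [[a, b], [c, d]]. Assuming the fixed point equation in the form given above, its solutions work out to the non-zero multiples of (√(bd), ±√(ac)); the check also previews the claims on the next slide, since both solutions give unit row and column sums but only the all-positive one yields a doubly stochastic matrix.

```python
import numpy as np

def residual(A, x):
    """Entry-wise residual of the fixed point equation A^T (A x)^(-1) = x^(-1)."""
    return A.T @ (1.0 / (A @ x)) - 1.0 / x

def scaled(A, x):
    """The scaled matrix D1 A D2 with D1 = diag((A x)^(-1)) and D2 = diag(x)."""
    return np.diag(1.0 / (A @ x)) @ A @ np.diag(x)

# Generic positive 2 x 2 matrix [[a, b], [c, d]]; the values are illustrative only.
a, b, c, d = 2.0, 3.0, 5.0, 7.0
A = np.array([[a, b], [c, d]])

x_plus  = np.array([np.sqrt(b * d),  np.sqrt(a * c)])
x_minus = np.array([np.sqrt(b * d), -np.sqrt(a * c)])

print(residual(A, x_plus))         # ~0: a solution
print(residual(A, x_minus))        # ~0: the other solution
print(residual(A, 5.0 * x_plus))   # ~0: any non-zero multiple is also a solution

for x in (x_plus, x_minus):
    S = scaled(A, x)
    print(S.sum(axis=1), S.sum(axis=0))  # row and column sums are all ~1
    print((S >= 0).all())                # True only for x_plus: the doubly stochastic case
```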

• All solutions to the fixed point equation result in matrices with row and column sums of 1.
• Exactly one solution is guaranteed to result in a doubly stochastic matrix if A is all positive. It is the unique solution found by the Sinkhorn-Knopp Fixed Point Algorithm.

• The general formula for the 2 × 2 case is simple.
• The general formula for the 3 × 3 case is far more complicated. Its only real use thus far is to verify that there are at most 6 solutions, which we had already predicted by numerically finding solutions.
• We have guesses for how many solutions there are in the general n × n case.
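
The slides do not show how the numerical search was done. One possible approach, sketched below under the assumption that SciPy is available (and not necessarily the authors' method), is to run a root finder from many random starting points and keep the distinct real solutions, normalized to remove the free scalar multiple. Complex solutions, such as the Fourier eigenvectors of circulant matrices discussed later, would not be found this way.

```python
import numpy as np
from scipy.optimize import fsolve

def residual(A, x):
    return A.T @ (1.0 / (A @ x)) - 1.0 / x

def real_solution_rays(A, tries=300, seed=0):
    """Multi-start search for real solutions, reported up to scalar multiples."""
    rng = np.random.default_rng(seed)
    rays = []
    with np.errstate(divide="ignore", invalid="ignore"):
        for _ in range(tries):
            x0 = rng.uniform(-2.0, 2.0, A.shape[0])
            try:
                x, _, ok, _ = fsolve(lambda v: residual(A, v), x0, full_output=True)
            except Exception:
                continue                                   # bad starting point; skip it
            if ok != 1 or not np.all(np.abs(residual(A, x)) < 1e-8):
                continue                                   # did not converge to a root
            x = x / x[np.argmax(np.abs(x))]                # normalize out the free scalar
            if not any(np.allclose(x, r, atol=1e-6) for r in rays):
                rays.append(x)
    return rays

A = np.array([[1.0, 2.0, 3.0],   # arbitrary 3 x 3 example, not from the slides
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 1.0]])
for r in real_solution_rays(A):
    print(np.round(r, 6))
```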

• For diagonal matrices with all diagonal entries non-zero, every vector with all entries non-zero is a solution.
• For constant matrices, the only solution is the vector of all 1's (up to scalar multiples).
• Any non-zero multiple of a solution is also a solution; this is especially important.
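
A quick numerical check of the first two properties, again assuming the fixed point equation in the form above; the matrices and test vector are arbitrary illustrative choices.

```python
import numpy as np

def residual(A, x):
    return A.T @ (1.0 / (A @ x)) - 1.0 / x

x = np.array([0.7, -1.3, 2.1])            # arbitrary vector with non-zero entries

D = np.diag([2.0, -3.0, 5.0])             # diagonal with non-zero diagonal entries
print(np.max(np.abs(residual(D, x))))     # ~0: every such vector is a solution

C = np.full((3, 3), 4.0)                  # constant matrix
print(np.max(np.abs(residual(C, np.ones(3)))))  # ~0: the all-ones vector is a solution
print(np.max(np.abs(residual(C, x))))           # not ~0: a generic vector is not
```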

• If x is a solution and c ≠ 0, then cx is also a solution, since A^T(A(cx))^(-1) = (1/c)·A^T(Ax)^(-1) = (1/c)·x^(-1) = (cx)^(-1).

• The matrix formed by swapping any two rows of A has the same solutions.
• The matrix formed by swapping two columns of A has the same solutions, but with corresponding elements swapped. That is, if columns i and j are swapped in A, then the solution is the original solution, but with elements i and j swapped.
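
A short numerical check of the two swap properties on an illustrative 2 × 2 example, reusing the solution form from the earlier sketch (again under the fixed point equation as written above).

```python
import numpy as np

def residual(A, x):
    return A.T @ (1.0 / (A @ x)) - 1.0 / x

A = np.array([[2.0, 3.0], [5.0, 7.0]])                  # illustrative, not from the slides
x = np.array([np.sqrt(3.0 * 7.0), np.sqrt(2.0 * 5.0)])  # a solution for A

A_rowswap = A[::-1, :]   # swap the two rows
A_colswap = A[:, ::-1]   # swap the two columns

print(residual(A, x))                 # ~0: x is a solution for A
print(residual(A_rowswap, x))         # ~0: same solution after a row swap
print(residual(A_colswap, x[::-1]))   # ~0: entries swapped after a column swap
```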

• For 3 × 3 and larger matrices, the general case is too complicated.
• We considered special cases:
  • Already mentioned: diagonal and constant matrices
  • Upper and lower triangular
  • Patterned matrices, including circulant matrices

• We have shown that any eigenvector of any non-zero n × n circulant matrix is a solution.
• This includes the vector of all 1's, the only solution that results in a doubly stochastic matrix.

• Observations:
• A·1 = s·1, where s is the common row sum, since every row of a circulant matrix contains the same entries.
• If s ≠ 0, then the all-ones vector is a solution, and the scaled matrix A/s has all row and column sums equal to 1.

• Eigenvectors: if Av = λv with λ ≠ 0, then v^(-1) is also an eigenvector of A^T with eigenvalue λ, so A^T(Av)^(-1) = (1/λ)·A^T v^(-1) = v^(-1), and v is a solution.
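
A numerical check that the (complex) Fourier eigenvectors of a 3 × 3 circulant matrix satisfy the fixed point equation in the form used above; the circulant entries are arbitrary illustrative values, not the ones used in the presentation.

```python
import numpy as np

def residual(A, x):
    """Residual of A^T (A x)^(-1) = x^(-1); works for complex vectors as well."""
    return A.T @ (1.0 / (A @ x)) - 1.0 / x

# Arbitrary 3 x 3 circulant matrix circ(a, b, c); the values are illustrative only.
a, b, c = 2.0, 3.0, 5.0
A = np.array([[a, b, c],
              [c, a, b],
              [b, c, a]])

w = np.exp(2j * np.pi / 3)   # primitive cube root of unity
fourier = [np.array([1, 1, 1], dtype=complex),
           np.array([1, w, w**2]),
           np.array([1, w**2, w])]

for v in fourier:
    lam = (A @ v)[0] / v[0]                    # eigenvalue paired with v
    print(np.max(np.abs(A @ v - lam * v)),     # ~0: v is an eigenvector of A
          np.max(np.abs(residual(A, v))))      # ~0: v solves the fixed point equation
```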

• There are other solutions as well. We consider the 3 × 3 case, which has three other (non-eigenvector) solutions:

• The columns of those solutions form another circulant matrix, which we label A1.
• The solutions for this matrix are similar to those for the original circulant matrix A, which we now label A0.

• For A1, the first element of the first solution vector is:

• With similar results for the second and third elements, the first solution vector is:

after factoring out the common term.

• With similar results for the other two solutions, we find all three solutions and form a new (and again circulant) matrix:

• For this new matrix, the first element of the first solution vector is:

• With similar results for the second and third elements, the first solution is:

after factoring out the common term.

• With similar results for the other two solutions, we find all three solutions and form a new (yet again circulant) matrix, which is, of course, the original matrix A.

• Solutions for n × n cases:
  • Circulant
  • Upper/lower triangular
  • Other patterned matrices
• Numerical solutions for the general case
• Maximum number of solutions
• How to characterize solutions: do doubly stochastic solutions minimize some sort of energy function for a given matrix, while the non-doubly stochastic solutions maximize the energy function?

Questions?