Vladimir Protasov (Moscow State University) Perron-Frobenius theory for matrix semigroups.

NO: The matrices may have a common invariant subspace

Example (figure omitted).
Question 1. How to characterize all families that have positive products?
Question 2. Is it possible to decide the existence of a positive product within polynomial time?

The case of one matrix (m = 1). If a family {A} possesses a positive product, then some power of A is positive.
Definition 1. A matrix is called primitive if it has a strictly positive power.
Primitive matrices share important spectral and dynamical properties with positive matrices and have been studied extensively.
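
For illustration, here is a minimal sketch (not from the slides) of how primitivity of a single nonnegative matrix can be tested numerically: only the zero/nonzero pattern of the powers matters, and Wielandt's bound (d-1)^2 + 1 limits how many powers need to be inspected. The function name is ours.

```python
# Minimal sketch (not from the slides): primitivity test for one
# nonnegative d x d matrix A. Only the zero/nonzero pattern of the powers
# matters, and by Wielandt's bound it suffices to inspect powers of A up
# to the exponent (d-1)^2 + 1.
import numpy as np

def is_primitive(A):
    """True iff some power of the nonnegative matrix A is strictly positive."""
    pattern = (np.asarray(A) > 0).astype(int)
    d = pattern.shape[0]
    power = pattern.copy()
    for _ in range((d - 1) ** 2 + 1):                # Wielandt's bound
        if power.all():                              # strictly positive power found
            return True
        power = ((power @ pattern) > 0).astype(int)  # pattern of the next power
    return False

print(is_primitive([[0, 1], [1, 1]]))  # True: the square is positive
print(is_primitive([[0, 1], [1, 0]]))  # False: a permutation matrix
```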

Perron-Frobenius theorem (1912). A nonnegative matrix A is not primitive if and only if it is either reducible or one of the following equivalent conditions is satisfied: A has several eigenvalues of maximal modulus ρ(A) (they are ρ(A) times roots of unity), or, after a permutation of coordinates, A has a block-cyclic structure.

Can these results be generalized to families of m matrices or to multiplicative matrix semigroups? One of the ways: strongly primitive families.
Definition 2. A family of matrices is called strongly primitive if there exists an integer N such that every product of N matrices of the family (repetitions allowed) is strictly positive.

Strongly primitive families have been studied in many works. Applications: inhomogeneous Markov chains, products of random matrices, probabilistic automata, weak ergodicity in mathematical demography.
There is no generalization of Perron-Frobenius theory to strongly primitive families. The algorithmic complexity of deciding the strong primitivity of a matrix family is unclear; most likely, it is not polynomial.
Let N be the least integer such that all products of length N are positive. There are families of d x d matrices for which N grows exponentially in d (Cohen, Sellers, 1982; Wu, Zhu, 2015), compared with Wielandt's bound N <= (d-1)^2 + 1 for one matrix.
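
The exponential growth of N is reflected in the obvious brute-force test below: a sketch of ours (not from the slides) that enumerates all m^N products of length N, tracking only zero patterns. The function name and the toy family are illustrative.

```python
# Brute-force sketch (not from the slides): the least N such that ALL
# products of length N of the family are positive. The enumeration has
# m^N terms, in line with the remark that deciding strong primitivity is
# most likely not a polynomial problem.
from itertools import product as all_words
import numpy as np

def least_strong_primitivity_length(family, max_len=8):
    """Least N <= max_len such that every length-N product is positive, else None."""
    patterns = [(np.asarray(A) > 0).astype(int) for A in family]

    def positive(word):
        P = word[0]
        for M in word[1:]:
            P = ((P @ M) > 0).astype(int)            # track only the zero pattern
        return bool(P.all())

    for N in range(1, max_len + 1):
        if all(positive(w) for w in all_words(patterns, repeat=N)):
            return N
    return None

# Toy family: a positive matrix and a primitive (but not positive) one.
family = [np.ones((2, 2), dtype=int), np.array([[1, 1], [1, 0]])]
print(least_strong_primitivity_length(family))       # 2
```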

Another generalization: the concept of primitive families.
Definition 3. A family of matrices is called primitive if there exists at least one positive product.
Justification. If the matrices of the family have neither zero columns nor zero rows, then almost all long products are positive.
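
Before turning to the main results, here is a naive way (a sketch of ours, not the polynomial-time algorithm discussed below) to decide whether a family has a positive product: a breadth-first search over the finitely many zero patterns reachable as products. It is correct but may take exponential time and space in the dimension d.

```python
# Naive sketch (not the algorithm of the talk): decide primitivity of a
# family by exploring all zero patterns reachable as products of the
# family. The number of d x d patterns is at most 2^(d*d), so the search
# terminates, but it is not polynomial in general.
from collections import deque
import numpy as np

def has_positive_product(family):
    """True iff some finite product of matrices from `family` is positive."""
    patterns = [tuple(map(tuple, (np.asarray(A) > 0).astype(int))) for A in family]

    def multiply(P, Q):                      # boolean product of two 0/1 patterns
        R = (np.array(P) @ np.array(Q) > 0).astype(int)
        return tuple(map(tuple, R))

    seen = set(patterns)
    queue = deque(patterns)
    while queue:
        P = queue.popleft()
        if all(all(row) for row in P):       # a positive product has been found
            return True
        for Q in patterns:
            R = multiply(P, Q)
            if R not in seen:
                seen.add(R)
                queue.append(R)
    return False

# Two permutation matrices: every product is a permutation, never positive.
print(has_positive_product([np.array([[0, 1], [1, 0]]), np.eye(2, dtype=int)]))  # False
print(has_positive_product([np.array([[0, 1], [1, 1]])]))                        # True
```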

Question 1. How to characterize primitive families?
Question 2. Is it possible to decide the existence of a positive product within polynomial time?
Can the Perron-Frobenius theory be generalized to primitive families?
The answers to both questions are affirmative (under some mild assumptions on the matrices).

The main results (conjectured in 2010).
Theorem 1. A family of nonnegative matrices without zero rows and zero columns is not primitive if and only if there is a partition of the index set (the canonical partition) such that every matrix of the family acts as a permutation of its blocks.
Theorem 2. For such families, primitivity (the existence of a positive product) can be decided within polynomial time.

Proofs of Theorem 1:
(2012) P., Voynov. By applying the geometry of affine maps of convex polyhedra.
(2013) Alpin, Alpina. Combinatorial proof.
(2014) Blondel, Jungers, Olshevsky. Combinatorial proof.
(2015) P., Voynov. By applying functional difference equations.
Call for purely combinatorial proofs.

What about the minimal length N of a positive product?

Another generalization: m-primitive families (Fornasini, Valcher (1997); Olesky, Shader, van den Driessche (2002), etc.). A family is m-primitive if it has at least one positive Hurwitz product.
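
For concreteness, a sketch of ours assuming the standard definition used for 2D positive systems: the Hurwitz product of index (n_1, ..., n_m) of a family {A_1, ..., A_m} is the sum of all products containing A_i exactly n_i times, and the family is m-primitive if some Hurwitz product is positive. The function below is a brute-force illustration only.

```python
# Hedged sketch (assuming the standard definition of Hurwitz products):
# the Hurwitz product of index (n_1, ..., n_m) is the sum of all products
# that use the i-th matrix exactly n_i times. Brute force, exponential in
# the total degree; the talk notes that the known m-primitivity criterion
# is highly non-polynomial.
from itertools import permutations
from functools import reduce
import numpy as np

def hurwitz_product(family, counts):
    """Sum of all products using family[i] exactly counts[i] times."""
    letters = [i for i, n in enumerate(counts) for _ in range(n)]
    mats = [np.asarray(A, dtype=float) for A in family]
    total = np.zeros_like(mats[0])
    for word in set(permutations(letters)):          # distinct orderings only
        total += reduce(np.matmul, (mats[i] for i in word))
    return total

A1 = np.array([[0, 1], [1, 0]])
A2 = np.array([[1, 0], [0, 1]])
print(hurwitz_product([A1, A2], (1, 1)))  # A1*A2 + A2*A1 = [[0, 2], [2, 0]]
```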

Our approach can be extended to m-primitive families, with applications to graphs and to multivariate (2D, 3D, etc.) Markov chains. The complexity of recognizing m-primitive families was previously unclear: there is a criterion, but it is highly non-polynomial. The proof is algebraic; it uses the theory of abelian groups.

Applications of primitivity of matrix families:
inhomogeneous Markov chains,
products of random matrices and Lyapunov exponents,
probabilistic automata,
refinement functional equations,
mathematical ecology (succession models for plants).

Products of random matrices, Lyapunov exponents. At every step one matrix of the family is chosen independently of the others, with equal probabilities 1/m (the simplest model).
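
In this simplest model the largest Lyapunov exponent can be estimated by a plain Monte Carlo simulation. The sketch below is ours (not the efficient methods cited later): it assumes the matrices are nonnegative with no zero rows, so that a positive vector stays positive and can be renormalized at every step.

```python
# Crude Monte Carlo sketch (not from the slides): estimate the largest
# Lyapunov exponent of random products in the simplest model above, where
# at each step one of the m matrices is chosen independently with
# probability 1/m. Assumes nonnegative matrices without zero rows.
import numpy as np

def lyapunov_estimate(family, n_steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    d = family[0].shape[0]
    v = np.ones(d) / d                          # any positive starting vector
    log_growth = 0.0
    for _ in range(n_steps):
        A = family[rng.integers(len(family))]   # uniform choice, probability 1/m
        v = A @ v
        norm = np.linalg.norm(v, 1)
        log_growth += np.log(norm)
        v /= norm                               # renormalize to avoid overflow
    return log_growth / n_steps

# Sanity check with a single matrix: the estimate converges to log(rho(A)).
A = np.array([[1.0, 1.0], [1.0, 0.0]])          # rho(A) = golden ratio
print(lyapunov_estimate([A]), np.log((1 + 5 ** 0.5) / 2))
```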

This result on the existence of the Lyapunov exponent was significantly strengthened by V. Oseledec (multiplicative ergodic theorem, 1968). On the other hand, the problem of computing the Lyapunov exponent is algorithmically undecidable (Blondel, Tsitsiklis, 2000).

In the case of nonnegative matrices there are good results on both problems:
1) an analogue of the central limit theorem for matrices (Watkins (1986), Hennion (1997), Ishitani (1997));
2) efficient methods for estimating and computing the Lyapunov exponent (Key (1990), Gharavi, Anantharam (2005), Pollicott (2010), Jungers, P. (2011)).
All those results hold only for primitive families: the existence of at least one positive product is always assumed in the literature "to avoid pathological cases". Our Theorems 1 and 2 extend all those results to general families of nonnegative matrices.

A refinement equation is a functional difference equation with a contraction of the argument:
φ(x) = Σ_k c_k φ(2x - k),
where (c_k) is a sequence of complex numbers satisfying some constraints. This is a usual difference equation, but with the double contraction of the argument.
Refinement equations with nonnegative coefficients. Applications: wavelet theory, approximation theory, subdivision algorithms, power random series, combinatorial number theory.
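
A standard way to approximate the solution numerically is the cascade algorithm: iterate the right-hand side of the refinement equation starting from a simple initial function. The sketch below is ours and assumes the usual normalization Σ_k c_k = 2 and the support [0, N] with N = (number of coefficients) - 1; it is an illustration, not a convergence test.

```python
# Cascade-algorithm sketch (not from the slides) for the refinement
# equation phi(x) = sum_k c_k * phi(2x - k) with nonnegative coefficients
# summing to 2. Starting from the indicator of [0, 1), the right-hand side
# is iterated on a fine dyadic grid; if a continuous compactly supported
# solution exists, the iterates converge to it.
import numpy as np

def cascade(c, levels=10, iterations=30):
    c = np.asarray(c, dtype=float)
    N = len(c) - 1                               # the solution lives on [0, N]
    h = 2.0 ** (-levels)
    x = np.arange(0.0, N + h, h)                 # dyadic grid on [0, N]
    phi = ((x >= 0) & (x < 1)).astype(float)     # initial guess: indicator of [0, 1)

    def evaluate(f, y):
        """Samples of f at the points y, treating f as 0 outside [0, N]."""
        idx = np.round(y / h).astype(int)        # 2x - k lands back on the grid
        inside = (idx >= 0) & (idx < len(f))
        out = np.zeros_like(y)
        out[inside] = f[idx[inside]]
        return out

    for _ in range(iterations):
        phi = sum(ck * evaluate(phi, 2 * x - k) for k, ck in enumerate(c))
    return x, phi

# Example: c = (1/2, 1, 1/2) produces the hat function supported on [0, 2].
x, phi = cascade([0.5, 1.0, 0.5])
print(phi[np.argmin(np.abs(x - 1.0))])           # approx. 1 at the peak x = 1
```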

How to check, in the case when all the coefficients are nonnegative, whether the refinement equation has a solution with the required properties? This question was studied by I. Daubechies, J. Lagarias (1991); A. Cavaretta, W. Dahmen, C. Micchelli (1991); C. Heil, D. Strang (1994); R. Q. Jia (1995); K. S. Lau, J. Wang (1995); Y. Wang (1996). Example: how to check the existence of a compactly supported solution?

Conclusions. If a family of matrices is not primitive, then all its matrices act as permutations of the blocks of the canonical partition. The canonical partition can be found by a fast algorithm. This allows us to extend many results on Lyapunov exponents to general families of nonnegative matrices; in particular, to construct an efficient algorithm for computing the Lyapunov exponents of nonnegative matrices. Other applications: functional equations, succession models in mathematical ecology, etc. Thank you!