An equivalent reduction of a 2-D symmetric polynomial matrix N. P. Karampetakis Department of Mathematics Aristotle University of Thessaloniki Thessaloniki.

An equivalent reduction of a 2-D symmetric polynomial matrix N. P. Karampetakis Department of Mathematics Aristotle University of Thessaloniki Thessaloniki 54006, Greece

Contents
Preliminaries
 Problem statement for 1-D polynomial matrices
Finite and infinite elementary divisor structure of 1-D polynomial matrices
Problem statement and solution
 Problem statement for 2-D polynomial matrices
Invariant polynomials & zeros of 2-D polynomial matrices
Zero coprime equivalence transformation and its invariants
Zero coprime system equivalence and its invariants
Problem statement
2-D symmetric polynomial matrix reduction procedure
2-D polynomial system matrix reduction procedure
Implementation in Mathematica
Conclusions

Problem Statement – 1-D polynomial matrices

Motivation
Numerical methods that ignore the special structure of the polynomial matrix T(s) (like the companion form) often destroy its qualitatively important spectral symmetries, sometimes even to the point of producing physically meaningless or uninterpretable results. Storage and computational cost are reduced if a method that exploits symmetry is applied: e.g., the solution of Ax = b, with A symmetric, via a symmetric banded solver uses O(n) storage and O(n) flops, while LU methods that do not exploit symmetry use O(n²) storage and O(n³) flops.
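The storage argument above can be illustrated numerically. The sketch below (assuming SciPy is available; the tridiagonal positive definite system is an arbitrary illustrative choice, not from the slides) contrasts a symmetric banded solve, which stores only the 2×n band, with a dense LU solve that stores the full n×n matrix:

```python
import numpy as np
from scipy.linalg import solveh_banded, lu_factor, lu_solve

# Illustrative symmetric positive definite tridiagonal system.
n = 6
main = 4.0 * np.ones(n)        # main diagonal
off = -1.0 * np.ones(n - 1)    # sub/super diagonal (symmetric)

# Banded storage, upper form: row 0 = superdiagonal, row 1 = main diagonal.
# Only O(n) numbers are stored, and the solve costs O(n) flops.
ab = np.zeros((2, n))
ab[0, 1:] = off
ab[1, :] = main

b = np.ones(n)
x_banded = solveh_banded(ab, b)

# Dense LU for comparison: O(n^2) storage and O(n^3) flops in general.
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
x_lu = lu_solve(lu_factor(A), b)
```

Both solvers return the same solution; only the storage and flop counts differ.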

Problem Statement – 1-D polynomial matrices
Solution of Problem 2
Higham et al., "Symmetric linearizations for matrix polynomials":
 The reduction is used for the solution of the polynomial eigenvalue problem T(s)x = 0.
 A vector space of symmetric pencils sE-A is generated, with eigenvectors closely related to those of T(s).
 No transformation is used.
Antoniou and Vologiannidis, "Linearizations of polynomial matrices with symmetries and their applications":
 One specific symmetric linearization is proposed.
 The new matrix pencil sE-A is connected with T(s) through a unimodular equivalence relation.
Note that the matrix pencil proposed by Antoniou and Vologiannidis is not in the vector space of symmetric pencils proposed by Higham et al.

Problem Statement – 2-D polynomial matrices

Zero Coprime Equivalence transformation

Invariants of ZCE

Zero Coprime System Equivalence transformation

2-D polynomial matrix reduction procedure

2-D polynomial matrix reduction procedure - Example

2-D polynomial matrix reduction procedure

2-D polynomial matrix reduction procedure - Example

2-D polynomial matrix reduction procedure

2-D polynomial matrix reduction procedure - Example

2-D polynomial matrix reduction procedure

2-D polynomial matrix reduction procedure - Example

2-D polynomial matrix reduction procedure

2-D polynomial matrix reduction procedure - Example

2-D polynomial matrix reduction procedure

2-D polynomial matrix reduction procedure - Example
Define the following companion form:

2-D polynomial matrix reduction procedure - Example

2-D reduction of polynomial system matrices

Implementation in Mathematica

Conclusions  A two-stage algorithm, easily implementable in a computer symbolic environment, has been provided for the reduction of a 2-D symmetric polynomial matrix to a zero coprime equivalent 2-D symmetric matrix pencil.  The results has also been adapted to 2-D system matrices.  Advantage.  We can use existing robust numerical algorithms for 2-D matrix pencils in order to compute structural invariants of 2-D symmetric polynomial matrices  Disadvantage.  The size of the matrices that we use.  An implementation of this algorithm in the package MATHEMATICA accompanied with one example is given.  Further research  Reduction of symmetric and positive definite polynomial matrices.  Use of other matrix pencil reduction methods (Higham et.al.).  New numerical techniques for investigating structural invariants of 2-D symmetric matrix pencils.  Infinite elementary divisor structure ?

Illustrative Example

Motivation – 1-D polynomial matrices
Consider the homogeneous system. Then, by defining the following variables, we get, or equivalently:

Motivation – 1-D polynomial matrices
and the following matrix pencil, which is known as the first companion form of T(ρ).
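The first companion form can be sketched numerically for a hypothetical quadratic matrix polynomial (the coefficients below are illustrative, not those on the slides); its finite generalized eigenvalues coincide with the roots of det T(s):

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical 2x2 quadratic matrix polynomial T(s) = A2*s^2 + A1*s + A0.
A0 = np.array([[2.0, 0.0], [0.0, 3.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.eye(2)

n = A0.shape[0]
I, Z = np.eye(n), np.zeros((n, n))

# First companion form sE - A of T(s): det(sE - A) = det T(s).
E = np.block([[A2, Z], [Z, I]])
A = np.block([[-A1, -A0], [I, Z]])

# Finite eigenvalues of the pencil are the roots of
# det T(s) = (s^2 + 2)(s^2 + 3) - s^2 = s^4 + 4*s^2 + 6.
pencil_eigs = eig(A, E, right=False)
```

Each computed eigenvalue of the pencil is a root of det T(s), which is exactly the finite spectral equivalence the slide describes.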

Motivation – 1-D polynomial matrices
Note that the following extended unimodular equivalent transformation connects the polynomial matrix T(ρ) and the respective pencil ρE-A, in the sense that the compound matrices do not lose rank in C.
Conclusion. Since T(ρ) and ρE-A are e.u.e., they possess the same finite elementary divisor structure.
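The conclusion can be checked symbolically: a first companion pencil has the same determinant as T(ρ), hence the same finite zeros with multiplicities. A SymPy sketch, using assumed illustrative coefficients rather than the slide's matrices:

```python
import sympy as sp

s = sp.symbols('s')
n = 2
# Illustrative quadratic T(s) = A2*s^2 + A1*s + A0 (assumed example).
A0 = sp.Matrix([[2, 0], [0, 3]])
A1 = sp.Matrix([[0, 1], [1, 0]])
A2 = sp.eye(n)
T = A2*s**2 + A1*s + A0

# First companion pencil sE - A of T(s).
E = sp.diag(A2, sp.eye(n))
A = sp.Matrix.vstack(sp.Matrix.hstack(-A1, -A0),
                     sp.Matrix.hstack(sp.eye(n), sp.zeros(n, n)))

# e.u.e. matrices share their finite elementary divisor structure; in
# particular det T(s) and det(sE - A) agree up to a nonzero constant
# (here they are equal, since A2 = I).
dT = sp.expand(T.det())
dP = sp.expand((s*E - A).det())
```

Both determinants expand to the same polynomial, confirming that the finite zero structure is preserved by the reduction.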

Motivation – 1-D symmetric polynomial matrices
Disadvantages. Numerical methods that ignore the special structure of the polynomial matrix T(ρ) (like the ones above) often destroy its qualitatively important spectral symmetries, sometimes even to the point of producing physically meaningless or uninterpretable results. Storage and computational cost are reduced if a method that exploits symmetry is applied: e.g., the solution of Ax = b, with A symmetric, via a symmetric banded solver uses O(n) storage and O(n) flops, while LU methods that do not exploit symmetry use O(n²) storage and O(n³) flops.
Consider the symmetric polynomial matrix and the respective e.u.e. matrix pencil.

Motivation – 1-D symmetric polynomial matrices
Consider the homogeneous system. Then, by defining the following variables, we get the following homogeneous system:

Motivation – 1-D polynomial matrices
where the symmetric matrix pencil is e.u.e. to ρE-A.
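A symmetric pencil of the kind discussed here can be sketched as follows. The sketch uses a classical symmetric linearization (one member of the Higham et al. vector space, not necessarily the exact pencil on the slide) for an assumed symmetric quadratic T(s); both coefficient matrices of the pencil are symmetric, and its finite eigenvalues are the roots of det T(s):

```python
import numpy as np
from scipy.linalg import eig

# Illustrative symmetric quadratic T(s) = M*s^2 + C*s + K with symmetric
# coefficients (chosen for demonstration; not the matrices on the slides).
M = np.array([[2.0, 0.0], [0.0, 1.0]])
C = np.array([[1.0, 1.0], [1.0, 2.0]])
K = np.array([[4.0, 0.0], [0.0, 5.0]])

n = M.shape[0]
Z = np.zeros((n, n))

# A classical symmetric linearization: L(s) = s*E + A with E, A symmetric;
# det(s*E + A) = det(-K) * det T(s) when K is nonsingular.
E = np.block([[M, Z], [Z, -K]])
A = np.block([[C, K], [K, Z]])

# The eigenvalues s solve det(s*E + A) = 0, i.e. the problem eig(-A, E).
pencil_eigs = eig(-A, E, right=False)

# Residual of det T(s) at each pencil eigenvalue (should be ~0).
residuals = [abs(np.linalg.det(M * s0**2 + C * s0 + K)) for s0 in pencil_eigs]
```

Because E and A are both symmetric, structure-exploiting pencil solvers can be applied, which is the point of preferring a symmetric linearization over the plain companion form.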

Motivation – 1-D polynomial matrices