Workshop on Dynamical Systems and Computation 1–1: Dynamical Systems for Extreme Eigenspace Computations. Maziar Nikpour, UCL, Belgium.

1–2 Co-workers: Iven M. Y. Mareels and Jonathan H. Manton, University of Melbourne, Australia; Vadym Adamyan, Odessa State University, Ukraine; Uwe Helmke, University of Würzburg, Germany.

1–3 Problem: For Hermitian matrices (A, B) with B > 0, find the non-trivial solutions (λ, x) of Ax = λBx with the smallest or largest generalised eigenvalues. Here n is the size of the matrices (A, B), and k is the number of desired generalised eigenvalue/eigenvector pairs.
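As a point of reference for the problem statement, the extreme generalised eigenpairs can be computed with a dense solver when n is small. The sketch below uses SciPy's `scipy.linalg.eigh`, which accepts a Hermitian definite pair (A, B); the test matrices are illustrative assumptions, not from the talk:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, k = 6, 2

# Random Hermitian A and Hermitian positive definite B.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2
N = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = N @ N.conj().T + n * np.eye(n)  # diagonal shift keeps B safely positive definite

# The k smallest generalised eigenpairs of A x = lambda B x.
w, X = eigh(A, B, subset_by_index=[0, k - 1])

# Each column x of X satisfies A x = lambda B x, with B-orthonormal columns.
residual = np.linalg.norm(A @ X - B @ X @ np.diag(w))
```

For large, sparse (A, B) this dense route is exactly what the talk avoids: `eigh` factorises B internally, whereas the gradient algorithms of the talk touch A and B only through matrix products.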

1–4 Outline: Introduction; Motivation; Brief history of the literature; Penalty function approach; Gradient flow; Convergence; Discrete-time algorithms; Applications; Conclusions.

1–5 Motivation: Signal processing, telecommunications, control, and many others…

1–6 Brief History of the Problem. Numerical linear algebra literature: methods for general A and B, notably the QZ algorithm of Moler and Stewart (what MATLAB does when you type ‘eig’); and methods for large, sparse A and B, such as the trace minimisation method of Sameh and Wisniewski. Engineering literature: methods largely for computing the largest/smallest generalised eigenvalues adaptively, e.g. Mathew and Reddy, 1998 (an inflation approach, a special case of the approach in this work) and Strobach, 2000 (tracking algorithms).

1–7 Brief History of the Problem. Dynamical systems literature: the Brockett flow and the Oja flow. These approaches cannot be adapted to the generalised eigenvalue problem without manipulating A and/or B. A recent paper by Manton et al. presents an approach that can…

1–8 Penalty Function Approach. The minimisation of the following cost function can lead to algorithms for computing extreme generalised eigenvalues.
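As an illustrative sketch of such a cost (a quadratic-penalty form of the kind used in the related B = I work, with μ > 0 a penalty weight; the exact constants and form are assumptions, not the slide's formula):

```latex
f(Z) \;=\; \operatorname{tr}\!\left(Z^{*} A Z\right)
      \;+\; \frac{\mu}{2}\,\bigl\lVert Z^{*} B Z - I_{k} \bigr\rVert_{F}^{2},
\qquad Z \in \mathbb{C}^{n \times k}.
```

For μ large enough, unconstrained minimisers of a penalty of this kind have columns spanning the generalised eigenspace of the k smallest eigenvalues, with B-orthonormality enforced only through the penalty term; this is what makes unconstrained gradient methods applicable.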

1–9 Dynamical Systems for Numerical Computations. Gradient-descent-like flows on a cost function; discretisation of the flows; efficient numerical algorithms.

1–10 Examples: the power flow, the Oja subspace flow, and the Brockett flow.
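In their standard forms in the literature (for a symmetric matrix A, an n×k matrix X, a symmetric H, and a fixed real diagonal N; the slide's own notation may differ):

```latex
\dot{x} = A x - \left(x^{\top} A x\right) x
\quad \text{(power / Rayleigh flow)}, \qquad
\dot{X} = \left(I - X X^{\top}\right) A X
\quad \text{(Oja subspace flow)}, \qquad
\dot{H} = \bigl[H, [H, N]\bigr]
\quad \text{(Brockett double-bracket flow)}.
```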

1–11 Contributions: a gradient flow on f(A, B); discretisations of the gradient flow (steepest descent and conjugate gradient); stochastic minor/principal component tracking algorithms. The case B = I with Z real has already been treated (see Manton et al., 2003). Extending the domain to complex matrices complicates the analysis substantially, and allowing B to be any positive definite matrix expands the range of applications.

1–12 Gradient Flow. Main result: for almost all initial conditions, solutions of the gradient flow converge to a single point in the stable invariant set of the flow.

1–13 Gradient Flow. The stable invariant set consists of the critical points of f(A, B) associated with the extreme generalised eigenvalues.

1–14 Critical Points of f(A, B). The Hessian of f(A, B) is degenerate at its critical points: the critical points are not isolated, but form critical submanifolds.

1–15 Stability analysis of critical points. Linear stability analysis will not suffice; instead, use the centre manifold theorem at each critical point. Why? The nullspace of the Hessian of the cost function equals the tangent space of the critical submanifold.

1–16 Stability analysis of critical points. By the reduction principle of dynamical systems, the flow near a critical point decomposes into stable, unstable, and centre directions, and stability is decided by the dynamics on the centre manifold.

1–17 Stability analysis of critical points. The main result follows: the level sets of f(A, B) are compact, so the flow converges to one of the critical components; the centre manifold theorem together with the reduction principle then gives convergence to a single point on a critical component; and the flow converges to the stable invariant set for an open, dense set of initial conditions.

1–18 Remarks. The conditions used in the proof imply that f(A, B) is a Morse-Bott function, so solutions converge to a single point instead of a set (see Helmke and Moore, 1994). Also, f(A, B) is a real analytic function (with C^(n×k) considered as a real vector space), which again implies convergence to a single point (Łojasiewicz, 1984).

1–19 Further Remarks. Generalised eigenvectors are not unique, but convergence to particular generalised eigenvectors can be achieved by a flow in reduced dimensions, where trunc{X} denotes X with the imaginary components of its diagonal set to 0. This flow converges to the element of a critical component with real diagonal elements.

1–20 Systems of Flows. Consider a system of cost functions, one for each desired generalised eigenvector component.

1–21 Systems of Flows. A system of partial gradient descent flows makes it possible to add or remove components without affecting the computation of the others. Proposition: Z(t) converges to the smallest generalised eigenvalues for a generic initial condition.

1–22 Discrete-time algorithms. Since the flow evolves on a Euclidean space, discretisation is not complicated; both steepest descent and conjugate gradient iterations apply directly.
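A minimal steepest descent discretisation can be sketched as follows, assuming an illustrative quadratic-penalty cost with weight μ > 0 (the cost, the fixed step size, and the diagonal test pair are assumptions for illustration, not the talk's exact algorithm):

```python
import numpy as np

def gevp_steepest_descent(A, B, k, mu=10.0, step=1e-3, iters=20000, seed=0):
    """Steepest descent on f(Z) = tr(Z^T A Z) + (mu/2) ||Z^T B Z - I||_F^2.

    Illustrative penalty cost: for mu large enough, minimisers span the
    generalised eigenspace of the k smallest eigenvalues of (A, B).
    Real symmetric case for brevity; the talk treats complex Hermitian pairs.
    """
    n = A.shape[0]
    Z = np.random.default_rng(seed).standard_normal((n, k))
    for _ in range(iters):
        C = Z.T @ B @ Z - np.eye(k)                # constraint violation
        grad = 2 * (A @ Z) + 2 * mu * (B @ Z @ C)  # Euclidean gradient of f
        Z = Z - step * grad
    return Z

# Diagonal test pair: the two smallest eigenvalues are 1 and 2,
# with eigenspace spanned by the last two coordinate directions.
A = np.diag([5.0, 4.0, 3.0, 2.0, 1.0])
B = np.eye(5)
Z = gevp_steepest_descent(A, B, k=2)
```

Because the cost is invariant under rotations of the columns of Z within the eigenspace, the iteration settles on a point of a critical submanifold rather than on a fixed pair of eigenvectors; this is exactly the Morse-Bott degeneracy the convergence analysis has to handle.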

1–23 Discrete-time algorithms. The Hermitian definite GEVP can be solved without any factorisation or manipulation of A or B; only products of a matrix with a small matrix are required, which makes the algorithms suitable for cases where A and B are large and sparse. The conjugate gradient algorithm achieves superlinear convergence with no increase in the order of computational complexity, which is O(n²k), and an exact line search can be performed.
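The exact line search is possible because along a descent line a quartic penalty cost is a quartic polynomial in the step size, so the optimal step is a real root of a cubic. A sketch under the same illustrative cost f(Z) = tr(Zᵀ A Z) + (μ/2)‖Zᵀ B Z − I‖²_F (an assumption, not necessarily the talk's cost):

```python
import numpy as np

def exact_line_search(A, B, Z, G, mu=10.0):
    """Exact minimiser of t -> f(Z - t G) for
    f(Z) = tr(Z^T A Z) + (mu/2) ||Z^T B Z - I||_F^2.

    Along the line Z - t G the cost is a quartic in t, so the optimal
    step solves the cubic f'(t) = 0.
    """
    k = Z.shape[1]
    nu = mu / 2.0
    # Quadratic part: tr((Z - tG)^T A (Z - tG)) = const + q1 t + q2 t^2.
    q1 = -2.0 * np.trace(G.T @ A @ Z)
    q2 = np.trace(G.T @ A @ G)
    # Constraint part: C(t) = C0 + t C1 + t^2 C2 (all symmetric).
    C0 = Z.T @ B @ Z - np.eye(k)
    C1 = -(G.T @ B @ Z + Z.T @ B @ G)
    C2 = G.T @ B @ G
    # ||C(t)||_F^2 = const + p1 t + p2 t^2 + p3 t^3 + p4 t^4.
    p1 = 2.0 * np.trace(C0 @ C1)
    p2 = 2.0 * np.trace(C0 @ C2) + np.trace(C1 @ C1)
    p3 = 2.0 * np.trace(C1 @ C2)
    p4 = np.trace(C2 @ C2)
    # f'(t) = 0 is a cubic; the quartic opens upward, so the global
    # minimiser is among its real roots.
    roots = np.roots([4.0 * nu * p4, 3.0 * nu * p3,
                      2.0 * (q2 + nu * p2), q1 + nu * p1])
    real = roots[np.abs(roots.imag) < 1e-9].real
    phi = lambda t: (q1 * t + q2 * t**2
                     + nu * (p1 * t + p2 * t**2 + p3 * t**3 + p4 * t**4))
    return min(real, key=phi)
```

An exact line search removes step-size tuning from the steepest descent iteration and is one ingredient in making the conjugate gradient variant practical at no extra order of cost.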


1–25 Discrete-time algorithms. Tracking algorithm: O(nk²) complexity when R_nn = I, for a signal-plus-noise model.
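The model the tracker assumes has the usual array-processing form (standard notation, assumed here rather than taken from the slide):

```latex
x(t) = A\, s(t) + n(t), \qquad
R_{xx} = A\, R_{ss}\, A^{*} + R_{nn}.
```

When R_nn = I (unit-variance white noise), the minor eigenvectors of R_xx span the noise subspace, so a minor component tracker with O(nk²) cost per snapshot recovers it adaptively.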


1–27 Conclusion: proposed a gradient flow for solving the GEVP and derived its convergence theory; a modular system of flows; discretisation into CG and SD algorithms; application to minor component tracking.

1–28 Questions?