1
Algorithms for a large sparse nonlinear eigenvalue problem
Yusaku Yamamoto
Dept. of Computational Science & Engineering, Nagoya University
2
Outline
- Introduction
- Algorithms
  - Multivariate Newton's method
  - Use of the linear eigenvalue of smallest modulus
  - Use of the signed smallest singular value
- Numerical experiments
- Conclusion
3
Introduction
Nonlinear eigenvalue problem
- Let A(λ) be an n-by-n matrix whose elements depend on a scalar parameter λ.
- In the nonlinear eigenvalue problem, we seek a value of λ for which there exists a nonzero vector x such that A(λ)x = 0.
- λ and x are called the (nonlinear) eigenvalue and eigenvector, respectively.
Examples
- A(λ) = A − λI : linear eigenvalue problem
- A(λ) = λ²M + λC + K : quadratic eigenvalue problem
- A(λ) = (e^λ − 1)A₁ + λ²A₂ − A₃ : general nonlinear eigenvalue problem
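To make the examples concrete, here is a minimal sketch (my own illustration, not part of the presentation): each problem family is just a callable λ ↦ A(λ), and a nonlinear eigenvalue is a λ that makes A(λ) singular. The matrix names and random data are placeholders, and the exact form of the general nonlinear example is an assumption reconstructed from the slide text.

```python
# Minimal sketch (not from the presentation): each example is a callable
# lambda -> A(lambda). A0, A1, A2, A3, M, C, K are random placeholders, and
# the form of the "general nonlinear" example is an assumption.
import numpy as np

n = 4
rng = np.random.default_rng(0)
A0, A1, A2, A3, M, C, K = rng.standard_normal((7, n, n))

linear    = lambda lam: A0 - lam * np.eye(n)                   # A(lam) = A - lam*I
quadratic = lambda lam: lam**2 * M + lam * C + K               # A(lam) = lam^2*M + lam*C + K
general   = lambda lam: (np.exp(lam) - 1.0) * A1 + lam**2 * A2 - A3

# lam is a (nonlinear) eigenvalue exactly when A(lam) is singular:
print(np.linalg.det(quadratic(0.3)))   # nonzero unless 0.3 happens to be an eigenvalue
```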
4
Applications of the nonlinear eigenvalue problem
- Structural mechanics
  Damped system: λ²Mx + λCx + Kx = 0
  M: mass matrix, C: damping matrix, K: stiffness matrix
- Electronic structure calculation
  Kohn-Sham equation: H(ψ)ψ = εψ, where H(ψ) is the Kohn-Sham Hamiltonian
- Theoretical fluid dynamics
  Computation of the scaling exponent in turbulent flow
5
Solution of a quadratic eigenproblem
Transformation to a linear generalized eigenproblem
- The quadratic eigenproblem λ²Mx + λCx + Kx = 0 can be transformed into a linear generalized eigenproblem of twice the size.
- In general, a polynomial eigenproblem of degree k can be transformed into a linear generalized eigenproblem of k times the size.
- Efficient algorithms for linear eigenproblems (QR, Krylov subspace methods, etc.) can then be applied.
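The sketch below (my own, using one standard companion linearization, not code from the presentation) shows the transformation for the quadratic case with z = [λx; x] and checks the residual of the recovered eigenpairs using a dense solver.

```python
# Sketch (mine, not the authors' code): linearize lam^2*M*x + lam*C*x + K*x = 0
# into a generalized eigenproblem A*z = lam*B*z of twice the size, z = [lam*x; x].
import numpy as np
from scipy.linalg import eig

def solve_quadratic_eig(M, C, K):
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    # One standard companion form:
    #   [-C -K] [lam*x]         [M 0] [lam*x]
    #   [ I  0] [  x  ] = lam * [0 I] [  x  ]
    A = np.block([[-C, -K], [I, Z]])
    B = np.block([[M, Z], [Z, I]])
    lam, Zvec = eig(A, B)        # 2n eigenvalues of the quadratic problem
    X = Zvec[n:, :]              # lower block recovers the eigenvectors x
    return lam, X

# Small random check: the quadratic residuals should be at machine-precision level.
rng = np.random.default_rng(0)
M, C, K = rng.standard_normal((3, 4, 4))
lam, X = solve_quadratic_eig(M, C, K)
res = [np.linalg.norm((l**2 * M + l * C + K) @ X[:, j]) for j, l in enumerate(lam)]
print(max(res))
```

For large sparse M, C, K one would feed the same pencil (A, B) to a Krylov-type eigensolver instead of a dense one, as the slide notes.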
6
Our target problem
Computation of the scaling exponent of a passive scalar field in a turbulent flow
- Governing equation of the passive scalar
- n-point correlation function ψ_n
- Scaling exponent ζ_n, defined by
  ψ_n(sx₁, sx₂, …, sx_n) = s^{ζ_n} ψ_n(x₁, x₂, …, x_n)
- We are interested in computing ζ_n in the case n = 4.
(Figure: turbulent flow)
7
Our target problem (cont'd)
The PDE satisfied by ψ₄
- ψ₄ satisfies a linear PDE F ψ₄ = 0.
- By writing ψ₄ = s^{ζ₄} φ₄, we obtain (ζ₄(ζ₄ − 1)A + ζ₄B + C) φ₄ = 0, a quadratic eigenproblem in ζ₄.
- Non-linearity comes from folding the (unbounded) domain of (x₁, x₂, x₃, x₄) into a bounded one using the scaling law: ψ₄ at a point outside the bounded region is expressed as the scaling factor to the power ζ₄ times ψ₄ at a rescaled point inside it, which introduces terms that are not polynomial in ζ₄.
8
Our target problem (cont'd)
Problem characteristics
- A(λ) is large (n ~ 10⁵), sparse, and nonsymmetric.
- The dependence of A(λ) on λ is fully nonlinear (it includes both exponential and polynomial terms in λ), but it is analytic.
- Computation of A(λ) takes a long time.
- The smallest positive eigenvalue is sought.
9
Solution of a general nonlinear eigenproblem
Necessary and sufficient condition:
  ∃x ≠ 0 such that A(λ)x = 0  ⟺  det(A(λ)) = 0
A simple approach (Gat et al., 1997; the case of n = 3)
- Find the solution λ* of det(A(λ)) = 0 by Newton's method.
- Find the eigenvector x as the null space of the constant matrix A(λ*).
Difficulty
- Computation of det(A(λ)) requires roughly the same cost as an LU decomposition of A(λ).
- Not efficient when A(λ) is large and sparse.
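Below is a dense toy sketch (my own, not Gat et al.'s code) of this det-based approach: Newton's method on f(λ) = det(A(λ)), with the derivative obtained from Jacobi's formula f′(λ) = det(A)·tr(A⁻¹A′). The small quadratic test problem is constructed (M = I, C symmetric, K symmetric negative definite) so that all its eigenvalues are real and the real Newton iteration has a root to converge to.

```python
# Toy sketch (mine) of the det-based approach: Newton on f(lam) = det(A(lam)),
# with f'(lam) = det(A) * trace(A^{-1} A') (Jacobi's formula).
import numpy as np

rng = np.random.default_rng(1)
n = 4
G, B = rng.standard_normal((2, n, n))
M, C, K = np.eye(n), (G + G.T) / 2, -(B @ B.T + np.eye(n))   # real eigenvalues by construction
A  = lambda lam: lam**2 * M + lam * C + K
dA = lambda lam: 2.0 * lam * M + C

lam = 0.5                                   # initial guess
for _ in range(100):
    Al = A(lam)
    f = np.linalg.det(Al)
    fprime = f * np.trace(np.linalg.solve(Al, dA(lam)))   # Jacobi's formula
    step = f / fprime
    lam -= step
    if abs(step) < 1e-12:
        break

# Eigenvector: approximate null vector of A(lam), from the smallest singular value.
x = np.linalg.svd(A(lam))[2][-1]
print(lam, np.linalg.norm(A(lam) @ x))      # residual should be near machine precision
```

For a large sparse A(λ) the determinant would have to come from an LU factorization at every trial λ, which is exactly the cost identified above as the bottleneck.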
10
Approach based on multivariate Newton's method
Basic idea
- Regard A(λ)x = 0 as a system of nonlinear simultaneous equations in the n+1 unknowns λ and x, and solve it by Newton's method.
- Since there are only n equations, we add a normalization condition vᵀx = 1 using some vector v.
Equations to be solved
  F(x, λ) = [ A(λ)x ; vᵀx − 1 ] = 0
Iteration formula (see the sketch below)
  [ A(λ_l)   A'(λ_l)x_l ] [ Δx ]   [ −A(λ_l)x_l ]
  [ vᵀ       0          ] [ Δλ ] = [ 1 − vᵀx_l  ],   x_{l+1} = x_l + Δx,  λ_{l+1} = λ_l + Δλ
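A compact sketch (mine, not the authors' implementation) of this iteration is given below; A and dA are user-supplied callables for A(λ) and A'(λ), and for large sparse problems the dense solve would be replaced by a sparse direct or iterative solver.

```python
# Sketch (mine) of the multivariate Newton iteration on
# F(x, lam) = [A(lam) x; v^T x - 1] = 0.
import numpy as np

def newton_nonlinear_eig(A, dA, lam0, x0, v, tol=1e-10, maxit=50):
    lam, x = lam0, x0 / (v @ x0)            # start with v^T x = 1 (needs v^T x0 != 0)
    for _ in range(maxit):
        Al = A(lam)
        F = np.concatenate([Al @ x, [v @ x - 1.0]])
        if np.linalg.norm(F) < tol:
            break
        # Jacobian of F with respect to (x, lam)
        J = np.block([[Al,         (dA(lam) @ x)[:, None]],
                      [v[None, :], np.zeros((1, 1))]])
        delta = np.linalg.solve(J, -F)
        x, lam = x + delta[:-1], lam + delta[-1]
    return lam, x
# As the next slide notes, convergence is quadratic but only local:
# both lam0 and x0 must already be close to the desired eigenpair.
```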
11
Multivariate Newton's method (cont'd)
Advantages
- Each iteration consists of solving a system of linear simultaneous equations, which is much cheaper than computing det(A(λ)).
- Convergence is quadratic if the initial values λ and x are sufficiently close to the solution.
Disadvantages
- The iterates may converge to unwanted eigenpairs or fail to converge unless both the initial λ and x are sufficiently good.
- It is in general difficult to find a good initial value x for the eigenvector.
- A'(λ) is necessary in addition to A(λ).
12
Approaches based on the linear eigenvalue / singular value of smallest modulus
Definition
- For a fixed λ, we call μ a linear eigenvalue of A(λ) if there exists a nonzero vector y such that A(λ)y = μy.
- For a fixed λ, we call σ ≥ 0 a linear singular value of A(λ) if σ² is a linear eigenvalue of A(λ)ᵀA(λ).
- A linear eigenvalue / singular value is simply an eigenvalue / singular value of A(λ) viewed as a constant matrix; μ and σ are functions of λ.
Necessary and sufficient conditions for ∃x ≠ 0 with A(λ)x = 0:
- det(A(λ)) = 0
- A(λ) has a zero linear eigenvalue.
- A(λ) has a zero linear singular value.
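A quick numerical check of the definition (my own illustration, not from the presentation): for a fixed λ, the linear singular values of A(λ) are the positive square roots of the linear eigenvalues of A(λ)ᵀA(λ).

```python
# Check (mine): the singular values of a fixed matrix A(lam) are the square
# roots of the eigenvalues of A(lam)^T A(lam).
import numpy as np

rng = np.random.default_rng(2)
Al = rng.standard_normal((5, 5))            # stands in for A(lam) at some fixed lam
sigmas = np.sort(np.linalg.svd(Al, compute_uv=False))
mus = np.linalg.eigvalsh(Al.T @ Al)         # eigenvalues of the symmetric matrix A^T A
print(np.allclose(sigmas, np.sqrt(mus)))    # True
```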
13
Approaches based on the linear eigenvalue / singular value of smallest modulus (cont'd)
A possible approach
- Let μ(λ) = the linear eigenvalue of smallest modulus of A(λ), and σ(λ) = the smallest linear singular value of A(λ).
- Find the solution λ* of μ(λ) = 0 or σ(λ) = 0.
- Find the eigenvector x as the null space of the constant matrix A(λ*).
Advantages
- Only an initial value for λ is required.
- μ(λ) and σ(λ) can be computed much more cheaply than det(A(λ)), using the Lanczos, Arnoldi, or Jacobi-Davidson methods.
- A'(λ) is not necessary if the secant method is used to find λ*.
14
Approach based on the linear eigenvalue of smallest modulus
Algorithm based on the secant method (see the sketch below)
- Set two initial values λ₀ and λ₁.
- Repeat until |μ(λ_l)| becomes sufficiently small:
  λ_{l+1} = λ_l − μ(λ_l) (λ_l − λ_{l−1}) / (μ(λ_l) − μ(λ_{l−1}))
- Find the eigenvector x as the null space of the constant matrix A(λ_l).
(Figure: graph of μ(λ) with the secant iterates λ_{l−2}, λ_{l−1}, λ_l)
Difficulty
- When A(λ) is nonsymmetric, computing μ(λ) is expensive, though still much less expensive than computing det(A(λ)).
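A sketch (mine, not the authors' code) of this algorithm: each secant step evaluates μ(λ), the linear eigenvalue of the constant matrix A(λ) closest to zero, here with ARPACK's shift-invert mode about 0; the assumption that the relevant eigenvalue branch is real is mine.

```python
# Sketch (mine) of the secant iteration on mu(lam), the linear eigenvalue of
# A(lam) of smallest modulus, computed with scipy.sparse.linalg.eigs (sigma=0).
# Evaluating mu(lam) is the costly step when A(lam) is nonsymmetric.
import scipy.sparse.linalg as spla

def smallest_modulus_eig(Al):
    mu = spla.eigs(Al, k=1, sigma=0.0, return_eigenvectors=False)[0]
    return mu.real                      # assumes the relevant eigenvalue branch is real

def secant_on_mu(A, lam0, lam1, tol=1e-10, maxit=30):
    f0, f1 = smallest_modulus_eig(A(lam0)), smallest_modulus_eig(A(lam1))
    for _ in range(maxit):
        if abs(f1) < tol:
            break
        lam_next = lam1 - f1 * (lam1 - lam0) / (f1 - f0)
        lam0, f0, lam1 = lam1, f1, lam_next
        f1 = smallest_modulus_eig(A(lam1))
    return lam1                          # eigenvector: null vector of A(lam1)
```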
15
Approach based on the smallest linear singular value
Possible advantages
- For nonsymmetric matrices, singular values can be computed much more easily than eigenvalues.
Problems
- The linear singular value σ(λ) of A(λ) is defined as the positive square root of a linear eigenvalue of A(λ)ᵀA(λ). Hence σ(λ) is not smooth at σ(λ) = 0, and the secant method cannot be applied (see the numerical illustration below).
(Figure: graph of σ(λ), with a kink where σ(λ) = 0, and the iterates λ_{l−2}, λ_{l−1}, λ_l)
Solution
- Modify the definition of σ(λ) so that it is smooth near σ(λ) = 0:
  - Analytical singular value
  - Signed smallest singular value
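A tiny numerical illustration (mine, not from the presentation) of the kink: for A(λ) = diag(λ − 1, 2), the plain smallest singular value behaves like |λ − 1| near λ = 1, while the signed version defined on the following slides equals λ − 1 and passes smoothly through zero.

```python
# Illustration (mine): sigma_min of A(lam) = diag(lam-1, 2) has a kink at
# lam = 1, while sigma_min * sign(v_n^T u_n) changes sign smoothly.
import numpy as np

def plain_and_signed_sigma_min(Al):
    u, s, vt = np.linalg.svd(Al)
    sigma_n, u_n, v_n = s[-1], u[:, -1], vt[-1, :]   # smallest singular triplet
    return sigma_n, sigma_n * np.sign(v_n @ u_n)

for lam in (0.9, 0.95, 1.05, 1.1):
    plain, signed = plain_and_signed_sigma_min(np.diag([lam - 1.0, 2.0]))
    print(f"lam={lam:4.2f}   sigma_min={plain:+.2f}   signed={signed:+.2f}")
```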
16
Analytical singular value decomposition
Theorem 1 (Bunse-Gerstner et al., 1991)
Let the elements of A(λ) be analytic functions of λ. Then there exist orthogonal matrices U'(λ) and V'(λ) and a diagonal matrix Σ'(λ) = diag(σ₁'(λ), σ₂'(λ), …, σ_n'(λ)) whose elements are analytic functions of λ and which satisfy
  A(λ) = U'(λ) Σ'(λ) V'(λ)ᵀ.
This is called the analytical singular value decomposition of A(λ).
Notes
- Analytical singular values may be negative, and in general σ₁'(λ) > σ₂'(λ) > … > σ_n'(λ) does not hold.
- Analytical singular values are expensive to compute: they require the solution of ODEs starting from some initial point λ₀.
(Figure: graph of σᵢ'(λ) versus λ, showing the curves σ₁'(λ), σ₂'(λ), σ₃'(λ))
17
Signed smallest singular value
Definition
Let vₙ and uₙ be the right and left singular vectors of A(λ) corresponding to the smallest linear singular value σₙ. Then we call σ̃ₙ = σₙ sgn(vₙᵀuₙ) the signed smallest singular value of A(λ).
Theorem 2
Assume that σₙ(λ) is a simple singular value and |vₙ(λ)ᵀuₙ(λ)| ≠ 0 in an interval. Then the signed smallest singular value σ̃ₙ(λ) = σₙ(λ) sgn(vₙ(λ)ᵀuₙ(λ)) is an analytic function of λ in this interval.
Proof
From the uniqueness of the SVD, uₙ(λ) = ±uₙ'(λ) and vₙ(λ) = ±vₙ'(λ) (with a common sign). Hence
  σ̃ₙ(λ) = σₙ(λ) sgn(vₙ(λ)ᵀuₙ(λ))
        = σₙ(λ) vₙᵀ(λ)uₙ(λ) / |vₙᵀ(λ)uₙ(λ)|
        = vₙᵀ(λ)A(λ)vₙ(λ) / |vₙᵀ(λ)uₙ(λ)|
        = vₙ'ᵀ(λ)A(λ)vₙ'(λ) / |vₙ'ᵀ(λ)uₙ'(λ)|.
The right-hand side is clearly analytic when |vₙ(λ)ᵀuₙ(λ)| ≠ 0. □
18
Approach based on the signed smallest singular value
Characteristics of the signed smallest singular value σ̃ₙ(λ)
- σ̃ₙ(λ) = 0 ⟺ σₙ(λ) = 0
- σ̃ₙ(λ) is an analytic function of λ under suitable assumptions.
- Easy to compute (requires only σₙ(λ), vₙ(λ) and uₙ(λ)).
Algorithm based on the secant method (see the sketch below)
- Set two initial values λ₀ and λ₁.
- Repeat until |σ̃ₙ(λ_l)| becomes sufficiently small:
  λ_{l+1} = λ_l − σ̃ₙ(λ_l) (λ_l − λ_{l−1}) / (σ̃ₙ(λ_l) − σ̃ₙ(λ_{l−1}))
- Find the eigenvector x as the null space of the constant matrix A(λ_l).
(Figure: graph of σ̃ₙ(λ) with the secant iterates λ_{l−2}, λ_{l−1}, λ_l)
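A self-contained sketch (mine, not the authors' implementation) of the full algorithm: a secant iteration on the signed smallest singular value, with the eigenvector taken as the right singular vector at convergence. The smallest singular triplet is computed here with scipy.sparse.linalg.svds; a production code would use a more robust solver (e.g. shift-invert) for the smallest triplet of a large sparse matrix.

```python
# Sketch (mine) of Algorithm II: secant iteration on the signed smallest
# singular value of A(lam); the right singular vector at convergence
# approximates the null vector of A(lam*), i.e. the nonlinear eigenvector x.
import numpy as np
import scipy.sparse.linalg as spla

def signed_sigma_min(Al):
    # Smallest singular triplet (sigma_n, u_n, v_n) of the constant matrix Al.
    u, s, vt = spla.svds(Al, k=1, which='SM')
    u_n, sigma_n, v_n = u[:, 0], s[0], vt[0, :]
    return sigma_n * np.sign(v_n @ u_n), v_n

def nonlinear_eig_signed_sv(A, lam0, lam1, tol=1e-10, maxit=30):
    f0, _ = signed_sigma_min(A(lam0))
    f1, v = signed_sigma_min(A(lam1))
    for _ in range(maxit):
        if abs(f1) < tol:
            break
        lam_next = lam1 - f1 * (lam1 - lam0) / (f1 - f0)
        lam0, f0, lam1 = lam1, f1, lam_next
        f1, v = signed_sigma_min(A(lam1))
    return lam1, v          # eigenvalue estimate and (approximate) eigenvector
```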
19
Numerical experiments
Test problem
- Computation of the scaling exponent in turbulent flow.
- Matrix sizes: 35,000 and 100,000.
- Seek the smallest positive (nonlinear) eigenvalue. It is known that the eigenvalue lies in [0, 4], but no estimate of the (nonlinear) eigenvector is available.
Computational environment
- Fujitsu PrimePower HPC2500 (16 PU)
20
Algorithm I: Approach based on the linear eigenvalue of smallest modulus
Result for n = 35,000
- Nonlinear eigenvalue: 2.926654
- Computational time: 35,520 sec for each value of λ.
- Secant iterations: 4
21
Algorithm II: Approach based on the signed smallest singular value
Result for n = 35,000
- Nonlinear eigenvalue: 2.926654
- Computational time: 2,005 sec for each value of λ (1/18 of Algorithm I).
- Secant iterations: 4
Result for n = 100,000
- Computational time: 16,200 sec for each value of λ.
- This case could not be computed with Algorithm I because the computational time was too long.
22
Conclusion
Summary of this study
- We proposed an algorithm for large sparse nonlinear eigenproblems based on the signed smallest singular value.
- The algorithm proved much faster than the method based on the linear eigenvalue of smallest modulus.
Future work
- Application of the algorithm to various nonlinear eigenproblems.