A more reliable reduction algorithm for behavioral model extraction
Dmitry Vasilyev, Jacob White
Massachusetts Institute of Technology
Outline
- Background
- Projection framework for model reduction
- Balanced truncation algorithm and approximations
- AISIAD algorithm
- Description of the proposed algorithm
- Modified AISIAD and a low-rank square root algorithm
- Efficiency and accuracy
- Conclusions
Model reduction problem
- Reduction should be automatic
- Must preserve input-output properties
- Goal: replace a model with many (> 10^4) internal states by one with few (< 100) internal states, keeping the same inputs and outputs
Differential equation model
    E dx/dt = A x + B u,   y = C x + D u
where x is the state, u is the vector of inputs, and y is the vector of outputs; A is stable and n x n (large), E is SPD and n x n.
The model can represent:
- finite-difference spatial discretizations of PDEs
- circuits with linear elements
Model reduction problem
- n is large (thousands!), while the reduced order q is small (tens)
- The reduction needs to be automatic and to preserve input-output properties (the transfer function)
Approximation error
For wide-band applications, the model should have a small worst-case error: the maximal difference between the original and reduced transfer functions over all frequencies ω.
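As a concrete illustration of this worst-case error, one can sample the transfer-function mismatch over a frequency grid. This is a sketch in NumPy using an illustrative diagonal system with naive modal truncation as the "reduced" model; none of these matrices come from the talk's examples.

```python
import numpy as np

n, q = 20, 4
A = -np.diag(np.arange(1.0, n + 1))       # illustrative stable diagonal system
B = np.ones((n, 1))
C = np.ones((1, n))
Ar, Br, Cr = A[:q, :q], B[:q], C[:, :q]   # naive modal truncation, for illustration

def tf(Am, Bm, Cm, w):
    """Transfer function H(jw) = C (jwI - A)^{-1} B of a SISO realization."""
    return (Cm @ np.linalg.solve(1j * w * np.eye(Am.shape[0]) - Am, Bm))[0, 0]

# Sampled approximation of the worst-case error: max |H(jw) - Hr(jw)| over a grid
freqs = np.logspace(-2, 3, 400)
err = max(abs(tf(A, B, C, w) - tf(Ar, Br, Cr, w)) for w in freqs)
```

Sampling only approximates the true supremum over all frequencies, but it is the usual practical check.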
Projection framework for model reduction
Most reduction methods are based on projection. Pick biorthogonal projection matrices W and V (n x q, W^T V = I); the projection bases are the columns of V and W. The state is approximated as x ≈ V x_r, and the dynamics A x are projected to W^T A V x_r.
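The projection step itself is only a few lines. Below is a minimal NumPy sketch with random illustrative matrices, using an orthonormal V with W = V so that W^T V = I:

```python
import numpy as np

np.random.seed(0)
n, q, m, p = 50, 5, 2, 2          # illustrative sizes
A = np.random.randn(n, n)         # placeholder system matrices
B = np.random.randn(n, m)
C = np.random.randn(p, n)

# Any biorthogonal pair (W^T V = I) works; here V is orthonormal and W = V
V = np.linalg.qr(np.random.randn(n, q))[0]
W = V

# Reduced model: substituting x = V x_r and projecting gives (W^T A V, W^T B, C V)
Ar = W.T @ A @ V
Br = W.T @ B
Cr = C @ V
```

Everything that distinguishes one projection method from another is how V and W are chosen; the projection itself is always this cheap.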
Projection should preserve important modes
For an LTI system with input u(t), state x(t), and output y(t):
- P (controllability): which modes are easier to reach?
- Q (observability): which modes produce more output?
The reduced model retains the most controllable and most observable modes; a mode must be both very controllable and very observable.
Balanced truncation reduction (TBR)
Compute the controllability and observability gramians P and Q (~n^3):
    A P + P A^T + B B^T = 0
    A^T Q + Q A + C^T C = 0
The reduced model keeps the dominant eigenspaces of PQ (~n^3):
    P Q v_i = λ_i v_i,   w_i^T P Q = λ_i w_i^T
Reduced system: (W^T A V, W^T B, C V, D)
Very expensive: P and Q are dense even for sparse models.
Most reduction algorithms effectively approximate the dominant eigenspaces of P and Q separately; however, what matters is the product PQ:
- Arnoldi [Grimme '97]: colsp(V) = {A^{-1}B, A^{-2}B, ...}, W = V — approximates P_dom only
- Padé via Lanczos [Feldman and Freund '95]: colsp(V) = {A^{-1}B, A^{-2}B, ...} — approximates P_dom; colsp(W) = {A^{-T}C^T, (A^{-T})^2 C^T, ...} — approximates Q_dom
- Frequency-domain POD [Willcox '02], Poor Man's TBR [Phillips '04]: colsp(V) = {(jω_1 I - A)^{-1}B, (jω_2 I - A)^{-1}B, ...} — approximates P_dom; colsp(W) = {(jω_1 I - A)^{-T}C^T, (jω_2 I - A)^{-T}C^T, ...} — approximates Q_dom
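A sketch of the "separate subspaces" idea: Krylov-style bases built from A^{-1}B and A^{-T}C^T, on an illustrative random system. A real implementation would use a sparse factorization of A rather than an explicit inverse.

```python
import numpy as np

np.random.seed(2)
n, k = 40, 4
A = -2.0 * np.eye(n) + 0.1 * np.random.randn(n, n)   # stable, invertible test matrix
B = np.random.randn(n, 1)
C = np.random.randn(1, n)

Ainv = np.linalg.inv(A)        # stand-in for a sparse LU factorization
Vcols, Wcols = [], []
vb, wb = B, C.T
for _ in range(k):
    vb = Ainv @ vb             # A^{-1} B, A^{-2} B, ...
    wb = Ainv.T @ wb           # A^{-T} C^T, (A^{-T})^2 C^T, ...
    Vcols.append(vb)
    Wcols.append(wb)
V = np.linalg.qr(np.hstack(Vcols))[0]   # approximates the dominant subspace of P
W = np.linalg.qr(np.hstack(Wcols))[0]   # approximates the dominant subspace of Q
```

Each basis is good at capturing one gramian, but as the next slides show, together they can still miss the dominant eigenspace of the product PQ.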
RC line (symmetric circuit)
V(t) is the input, i(t) is the output. The system is symmetric, so P = Q: all controllable states are observable and vice versa.
RLC line (nonsymmetric circuit)
P and Q are no longer equal! By keeping only the most controllable and/or only the most observable states, we may not find the dominant eigenvectors of PQ.
Lightly damped RLC circuit (R = 0.008, L = 10^-5, C = 10^-6, N = 100)
Exact low-rank approximations of P and Q of order < 50 lead to PQ ≈ 0!
Lightly damped RLC circuit
The union of the eigenspaces of P and Q does not necessarily approximate the dominant eigenspace of PQ. (Figure: top 5 eigenvectors of P and top 5 eigenvectors of Q.)
AISIAD model reduction algorithm
Idea of the AISIAD approximation: approximate the eigenvectors using power iterations,
    X_i = (PQ) V_i  =>  V_{i+1} = qr(X_i),
so that V_i converges to the dominant eigenvectors of PQ. The question is how to find the product (PQ) V_i.
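The exact (impractically expensive) version of this power iteration is easy to state. The sketch below uses dense gramians on an illustrative random system, purely to show the iteration that AISIAD approximates:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

np.random.seed(3)
n, q = 30, 3
A = -2.0 * np.eye(n) + 0.1 * np.random.randn(n, n)   # stable test system
B = np.random.randn(n, 1)
C = np.random.randn(1, n)
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

V = np.linalg.qr(np.random.randn(n, q))[0]
for _ in range(25):
    X = P @ (Q @ V)            # X_i = (PQ) V_i -- the product AISIAD must approximate
    V = np.linalg.qr(X)[0]     # V_{i+1} = qr(X_i), re-orthogonalized each iterate
# columns of V now approximate the dominant eigenspace of P Q
```

The whole point of AISIAD is to run this iteration without ever forming P and Q, which is where the Sylvester equations of the next slides come in.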
Approximation of the product in V_{i+1} = qr(PQ V_i): the AISIAD algorithm splits each iteration into
    W_i ≈ qr(Q V_i),   V_{i+1} ≈ qr(P W_i),
where each product is approximated using the solution of a Sylvester equation.
More detailed view of the AISIAD approximation
Right-multiplying the Lyapunov equation for P by W_i turns it into a Sylvester equation with a q x q coefficient H and an n x q right-hand side M (original AISIAD).
Modified AISIAD approximation
Right-multiplying by V_i gives the analogous equation, again with a q x q coefficient H and an n x q right-hand side M; the exact gramian appearing in M is approximated.
Modified AISIAD approximation
Right-multiply by V_i and approximate the gramian inside the n x q right-hand side M. This way we can take advantage of the numerous methods that approximate P and Q!
Specialized Sylvester equation
    A X + X H = -M,
with A (n x n), X and M (n x q), and H (q x q). We need only the column span of X.
Solving the Sylvester equation
Using the Schur decomposition of H, which makes the small q x q coefficient triangular, the transformed equation is solved for the columns of the transformed X one at a time.
Solving the Sylvester equation (via the Schur decomposition of H)
- Applicable to any stable A
- Requires q linear solves with A
- The solution can be accelerated via fast matrix-vector products
- Another method exists, based on IRA, but it needs A > 0 [Zhou '02]
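A sketch of the column-by-column solve via the complex Schur form of the small matrix H: with H = U T U^H and T upper triangular, each column of the transformed unknown needs one shifted linear solve with A. The function name and test setup are mine, not from the talk.

```python
import numpy as np
from scipy.linalg import schur, solve

def sylvester_qr(A, H, M):
    """Solve A X + X H = -M (X is n x q) using the Schur decomposition of the small H."""
    T, U = schur(H, output='complex')        # H = U T U^H, T upper triangular
    n, q = M.shape
    Mt = M.astype(complex) @ U               # transformed right-hand side
    Xt = np.zeros((n, q), dtype=complex)
    I = np.eye(n)
    for j in range(q):                       # q shifted linear solves with A
        rhs = -Mt[:, j] - Xt[:, :j] @ T[:j, j]
        Xt[:, j] = solve(A + T[j, j] * I, rhs)
    return (Xt @ U.conj().T).real            # transform back; X is real for real data
```

For stable A and stable H every shift A + t_jj I is nonsingular, so each column solve is well posed; in the large sparse setting these q solves are where all the cost goes.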
Solving the Sylvester equation (via the Schur decomposition of H)
- Applicable to any stable A; requires q linear solves with A
- For SISO systems and P̂ = 0, it is equivalent to moment matching at the frequency points -Λ(W^T A W)
Modified AISIAD algorithm
1. Obtain low-rank approximations P̂ and Q̂ of P and Q (the LR-sqrt step).
2. Solve A X_i + X_i H + M = 0, so that X_i ≈ P W_i, where H = W_i^T A^T W_i and M = P̂(I - W_i W_i^T) A^T W_i + B B^T W_i.
3. Perform the QR decomposition X_i = V_i R.
4. Solve A^T Y_i + Y_i F + N = 0, so that Y_i ≈ Q V_i, where F = V_i^T A V_i and N = Q̂(I - V_i V_i^T) A V_i + C^T C V_i.
5. Perform the QR decomposition Y_i = W_{i+1} R to get the new iterate.
6. Go to step 2 and iterate.
7. Biorthogonalize W and V and construct the reduced model (W^T A V, W^T B, C V, D).
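Putting the steps together, here is a small dense sketch of the loop. For brevity the "low-rank" gramians P̂, Q̂ are truncated eigendecompositions of exact gramians, and SciPy's dense Sylvester solver replaces the specialized one; sizes and seeds are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_sylvester, qr

np.random.seed(4)
n, q = 30, 4
A = -2.0 * np.eye(n) + 0.1 * np.random.randn(n, n)   # stable test system
B = np.random.randn(n, 1)
C = np.random.randn(1, n)

def lowrank(G, r):
    """Rank-r truncated eigendecomposition of a symmetric PSD matrix."""
    lam, U = np.linalg.eigh(G)
    return (U[:, -r:] * lam[-r:]) @ U[:, -r:].T

# Step 1: low-rank gramian approximations (stand-ins for low-rank Lyapunov solvers)
Ph = lowrank(solve_continuous_lyapunov(A, -B @ B.T), q)
Qh = lowrank(solve_continuous_lyapunov(A.T, -C.T @ C), q)

W = qr(np.random.randn(n, q), mode='economic')[0]
for _ in range(10):
    # Steps 2-3: X ~ P W_i from A X + X H + M = 0
    H = W.T @ A.T @ W
    M = Ph @ (A.T @ W - W @ (W.T @ (A.T @ W))) + B @ (B.T @ W)
    V = qr(solve_sylvester(A, H, -M), mode='economic')[0]
    # Steps 4-5: Y ~ Q V_i from A^T Y + Y F + N = 0
    F = V.T @ A @ V
    N = Qh @ (A @ V - V @ (V.T @ (A @ V))) + C.T @ (C @ V)
    W = qr(solve_sylvester(A.T, F, -N), mode='economic')[0]

# Step 7: biorthogonalize so that Wb^T V = I, then project
Wb = W @ np.linalg.inv(W.T @ V).T
Ar, Br, Cr = Wb.T @ A @ V, Wb.T @ B, C @ V
```

Note how P̂ and Q̂ enter only through matrix-vector products with q columns, which is what keeps each iteration cheap when the gramian factors are low rank.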
For systems in the descriptor form (E ≠ I), the corresponding generalized Lyapunov equations lead to similar approximate power iterations.
mAISIAD and low-rank square root
Both start from low-rank gramians (cost varies); the LR-square-root projection is the inexpensive step, while the mAISIAD iteration is more expensive. For the majority of non-symmetric cases, mAISIAD works better than the low-rank square root.
RLC line example results (N = 1000, 1 input, 2 outputs)
H-infinity norm of the reduction error (worst-case discrepancy over all frequencies).
Steel rail cooling profile benchmark
Taken from the Oberwolfach benchmark collection; N = 1357, 7 inputs, 6 outputs.
mAISIAD is useless for symmetric models
For symmetric systems (A = A^T, B = C^T), P = Q; therefore mAISIAD is equivalent to LR-sqrt for P̂, Q̂ of order q (RC line example).
Cost of the algorithm
The cost is directly proportional to the cost of solving a shifted linear system with A (non-descriptor case) or with the pencil (A, E) (descriptor case), where the shift s_jj is a complex number. The cost does not depend on the number of inputs and outputs.
Conclusions
- The algorithm has superior accuracy and extended applicability with respect to the original AISIAD method
- A very promising low-cost approximation to TBR
- Applicable to any stable dynamical system; it will work (though usually worse) even without low-rank gramians
- Passivity and stability preservation are possible via post-processing
- Not beneficial if the model is symmetric