A TBR-based Trajectory Piecewise-Linear Algorithm for Generating Accurate Low-order Models for Nonlinear Analog Circuits and MEMS
Dmitry Vasilyev, Michał Rewieński, Jacob White
Massachusetts Institute of Technology

Outline
- Background
- Trajectory piecewise-linear (TPWL) framework for model order reduction
- Choice of projection bases
- TBR-based reduction procedure for TPWL model reduction
- Examples and computational results
- Issues in selecting the order of the model
- Efficiency and accuracy
- Future work and conclusions

Differential Equation Model. Original complex model:
    dx/dt = f(x(t)) + B u(t),    y(t) = C^T x(t),    x ∈ R^n (n large).
This model can represent finite-difference spatial discretizations of PDEs and circuits with linear capacitors and inductors. We need accurate input-output behavior.

Model reduction problem. Original complex model: dx/dt = f(x) + B u(t), y = C^T x, with x ∈ R^n. Reduced model: dx_r/dt = f_r(x_r) + B_r u(t), y_r = C_r^T x_r, with x_r ∈ R^q. Requirements for the reduced model: q << n (the cost of simulating the reduced model grows as q^3), and y_r(t) should be close to y(t).

Projection basis approach to reduction. Pick biorthogonal projection matrices W and V (W^T V = I); the projection bases are the columns of V and W. Approximate x ≈ V x_r and project the nonlinearity: f_r(x_r) = W^T f(V x_r). This yields an inefficient representation for f_r: evaluating W^T f(V x_r) still requires order-n operations (see the sketch below).
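To make the bottleneck concrete, here is a minimal sketch (Python/NumPy; the function and argument names are illustrative, not from the original work) of the naive projected evaluation described above. Even though x_r has only q entries, every call still passes through the full n-dimensional state.

```python
import numpy as np

def f_reduced_naive(xr, W, V, f):
    """Naive evaluation of f_r(x_r) = W^T f(V x_r).

    Even though xr is q-dimensional, each call still performs
    full-order work, which is exactly what TPWL is designed to avoid."""
    x = V @ xr         # lift to the full state space:    O(n q)
    fx = f(x)          # full-order nonlinear evaluation:  at least O(n)
    return W.T @ fx    # project back to q dimensions:     O(n q)
```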

Trajectory piecewise-linear (TPWL) approximation of f(·) [Rewieński, 2001]. Linearize f at points x_0, x_1, x_2, … collected along a training trajectory, and approximate f(x) by a weighted combination of these linearizations; each weight w_i(x) is zero outside a ball around x_i. (Figure: training trajectory and simulated trajectory in state space.)

Projection and the TPWL approximation together yield an efficient f_r(·): f_r(x_r) ≈ Σ_i w_i(x_r) [ f_r(x_i) + A_i^r (x_r − x_i^r) ], where each reduced linearization A_i^r = W^T A_i V is a small q × q matrix (W^T is q × n, A_i is n × n, V is n × q). A sketch of this evaluation follows.
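A minimal sketch of evaluating the reduced TPWL vector field, assuming the linearization data (x_i^r, f_i^r, A_i^r) have already been collected. The exponential weight function is one common choice from the TPWL literature, used here as an illustrative assumption rather than the exact weighting of this work.

```python
import numpy as np

def f_tpwl_reduced(xr, xri, fri, Ari, beta=25.0):
    """Reduced TPWL vector field
        f_r(x_r) ~= sum_i w_i(x_r) * ( f_i^r + A_i^r (x_r - x_i^r) ),
    evaluated entirely with q-dimensional quantities
    (cost O(s q^2) for s linearization points).

    xri : list of reduced linearization points  x_i^r
    fri : list of reduced vector-field samples  f_i^r = W^T f(x_i)
    Ari : list of reduced Jacobians             A_i^r = W^T A_i V
    """
    d = np.array([np.linalg.norm(xr - xi) for xi in xri])
    w = np.exp(-beta * d / max(d.min(), 1e-12))   # decays away from each x_i^r
    w /= w.sum()                                  # normalized weights
    out = np.zeros_like(xr)
    for wi, xi, fi, Ai in zip(w, xri, fri, Ari):
        out += wi * (fi + Ai @ (xr - xi))
    return out
```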

TPWL approximation of f(·): extraction algorithm. 1) Compute A_1, the Jacobian at the initial system position x_0. 2) Obtain W_1 and V_1 using linear reduction for A_1. 3) Simulate the training input and, along the training trajectory x_0, x_1, x_2, … in the non-reduced state space, collect and reduce the linearizations: A_i^r = W_1^T A_i V_1 and f_r(x_i) = W_1^T f(x_i). A sketch of this loop follows.
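A sketch of the collection loop, with several simplifications: `simulate`, `jacobian`, and the distance threshold `delta` are placeholders for the actual training simulation and bookkeeping, and the criterion for adding a new linearization point is reduced to a simple distance test.

```python
import numpy as np

def extract_tpwl_data(simulate, jacobian, f, W1, V1, x0, u_train, delta):
    """Collect and reduce linearizations along a training trajectory.

    Whenever the trajectory moves farther than `delta` (in reduced
    coordinates) from every stored point, a new linearization is added:
        A_i^r    = W1^T A_i V1
        f_r(x_i) = W1^T f(x_i)
    """
    xri, fri, Ari = [], [], []
    for x in simulate(x0, u_train):   # states along the training trajectory
        xr = W1.T @ x                 # reduced coordinates of the state
        if not xri or min(np.linalg.norm(xr - p) for p in xri) > delta:
            xri.append(xr)
            fri.append(W1.T @ f(x))
            Ari.append(W1.T @ jacobian(x) @ V1)
    return xri, fri, Ari
```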

Example problem: a nonlinear RLC transmission line. The linearized system has a nonsymmetric, indefinite Jacobian.

Numerical results – nonlinear RLC transmission line. System response for the input current i(t) = (sin(2πt/10) + 1)/2. (Plot: voltage at node 1 [V] vs. time [s], with curves for the training and testing inputs.)

Outline
- Background
- Trajectory piecewise-linear (TPWL) framework for model order reduction
- Choice of projection bases
- TBR-based reduction procedure for TPWL model reduction
- Examples and computational results
- Issues in selecting the order of the model
- Efficiency and accuracy
- Future work and conclusions

Key issue: choosing the projection basis. Krylov-subspace methods are fast but do not guarantee accuracy; balanced-truncation (TBR) methods are expensive (~n^3) but do guarantee accuracy.

Either way, both Krylov-subspace and balanced-truncation methods produce the same kind of result: projection matrices W and V.

The question addressed in this presentation: which method is more suitable for TPWL? Krylov-subspace methods were used in previous works; can balanced-truncation methods be used instead? This presentation aims to answer that question.

Reminder: the TBR reduction algorithm. Given a linear system (A, B, C):
1) Compute the controllability and observability Gramians P and Q.
2) Compute the Cholesky factor of P: P = R^T R.
3) Compute the SVD of R Q R^T: U Σ^2 U^T = R Q R^T. The diagonal entries of Σ are called the Hankel singular values.
4) The projection basis V consists of the first r columns of the balancing transformation T = R^T U Σ^{-1/2}.
A sketch of these steps follows.
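A minimal NumPy/SciPy sketch of these four steps for a stable LTI system dx/dt = A x + B u, y = C x. It assumes the kept Hankel singular values are nonzero (controllable, observable system); the formula used for W is the standard square-root construction that makes W^T V = I, inferred from the balancing transformation on this slide rather than taken from the authors' exact implementation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky

def tbr(A, B, C, q):
    """Square-root TBR following the steps on this slide (sketch)."""
    # 1) Gramians:  A P + P A^T + B B^T = 0,   A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)

    # 2) Cholesky factor of P:  P = R^T R  (R upper triangular)
    R = cholesky(P)

    # 3) SVD of R Q R^T = U Sigma^2 U^T;  diag(Sigma) = Hankel singular values
    U, s2, _ = np.linalg.svd(R @ Q @ R.T)
    hsv = np.sqrt(s2)

    # 4) V = first q columns of T = R^T U Sigma^{-1/2};
    #    W = first q columns of T^{-T} = Q V Sigma_q^{-1}, so that W^T V = I_q
    V = R.T @ U[:, :q] @ np.diag(hsv[:q] ** -0.5)
    W = Q @ V / hsv[:q]
    return W, V, hsv
```

The reduced linearization is then A_r = W.T @ A @ V, B_r = W.T @ B, C_r = C @ V.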

Our approach: use TPWL to handle the nonlinearity, and use TBR to compute the projection matrices W and V. (Figure: training trajectory x_0, x_1, x_2, … in the non-reduced state space.)

Outline
- Background
- Trajectory piecewise-linear (TPWL) framework for model order reduction
- Choice of projection bases
- TBR-based reduction procedure for TPWL model reduction
- Examples and computational results
- Issues in selecting the order of the model
- Efficiency and accuracy
- Future work and conclusions

Numerical results – RLC transmission line. (Plot: transient error ||y_r − y||_2 vs. order of the reduced model.) TBR-based TPWL beats Krylov-based TPWL; a 4th-order TBR-based TPWL model already reaches the accuracy limit of the TPWL representation itself.

Micromachined device example: a finite-difference (FD) model whose linearized systems again have non-symmetric, indefinite Jacobians.

TPWL-TBR results – MEMS switch example. (Plot: transient errors ||y_r − y||_2 vs. order of the reduced system.) Odd-order models are unstable, while even-order models beat Krylov. Why?

Outline
- Background
- Trajectory piecewise-linear (TPWL) framework for model order reduction
- Choice of projection bases
- TBR-based reduction procedure for TPWL model reduction
- Examples and computational results
- Issues in selecting the order of the model
- Efficiency and accuracy
- Future work and conclusions

Illustrating the even-odd behavior for the MEMS beam example. Observation: the first-point Jacobian has many complex-conjugate eigenvalue pairs. A natural question: how are complex-conjugate pairs represented by the TPWL reduced models?

Eigenvalue behavior of the linearized models. (Plots: eigenvalues of the reduced Jacobians for q = 7 and q = 8.) In going from q = 7 to q = 8, TBR adds a complex-conjugate pair.

Hankel singular values, MEMS beam example. (Plot: Hankel singular value vs. index.) This is the key to the problem: the Hankel singular values are arranged in pairs.

Explanation of the even-odd effect – problem statement. Consider two LTI systems: the initial system (A, B, C) and a slightly perturbed system (Ã, B̃, C̃). Applying TBR reduction to each yields projection bases V and Ṽ, respectively. The problem: how does a perturbation of the initial system affect the TBR projection basis?

TBR reduction algorithm (repeated):
1) Compute the controllability and observability Gramians P and Q.
2) Compute the Cholesky factor of P: P = R^T R.
3) Compute the SVD of R Q R^T: U Σ^2 U^T = R Q R^T.
4) The projection basis V consists of the first r columns of T = R^T U Σ^{-1/2}.
Our goal: understand how a perturbation of the initial system affects the balancing transformation T.

The perturbation behavior of the TBR projection is dictated by step 3, the SVD of R Q R^T (U Σ^2 U^T = R Q R^T), which is a symmetric eigenvalue problem for R Q R^T.

Perturbation theory for the symmetric eigenvalue problem. Let v_i^0 be the eigenvectors of M_0 with eigenvalues λ_i^0. Assuming small perturbations, the eigenvectors of M_0 + Δ are mixtures of the unperturbed ones: v_i ≈ v_i^0 + Σ_{k≠i} c_ik v_k^0, with c_ik = (v_k^0)^T Δ v_i^0 / (λ_i^0 − λ_k^0). The mixing coefficient c_ik is large when λ_i^0 ≈ λ_k^0. The small numerical experiment below illustrates this.
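A small numerical illustration of this mixing (an assumed example, not from the talk): a symmetric 3x3 matrix with two nearly equal eigenvalues and one well-separated eigenvalue is perturbed slightly; the eigenvectors of the close pair can rotate substantially, while the isolated eigenvector barely moves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric matrix with a nearly repeated pair (1.0, 1.001) and an
# isolated eigenvalue (5.0).
Q0, _ = np.linalg.qr(rng.standard_normal((3, 3)))
M0 = Q0 @ np.diag([1.0, 1.001, 5.0]) @ Q0.T

# Small symmetric perturbation of size ~1e-3 (comparable to the gap).
D = 1e-3 * rng.standard_normal((3, 3))
M1 = M0 + (D + D.T) / 2

_, V0 = np.linalg.eigh(M0)   # columns: eigenvectors, ascending eigenvalues
_, V1 = np.linalg.eigh(M1)

# |<v_i, v_i~>| near 1 means the eigenvector is essentially unchanged.
overlap = np.abs(np.sum(V0 * V1, axis=0))
print(overlap)   # the close pair (first two entries) can mix strongly,
                 # while the isolated eigenvector stays put
```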

Explaining the even-odd behavior: the closer two Hankel singular values lie to each other, the more the corresponding columns of V tend to intermix. The analysis implies a simple recipe for using TBR: pick the reduced order so that (a) the remaining Hankel singular values are small enough, and (b) the last kept and the first removed Hankel singular values are well separated. This helps ensure that all linearizations are reduced stably. One way to encode this recipe is sketched below.
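A possible encoding of the recipe in code; the tolerances `tail_tol` and `gap_factor` are purely illustrative assumptions, not values from the talk.

```python
import numpy as np

def pick_reduced_order(hsv, tail_tol=1e-3, gap_factor=5.0):
    """Pick q so that (a) the discarded Hankel singular values are small
    relative to the total, and (b) the last kept value hsv[q-1] is well
    separated from the first discarded value hsv[q] (never split a pair)."""
    hsv = np.asarray(hsv)
    total = hsv.sum()
    for q in range(1, len(hsv)):
        tail_is_small = hsv[q:].sum() < tail_tol * total
        well_separated = hsv[q - 1] > gap_factor * hsv[q]
        if tail_is_small and well_separated:
            return q
    return len(hsv)
```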

Outline
- Background
- Trajectory piecewise-linear (TPWL) framework for model order reduction
- Choice of projection bases
- TBR-based reduction procedure for TPWL model reduction
- Examples and computational results
- Issues in selecting the order of the model
- Efficiency and accuracy
- Future work and conclusions

Reducing the cost of TBR reduction - combined Krylov-TBR algorithm. Start from the initial model (A, B, C) of size n. First apply Krylov reduction with bases (W_i, V_i): A_i = W_i^T A V_i, B_i = W_i^T B, C_i = C V_i, giving an intermediate model (A_i, B_i, C_i) of size n_i. Then apply TBR to the intermediate model with bases (W_t, V_t): A_r = W_t^T A_i V_t, B_r = W_t^T B_i, C_r = C_i V_t, giving the reduced model (A_r, B_r, C_r) of size q. Experiments showed no difference in accuracy between this combined algorithm and direct TBR. A sketch follows.
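A sketch of the combined procedure, reusing the tbr() sketch above. For simplicity it uses a single-input, one-sided (congruence) Krylov projection matching moments at s = 0; the two-sided projection (W_i, V_i) of the slide and the handling of possibly unstable intermediate models are omitted.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, qr

def krylov_basis(A, B, ni):
    """Orthonormal basis of span{A^{-1}B, A^{-2}B, ..., A^{-ni}B}
    (single-input sketch, moment matching at s = 0)."""
    lu = lu_factor(A)
    v = B[:, 0]
    vecs = []
    for _ in range(ni):
        v = lu_solve(lu, v)
        vecs.append(v)
    Vk, _ = qr(np.column_stack(vecs), mode='economic')
    return Vk

def krylov_tbr(A, B, C, ni, q):
    """Combined Krylov-TBR: cheap Krylov projection to order ni,
    then TBR (the tbr() sketch above) on the small intermediate model."""
    Vk = krylov_basis(A, B, ni)
    Ai, Bi, Ci = Vk.T @ A @ Vk, Vk.T @ B, C @ Vk     # intermediate model
    Wt, Vt, _ = tbr(Ai, Bi, Ci, q)
    return Vk @ Wt, Vk @ Vt                          # overall W, V (n x q)
```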

Performance of the Krylov-TBR TPWL MOR extraction procedures (Matlab implementation); extraction times:

  Initial model size n | TBR TPWL (q = 6) | Krylov-TBR TPWL (Krylov q = 30) | Krylov TPWL (q = 30)
  1500                 | 1268 s           | 30.57 s                         | 26.34 s
  800                  | 181.8 s          | 8.57 s                          | 7.75 s
  400                  | 23.75 s          | 2.73 s                          | 3.03 s

The cost of Krylov-TBR is almost equal to that of plain Krylov reduction.

Comparing the accuracy of the Krylov TPWL method and the TBR-based TPWL algorithm. Order of the reduced model needed to achieve a given transient accuracy for our models (testing input equal to the training input):

  Accuracy in transient | Krylov-based TPWL | TBR-based TPWL
  5%                    | 12                | 2
  1%                    | 20                | 4
  0.5%                  | >30               | 6

A 5x reduction in order gives roughly a 125x improvement in efficiency.

Proposed improvement: aggregate projection bases from several linearizations. Given (W_1, V_1) with W_1^T V_1 = I_{k1×k1} and (W_2, V_2) with W_2^T V_2 = I_{k2×k2}, biorthogonalize the aggregate to obtain (W_agg, V_agg) with W_agg^T V_agg = I_{N_agg×N_agg}, where N_agg ≤ k_1 + k_2. Open question: how should redundant directions be removed? (In the Krylov case we used an SVD, since Krylov reduction uses an orthogonal projection.)

Future work. Are TBR-based TPWL models valid for unstable linearizations? What about systems of the form d/dt [q(x(t))] = f(x(t)) + B u(t) with a nonlinear charge function q(·), i.e., circuits with nonlinear capacitors?

Conclusions. In this work we used a TBR-based linear reduction procedure to generate TPWL reduced models. The model order was reduced 5 times while maintaining accuracy comparable to the Krylov TPWL method (a 125x improvement in simulation efficiency). The combined Krylov-TBR reduction makes it possible to extract TPWL models at low cost. One should watch for repeated or nearly equal Hankel singular values when applying this method.