1
A Constraint Generation Approach to Learning Stable Linear Dynamical Systems
Sajid M. Siddiqi, Byron Boots, Geoffrey J. Gordon
Carnegie Mellon University
NIPS 2007, poster W22
2
[Video: steam dynamic texture]
3
Application: Dynamic Textures [1]
– Videos of moving scenes that exhibit stationarity properties
– Dynamics can be captured by a low-dimensional model
– Learned models can efficiently simulate realistic sequences
– Applications: compression, recognition, synthesis of videos
[Examples: steam, river, fountain]
[1] S. Soatto, G. Doretto, and Y. Wu. Dynamic Textures. Proceedings of the ICCV, 2001.
8
Linear Dynamical Systems
– Input data sequence
– Assume a latent variable that explains the data
– Assume a Markov model on the latent variable
11
Linear Dynamical Systems [2]
– State and observation models: a dynamics matrix and an observation matrix (standard form sketched below)
– This talk: learning LDS parameters from data while ensuring a stable dynamics matrix A, more efficiently and accurately than previous methods
[2] Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Trans. ASME-JBE.
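A minimal sketch of the standard Kalman/LDS form these models take, assuming Gaussian noise terms w_t, v_t and writing C for the observation matrix (these symbols are assumptions, not taken from the slide text):

```latex
\begin{aligned}
x_{t+1} &= A\,x_t + w_t, & w_t &\sim \mathcal{N}(0, Q) \\
y_t     &= C\,x_t + v_t, & v_t &\sim \mathcal{N}(0, R)
\end{aligned}
```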
14
Learning Linear Dynamical Systems
– Suppose we have an estimated state sequence
– Define the state reconstruction error as the objective
– We would like to learn the A that minimizes this error, i.e., the least-squares fit written out below
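Concretely, stacking the estimated states into X_- = [x_1 … x_{τ-1}] and X_+ = [x_2 … x_τ] (this matrix notation is an assumption for exposition), the state reconstruction error and its unconstrained minimizer are:

```latex
\hat{A} \;=\; \arg\min_{A}\; \|A X_{-} - X_{+}\|_F^2 \;=\; X_{+} X_{-}^{\dagger}
```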
16
Subspace Identification [3]
– Subspace ID uses matrix decomposition to estimate the state sequence
– Build a Hankel matrix D of stacked observations (layout sketched below)
[3] P. Van Overschee and B. De Moor. Subspace Identification for Linear Systems. Kluwer, 1996.
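A sketch of one common block-Hankel layout, with k stacked observations per column (the exact indexing used on the slide is an assumption):

```latex
D \;=\;
\begin{bmatrix}
y_1 & y_2 & \cdots & y_{\tau-k+1} \\
y_2 & y_3 & \cdots & y_{\tau-k+2} \\
\vdots & \vdots & & \vdots \\
y_k & y_{k+1} & \cdots & y_{\tau}
\end{bmatrix}
```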
22
Subspace Identification [3]
– In expectation, the Hankel matrix is inherently low-rank!
– Can use SVD of D (with k observations per column) to obtain the low-dimensional state sequence (sketch below)
[3] P. Van Overschee and B. De Moor. Subspace Identification for Linear Systems. Kluwer, 1996.
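A minimal NumPy sketch of this pipeline, assuming observations stacked as rows of Y (one frame per row), k observations per Hankel column, and a chosen latent dimension n; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def subspace_id(Y, k, n):
    """Estimate an LDS (A, C) and state sequence X via SVD of a block-Hankel matrix."""
    tau, d = Y.shape
    cols = tau - k + 1
    # Block-Hankel matrix: column j stacks observations y_j, ..., y_{j+k-1}
    D = np.vstack([Y[i:i + cols].T for i in range(k)])      # shape (k*d, cols)
    # In expectation D is low-rank, so truncate its SVD at rank n
    U, S, Vt = np.linalg.svd(D, full_matrices=False)
    X = np.diag(S[:n]) @ Vt[:n]       # estimated states, one column per time step
    C = U[:d, :n]                     # observation matrix (top block of U)
    # Least-squares dynamics matrix mapping x_t to x_{t+1}
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    return A, C, X
```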
24
Let's train an LDS for the steam texture using this algorithm, and simulate a video from it! (latent state x_t ∈ ℝ^40)
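In code terms, reusing the illustrative subspace_id sketch above (the stacking depth k here is an arbitrary choice, not from the slide):

```python
# Y: steam video frames, one flattened frame per row
A, C, X = subspace_id(Y, k=5, n=40)   # 40-dimensional latent state, as on the slide
```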
25
Simulating from a learned model [plot of the state components x_t over time t]: the model is unstable.
26
Notation
– λ_1, …, λ_n: eigenvalues of A (|λ_1| ≥ … ≥ |λ_n|)
– ν_1, …, ν_n: unit-length eigenvectors of A
– σ_1, …, σ_n: singular values of A (σ_1 ≥ σ_2 ≥ … ≥ σ_n)
– S_λ: matrices with |λ_1| ≤ 1
– S_σ: matrices with σ_1 ≤ 1
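In set notation, the two matrix classes above are:

```latex
S_\lambda = \{\, A : |\lambda_1(A)| \le 1 \,\}, \qquad
S_\sigma  = \{\, A : \sigma_1(A) \le 1 \,\}
```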
31
Stability
– A matrix A is stable if |λ_1| ≤ 1, i.e., if all of its eigenvalues lie on or inside the unit circle
– We would like to solve for the A that minimizes the reconstruction error subject to A being stable (written out below)
[Example state trajectories x_t(1), x_t(2) for |λ_1| = 0.3 (stable) and |λ_1| = 1.295 (unstable)]
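Written out with the X_-, X_+ notation assumed earlier, the problem being targeted is:

```latex
\min_{A}\; \|A X_{-} - X_{+}\|_F^2 \quad \text{s.t.} \quad |\lambda_1(A)| \le 1 \;\;(\text{i.e. } A \in S_\lambda)
```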
36
Stability and Convexity
– But S_λ is non-convex! [Figure: the sets S_λ and S_σ, with example matrices A_1, A_2]
– Let's look at S_σ instead: S_σ is convex, and S_σ ⊆ S_λ
– Previous work [4] exploits these properties to learn a stable A by solving a semi-definite program (one possible formulation is sketched below)
[4] S. L. Lacy and D. S. Bernstein. Subspace identification with guaranteed stability using constrained optimization. Proc. of the ACC (2002); IEEE Trans. Automatic Control (2003).
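One way to write such a semi-definite program (a sketch; the exact formulation in Lacy and Bernstein differs in details) replaces the non-convex constraint A ∈ S_λ with the convex constraint A ∈ S_σ, expressed as a linear matrix inequality via a Schur complement:

```latex
\min_{A}\; \|A X_{-} - X_{+}\|_F^2 \quad \text{s.t.} \quad
\begin{bmatrix} I & A \\ A^{\top} & I \end{bmatrix} \succeq 0
\;\;\Longleftrightarrow\;\; \sigma_1(A) \le 1
```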
37
Let's train an LDS for steam again, this time constraining A to be in S_σ
38
Simulating from a Lacy-Bernstein stable texture model [plot of x_t over time t]: the model is over-constrained. Can we do better?
39
Our Approach
– Formulate the S_σ approximation of the problem as a semi-definite program (SDP)
– Start with a quadratic program (QP) relaxation of this SDP, and incrementally add constraints
– Because the SDP is an inner approximation of the problem, we reach stability early, before reaching the feasible set of the SDP
– We interpolate the solution to return the best stable matrix possible (see the sketch after this list)
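A high-level sketch of this loop, assuming a generic convex solver (cvxpy here); the specific linear constraints generated from the top singular vectors of each unstable iterate, and the final interpolation step, are simplified stand-ins for what the paper actually derives:

```python
import numpy as np
import cvxpy as cp

def spectral_radius(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

def constraint_generation(X_minus, X_plus, max_iters=50):
    """Least-squares dynamics fit with constraints added until |lambda_1(A)| <= 1."""
    n = X_minus.shape[0]
    A = cp.Variable((n, n))
    objective = cp.Minimize(cp.sum_squares(A @ X_minus - X_plus))
    constraints, A_prev = [], None
    for _ in range(max_iters):
        cp.Problem(objective, constraints).solve()
        A_hat = A.value
        if spectral_radius(A_hat) <= 1:
            break
        # Cut off the current unstable solution with a linear constraint built
        # from its top singular vectors (a simplified stand-in for the paper's
        # generated constraint).
        U, _, Vt = np.linalg.svd(A_hat)
        constraints.append(U[:, 0] @ A @ Vt[0] <= 1)
        A_prev = A_hat
    if A_prev is None:
        return A_hat  # the unconstrained least-squares estimate was already stable
    # Interpolate from the last unstable iterate toward the stable one and keep
    # the first stable point, recovering objective value lost to the constraints.
    for gamma in np.linspace(0.0, 1.0, 101):
        A_mix = gamma * A_hat + (1.0 - gamma) * A_prev
        if spectral_radius(A_mix) <= 1:
            return A_mix
    return A_hat
```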
46
The Algorithm [Figure: objective function contours]
– A_1: unconstrained QP solution (the least-squares estimate)
– A_2: QP solution after 1 constraint (happens to be stable)
– A_final: interpolation of the stable solution with the last one
– A_previous method: Lacy-Bernstein (2002)
47
Let's train an LDS for steam using constraint generation, and simulate…
48
Simulating from a Constraint Generation stable texture model [plot of x_t over time t]: the model captures more dynamics and is still stable.
49
[Side-by-side simulations: Least Squares vs. Constraint Generation]
50
Empirical Evaluation
Algorithms:
– Constraint Generation (CG): our method
– Lacy and Bernstein (2002), LB-1: finds a σ_1 ≤ 1 solution
– Lacy and Bernstein (2003), LB-2: solves a similar problem in a transformed space
Data sets:
– Dynamic textures
– Biosurveillance baseline models (see paper)
52
Reconstruction error [Figure: steam video – % decrease in objective (lower is better) vs. number of latent dimensions]
53
Running time [Figure: steam video – running time (s) vs. number of latent dimensions]
55
Conclusion
– A novel constraint generation algorithm for learning stable linear dynamical systems
– The SDP relaxation enables us to optimize over a larger set of matrices while being more efficient
Future work:
– Adding stability constraints to EM
– Stable models for more structured dynamic textures
56
Thank you!
58
Subspace ID with Hankel matrices
– Stacking multiple observations in D forces latent states to model the future, e.g., annual sunspots data with 12-year cycles
[Figure: first 2 columns of U plotted over time t, for k = 1 and k = 12]
62
Stability and Convexity
– The state space of a stable LDS lies inside some ellipse
– The set of matrices that map a particular ellipse into itself (and hence are stable) is convex
– If we knew in advance which ellipse contains our state space, finding A would be a convex problem. But we don't…
– …and the set of all stable matrices is non-convex
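The convexity claim can be made concrete with a standard Lyapunov-style condition (not spelled out on the slide): for a fixed positive-definite P defining the ellipse E_P = { x : xᵀ P⁻¹ x ≤ 1 },

```latex
A\,E_P \subseteq E_P \;\Longleftrightarrow\; A^{\top} P^{-1} A \preceq P^{-1},
```

which is a convex constraint on A for fixed P. The difficulty is exactly the one stated above: the right ellipse (i.e., P) is not known in advance.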