Linear dynamic systems
Introduction: example of a complex dynamic system
Oregon interaction data: 69 normal and 72 depressed adolescents in the laboratory. Several 9-minute interaction tasks: two positive tasks (e.g., discuss a nice time together), two reminiscence tasks (e.g., discuss salient aspects of adolescenthood and parenthood), and two conflict tasks (e.g., discuss familial conflicts)
Second-to-second physiological data collected (heart rate, blood pressure, skin conductance, etc.) and observational data (i.e., behavior coding: neutral, happy, angry, dysphoric)
Oregon interaction data set
Problem: the cardiovascular system is nonlinear. If we look at heart rate, it contains 1/f noise and is self-similar (→ fractals)
Question: Why study a linear dynamical system if the world is nonlinear?
you need to understand a linear system first
linear methods are often the basis of nonlinear ones
they often work well as approximations
they are often easier to apply in a noisy context (measurement error, unidentified influences)
computationally efficient methods have been developed
Feynman: linear systems are important because we can solve them! (Lectures on Physics)
Other examples
Overview of dynamical systems
What will we study today?
Simple dynamic system. Falling ball: distance ∝ time²
t = time
x(t) = position at time t
v(t) = x'(t) = velocity at time t
a(t) = v'(t) = x''(t) = acceleration at time t
[figure: ball falling from x = 0 to position x(t)]
using Newton’s second law (no friction, small heights): m·x''(t) = m·g
thus, it follows that x''(t) = g; this is a linear second-order ordinary differential equation (ODE)
the solution can be found analytically by integration: first, consider it to be a first-order ODE in v(t): v'(t) = g, hence v(t) = g·t + C; if we know v(0), then C = v(0)
solution: v(t) = g·t + v(0). Now we have a first-order ODE in x(t): x'(t) = g·t + v(0), hence x(t) = ½·g·t² + v(0)·t + K; if we know x(0), then K = x(0). Solution: x(t) = ½·g·t² + v(0)·t + x(0)
this system is very simple analytically, but somewhat more complex to study in general because it is non-homogeneous. Again, we can go from a second-order ODE to a system of first-order ODEs. How? By defining the state variables x(t) = position and v(t) = x'(t) = velocity
if we know the state variables, then we know the system. In general, we can write x'(t) = v(t) and v'(t) = g: this is a system of first-order ODEs (replacing the single second-order ODE)
or also (using vector notation): d/dt [x(t); v(t)] = [[0, 1], [0, 0]] [x(t); v(t)] + [0; g]
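As a concrete illustration of this state-space form, here is a minimal Python sketch (assuming g = 9.81 and x(0) = v(0) = 0, which are not prescribed in the slides) that integrates the first-order system with Euler steps and compares the result with the analytic solution x(t) = ½·g·t²:

```python
import numpy as np

# Falling ball as a first-order system: d/dt [x, v] = [[0, 1], [0, 0]] [x, v] + [0, g]
g = 9.81
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
b = np.array([0.0, g])

dt = 0.001                       # small step for the Euler approximation
t = np.arange(0.0, 2.0, dt)
state = np.zeros((len(t), 2))    # columns: position x(t), velocity v(t)

for k in range(1, len(t)):
    # Euler step: state' = A state + b
    state[k] = state[k - 1] + dt * (A @ state[k - 1] + b)

# Compare with the analytic solution x(t) = 0.5 g t^2 (since x(0) = v(0) = 0)
print(state[-1, 0], 0.5 * g * t[-1] ** 2)
```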
Another (more interesting) dynamic system: the mass-spring system. [figure: mass on a spring, with x = 0 the rest position and x(t) the displacement]
Newton’s second law: F = m·x''(t). Combine this with Hooke’s law: F = −k·x(t), where k is the spring constant (large for stiff springs)
the ODE becomes m·x''(t) = −k·x(t), or x''(t) = −ω²·x(t): again a second-order ODE (assume ω² = k/m)
How to study this dynamic system? Again, we can go from a second-order ODE to a system of first-order ODEs. How? By defining the state variables x(t) = position and v(t) = x'(t) = velocity; if we know the state variables, then we know the system
the dynamic system now becomes x'(t) = v(t), v'(t) = −ω²·x(t). We can again plot a vector field in the state space or phase plane (i.e., the plane with x and v as coordinate axes): each point (x, v) is associated with (x', v') = (v, −ω²·x)
Exercise: plot the vector field.
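A possible starting point for this exercise, assuming ω² = k/m = 1 so that (x', v') = (v, −x), using matplotlib's quiver:

```python
import numpy as np
import matplotlib.pyplot as plt

# Vector field of the undamped mass-spring system in the (x, v) phase plane.
x, v = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
dx = v            # x' = v
dv = -x           # v' = -omega^2 * x, with omega^2 = 1

plt.quiver(x, v, dx, dv)
plt.xlabel("x (position)")
plt.ylabel("v (velocity)")
plt.title("Vector field of the mass-spring system")
plt.show()
```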
imagine a flow of an imaginary fluid and place an imaginary particle in it; watch how it is carried away. If (x'(t), v'(t)) = (0, 0), then this is the fixed point (for the mass-spring system this happens at (x, v) = (0, 0)). Starting anywhere else leads to circulation in a closed orbit: the phase portrait of this system
Questions: Based on this information, graph schematically the solution of the second-order ODE for x(t). What does the solution for v(t) look like? (also schematically)
for the general homogeneous system x'(t) = a·x(t) + b·v(t), v'(t) = c·x(t) + d·v(t), or (in vector notation) [x'; v'] = A [x; v], there are several different kinds of phase portraits
to show that our mass-spring system is an example: [x'; v'] = [[0, 1], [−ω², 0]] [x; v], hence A = [[0, 1], [−ω², 0]]
possible phase portraits: center, star (unstable), saddle point, stable spiral, stable node, degenerate node
we need to check the eigenvalues of A: solve the characteristic equation det(A − λI) = 0
[diagram: classification of the phase portraits in terms of the eigenvalues λ₁ and λ₂]
for ω² = 1 the eigenvalues are purely imaginary complex numbers: λ₁ = +i and λ₂ = −i
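A quick numerical check of this claim (again assuming ω² = 1) with NumPy:

```python
import numpy as np

# Eigenvalues of A for the mass-spring system with omega^2 = 1.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)   # approximately [0.+1.j, 0.-1.j]: purely imaginary, i.e., a center
```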
Another, more interesting, dynamic system: the mass-spring system with damping
Now the dynamics are a little bit more complex: the damping force is F = −c·x'(t), where c is the damping coefficient. ODE: m·x''(t) + c·x'(t) + k·x(t) = 0
Exercise: Rewrite the ODE as a two-dimensional linear system. What is the fixed point? Explore the possible phase portraits. Take in all cases m = 1 and let c and k vary such that c² > 4k, c² < 4k, and (optional) c² = 4k. Relate this to “overdamped”, “critically damped”, and “underdamped”.
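One possible way to explore these cases numerically; the (c, k) pairs below are illustrative choices (not prescribed by the exercise), and the trajectories are drawn in the (x, v) phase plane:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Damped mass-spring system with m = 1: x'' + c x' + k x = 0,
# rewritten as the first-order system (x', v') = (v, -k x - c v).
cases = {"overdamped (c^2 > 4k)": (3.0, 1.0),
         "underdamped (c^2 < 4k)": (0.5, 1.0),
         "critically damped (c^2 = 4k)": (2.0, 1.0)}

t_eval = np.linspace(0.0, 10.0, 500)
for label, (c, k) in cases.items():
    rhs = lambda t, y, c=c, k=k: [y[1], -k * y[0] - c * y[1]]
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t_eval)  # start at x = 1, v = 0
    plt.plot(sol.y[0], sol.y[1], label=label)

plt.xlabel("x"); plt.ylabel("v"); plt.legend(); plt.show()
```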
Discrete time. Almost all our measurements are in discrete time: t₀ < t₁ < t₂ < … < t_{k−1} < t_k < t_{k+1} < …, possibly equally spaced: Δt = t_k − t_{k−1}, for all k. In such a case, the differential equation becomes a difference equation
As said earlier, this transition is not trivial: instead of the differential equation x'(t) = A·x(t) + b we (hope to) get a difference equation x(t + Δt) = B·x(t) + c
Let us do the exercise for the falling ball. What happens when going from t to t + Δt? At time t: initial position = x(t), initial velocity = v(t). At time t + Δt: new position = x(t + Δt), new velocity = v(t + Δt)
the transition equations for the state variables become: x(t + Δt) = x(t) + v(t)·Δt + ½·g·Δt², v(t + Δt) = v(t) + g·Δt; or, in matrix notation, [x(t + Δt); v(t + Δt)] = [[1, Δt], [0, 1]] [x(t); v(t)] + [½·g·Δt²; g·Δt]
So in general, for x'(t) = A·x(t) + b, the exact discrete-time transition is x(t + Δt) = e^{A·Δt} x(t) + (∫₀^{Δt} e^{A·s} ds) b
For the mass-spring system, the calculations are a little bit more complex, but not impossible. In that case x(t + Δt) = e^{A·Δt} x(t), with e^{A·Δt} = V·diag(e^{λ₁·Δt}, e^{λ₂·Δt})·V^{−1}, where V = [v₁ v₂] contains the eigenvectors of A and λ₁ and λ₂ are the corresponding eigenvalues
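A small numerical sketch (assuming ω² = 1 and Δt = 0.1, values chosen only for illustration) verifying that the eigendecomposition and the matrix exponential give the same discrete-time transition matrix:

```python
import numpy as np
from scipy.linalg import expm

# Discrete-time transition matrix e^{A dt} for the undamped mass-spring system.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
dt = 0.1

# Via the eigendecomposition A = V diag(lambda_1, lambda_2) V^{-1}
lam, V = np.linalg.eig(A)
B_eig = (V @ np.diag(np.exp(lam * dt)) @ np.linalg.inv(V)).real

# Via the matrix exponential directly
B_expm = expm(A * dt)

print(np.allclose(B_eig, B_expm))   # True: both give the same transition matrix
```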
to conclude: we may use a discrete dynamical model for the linear system if we observe it at discrete times. The systems we have talked about so far can be represented schematically as follows: [diagram: inputs u₁(t), …, u_J(t) and states x₁(t), …, x_I(t); in the discrete-time version, the inputs u_j(t) and states x_i(t) determine the new states x_i(t + Δt)]
Tom will make it a little bit more complicated: adding an observation equation, and adding error to both equations
Kalman filter for the local level model
Remember the local level model: y_t = μ_t + ε_t with ε_t ~ N(0, σ_ε²), and μ_t = μ_{t−1} + ξ_t with ξ_t ~ N(0, σ_ξ²)
define μ̂_t = E(μ_t | y_1, …, y_t) and V_t = Var(μ_t | y_1, …, y_t). Can we find (recursive) expressions for μ̂_t and V_t? The solution will be the celebrated Kalman filter
The problem: we have observations y_1, …, y_{t−1} and a current estimate of μ_{t−1}; now y_t becomes available. How should we update our estimate of μ_t?
Let us first remark that, since all distributions we start from are normal, all derived distributions will be normal as well. Then, using Bayes’ theorem: p(μ_t | y_1, …, y_t) ∝ p(y_t | μ_t) · p(μ_t | y_1, …, y_{t−1})  (*)
Let us focus on (*), the predictive density p(μ_t | y_1, …, y_{t−1}): since μ_t = μ_{t−1} + ξ_t, we have μ_t | y_1, …, y_{t−1} ~ N(μ̂_{t−1}, V_{t−1} + σ_ξ²)
Inserting this result then gives p(μ_t | y_1, …, y_t) ∝ p(y_t | μ_t) · N(μ_t; μ̂_{t−1}, V_{t−1} + σ_ξ²). You may recognize the traditional Bayesian structure here: a normal observation y_t with mean μ_t, and a normal prior on μ_t
This gives for the posterior: μ_t | y_1, …, y_t ~ N(μ̂_t, V_t), with μ̂_t = [(V_{t−1} + σ_ξ²)·y_t + σ_ε²·μ̂_{t−1}] / (V_{t−1} + σ_ξ² + σ_ε²) and V_t = (V_{t−1} + σ_ξ²)·σ_ε² / (V_{t−1} + σ_ξ² + σ_ε²)
Using some elementary algebra, this can be formulated as follows: μ̂_t = μ̂_{t−1} + K_t·(y_t − μ̂_{t−1}) and V_t = (1 − K_t)·(V_{t−1} + σ_ξ²), with K_t = (V_{t−1} + σ_ξ²) / (V_{t−1} + σ_ξ² + σ_ε²). These are the Kalman recursive formulas, with K_t being called the Kalman gain
Exercise: Simulate data from the local level model and try to apply the Kalman filter. Plot the data and draw the filtered states through it. Plot the filtered state variance V_t. Plot the errors.
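A minimal sketch for this exercise, implementing the recursions derived above; the variances σ_ε² = 1 and σ_ξ² = 0.1, the series length, and the starting values for the filter are assumptions chosen only for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Simulate the local level model: mu_t = mu_{t-1} + xi_t, y_t = mu_t + eps_t.
T, sigma2_eps, sigma2_xi = 200, 1.0, 0.1
mu = np.cumsum(rng.normal(0.0, np.sqrt(sigma2_xi), T))
y = mu + rng.normal(0.0, np.sqrt(sigma2_eps), T)

# Kalman filter recursions from the slides:
# K_t = (V_{t-1} + sigma2_xi) / (V_{t-1} + sigma2_xi + sigma2_eps)
# mu_hat_t = mu_hat_{t-1} + K_t (y_t - mu_hat_{t-1}),  V_t = (1 - K_t)(V_{t-1} + sigma2_xi)
mu_hat = np.zeros(T)
V = np.zeros(T)
prev_mu_hat, prev_V = 0.0, 10.0          # vague starting values (assumption)
for t in range(T):
    R = prev_V + sigma2_xi               # one-step-ahead (prior) variance
    K = R / (R + sigma2_eps)             # Kalman gain
    mu_hat[t] = prev_mu_hat + K * (y[t] - prev_mu_hat)
    V[t] = (1.0 - K) * R
    prev_mu_hat, prev_V = mu_hat[t], V[t]

fig, axes = plt.subplots(3, 1, figsize=(8, 9))
axes[0].plot(y, ".", label="data"); axes[0].plot(mu_hat, label="filtered state"); axes[0].legend()
axes[1].plot(V); axes[1].set_title("filtered state variance V_t")
axes[2].plot(y - mu_hat); axes[2].set_title("errors y_t - filtered mean")
plt.tight_layout(); plt.show()
```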