State Space Models
Let $\{x_t : t \in T\}$ and $\{y_t : t \in T\}$ denote two vector-valued time series that satisfy the system of equations:
$$y_t = A_t x_t + v_t \qquad \text{(the observation equation)}$$
$$x_t = B_t x_{t-1} + u_t \qquad \text{(the state equation)}$$
The time series $\{y_t : t \in T\}$ is then said to have a state-space representation.
Note: $\{u_t : t \in T\}$ and $\{v_t : t \in T\}$ denote two vector-valued noise series satisfying:
$E(u_t) = E(v_t) = 0$;
$E(u_t u_s') = E(v_t v_s') = 0$ if $t \neq s$;
$E(u_t u_t') = \Sigma_u$ and $E(v_t v_t') = \Sigma_v$;
$E(u_t v_s') = E(v_t u_s') = 0$ for all $t$ and $s$.
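A minimal simulation sketch of such a system (in Python with NumPy); Gaussian noise and all names here are illustrative assumptions, since the conditions above only require zero means and no correlation:

```python
import numpy as np

def simulate_state_space(A, B, Su, Sv, x0, T, seed=0):
    """Simulate x_t = B x_{t-1} + u_t and y_t = A x_t + v_t with Gaussian
    noise of covariances Su and Sv (Gaussianity is an extra assumption)."""
    rng = np.random.default_rng(seed)
    p, q = B.shape[0], A.shape[0]
    xs, ys = np.zeros((T, p)), np.zeros((T, q))
    x = np.asarray(x0, dtype=float)
    for t in range(T):
        x = B @ x + rng.multivariate_normal(np.zeros(p), Su)       # state equation
        ys[t] = A @ x + rng.multivariate_normal(np.zeros(q), Sv)   # observation equation
        xs[t] = x
    return xs, ys

# e.g. a one-dimensional random-walk state observed with noise
xs, ys = simulate_state_space(A=np.eye(1), B=np.eye(1),
                              Su=np.eye(1), Sv=0.25 * np.eye(1),
                              x0=np.zeros(1), T=100)
```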
Example: One might be tracking an object with several radar stations. The process $\{x_t : t \in T\}$ gives the position of the object at time $t$, and $\{y_t : t \in T\}$ denotes the observations at time $t$ made by the several radar stations. As in the Hidden Markov Model, we will be interested in determining the position of the object, $\{x_t : t \in T\}$, from the observations $\{y_t : t \in T\}$ made by the radar stations.
Example: Many of the models we have considered to date can be thought of as state-space models.

Autoregressive model of order $p$: $X_t = \beta_1 X_{t-1} + \beta_2 X_{t-2} + \cdots + \beta_p X_{t-p} + u_t$.

Define the state vector $x_t = (X_t, X_{t-1}, \ldots, X_{t-p+1})'$. Then
$$y_t = (1, 0, \ldots, 0)\, x_t \qquad \text{(the observation equation, with } v_t = 0\text{)}$$
and
$$x_t = \begin{pmatrix} \beta_1 & \beta_2 & \cdots & \beta_{p-1} & \beta_p \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix} x_{t-1} + \begin{pmatrix} u_t \\ 0 \\ \vdots \\ 0 \end{pmatrix} \qquad \text{(the state equation).}$$
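The construction above amounts to building a companion matrix from the AR coefficients. A minimal sketch (the coefficient values and names are illustrative, not from the notes):

```python
import numpy as np

def ar_companion(beta):
    """Companion (state-equation) matrix B for
    X_t = beta[0] X_{t-1} + ... + beta[p-1] X_{t-p} + u_t."""
    p = len(beta)
    B = np.zeros((p, p))
    B[0, :] = beta                    # first row carries the AR coefficients
    if p > 1:
        B[1:, :-1] = np.eye(p - 1)    # sub-diagonal shifts past values down one slot
    return B

beta = [0.6, 0.3]                             # illustrative AR(2) coefficients
B = ar_companion(beta)                        # state-equation matrix
A = np.zeros((1, len(beta))); A[0, 0] = 1.0   # observation row (1, 0, ..., 0)
```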
Hidden Markov Model: Assume that there are $m$ states, and that the observations $y_t$ are discrete and take on $n$ possible values. Suppose that the $m$ states are denoted by the unit vectors
$$e_1 = (1,0,\ldots,0)',\; e_2 = (0,1,\ldots,0)',\; \ldots,\; e_m = (0,0,\ldots,1)' \in \mathbb{R}^m.$$
Suppose that the $n$ possible observations taken at each state are denoted analogously by the unit vectors $f_1, f_2, \ldots, f_n \in \mathbb{R}^n$.
Let $P = (p_{ij})$ denote the transition matrix of the chain, where $p_{ij} = P(x_t = e_j \mid x_{t-1} = e_i)$, and note that
$$E(x_t \mid x_{t-1}) = P' x_{t-1}.$$
Let $u_t = x_t - P' x_{t-1}$, so that
$$x_t = P' x_{t-1} + u_t \qquad \text{(the state equation, with } B_t = P'\text{)}.$$
Also, since $x_t$ is a unit vector, $x_t x_t' = \operatorname{diag}(x_t)$. Hence
$$E(x_t x_t' \mid x_{t-1}) = \operatorname{diag}(P' x_{t-1}),$$
where $\operatorname{diag}(v)$ = the diagonal matrix with the components of the vector $v$ along the diagonal. Since $E(u_t \mid x_{t-1}) = 0$, then $E(u_t) = 0$ and
$$E(u_t u_t' \mid x_{t-1}) = \operatorname{diag}(P' x_{t-1}) - P' x_{t-1} x_{t-1}' P.$$
Thus $\{u_t\}$ is a zero-mean, serially uncorrelated noise series.

We have defined the $n$ possible observation values $f_1, \ldots, f_n$. Hence the conditional distribution of $y_t$ given the state is summarized by the emission matrix $M = (m_{ij})$, where $m_{ij} = P(y_t = f_j \mid x_t = e_i)$, and $E(y_t \mid x_t) = M' x_t$. Let $v_t = y_t - M' x_t$. Then
$$y_t = M' x_t + v_t \qquad \text{(the observation equation, with } A_t = M' \text{ and } E(v_t \mid x_t) = 0\text{)}.$$

Hence, with these definitions, the state sequence of a Hidden Markov Model satisfies the state equation with $B_t = P'$ and noise $u_t$, and the observation sequence satisfies the observation equation with $A_t = M'$ and noise $v_t$ (a small numerical sketch follows below).
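A numerical sketch of this encoding, using an assumed 2-state, 3-symbol chain (the particular matrices P and M and their row-stochastic layout are illustrative choices):

```python
import numpy as np

# Illustrative 2-state, 3-symbol HMM.  Row-stochastic layout assumed:
#   P[i, j] = P(state j at t | state i at t-1),  M[i, j] = P(symbol j | state i).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
M = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])

B = P.T      # state equation:        E(x_t | x_{t-1}) = P' x_{t-1}
A = M.T      # observation equation:  E(y_t | x_t)     = M' x_t

x_prev = np.array([1.0, 0.0])   # indicator vector e_1 for "the chain is in state 1"
print(B @ x_prev)               # next-state probabilities starting from state 1
print(A @ x_prev)               # emission probabilities in state 1
```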
Kalman Filtering
We are now interested in determining the state vector $x_t$ in terms of some or all of the observation vectors $y_1, y_2, y_3, \ldots, y_T$. We will consider finding the "best" linear predictor; we can include a constant term if, in addition, one of the observations ($y_0$, say) is the vector of 1's. We will consider estimation of $x_t$ in terms of:
$y_1, y_2, y_3, \ldots, y_{t-1}$ (the prediction problem);
$y_1, y_2, y_3, \ldots, y_t$ (the filtering problem);
$y_1, y_2, y_3, \ldots, y_T$, with $t < T$ (the smoothing problem).
For any vector $x$ define
$$P_s x = \big(P_s x^{(1)}, P_s x^{(2)}, \ldots\big)',$$
where $P_s x^{(i)}$ is the best linear predictor of $x^{(i)}$, the $i$th component of $x$, based on $y_0, y_1, y_2, \ldots, y_s$. The best linear predictor of $x^{(i)}$ is the linear function of $y_0, y_1, y_2, \ldots, y_s$ that minimizes
$$E\big[\big(x^{(i)} - P_s x^{(i)}\big)^2\big].$$
Remark: The best predictor $P_s x$ is the unique vector of the form
$$P_s x = C_0 y_0 + C_1 y_1 + C_2 y_2 + \cdots + C_s y_s,$$
where the matrices $C_0, C_1, C_2, \ldots, C_s$ are selected so that the prediction error is uncorrelated with the observations:
$$E\big[(x - P_s x)\, y_j'\big] = 0, \qquad j = 0, 1, \ldots, s.$$
Remark: If $x, y_1, y_2, \ldots, y_s$ are jointly normally distributed, then the best linear predictor coincides with the conditional expectation: $P_s x = E(x \mid y_1, y_2, \ldots, y_s)$.
Remark: Let $u$ and $v$ be two random vectors. Then $\hat{u}$ is the optimal linear predictor of $u$ based on $v$ if
$$E(u - \hat{u}) = 0 \quad \text{and} \quad E\big[(u - \hat{u})\, v'\big] = 0,$$
i.e. if the prediction error has mean zero and is uncorrelated with $v$.
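When $\operatorname{Cov}(v, v)$ is invertible, these conditions are satisfied by the standard closed form $P(u \mid v) = E(u) + \operatorname{Cov}(u, v)\operatorname{Cov}(v, v)^{-1}(v - E(v))$. A small sketch of this formula (the function name and the toy check are illustrative):

```python
import numpy as np

def best_linear_predictor(mu_u, mu_v, Cuv, Cvv, v):
    """P(u | v) = E(u) + Cov(u, v) Cov(v, v)^{-1} (v - E(v)); the prediction
    error u - P(u | v) then has mean zero and is uncorrelated with v."""
    return mu_u + Cuv @ np.linalg.inv(Cvv) @ (v - mu_v)

# toy check: u = v + small independent noise, so the predictor of u is close to v
rng = np.random.default_rng(2)
v = rng.normal(size=(5000, 1))
u = v + 0.1 * rng.normal(size=(5000, 1))
Cuv = np.cov(u.T, v.T)[:1, 1:]          # Cov(u, v) as a 1x1 block
Cvv = np.cov(v.T).reshape(1, 1)         # Cov(v, v)
print(best_linear_predictor(u.mean(0), v.mean(0), Cuv, Cvv, np.array([1.0])))
```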
Kalman Filtering: Let $\{x_t : t \in T\}$ and $\{y_t : t \in T\}$ denote two vector-valued time series that satisfy the system of equations
$$y_t = A_t x_t + v_t, \qquad x_t = B_t x_{t-1} + u_t.$$
Again $\{u_t\}$ and $\{v_t\}$ are zero-mean, mutually and serially uncorrelated noise series with $E(u_t u_t') = \Sigma_u$ and $E(v_t v_t') = \Sigma_v$.
Define
$$\hat{x}_{t|s} = P_s x_t \quad \text{and} \quad \Omega_{t|s} = E\big[(x_t - \hat{x}_{t|s})(x_t - \hat{x}_{t|s})'\big],$$
where $P_s$ denotes the best linear predictor based on $y_0, y_1, \ldots, y_s$. One also assumes that the initial vector $x_0$ has mean $\mu$ and covariance matrix $\Sigma$, and that $x_0$ is uncorrelated with $u_t$ and $v_t$ for all $t$.
The state estimates and their covariance matrices are updated recursively. Summary: the Kalman equations are
1. $\hat{x}_{t|t-1} = B_t \hat{x}_{t-1|t-1}$
2. $\Omega_{t|t-1} = B_t \Omega_{t-1|t-1} B_t' + \Sigma_u$
3. $K_t = \Omega_{t|t-1} A_t' \big[A_t \Omega_{t|t-1} A_t' + \Sigma_v\big]^{-1}$ (the Kalman gain)
4. $\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t\big(y_t - A_t \hat{x}_{t|t-1}\big)$
5. $\Omega_{t|t} = \Omega_{t|t-1} - K_t A_t \Omega_{t|t-1}$
with $\hat{x}_{0|0} = \mu$ and $\Omega_{0|0} = \Sigma$ (a runnable sketch of these recursions is given after the proof below).
Proof: Now $P_{t-1} x_t = P_{t-1}(B_t x_{t-1} + u_t) = B_t P_{t-1} x_{t-1}$, hence
$$\hat{x}_{t|t-1} = B_t \hat{x}_{t-1|t-1} \qquad \text{(equation 1)}.$$
Note that $x_t - \hat{x}_{t|t-1} = B_t(x_{t-1} - \hat{x}_{t-1|t-1}) + u_t$, so that
$$\Omega_{t|t-1} = B_t \Omega_{t-1|t-1} B_t' + \Sigma_u \qquad \text{(equation 2)}.$$
Let $e_t = y_t - P_{t-1} y_t = y_t - A_t \hat{x}_{t|t-1}$ (the innovation) and let $d_t = x_t - \hat{x}_{t|t-1}$. Given $y_0, y_1, y_2, \ldots, y_{t-1}$, the best linear predictor of $d_t$ using $e_t$ is
$$K_t e_t, \qquad \text{where } K_t = E(d_t e_t')\big[E(e_t e_t')\big]^{-1}.$$
Hence
$$\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t e_t = \hat{x}_{t|t-1} + K_t\big(y_t - A_t \hat{x}_{t|t-1}\big) \qquad \text{(equation 4)}.$$
Now $e_t = A_t d_t + v_t$. Also $E(d_t v_t') = 0$, hence
$$E(d_t e_t') = \Omega_{t|t-1} A_t' \quad \text{and} \quad E(e_t e_t') = A_t \Omega_{t|t-1} A_t' + \Sigma_v.$$
Thus
$$K_t = \Omega_{t|t-1} A_t'\big[A_t \Omega_{t|t-1} A_t' + \Sigma_v\big]^{-1} \qquad \text{(equation 3)}.$$
The proof that $\Omega_{t|t} = \Omega_{t|t-1} - K_t A_t \Omega_{t|t-1}$ (equation 5) will be left as an exercise.
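Recursions 1-5 can be sketched directly in code. The version below assumes time-invariant matrices $A_t = A$ and $B_t = B$; the function and variable names are illustrative:

```python
import numpy as np

def kalman_filter(ys, A, B, Su, Sv, m0, S0):
    """One forward pass of recursions 1-5, assuming time-invariant A_t = A and
    B_t = B.  Returns filtered and one-step-prediction means and covariances."""
    x_filt, P_filt = np.asarray(m0, float), np.asarray(S0, float)
    means, covs, preds, pred_covs = [], [], [], []
    for y in ys:
        # 1-2. predict:  x_{t|t-1} = B x_{t-1|t-1},  O_{t|t-1} = B O_{t-1|t-1} B' + Su
        x_pred = B @ x_filt
        P_pred = B @ P_filt @ B.T + Su
        # 3. Kalman gain:  K_t = O_{t|t-1} A' [A O_{t|t-1} A' + Sv]^{-1}
        K = P_pred @ A.T @ np.linalg.inv(A @ P_pred @ A.T + Sv)
        # 4-5. update with the innovation  y_t - A x_{t|t-1}
        x_filt = x_pred + K @ (y - A @ x_pred)
        P_filt = P_pred - K @ A @ P_pred
        preds.append(x_pred); pred_covs.append(P_pred)
        means.append(x_filt); covs.append(P_filt)
    return (np.array(means), np.array(covs),
            np.array(preds), np.array(pred_covs))
```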
Example: Suppose we have an AR(2) time series
$$X_t = \beta_1 X_{t-1} + \beta_2 X_{t-2} + u_t.$$
What is observed is the time series
$$y_t = X_t + v_t,$$
where $\{u_t : t \in T\}$ and $\{v_t : t \in T\}$ are white noise time series with standard deviations $\sigma_u$ and $\sigma_v$.
This model can be expressed as a state-space model by defining
$$x_t = \begin{pmatrix} X_t \\ X_{t-1} \end{pmatrix}, \qquad B = \begin{pmatrix} \beta_1 & \beta_2 \\ 1 & 0 \end{pmatrix}, \qquad A = (1, 0), \qquad \tilde{u}_t = \begin{pmatrix} u_t \\ 0 \end{pmatrix}.$$
Then the equation $X_t = \beta_1 X_{t-1} + \beta_2 X_{t-2} + u_t$ can be written
$$x_t = B x_{t-1} + \tilde{u}_t \qquad \text{and} \qquad y_t = A x_t + v_t.$$
Note:
$$\Sigma_u = \begin{pmatrix} \sigma_u^2 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \Sigma_v = \sigma_v^2.$$
The Kalman equations for this example (writing $\Omega_{t|s}$ for the $2 \times 2$ error covariance matrix) become:
1. $\hat{x}_{t|t-1} = \begin{pmatrix} \beta_1 & \beta_2 \\ 1 & 0 \end{pmatrix} \hat{x}_{t-1|t-1}$
2. $\Omega_{t|t-1} = \begin{pmatrix} \beta_1 & \beta_2 \\ 1 & 0 \end{pmatrix} \Omega_{t-1|t-1} \begin{pmatrix} \beta_1 & 1 \\ \beta_2 & 0 \end{pmatrix} + \begin{pmatrix} \sigma_u^2 & 0 \\ 0 & 0 \end{pmatrix}$
3. $K_t = \Omega_{t|t-1}\begin{pmatrix} 1 \\ 0 \end{pmatrix}\Big[(1, 0)\,\Omega_{t|t-1}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + \sigma_v^2\Big]^{-1}$
4. $\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t\big(y_t - (1, 0)\,\hat{x}_{t|t-1}\big)$
5. $\Omega_{t|t} = \Omega_{t|t-1} - K_t\,(1, 0)\,\Omega_{t|t-1}$
A numerical sketch follows below.
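A numerical sketch of this example, with assumed parameter values and reusing the `kalman_filter` sketch given after the proof above:

```python
import numpy as np

# Illustrative AR(2)-plus-noise model (parameter values assumed for the sketch):
#   X_t = 0.6 X_{t-1} + 0.3 X_{t-2} + u_t,   y_t = X_t + v_t
beta1, beta2, su, sv = 0.6, 0.3, 1.0, 0.5
B  = np.array([[beta1, beta2],
               [1.0,   0.0 ]])
A  = np.array([[1.0, 0.0]])
Su = np.array([[su**2, 0.0],
               [0.0,   0.0]])        # state noise enters the first component only
Sv = np.array([[sv**2]])

# simulate a sample path, then filter it
rng = np.random.default_rng(1)
x, ys = np.zeros(2), []
for _ in range(200):
    x = B @ x + np.array([rng.normal(0.0, su), 0.0])
    ys.append(A @ x + rng.normal(0.0, sv, size=1))

means, covs, preds, pred_covs = kalman_filter(ys, A, B, Su, Sv,
                                              m0=np.zeros(2), S0=np.eye(2))
print(means[-1])     # filtered estimate of (X_T, X_{T-1})
print(covs[-1])      # its error covariance, Omega_{T|T}
```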
Kalman Filtering (smoothing): Now consider finding $\hat{x}_{t|T}$ and $\Omega_{t|T}$ for $t < T$. These can be found, starting from the filtered values at $t = T$, by successive backward recursions for $t = T-1, T-2, \ldots, 2, 1$:
$$\hat{x}_{t|T} = \hat{x}_{t|t} + J_t\big(\hat{x}_{t+1|T} - \hat{x}_{t+1|t}\big), \qquad \text{where } J_t = \Omega_{t|t}\, B_{t+1}'\, \Omega_{t+1|t}^{-1}.$$
The covariance matrices satisfy the recursions
$$\Omega_{t|T} = \Omega_{t|t} + J_t\big(\Omega_{t+1|T} - \Omega_{t+1|t}\big) J_t'.$$
Summary: the backward recursions are
1. $J_t = \Omega_{t|t}\, B_{t+1}'\, \Omega_{t+1|t}^{-1}$
2. $\hat{x}_{t|T} = \hat{x}_{t|t} + J_t\big(\hat{x}_{t+1|T} - \hat{x}_{t+1|t}\big)$
3. $\Omega_{t|T} = \Omega_{t|t} + J_t\big(\Omega_{t+1|T} - \Omega_{t+1|t}\big) J_t'$
In the example, the quantities $\hat{x}_{t|t}$, $\hat{x}_{t+1|t}$, $\Omega_{t|t}$, and $\Omega_{t+1|t}$ are calculated in the forward (filtering) recursion; a sketch of the backward pass follows below.
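A sketch of this backward pass, reusing the arrays produced by the `kalman_filter` sketch above (the function name and argument layout are illustrative):

```python
import numpy as np

def kalman_smoother(means, covs, preds, pred_covs, B):
    """Backward pass over the kalman_filter output:
    J_t     = O_{t|t} B' O_{t+1|t}^{-1}
    x_{t|T} = x_{t|t} + J_t (x_{t+1|T} - x_{t+1|t})
    O_{t|T} = O_{t|t} + J_t (O_{t+1|T} - O_{t+1|t}) J_t'."""
    sm, sP = means.copy(), covs.copy()        # start from the filtered values at t = T
    for t in range(len(means) - 2, -1, -1):
        J = covs[t] @ B.T @ np.linalg.inv(pred_covs[t + 1])
        sm[t] = means[t] + J @ (sm[t + 1] - preds[t + 1])
        sP[t] = covs[t]  + J @ (sP[t + 1] - pred_covs[t + 1]) @ J.T
    return sm, sP

# e.g. applied to the AR(2) example above:
# sm, sP = kalman_smoother(means, covs, preds, pred_covs, B)
```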