State Space Models.

Let {xt : t ∈ T} and {yt : t ∈ T} denote two vector-valued time series that satisfy the system of equations:

yt = At xt + vt   (the observation equation)
xt = Bt xt-1 + ut   (the state equation)

The time series {yt : t ∈ T} is then said to have a state-space representation.

Note: {ut : t ∈ T} and {vt : t ∈ T} denote two vector-valued noise series satisfying:

E(ut) = E(vt) = 0,
E(ut us′) = E(vt vs′) = 0 if t ≠ s,
E(ut ut′) = Σu and E(vt vt′) = Σv,
E(ut vs′) = E(vt us′) = 0 for all t and s.
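As an illustration, a minimal simulation of such a system; the 2-dimensional state, 1-dimensional observation, and all matrix values below are made-up examples, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example matrices: 2-d state, 1-d observation.
A = np.array([[1.0, 0.0]])          # observation matrix A_t (constant here)
B = np.array([[0.9, 0.1],
              [0.0, 0.8]])          # state transition matrix B_t (constant here)
Sigma_u = 0.1 * np.eye(2)           # Var(u_t)
Sigma_v = np.array([[0.5]])         # Var(v_t)

T = 100
x = np.zeros((T, 2))
y = np.zeros((T, 1))
x_prev = np.zeros(2)
for t in range(T):
    u = rng.multivariate_normal(np.zeros(2), Sigma_u)
    v = rng.multivariate_normal(np.zeros(1), Sigma_v)
    x[t] = B @ x_prev + u           # state equation: x_t = B x_{t-1} + u_t
    y[t] = A @ x[t] + v             # observation equation: y_t = A x_t + v_t
    x_prev = x[t]
```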

Example: One might be tracking an object with several radar stations. The process {xt : t ∈ T} gives the position of the object at time t. The process {yt : t ∈ T} denotes the observations made at time t by the several radar stations. As in the Hidden Markov Model, we will be interested in determining the position of the object, {xt : t ∈ T}, from the observations, {yt : t ∈ T}, made by the radar stations.

Example: Many of the models we have considered to date can be thought of as state-space models. Autoregressive model of order p:

xt = β1 xt-1 + β2 xt-2 + … + βp xt-p + ut

Define the state vector xt = (xt, xt-1, … , xt-p+1)′ and the noise vector ut = (ut, 0, … , 0)′, let A = [1 0 0 … 0], and let B be the p × p companion matrix whose first row is (β1, β2, … , βp) and whose remaining rows carry the (p − 1) × (p − 1) identity matrix in the first p − 1 columns (zeros elsewhere). Then

yt = A xt   (the observation equation, with vt = 0)
xt = B xt-1 + ut   (the state equation)
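For instance, a small helper that assembles these companion-form matrices (a sketch; the AR(3) coefficients below are placeholders):

```python
import numpy as np

def ar_companion(betas):
    """Return (A, B) putting the AR(p) model x_t = beta_1 x_{t-1} + ... + beta_p x_{t-p} + u_t
    into state-space form with state vector (x_t, x_{t-1}, ..., x_{t-p+1})'."""
    p = len(betas)
    B = np.zeros((p, p))
    B[0, :] = betas                  # first row carries the AR coefficients
    B[1:, :-1] = np.eye(p - 1)       # identity block shifts the lags down
    A = np.zeros((1, p))
    A[0, 0] = 1.0                    # observation picks off the current value x_t
    return A, B

A, B = ar_companion([0.6, 0.3, -0.2])   # placeholder AR(3) coefficients
```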

Hidden Markov Model: Assume that there are m states, and that the observations yt are discrete and take on n possible values. Suppose that the m states are identified with the m standard unit vectors e1, e2, … , em in Rm, so that xt = ej when the chain is in state j at time t.

Suppose that the n possible observations taken at each state are likewise identified with the n standard unit vectors f1, f2, … , fn in Rn, so that yt = fk when observation k occurs at time t.

Let P = (pij) denote the m × m matrix of transition probabilities, pij = P[xt = ej | xt-1 = ei], and let Q = (qjk) denote the m × n matrix of emission probabilities, qjk = P[yt = fk | xt = ej].

Since xt is an indicator vector, E[xt | xt-1 = ei] = P′ei, so that E[xt | xt-1] = P′xt-1. Let ut = xt − P′xt-1. So that

xt = P′xt-1 + ut   (the State Equation)

with E[ut | xt-1] = 0.

Also E[xt xt′ | xt-1] = diag(P′xt-1), hence

Var[ut | xt-1] = diag(P′xt-1) − (P′xt-1)(P′xt-1)′,

where diag(v) = the diagonal matrix with the components of the vector v along the diagonal.

Since yt is also an indicator vector, E[yt | xt] = Q′xt. Let vt = yt − Q′xt. Then

yt = Q′xt + vt   (the Observation Equation)

with E[vt | xt] = 0 and Var[vt | xt] = diag(Q′xt) − (Q′xt)(Q′xt)′.

Hence with these definitions the state sequence of a Hidden Markov Model satisfies the State Equation with Bt = P′, and the observation sequence satisfies the Observation Equation with At = Q′.
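A quick numerical check of the state equation above (the transition probabilities are placeholders): simulate one step of the chain with indicator-vector states and compare the empirical mean of xt with P′xt-1. The same check applies to the observation equation with Q′xt.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder 2-state transition probabilities p_ij = P[state j at t | state i at t-1].
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
m = P.shape[0]

def indicator(i, m):
    e = np.zeros(m)
    e[i] = 1.0
    return e

# Start in state 0 and draw the next state many times; the empirical mean of the
# indicator vector x_t should approach P' x_{t-1}.
x_prev = indicator(0, m)
N = 100_000
mean_x = sum(indicator(rng.choice(m, p=P[0]), m) for _ in range(N)) / N
print(mean_x)           # approximately (0.9, 0.1)
print(P.T @ x_prev)     # P' x_{t-1}
```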

Kalman Filtering

We are now interested in determining the state vector xt in terms of some or all of the observation vectors y1, y2, … , yT. We will consider finding the "best" linear predictor. We can include a constant term if, in addition, one of the observations (y0 say) is the vector of 1's. We will consider estimation of xt in terms of:

y1, y2, … , yt-1 (the prediction problem),
y1, y2, … , yt (the filtering problem),
y1, y2, … , yT (t < T, the smoothing problem).

For any vector x define

x̂s = (x̂s(1), x̂s(2), … , x̂s(q))′,

where x̂s(i) is the best linear predictor of x(i), the ith component of x, based on y0, y1, y2, … , ys. That is, x̂s(i) is the linear function of y0, y1, y2, … , ys that minimizes E[x(i) − x̂s(i)]².

Remark: The best predictor x̂s is the unique vector of the form

x̂s = C0 y0 + C1 y1 + C2 y2 + … + Cs ys,

where the matrices C0, C1, C2, … , Cs are selected so that the prediction errors are uncorrelated with the observations:

E[(x − x̂s) yj′] = 0 for j = 0, 1, 2, … , s.

Remark: If x, y1, y2, … , ys are normally distributed, then the best linear predictor is the conditional expectation:

x̂s = E[x | y1, y2, … , ys].

Remark: Let u and v be two random vectors. Then û = Bv is the optimal linear predictor of u based on v if

E[(u − Bv)v′] = 0, i.e. if B = E(uv′)[E(vv′)]⁻¹.
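A small sanity check of this remark with simulated data (all values below are arbitrary): estimate B = E(uv′)[E(vv′)]⁻¹ from sample moments and confirm that the residual u − Bv is approximately uncorrelated with v.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate jointly dependent zero-mean vectors u (2-d) and v (3-d); values are arbitrary.
N = 50_000
v = rng.standard_normal((N, 3))
C = np.array([[1.0, -0.5, 0.2],
              [0.3,  0.8, 0.0]])
u = v @ C.T + 0.5 * rng.standard_normal((N, 2))

Euv = u.T @ v / N                     # sample E(u v')
Evv = v.T @ v / N                     # sample E(v v')
B = Euv @ np.linalg.inv(Evv)          # optimal linear predictor coefficient

resid = u - v @ B.T
print(np.round(resid.T @ v / N, 3))   # approximately zero: E[(u - Bv) v'] = 0
```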

Kalman Filtering: Let {xt : t ∈ T} and {yt : t ∈ T} denote two vector-valued time series that satisfy the system of equations:

yt = At xt + vt
xt = B xt-1 + ut

Again E(ut) = E(vt) = 0, E(ut us′) = E(vt vs′) = 0 if t ≠ s, E(ut ut′) = Σu, E(vt vt′) = Σv, and E(ut vs′) = 0 for all t and s.

Let x̂t|s denote the best linear predictor of xt based on y0, y1, … , ys, and let Σt|s = E[(xt − x̂t|s)(xt − x̂t|s)′] denote its error covariance matrix. Then the predictors are updated with

x̂t|t-1 = B x̂t-1|t-1
x̂t|t = x̂t|t-1 + Kt (yt − At x̂t|t-1)

where Kt = Σt|t-1 At′ [At Σt|t-1 At′ + Σv]⁻¹ (the Kalman gain). One also assumes that the initial vector x0 has mean μ and covariance matrix Σ, and that x0 is uncorrelated with {ut : t ∈ T} and {vt : t ∈ T}.

The covariance matrices are updated with

Σt|t-1 = B Σt-1|t-1 B′ + Σu
Σt|t = (I − Kt At) Σt|t-1.

Summary: The Kalman equations

1. x̂t|t-1 = B x̂t-1|t-1
2. Σt|t-1 = B Σt-1|t-1 B′ + Σu
3. Kt = Σt|t-1 At′ [At Σt|t-1 At′ + Σv]⁻¹
4. x̂t|t = x̂t|t-1 + Kt (yt − At x̂t|t-1)
5. Σt|t = (I − Kt At) Σt|t-1

with x̂0|0 = μ and Σ0|0 = Σ.
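A minimal implementation of these five recursions (a sketch; the function and variable names are my own, with x_pred, P_pred holding x̂t|t-1, Σt|t-1 and x_filt, P_filt holding x̂t|t, Σt|t):

```python
import numpy as np

def kalman_filter(y, A, B, Sigma_u, Sigma_v, mu0, Sigma0):
    """Forward Kalman recursion for y_t = A x_t + v_t, x_t = B x_{t-1} + u_t.
    Returns predicted and filtered state means and error covariances."""
    T = len(y)
    p = B.shape[0]
    x_pred = np.zeros((T, p))
    P_pred = np.zeros((T, p, p))
    x_filt = np.zeros((T, p))
    P_filt = np.zeros((T, p, p))
    x_prev, P_prev = mu0, Sigma0
    for t in range(T):
        # 1-2. prediction of the state and its error covariance
        x_pred[t] = B @ x_prev
        P_pred[t] = B @ P_prev @ B.T + Sigma_u
        # 3. Kalman gain
        S = A @ P_pred[t] @ A.T + Sigma_v
        K = P_pred[t] @ A.T @ np.linalg.inv(S)
        # 4-5. filtering update using the innovation y_t - A x_pred
        x_filt[t] = x_pred[t] + K @ (y[t] - A @ x_pred[t])
        P_filt[t] = (np.eye(p) - K @ A) @ P_pred[t]
        x_prev, P_prev = x_filt[t], P_filt[t]
    return x_pred, P_pred, x_filt, P_filt
```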

Proof: Now xt = B xt-1 + ut, and ut is uncorrelated with y0, y1, … , yt-1, hence

x̂t|t-1 = B x̂t-1|t-1 and Σt|t-1 = B Σt-1|t-1 B′ + Σu.

Note that this establishes equations 1 and 2.

Let dt = xt − x̂t|t-1 (the prediction error) and let et = yt − At x̂t|t-1 = At dt + vt (the innovation). Since et is uncorrelated with y0, y1, … , yt-1, given y0, y1, … , yt-1 the best linear predictor of dt using et is

Kt et, where Kt = E(dt et′)[E(et et′)]⁻¹.

Now E(dt et′) = E[dt (At dt + vt)′] = Σt|t-1 At′.

Also E(et et′) = At Σt|t-1 At′ + Σv, hence

Kt = Σt|t-1 At′ [At Σt|t-1 At′ + Σv]⁻¹,

which is equation 3.

Thus x̂t|t = x̂t|t-1 + Kt et = x̂t|t-1 + Kt (yt − At x̂t|t-1), which is equation 4. The proof that Σt|t = (I − Kt At) Σt|t-1 (equation 5) will be left as an exercise.

Example: Suppose we have an AR(2) time series

xt = β1 xt-1 + β2 xt-2 + ut.

What is observed is the noisy series

yt = xt + vt,

where {ut : t ∈ T} and {vt : t ∈ T} are white noise time series with standard deviations σu and σv.

This model can be expressed as a state-space model by defining

xt = (xt, xt-1)′,  ut = (ut, 0)′,  A = [1 0],

and letting B be the 2 × 2 matrix with first row (β1, β2) and second row (1, 0). Then

yt = A xt + vt   (the observation equation)
xt = B xt-1 + ut   (the state equation).

The equation xt = β1 xt-1 + β2 xt-2 + ut can then be written in the vector form xt = B xt-1 + ut above. Note: Σu is the 2 × 2 matrix with σu² in the (1, 1) position and zeros elsewhere, and Σv = σv².
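In code, the matrices of this AR(2)-plus-noise model could be set up as follows (the coefficient and standard-deviation values are placeholders):

```python
import numpy as np

beta1, beta2 = 0.6, 0.3          # placeholder AR(2) coefficients
sigma_u, sigma_v = 1.0, 0.5      # placeholder noise standard deviations

A = np.array([[1.0, 0.0]])                       # y_t = x_t + v_t
B = np.array([[beta1, beta2],
              [1.0,   0.0]])                     # companion matrix
Sigma_u = np.array([[sigma_u**2, 0.0],
                    [0.0,        0.0]])          # only the first state component is shocked
Sigma_v = np.array([[sigma_v**2]])
```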

The Kalman equations for this example, with A = [1 0], B as above, Σu as above and Σv = σv². Let x̂t|s = (x̂t|s(1), x̂t|s(2))′ and write σij(t|s) for the (i, j) entry of Σt|s.

1. x̂t|t-1(1) = β1 x̂t-1|t-1(1) + β2 x̂t-1|t-1(2),  x̂t|t-1(2) = x̂t-1|t-1(1).

2. σ11(t|t-1) = β1² σ11(t-1|t-1) + 2 β1 β2 σ12(t-1|t-1) + β2² σ22(t-1|t-1) + σu²,
σ12(t|t-1) = β1 σ11(t-1|t-1) + β2 σ12(t-1|t-1),
σ22(t|t-1) = σ11(t-1|t-1).

3. Kt = [σ11(t|t-1) + σv²]⁻¹ (σ11(t|t-1), σ12(t|t-1))′.

4. x̂t|t = x̂t|t-1 + Kt (yt − x̂t|t-1(1)).

5. Σt|t = Σt|t-1 − [σ11(t|t-1) + σv²]⁻¹ (σ11(t|t-1), σ12(t|t-1))′ (σ11(t|t-1), σ12(t|t-1)).
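Putting the pieces together, a short usage sketch that simulates data from this AR(2)-plus-noise model and runs the forward recursion; it assumes the kalman_filter function and the matrices from the earlier sketches are in scope.

```python
import numpy as np

# Assumes kalman_filter(...) plus A, B, Sigma_u, Sigma_v, beta1, beta2,
# sigma_u, sigma_v from the earlier sketches are already defined.
rng = np.random.default_rng(3)
T = 200

# Simulate the AR(2) state and the noisy observations.
x = np.zeros(T)
for t in range(2, T):
    x[t] = beta1 * x[t-1] + beta2 * x[t-2] + sigma_u * rng.standard_normal()
y = (x + sigma_v * rng.standard_normal(T)).reshape(T, 1)

mu0 = np.zeros(2)
Sigma0 = np.eye(2)
x_pred, P_pred, x_filt, P_filt = kalman_filter(y, A, B, Sigma_u, Sigma_v, mu0, Sigma0)
print(np.mean((x_filt[:, 0] - x) ** 2))   # filtered estimate of x_t vs. the true state
```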

Kalman Filtering (smoothing): Now consider finding x̂t|T, the best linear predictor of xt based on all of the observations y0, y1, … , yT (t < T). These can be found by successive backward recursions for t = T, T − 1, … , 2, 1:

x̂t-1|T = x̂t-1|t-1 + Jt-1 (x̂t|T − x̂t|t-1),

where

Jt-1 = Σt-1|t-1 B′ [Σt|t-1]⁻¹.

The covariance matrices satisfy the recursions

Σt-1|T = Σt-1|t-1 + Jt-1 (Σt|T − Σt|t-1) Jt-1′.

The backward recursions

1. Jt-1 = Σt-1|t-1 B′ [Σt|t-1]⁻¹
2. x̂t-1|T = x̂t-1|t-1 + Jt-1 (x̂t|T − x̂t|t-1)
3. Σt-1|T = Σt-1|t-1 + Jt-1 (Σt|T − Σt|t-1) Jt-1′

In the example: x̂t-1|t-1, Σt-1|t-1, x̂t|t-1 and Σt|t-1 are calculated in the forward recursion.
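A sketch of the corresponding backward pass in code, reusing the outputs of the forward recursion from the earlier kalman_filter sketch (the names are my own):

```python
import numpy as np

def kalman_smoother(x_pred, P_pred, x_filt, P_filt, B):
    """Backward recursion producing the smoothed estimates x_t|T and their covariances,
    given the predicted and filtered quantities from the forward Kalman recursion."""
    T, p = x_filt.shape
    x_sm = x_filt.copy()
    P_sm = P_filt.copy()
    for t in range(T - 2, -1, -1):
        # 1. smoother gain J_t = Sigma_t|t B' [Sigma_{t+1}|t]^-1
        J = P_filt[t] @ B.T @ np.linalg.inv(P_pred[t + 1])
        # 2. smoothed mean
        x_sm[t] = x_filt[t] + J @ (x_sm[t + 1] - x_pred[t + 1])
        # 3. smoothed covariance
        P_sm[t] = P_filt[t] + J @ (P_sm[t + 1] - P_pred[t + 1]) @ J.T
    return x_sm, P_sm
```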