Modeling Uncertainty over Time
- Time series of snapshots of the world; the "state" we are interested in is represented as a set of random variables (RVs)
  – Observable
  – Hidden
- Stationary process (not static)
- Markov property: the current state depends only on a finite history, typically just the previous time slice
- Transition model: P(current state | previous state)
- Sensor/observation model: P(evidence | current state)
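As a concrete illustration of these two models, here is a minimal sketch using the classic two-state umbrella world; the states, evidence values, and all probabilities are illustrative choices, not from the tutorial:

```python
import numpy as np

# Hypothetical toy model: hidden state is Rain/NoRain,
# evidence is Umbrella/NoUmbrella.

# Transition model P(X_t | X_{t-1}); rows = previous state, cols = current state
T = np.array([[0.7, 0.3],    # from Rain:   P(Rain), P(NoRain)
              [0.3, 0.7]])   # from NoRain: P(Rain), P(NoRain)

# Sensor model P(E_t | X_t); rows = current state, cols = evidence value
O = np.array([[0.9, 0.1],    # in Rain:   P(Umbrella), P(NoUmbrella)
              [0.2, 0.8]])   # in NoRain: P(Umbrella), P(NoUmbrella)
```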
Inference Tasks in Temporal Models
- Filtering: posterior distribution over the current state given all evidence to date; as a by-product, the recursion yields the likelihood of the evidence sequence (sketched below)
- Prediction: posterior distribution over a future state given all evidence to date
- Smoothing: posterior distribution over a past state given all evidence up to the present
- Most likely explanation: given a sequence of observations, the most likely sequence of states that generated them
- EM algorithm:
  – Estimate which transitions occurred and which states generated the sensor readings, then update the models
  – The updated models provide new estimates, and the process is iterated until convergence
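Filtering, for example, reduces to a short recursion that alternates a prediction step and an evidence-weighted update. A sketch assuming the toy T and O above (the function name forward_filter is mine):

```python
import numpy as np

def forward_filter(prior, evidence, T, O):
    """Recursive filtering: P(X_t | e_{1:t}) for each t.

    prior    : initial state distribution P(X_0), shape (S,)
    evidence : sequence of observed evidence indices
    T        : transition model P(X_t | X_{t-1}), shape (S, S)
    O        : sensor model P(E_t | X_t), shape (S, E)
    """
    belief = prior
    beliefs = []
    for e in evidence:
        predicted = T.T @ belief       # prediction: sum over previous states
        belief = O[:, e] * predicted   # update: weight by evidence likelihood
        belief /= belief.sum()         # normalize to a distribution
        beliefs.append(belief)
    return beliefs
```

On two consecutive Umbrella observations with a uniform prior, this gives P(Rain) ≈ 0.88 in the toy model above.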
Hidden Markov Models I
- Hidden state: a single discrete variable X_t
- Transition probabilities: P(X_t | X_{t-1})
- Emission probabilities: P(E_t | X_t)
[Figure: HMM graphical model, showing the chain of hidden states and the observation emitted at each time step]
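The most-likely-explanation task from the previous slide is solved for HMMs by the Viterbi algorithm. A minimal sketch, again assuming the toy matrices T and O from before (viterbi is my own helper name):

```python
import numpy as np

def viterbi(prior, evidence, T, O):
    """Most likely hidden state sequence given the observations."""
    S = len(prior)
    # delta[t, s]: probability of the best path ending in state s at time t
    delta = np.zeros((len(evidence), S))
    back = np.zeros((len(evidence), S), dtype=int)
    delta[0] = prior * O[:, evidence[0]]
    for t in range(1, len(evidence)):
        for s in range(S):
            scores = delta[t - 1] * T[:, s]      # best path into state s
            back[t, s] = np.argmax(scores)
            delta[t, s] = scores.max() * O[s, evidence[t]]
    # Backtrack from the best final state
    path = [int(np.argmax(delta[-1]))]
    for t in range(len(evidence) - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```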
Kalman Filtering
- Streams of noisy input data
- Basic idea (t → t+1; see the sketch after this list):
  – Prior knowledge of the state
  – Prediction step (based on some model)
  – Update step (compare prediction to measurements)
  – Readjust the model
  – Output an estimate of the state
- Statistically optimal estimate of the system state
- Particle filters are another approach
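The predict/update cycle can be made concrete in one dimension. A sketch assuming a random-walk state model with Gaussian process and sensor noise (kalman_step and its parameter names are illustrative):

```python
def kalman_step(mu, var, z, process_var, sensor_var):
    """One predict/update cycle of a 1-D Kalman filter (random-walk model).

    mu, var     : current state estimate (mean, variance)
    z           : new noisy measurement
    process_var : variance of the Gaussian process noise
    sensor_var  : variance of the Gaussian sensor noise
    """
    # Prediction step: the model says the state stays put; noise grows the variance
    mu_pred = mu
    var_pred = var + process_var

    # Update step: blend prediction and measurement by their reliabilities
    K = var_pred / (var_pred + sensor_var)   # Kalman gain
    mu_new = mu_pred + K * (z - mu_pred)
    var_new = (1 - K) * var_pred
    return mu_new, var_new
```

The gain K encodes the reliability trade-off: it approaches 1 when the sensor noise is small relative to the predicted uncertainty, and 0 in the opposite case.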
Kalman Filter
- Linear Gaussian (LG) conditional distributions represent the state and sensor models
- LG: P(x | y) = N(a y + b, σ²)(x)
- The next state is a linear function of the current state plus Gaussian noise, i.e., constant dx/dt
- Forward step: mean and covariance matrix at time t produce mean and covariance matrix at time t+1
- Trade-off between observation reliability and model reliability
- Variants relax the strong assumptions: switching Kalman filter, extended Kalman filter
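For reference, the multivariate forward step has the standard textbook form, written here with the conventional symbols F, H, Q, R and gain K, which the slide does not name:

```latex
% Prediction: push mean and covariance through the linear state model
\bar{\mu}_{t+1} = F\,\mu_t, \qquad
\bar{\Sigma}_{t+1} = F\,\Sigma_t\,F^{\top} + Q

% Update: blend prediction and observation z_{t+1} via the Kalman gain
K = \bar{\Sigma}_{t+1} H^{\top}\bigl(H\,\bar{\Sigma}_{t+1} H^{\top} + R\bigr)^{-1}

\mu_{t+1} = \bar{\mu}_{t+1} + K\bigl(z_{t+1} - H\,\bar{\mu}_{t+1}\bigr), \qquad
\Sigma_{t+1} = (I - K H)\,\bar{\Sigma}_{t+1}
```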