Mixture Kalman Filters by Rong Chen & Jun Liu Presented by Yusong Miao Dec. 10, 2003.

Structure
- Origin of the problem
- Mixture Kalman Filter (MKF): model setup and method
- Two related extended models, with examples
- Applications showing the advantages of the MKF
- Conclusions

Origin of the problem
- Interest in on-line estimation and prediction for dynamically changing systems (a hidden pattern evolving along with the observations).
- The Kalman filter (1960) handles linear Gaussian systems. What about non-linear and non-Gaussian systems?
- Sequential Monte Carlo approaches include the bootstrap filter / particle filter and sequential imputation; from now on we call them "Monte Carlo filters".
- The Mixture Kalman Filter is the contribution of this paper; we will see the comparisons.

Origin of the problem
Before we start on the MKF, recall the task:
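The formula on this slide did not survive the transcript. As an assumed reconstruction of the standard state-space setup the paper works with, the task is on-line filtering in the model

    x_t ~ q_t(. | x_{t-1})    (state equation)
    y_t ~ f_t(. | x_t)        (observation equation)

that is, computing the posterior p(x_t | y_{1:t}), and expectations under it, recursively as each new observation y_t arrives, without reprocessing the whole history.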

MKF model setup
Conditional dynamic linear model (CDLM): given the trajectory of an indicator variable, the system is linear and Gaussian. We can therefore derive a Monte Carlo filter that focuses its attention on the space of indicator variables; this is the Mixture Kalman Filter.
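The CDLM definition on the slide was an image; the following is a paraphrase of the paper's general form (treat the exact notation as assumed). Conditional on the indicator trajectory Lambda_t = (lambda_1, ..., lambda_t),

    x_t = H_{lambda_t} x_{t-1} + W_{lambda_t} w_t,    w_t ~ N(0, I),
    y_t = G_{lambda_t} x_t     + V_{lambda_t} v_t,    v_t ~ N(0, I),

so that for a fixed indicator path the Kalman filter gives the exact Gaussian posterior of x_t.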

MKF model setup
Example 1: a special CDLM, a linear system with non-Gaussian errors (a sketch of such a model is given below). In the CDLM setting the MKF is more sophisticated and outperforms other methods (e.g. the bootstrap filter): it uses a mixture of Gaussian distributions to approximate the target posterior distribution.
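The concrete model on this slide is missing from the transcript. A representative (assumed) instance keeps the dynamics linear and Gaussian and makes the observation noise a finite Gaussian mixture, with the indicator picking the mixture component:

    x_t = H x_{t-1} + W w_t,    w_t ~ N(0, I),
    y_t = G x_t + v_t,          v_t | (lambda_t = j) ~ N(0, sigma_j^2),    P(lambda_t = j) = p_j.

Marginally v_t is non-Gaussian, but conditional on lambda_t the system is a standard linear Gaussian model.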

MKF model setup
The method of MKF: use a weighted sample of indicator trajectories to represent the distribution p(Lambda_t | y_t); the resulting random mixture of Gaussian distributions then approximates the target distribution p(x_t | y_t), as written out below.
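Written out (the slide's formulas are missing; the notation here is assumed): given weighted indicator samples {(Lambda_t^(j), w_t^(j)), j = 1, ..., m} approximating p(Lambda_t | y_{1:t}), run a Kalman filter along each sampled trajectory to obtain (mu_t^(j), Sigma_t^(j)), and approximate the target by the random Gaussian mixture

    p(x_t | y_{1:t})  ≈  sum_{j=1}^{m} w_t^(j) N(x_t ; mu_t^(j), Sigma_t^(j)).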

MKF model setup
Algorithm (updating the weights), for those interested; a code sketch is given below.
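The algorithm box on this slide was an image that did not survive the transcript. Below is a minimal Python sketch of one plausible MKF implementation for the Gaussian-mixture-noise example above. The 1-D model, all parameter values (F, Q, H, R, the indicator prior), the i.i.d. indicators, and the resampling rule are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a Mixture Kalman Filter (Rao-Blackwellized particle filter)
# for a 1-D linear state with Gaussian-mixture observation noise (assumed model).
import numpy as np

rng = np.random.default_rng(0)

# --- assumed model ---------------------------------------------------------
F, Q = 0.95, 0.1              # state transition:  x_t = F x_{t-1} + N(0, Q)
H = 1.0                       # observation:       y_t = H x_t + v_t
R = np.array([0.1, 4.0])      # v_t ~ N(0, R[lam]) with indicator lam in {0, 1}
prior = np.array([0.9, 0.1])  # p(lam_t = j), indicators i.i.d. over time

def mkf_step(means, varis, logw, y):
    """One MKF update: x_t is marginalized analytically by a Kalman filter,
    only the discrete indicator lam_t is sampled for each particle."""
    mp = F * means                      # Kalman prediction, mean
    vp = F * F * varis + Q              # Kalman prediction, variance
    lik = np.empty((len(means), 2))
    for j in range(2):                  # predictive density of y for each lam
        s = H * H * vp + R[j]           # innovation variance
        lik[:, j] = prior[j] * np.exp(-0.5 * (y - H * mp) ** 2 / s) / np.sqrt(2 * np.pi * s)
    norm = lik.sum(axis=1)
    logw = logw + np.log(norm + 1e-300)                  # weight update by marginal likelihood
    lam = (rng.random(len(means)) > lik[:, 0] / norm).astype(int)  # sample indicator
    s = H * H * vp + R[lam]
    k = vp * H / s                      # Kalman gain under the sampled indicator
    means = mp + k * (y - H * mp)
    varis = (1.0 - k * H) * vp
    return means, varis, logw

def resample_if_needed(means, varis, logw):
    """Multinomial resampling when the effective sample size gets small."""
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w ** 2) < 0.5 * len(w):
        idx = rng.choice(len(w), size=len(w), p=w)
        return means[idx], varis[idx], np.zeros(len(w))
    return means, varis, logw

# --- tiny demo on simulated data --------------------------------------------
T, m = 50, 200
x, ys = 0.0, []
for _ in range(T):
    x = F * x + rng.normal(0.0, np.sqrt(Q))
    lam = int(rng.random() < prior[1])
    ys.append(H * x + rng.normal(0.0, np.sqrt(R[lam])))

means, varis, logw = np.zeros(m), np.ones(m), np.zeros(m)
for y in ys:
    means, varis, logw = mkf_step(means, varis, logw, y)
    means, varis, logw = resample_if_needed(means, varis, logw)

w = np.exp(logw - logw.max()); w /= w.sum()
print("filtered mean of x_T:", float(np.sum(w * means)))
print("true x_T:            ", x)
```

Each particle carries only a Kalman mean, variance, and weight; the state itself is never sampled. That Rao-Blackwellization is what gives the MKF its variance reduction over a plain bootstrap filter.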

Extended MKF with examples
Besides the CDLM there is an extension called the partial conditional dynamic linear model (PCDLM). A PCDLM retains a non-linear component among the state variables. There is no absolute distinction between CDLM and PCDLM.
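Schematically (an assumed paraphrase, since no formula survives in the transcript): the state splits into a linear part x_{t,1} and a non-linear part x_{t,2}, and conditional on the indicator lambda_t and the trajectory of x_{t,2} the remaining system

    x_{t,1} = H_t(lambda_t, x_{t,2}) x_{t-1,1} + W_t(lambda_t, x_{t,2}) w_t,
    y_t     = G_t(lambda_t, x_{t,2}) x_{t,1}   + V_t(lambda_t, x_{t,2}) v_t,    w_t, v_t ~ N(0, I),

is linear and Gaussian, so x_{t,1} can still be handled exactly by a Kalman filter.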

Extended MKF with examples
Example: fading channel modeling (a mobile communication channel can be modeled as a Rayleigh flat fading channel).
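As a schematic (assumed) version of the flat-fading model: the channel gain alpha_t evolves as a linear Gaussian (e.g. ARMA) process, the transmitted symbols s_t come from a finite alphabet, and the received signal is

    y_t = alpha_t s_t + v_t,    v_t ~ N(0, sigma^2).

Conditional on the symbol sequence {s_t}, the model for alpha_t is linear Gaussian, which is what makes the MKF machinery applicable.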

Extended MKF with examples
Example: blind deconvolution of a digital communication system, where s_t is a discrete process taking values in a known set S. It is to be estimated from the observed signals {y_1, ..., y_t} without knowing the channel coefficients theta_i. Both examples can be solved by an extended MKF, called the EMKF.
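The channel equation the slide refers to did not survive the transcript; a standard (assumed) blind-deconvolution model consistent with the description is

    y_t = sum_{i=1}^{q} theta_i s_{t-i+1} + epsilon_t,    epsilon_t ~ N(0, sigma^2),

with the symbols s_t drawn from the known finite set S and the coefficients theta_i unknown.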

Extended MKF with examples
Why EMKF? It deals with as many linear and Gaussian components of the system as possible, via the decomposition p(x_t1, x_t2 | y_t) = p(x_t1 | x_t2, y_t) p(x_t2 | y_t): a Monte Carlo approximation of p(x_t2 | y_t) is combined with the Gaussian conditional distribution p(x_t1 | x_t2, y_t). Samples only need to be generated in the joint space of the indicators and the non-linear state components.

Extended MKF with examples Algorithm (updating weights)

Application of MKF: target tracking
Situation setup: the tracking errors (differences between the estimated and the true target locations) are computed and compared with those of other methods.

Application of MKF: target tracking
The results demonstrate the advantage of the MKF.

Application of MKF: 2-D target position
A 2-D target's position is sampled every T = 10 s. The state contains the position and velocity in both the x and y directions. We use the MKF to estimate the trajectory and compare the results with the actual data. Model setup:
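The model equations are missing from the transcript; an illustrative (assumed) constant-velocity reconstruction with sampling interval T = 10 s, applied independently to the x and y coordinates, is

    position_t = position_{t-1} + T * velocity_{t-1} + process noise,
    velocity_t = velocity_{t-1} + process noise,
    y_t        = position_t + measurement noise,

where the process noise may depend on a discrete maneuvering indicator, which is what would make the MKF applicable here.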

Application of MKF: 2-D target position
The results demonstrate the advantage of the MKF: it performs better than the traditional approach of Bar-Shalom & Fortmann.

Other applications of MKF
Several other applications are briefly introduced in the paper: tracking a target with random (non-Gaussian) acceleration, without clutter, and digital signal extraction in fading channels. Both are improved under the MKF approach compared with the traditional Monte Carlo approach.

Conclusions
- The MKF can perform real-time estimation and prediction in the CDLM setting, where it outperforms generic sequential Monte Carlo approaches; the EMKF plays the same role in the PCDLM setting.
- The MKF can be combined with other Monte Carlo techniques (Markov chain Monte Carlo updates, delayed estimation, fixed-lag filtering, etc.) to improve effectiveness.
- Sequential Monte Carlo methods provide a platform for designing efficient non-linear filtering algorithms.

Questions & Answers