Paper Discussion: “Simultaneous Localization and Environmental Mapping with a Sensor Network”, Marinakis et al., ICRA 2011.

Presentation transcript:

Paper Discussion: “Simultaneous Localization and Environmental Mapping with a Sensor Network”, Marinakis et al., ICRA 2011

Outline
– Problem: simultaneous mapping and localisation in a static, continuous, smooth field
– Solution: Expectation Maximisation (EM)
– Implementation: grid-based representation of all PDFs, evaluated both in simulation and in a practical deployment

Contributions
– Claim: uses smoothly varying parameters of the environment to aid localisation
– Simultaneous mapping of a continuous field (with uncertainty) and localisation of the sensors
– Interesting idea, but the implementation does not fully take advantage of the continuous field

Background: Expectation Maximisation
– A maximum likelihood estimator
– Two steps in each iteration:
 – Expectation: compute the likelihood of the observations under the current model
 – Maximisation: using those observation likelihoods, maximise the likelihood of the model parameters
– Can also be used as a maximum a posteriori (MAP) estimator, which is how this paper uses EM: the maximisation step uses MAP rather than ML
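For reference, the generic iteration described above can be written out. This is the standard textbook formulation, not taken from the paper; the MAP variant simply adds the log-prior over the parameters in the maximisation step:

```latex
\begin{align*}
\text{E-step:}\quad & Q\!\left(\Theta \mid \hat{\Theta}^{(t)}\right)
  = \mathbb{E}_{X \sim p(X \mid Z,\, \hat{\Theta}^{(t)})}\!\left[\log p(Z, X \mid \Theta)\right] \\
\text{M-step (ML):}\quad & \hat{\Theta}^{(t+1)} = \arg\max_{\Theta}\; Q\!\left(\Theta \mid \hat{\Theta}^{(t)}\right) \\
\text{M-step (MAP):}\quad & \hat{\Theta}^{(t+1)} = \arg\max_{\Theta}\; Q\!\left(\Theta \mid \hat{\Theta}^{(t)}\right) + \log p(\Theta)
\end{align*}
```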

Background: Expectation Maximisation
Example: fitting Gaussian mixture models
– Problem: inputs are a set of data points and the number of Gaussians in the mixture; outputs are the weight, mean, and covariance of each Gaussian (the weights must sum to 1.0)
– Expectation: compute the likelihood of each point belonging to each Gaussian
– Maximisation: update the weights, means, and covariances from those likelihoods using the “frequentist” definition
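A minimal NumPy sketch of this E/M loop for Gaussian mixtures may help make the two steps concrete. It is illustrative only: initialisation and convergence checks are simplified, and names are my own.

```python
import numpy as np

def em_gmm(X, K, n_iters=100, seed=0):
    """Fit a K-component Gaussian mixture to data X (N x D) with EM."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    # Initialise: uniform weights, random data points as means, shared covariance.
    weights = np.full(K, 1.0 / K)
    means = X[rng.choice(N, K, replace=False)]
    covs = np.stack([np.cov(X.T) + 1e-6 * np.eye(D)] * K)

    for _ in range(n_iters):
        # E-step: responsibility of each component for each point.
        resp = np.empty((N, K))
        for k in range(K):
            diff = X - means[k]
            mahal = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(covs[k]), diff)
            norm = np.sqrt(((2 * np.pi) ** D) * np.linalg.det(covs[k]))
            resp[:, k] = weights[k] * np.exp(-0.5 * mahal) / norm
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: re-estimate parameters from soft counts.
        Nk = resp.sum(axis=0)                    # effective count per component
        weights = Nk / N                         # weights sum to 1.0
        means = (resp.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - means[k]
            covs[k] = (resp[:, k, None] * diff).T @ diff / Nk[k]
            covs[k] += 1e-6 * np.eye(D)          # regularise for stability
    return weights, means, covs
```

The M-step here is exactly the “frequentist” update from the slide: each parameter is a responsibility-weighted average over the data.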

Notation = sensor pose(s) –Grid representation of domain –Probability of occupancy represented as grid – = prior = model parameters –Grid representation of domain –Environmental parameter(s) represented by (multivariate) Gaussian at each cell – = estimate of model parameters = observations of environmental parameters –Vector of measurements of environmental parameter(s) 6

Approach
[Slide shows the Expectation and Maximisation update equations]
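Specialised to the notation above (using X, Θ, Z for the pose, parameter, and observation symbols that did not survive the transcript), the standard MAP-EM steps would read as follows. This is a reconstruction from the background slides, not necessarily the paper's exact formulation:

```latex
\begin{align*}
\text{Expectation:}\quad & p\!\left(X \mid Z, \hat{\Theta}^{(t)}\right)
  \propto p\!\left(Z \mid X, \hat{\Theta}^{(t)}\right) p(X) \\
\text{Maximisation:}\quad & \hat{\Theta}^{(t+1)}
  = \arg\max_{\Theta}\; \mathbb{E}_{X \mid Z,\, \hat{\Theta}^{(t)}}\!\left[\log p(Z \mid X, \Theta)\right] + \log p(\Theta)
\end{align*}
```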

Algorithm
[Slide shows the algorithm pseudocode]
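In lieu of the missing pseudocode, here is a toy sketch of what a grid-based EM loop of this kind could look like: each sensor's pose posterior is computed over the grid (E-step), then each cell's field estimate is updated as a MAP combination of its Gaussian prior and the pose-weighted readings (M-step). All names and the scalar Gaussian sensor model are hypothetical; the paper's algorithm, in particular its treatment of field smoothness, will differ in detail.

```python
import numpy as np

def em_localise_and_map(readings, pose_prior, field_prior_mean,
                        n_iters=20, sigma_z=1.0, sigma_0=5.0):
    """Toy grid-based EM for joint sensor localisation and field mapping.

    readings         : length-S list of scalar field readings, one per sensor
    pose_prior       : (H, W) prior probability of a sensor occupying each cell
    field_prior_mean : (H, W) prior mean of the environmental field per cell
    """
    mu = field_prior_mean.copy()          # current field estimate (cell means)
    posteriors = []
    for _ in range(n_iters):
        # E-step: pose posterior over grid cells for each sensor,
        # p(x | z, mu) proportional to p(z | x, mu) p(x).
        posteriors = []
        for z in readings:
            log_lik = -0.5 * ((z - mu) / sigma_z) ** 2
            post = pose_prior * np.exp(log_lik - log_lik.max())
            posteriors.append(post / post.sum())

        # M-step: MAP update of each cell's field mean, fusing the Gaussian
        # prior N(field_prior_mean, sigma_0^2) with pose-weighted readings.
        num = field_prior_mean / sigma_0**2
        den = np.full_like(mu, 1.0 / sigma_0**2)
        for z, post in zip(readings, posteriors):
            num = num + post * z / sigma_z**2
            den = den + post / sigma_z**2
        mu = num / den
    return mu, posteriors
```

Note that this sketch treats every cell independently; the paper's stated premise of a smoothly varying field would couple neighbouring cells, which is omitted here.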

Results – WiFi RSSI
[Slide shows results figures]

Results – Simulation
[Slide shows results figures]

Discussion
– Considers static sensors: a motion model could be incorporated in the Expectation step
– Grid representation of the world: continuous representations of the world and of the sensor network are alternatives
– Cost: communications cost

Conclusions
– An EM framework for simultaneous localisation and environmental mapping (i.e., mapping of a continuous field)
– Interesting idea, but the implementation does not fully take advantage of the continuous field