Bayesian Indoor Positioning Systems Presented by: Eiman Elnahrawy Joint work with: David Madigan, Richard P. Martin, Wen-Hua Ju, P. Krishnan, A.S. Krishnakumar.


Bayesian Indoor Positioning Systems. Presented by: Eiman Elnahrawy. Joint work with: David Madigan, Richard P. Martin, Wen-Hua Ju, P. Krishnan, A.S. Krishnakumar. Rutgers University and Avaya Labs. Infocom 2005.

Wireless Explosion
Technology trends are creating cheap wireless communication in every computing device. Device radios offer an indoor localization opportunity in 2D and 3D, a new capability compared to traditional communication networks. Can we localize every device that has a radio, using only this available infrastructure?
Three technology communities: WLAN (802.11x), sensor networks (ZigBee), cell carriers (3G).

Radio-based Localization
In a laboratory setting, signal strength decays linearly with log distance:
S_j = b_0j + b_1j log(D_j), where D_j = sqrt((x - x_j)^2 + (y - y_j)^2)
Use triangulation to compute (x, y) » problem solved? Not in real life! Noise, multipath, reflections, systematic errors, etc.
(Figure: a device at unknown (x?, y?) observing the RSS fingerprint [-80, -67, -50].)
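The log-distance model above can be inverted to estimate range from RSS. The sketch below is a minimal illustration, not the talk's implementation: the AP position and the propagation parameters (b0 = -30, b1 = -20) are made-up values for demonstration.

```python
import math

def predicted_rss(x, y, ap, b0, b1):
    """RSS predicted at position (x, y) from an access point at ap = (xj, yj),
    using the log-distance model S = b0 + b1 * log10(D)."""
    d = math.hypot(x - ap[0], y - ap[1])
    return b0 + b1 * math.log10(d)

def rss_to_distance(s, b0, b1):
    """Invert S = b0 + b1 * log10(D) to estimate the distance D from an RSS value."""
    return 10 ** ((s - b0) / b1)

# Round trip with illustrative parameters: a device 50 ft from an AP at the origin.
s = predicted_rss(30.0, 40.0, (0.0, 0.0), -30.0, -20.0)
print(round(rss_to_distance(s, -30.0, -20.0), 3))  # recovers 50.0
```

In the lab this inversion plus triangulation against three or more APs would solve the problem; the rest of the talk is about why it fails in real buildings.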

Supervised Learning-based Systems
Training (offline) phase: collect "labeled" training data [(x, y), S1, S2, S3, ...].
Online phase: match an "unlabeled" RSS [(?, ?), S1, S2, S3, ...] against the existing "labeled" training fingerprints.
(Figure: a device at unknown (x?, y?) observing the RSS fingerprint [-80, -67, -50].)
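The online-phase matching can be as simple as a nearest-neighbor search in signal space (the style of approach RADAR popularized, discussed on the next slide). This sketch assumes a toy training set of three labeled fingerprints; the positions and RSS values are illustrative.

```python
import math

def nearest_fingerprint(rss, training):
    """Return the (x, y) label of the training fingerprint whose RSS vector
    is closest (Euclidean distance in signal space) to the observed `rss`."""
    return min(training, key=lambda fp: math.dist(fp[1], rss))[0]

# Toy training set: [(position, RSS vector from three APs)]
training = [
    ((0.0, 0.0),  (-40, -55, -90)),
    ((10.0, 5.0), (-60, -56, -80)),
    ((20.0, 8.0), (-80, -70, -30)),
]
print(nearest_fingerprint((-78, -68, -35), training))  # → (20.0, 8.0)
```

This is exactly the labor-intensive part the talk targets: every entry in `training` has to be collected by hand at a known position.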

Previous Work
People have tried almost all existing supervised learning approaches. Well known: RADAR (nearest neighbor), probabilistic (Bayes a posteriori likelihood), support vector machines, multi-layer perceptrons, ... [Bahl00, Battiti02, Roos02, Youssef03, Krishnan04, ...]
All have a major drawback: labeled training fingerprints ("profiling") are labor intensive (286 fingerprints in 32 hours!) and may need to be collected again over time.

Contribution
Why not try Bayesian Graphical Models (BGMs)? Any performance gains?
Performance-wise: comparable. Minimal labeled fingerprints. Adaptive. Simultaneously locates a set of objects.
In fact, it can be a zero-profiling approach: no more "labeled" training data needed; unlabeled data can be obtained from existing data traffic.

Loocle™
General-purpose localization, analogous to general-purpose communication! Can we make finding objects in physical space as easy as Google?

Can this dream come true? We believe we are close:
- Only existing infrastructure: localize with only the radios on existing devices; additional resources serve as performance enhancements.
- Ad hoc.
- No more labeled data (our contribution).

Outline
Motivations and goals. Experimental setup. Bayesian background. Models of increasing complexity (labeled → unlabeled). Comparison to previous work. Conclusion and future work.

Experimental Setup
Three office buildings: BR, CA Up, CA Down, all with 802.11b radios. Different sessions and days; all give similar performance. We focus on BR (for no particular reason). BR: 5 access points, 225 ft x 175 ft, 254 measurements.

Bayesian Graphical Models
Encode dependencies and conditional independences between variables. Vertices = random variables; edges = relationships. (Graph: X, Y → D → S.)
Example: [(X, Y), S] with the AP at (x_b, y_b), and log-based signal-strength propagation:
S = b_0 + b_1 log(D), D = sqrt((X - x_b)^2 + (Y - y_b)^2)

Model 1 (Simple): labeled data
(Graph: position variables X_i, Y_i; distances D_1..D_5; observed signal strengths S_1..S_5; base-station parameters b_01..b_05 and b_11..b_15, the "unknowns".)
X_i ~ Uniform(0, Length)
Y_i ~ Uniform(0, Width)
S_i ~ N(b_0i + b_1i log(D_i), δ_i), i = 1,...,5
b_0i ~ N(0, 1000), i = 1,...,5
b_1i ~ N(0, 1000), i = 1,...,5
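Read forward from the priors, Model 1 is a generative process, which the sketch below simulates. The AP positions, the reading of N(0, 1000) as variance 1000, and the fixed 4 dB noise sd are illustrative assumptions, not values from the talk.

```python
import math
import random

random.seed(0)
LENGTH, WIDTH = 225.0, 175.0  # BR floor dimensions from the slides (feet)
# Hypothetical AP positions (the slides do not give coordinates):
APS = [(0, 0), (225, 0), (0, 175), (225, 175), (112, 87)]

def sample_prior():
    """One forward draw from Model 1: sample a position and the propagation
    parameters from their priors, then simulate an RSS reading per AP.
    N(0, 1000) is interpreted here as variance 1000 (sd ≈ 31.6)."""
    x, y = random.uniform(0, LENGTH), random.uniform(0, WIDTH)
    sd = math.sqrt(1000)
    readings = []
    for (xj, yj) in APS:
        b0, b1 = random.gauss(0, sd), random.gauss(0, sd)
        d = math.hypot(x - xj, y - yj)
        readings.append(random.gauss(b0 + b1 * math.log10(d), 4.0))  # assumed 4 dB noise
    return (x, y), readings

pos, rss = sample_prior()
print(len(rss))  # → 5, one simulated RSS reading per access point
```

Inference runs this process in reverse: given observed S_i, it recovers distributions over the X_i, Y_i and the b parameters.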

Input
Labeled (training): [(x1, y1), (-40, -55, -90, ...)], [(x2, y2), (-60, -56, -80, ...)], [(x3, y3), (-80, -70, -30, ...)], [(x4, y4), (-64, -33, -70, ...)]
Unlabeled (mobile objects): [(?, ?), (-45, -65, -40, ...)], [(?, ?), (-35, -45, -78, ...)], [(?, ?), (-75, -55, -65, ...)]
Output
Probability distributions for all the unknown variables: propagation constants b_0i, b_1i for each base station, and (x, y) for each (?, ?).

Great! But now how do we find the location of the mobile? A closed-form solution doesn't usually exist » simulation or analytic approximation. We used MCMC (Markov chain Monte Carlo) simulation to generate predictive samples from the joint distribution for every unknown (x, y) location.
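To make the MCMC step concrete, here is a minimal random-walk Metropolis sketch. Unlike the talk's full joint model, it conditions on known propagation parameters and assumed AP positions (all values here are illustrative) and samples only the unknown (x, y).

```python
import math
import random

random.seed(1)
APS = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]  # hypothetical AP positions (feet)
B0, B1, NOISE_SD = -30.0, -20.0, 2.0            # assumed-known propagation parameters

def log_likelihood(x, y, rss):
    """Gaussian log-likelihood of an observed RSS vector at position (x, y)."""
    ll = 0.0
    for (xj, yj), s in zip(APS, rss):
        d = max(math.hypot(x - xj, y - yj), 1e-6)  # guard against log10(0)
        mu = B0 + B1 * math.log10(d)
        ll += -0.5 * ((s - mu) / NOISE_SD) ** 2
    return ll

def metropolis(rss, steps=20000, step_sd=5.0):
    """Random-walk Metropolis over (x, y) with a uniform prior on [0, 100]^2;
    proposals outside the floor are rejected, and the first half is burn-in."""
    x, y = 50.0, 50.0
    samples = []
    for _ in range(steps):
        xp, yp = x + random.gauss(0, step_sd), y + random.gauss(0, step_sd)
        if 0 <= xp <= 100 and 0 <= yp <= 100:
            if math.log(random.random()) < log_likelihood(xp, yp, rss) - log_likelihood(x, y, rss):
                x, y = xp, yp
        samples.append((x, y))
    return samples[steps // 2:]

# Simulate a device at (30, 40) and recover it from the posterior mean.
true = (30.0, 40.0)
rss = [B0 + B1 * math.log10(math.hypot(true[0] - xj, true[1] - yj)) for xj, yj in APS]
post = metropolis(rss)
xm = sum(p[0] for p in post) / len(post)
ym = sum(p[1] for p in post) / len(post)
print(round(xm), round(ym))  # close to (30, 40)
```

The predictive samples also give uncertainty for free: the spread of `post` is the posterior uncertainty in the mobile's position, something a point-estimate method like nearest neighbor cannot report.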

Example Output

Some Results
(Figure: box plots of localization error — 25th percentile, median, 75th percentile, min, and max — annotated "M1 is better" and "Comparable".)

Model 2 (Hierarchical): labeled data
Motivation: allowing arbitrary values for the b's is too unconstrained; borrowing strength might provide some predictive benefit. Assume all base-station parameters are normally distributed around a hidden variable with its own mean and variance. (Graph: same structure as Model 1.)
X_i ~ Uniform(0, Length)
Y_i ~ Uniform(0, Width)
S_i ~ N(b_0i + b_1i log(D_i), δ_i)
b_0i ~ N(b_0, δ_b0)
b_1i ~ N(b_1, δ_b1)
b_0 ~ N(0, 1000)
b_1 ~ N(0, 1000)
δ_0 ~ Gamma(0.001, 0.001)
δ_1 ~ Gamma(0.001, 0.001)
i = 1,...,5
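The "borrowing strength" idea is easiest to see generatively: per-AP parameters are drawn around shared hidden means, so data from one AP informs the others through those means. In this sketch the hyper-standard-deviations (31.6 for the vague hyperpriors, 2.0 for the per-AP spread) are illustrative assumptions.

```python
import random

random.seed(2)

def sample_hierarchical_params(n_aps=5):
    """One draw from Model 2's hierarchical prior: shared hidden means b0, b1,
    then per-AP parameters clustered tightly around them."""
    b0_mean, b1_mean = random.gauss(0, 31.6), random.gauss(0, 31.6)  # vague hyperpriors
    per_ap = [(random.gauss(b0_mean, 2.0), random.gauss(b1_mean, 2.0))
              for _ in range(n_aps)]
    return (b0_mean, b1_mean), per_ap

hyper, per_ap = sample_hierarchical_params()
print(len(per_ap))  # → 5; all five APs cluster around the shared means
```

Contrast with Model 1, where each b_0i, b_1i is drawn independently from N(0, 1000): there, nothing ties the APs together.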

More Results
M2 behaves similarly to M1, but provides an improvement for very small training sets. Hmm, still comparable to SmoothNN.
(Figure: leave-one-out error in feet vs. size of labeled data, for labeled M1, labeled hierarchical M2, and SmoothNN.)

Can we get reasonable estimates of position without any labeled data? Yes! Observe signal strengths from existing data packets (unlabeled by default). No more running around collecting data... over and over... and over.

Model 3 (Zero Profiling)
Same graph as M2 (hierarchical), but with unlabeled data. (Graph: same structure as Model 2.)
Hmm, why would this work? (1) Prior knowledge about the distance-signal strength relationship; (2) prior knowledge that access points behave similarly.

Results
Close to SmoothNN, BUT...!!
(Figure: leave-one-out error in feet vs. size of input data; zero-profiling M3 uses unlabeled data, SmoothNN uses labeled data.)

Other Ideas
Corridor effects: what? RSS is stronger along corridors. Add a variable C_i = 1 if the point shares its x or y coordinate with the AP. No improvements.
Informative prior distributions.
(Graph: the model extended with corridor variables C_1..C_5.)

Comparison to Previous Work
(Figure: error CDF across algorithms — distance in feet vs. probability — for RADAR, probabilistic, M1 labeled, M2 labeled, and M3 unlabeled.)
More ad hoc. Adaptive. No labor investment.

Conclusion and Future Work
First time BGMs have actually been used for localization; they show considerable promise. Performance is comparable to existing approaches, with zero profiling! (Can we localize anything with a radio?)
Where are we going? Other models, variational approximations, tracking applications, minimum extra infrastructure (more information yields better accuracy)!

Thanks! ☺