Some Results on Source Localization
Soura Dasgupta, U of Iowa
With: Baris Fidan, Brian Anderson, Shree Divya Chitte and Zhi Ding
NICTA/ANU, August 8, 2008

Outline
– Localization: what, why, how
– Issues
– Linear algorithm: conceptually simple, poor performance
– Goal
– New nonlinear algorithm: characterize conditions for convergence
– Estimating distance from RSS

What is Localization?
Source localization: sensors with known positions must estimate the location of a source at an unknown position.
Sensor localization: a sensor at an unknown position must estimate its own location from anchors with known positions.
Either way, some relative position information is needed.

Why localize?
To process a signal in a sensor network, the sensors must locate the signal source (e.g., bioterrorism monitoring).
Pervasive computing: locating printers and computers.
A sensor in a sensor network must know its own position for routing, rescue, target tracking, and enhanced network coverage.

Wireless Localization
An emerging multibillion-dollar market:
– E911
– Mobile advertising
– Asset tracking for advanced public safety
– Fleet management: taxis, emergency vehicles
– Location-based access authority for network security
– Location-specific billing

Some Existing Technology
Manual configuration:
– Infeasible in large-scale sensor networks
– Nodes move frequently
GPS:
– Line-of-sight (LOS) problems
– Expensive hardware and power
– Ineffective indoors

What Information?
– Bearing
– Power level
– TDOA (time difference of arrival)
– Distance

How to measure distance?
Many methods. One example: emit a signal and wait for the reflection to return.
A second example uses received signal strength (RSS): the source emits a signal, and the received strength is s = A/d^c, where
– A = signal strength at distance d = 1
– d = distance
– c = a known constant
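As a minimal sketch (the function name and numeric values are illustrative, not from the talk), the noiseless RSS model inverts directly to a distance:

```python
def distance_from_rss(s, A, c):
    """Invert the far-field RSS model s = A / d**c for the distance d.
    A is the signal strength at d = 1 and c a known constant."""
    return (A / s) ** (1.0 / c)

# With A = 1 and c = 2, an RSS reading of 0.01 puts the source at d = 10
print(distance_from_rss(s=0.01, A=1.0, c=2.0))  # 10.0
```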

How to localize from distances?
– One distance: the source lies on a circle.
– Two distances: two intersection points, a flip ambiguity.
– Three distances: the location is specified, unless the anchors are collinear.
– In 3-d: need four distances, from noncoplanar anchors.

Outline recap. Next: Issues.

Issues
– Sensor/anchor placement
– Fast, efficient localization
– Achieving large geographical coverage
Past work: a distance from an anchor is available if the sensor is within range, so place anchors so that a sufficient number of distances is available to each sensor.
Is it enough to have the right number of distances? With linear algorithms, yes; but linear algorithms have problems.

Outline recap. Next: Linear algorithm.

Linear Algorithm
Three anchors in 2-d at (x_i, y_i), sensor at (x, y):
(x - x_1)^2 + (y - y_1)^2 = d_1^2
(x - x_2)^2 + (y - y_2)^2 = d_2^2
(x - x_3)^2 + (y - y_3)^2 = d_3^2
Subtracting the first equation from the others cancels the quadratic terms and leaves two linear equations:
2(x_1 - x_2)x + 2(y_1 - y_2)y = d_2^2 - d_1^2 + x_1^2 - x_2^2 + y_1^2 - y_2^2
2(x_1 - x_3)x + 2(y_1 - y_3)y = d_3^2 - d_1^2 + x_1^2 - x_3^2 + y_1^2 - y_3^2
Solving these gives (x, y).
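A minimal numpy sketch of this linearization (names are mine, not the talk's); it subtracts the first squared-range equation from the others and solves the resulting linear system:

```python
import numpy as np

def linear_localize(anchors, dists):
    """Linear localization sketch. anchors: (n, 2) array of known anchor
    positions; dists: n measured distances to the unknown sensor."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    x1, d1 = anchors[0], dists[0]
    # Row i: 2 (x_{i+1} - x_1)^T y = d_1^2 - d_{i+1}^2 + ||x_{i+1}||^2 - ||x_1||^2
    A = 2.0 * (anchors[1:] - x1)
    b = d1**2 - dists[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(x1**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]  # least squares if n > 3
```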

Linear Algorithm with Noise
With noisy measurements the same equations become
(x - x_1)^2 + (y - y_1)^2 = d_1^2 + n_1
(x - x_2)^2 + (y - y_2)^2 = d_2^2 + n_2
(x - x_3)^2 + (y - y_3)^2 = d_3^2 + n_3
and the differenced linear equations inherit the noise, so the linear solution can have severe noise problems.

Example of Bust
Three anchors: (0,0), (43,7) and (47,0)
Sensor at ( , )
True distances: , ,
Measured distances: 35, 42, 43
Linear estimate: ( , )
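Running the sketch above on these anchors and measured distances reproduces the failure mode (the slide's own numbers did not survive transcription, so the printout is only illustrative):

```python
anchors = [(0, 0), (43, 7), (47, 0)]
measured = [35, 42, 43]
print(linear_localize(anchors, measured))
# roughly (16.9, -6.5), which sits about 18 from the first anchor,
# wildly inconsistent with the measured distance of 35
```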

Outline recap. Next: Goal.

Goal
Nonlinear algorithms are needed (Hero et al., Nowak et al., Rydstrom et al.), but they minimize nonconvex cost functions, so fast convergence cannot be guaranteed.
We propose a new algorithm and characterize geographical regions around small numbers of sensors/anchors such that, if the source/sensor lies in these regions, gradient descent is guaranteed to converge exponentially: practical convergence.

Outline recap. Next: New nonlinear algorithm.

New Algorithm
Notation:
– x_i: vectors containing the anchor coordinates
– y*: vector containing the sensor coordinates
– d_i: distance between x_i and y*, d_i = ||x_i - y*||
Find y to minimize the weighted cost (weights lambda_i > 0):
J(y) = sum_i lambda_i (||y - x_i||^2 - d_i^2)^2
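A minimal gradient-descent sketch for this cost. The squared-range-error form of J above and the backtracking step rule below are generic choices consistent with the talk, not its verbatim algorithm:

```python
import numpy as np

def cost(y, anchors, dists, lam):
    # J(y) = sum_i lam_i * (||y - x_i||^2 - d_i^2)^2
    r = np.sum((anchors - y)**2, axis=1) - dists**2
    return np.sum(lam * r**2)

def grad(y, anchors, dists, lam):
    # grad J(y) = 4 * sum_i lam_i * (||y - x_i||^2 - d_i^2) * (y - x_i)
    r = np.sum((anchors - y)**2, axis=1) - dists**2
    return 4.0 * np.sum((lam * r)[:, None] * (y - anchors), axis=0)

def gradient_descent(y0, anchors, dists, lam, iters=5000):
    """Gradient descent with a simple backtracking (Armijo) line search."""
    y = np.asarray(y0, dtype=float)
    for _ in range(iters):
        g = grad(y, anchors, dists, lam)
        step = 1.0
        while (cost(y - step * g, anchors, dists, lam)
               > cost(y, anchors, dists, lam) - 0.5 * step * (g @ g)):
            step *= 0.5
            if step < 1e-20:
                return y  # stalled at a stationary point
        y = y - step * g
    return y
```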

Good News
Same three anchors: (0,0), (43,7) and (47,0)
Sensor at ( , )
True distances: , ,
Measured distances: 35, 42, 43
Minimizing estimate: ( , )
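With the sketch above, the bust example can be rerun through the nonlinear cost (unit weights; output illustrative):

```python
anchors = np.array([(0, 0), (43, 7), (47, 0)], dtype=float)
measured = np.array([35, 42, 43], dtype=float)
y_hat = gradient_descent(anchors.mean(axis=0), anchors, measured, np.ones(3))
print(y_hat, np.sqrt(np.sum((anchors - y_hat)**2, axis=1)))
# the fitted ranges should track the measured 35, 42, 43 far more
# closely than the linear estimate's do
```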

Bad News
J may have local minima.

Example
y* = 0. A false minimum at y = [3, 3]^T.

Level Surface

Goal: Anchor/Sensor Placement
How should anchors be distributed to achieve large geographical coverage?
Problem 1: Given x_i and lambda_i, find S_1 so that if y* is in S_1, J can be minimized easily.
Problem 2: Given x_i, find S_2 so that for every y* in S_2, one can find lambda_i for which J can be minimized easily.

When is convergence easy?
When gradient descent minimization is globally convergent.

Necessary and Sufficient Condition
Easy convergence amounts to requiring that the gradient of J,
grad J(y) = 4 sum_i lambda_i (||y - x_i||^2 - d_i^2)(y - x_i),
is zero iff y = y*: a unique stationary point.

Refined Goal
Problem 1: Given x_i and lambda_i, find S_1 so that if y* is in S_1, J has a unique stationary point.
Problem 2: Given x_i, find S_2 so that for every y* in S_2, one can find lambda_i for which J has a unique stationary point.
Under this condition, gradient descent minimization is in fact exponentially convergent.

A Preliminary Setup / Problem 1
Assume the x_i are not collinear in 2-d (not coplanar in 3-d). Then there exist alpha_i such that
y* = sum_i alpha_i x_i, with sum_i alpha_i = 1.

Observation
If the alpha_i are nonnegative, then y* lies in the convex hull of the x_i.
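A small sketch for computing the alpha_i (the helper name is mine; with more anchors than d + 1 the coefficients are not unique and least squares picks one solution):

```python
import numpy as np

def affine_coeffs(anchors, y_star):
    """Solve y* = sum_i alpha_i x_i with sum_i alpha_i = 1 for alpha.
    anchors: (n, d) array, non-collinear in 2-d / non-coplanar in 3-d."""
    X = np.asarray(anchors, dtype=float)
    A = np.vstack([X.T, np.ones(len(X))])   # stack the affine constraint
    b = np.append(np.asarray(y_star, dtype=float), 1.0)
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
    return alpha

# y* inside the anchor triangle -> all alpha_i nonnegative, as observed
print(affine_coeffs([(0, 0), (4, 0), (0, 4)], (1, 1)))  # [0.5, 0.25, 0.25]
```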

A Sufficient Condition
There is a unique stationary point if P + P^T > 0, where
P = diag{lambda_1, ..., lambda_n} (I - [1, ..., 1]^T [alpha_1, ..., alpha_n]).

Only a Sufficient Condition
In fact the gradient can be written as -E(y) P E^T(y) (y - y*), with E(y) dependent on y, so the condition on P has substantial slop.
But it provides nontrivial regions of guaranteed exponential convergence, with robustness to noise and convergence in distribution under noise.

A More Precise Condition
No false stationary points exist if: [condition shown as a figure in the deck]

A Special Case
No false stationary points exist if n < 9, y* is in the convex hull of the anchors, and lambda_i = 1.

Implications
– For small n, the region is much larger than the convex hull.
– Recall that in 2-d we need n > 2, and in 3-d, n > 3.
– Wide coverage is achievable with a small number of anchors.
– The condition is in terms of the alpha_i; a direct characterization in terms of y* is needed.
– That set is an ellipse determined from the x_i, via a simple 3x3 or 4x4 matrix inversion.

Problem 2
Changing the lambda_i always moves or removes false stationary points, unless one anchor is at the same distance as the sensor from all the other agents.

Problem 2
Given x_i, find S_2 so that for every y* in S_2, one can find lambda_i for which J has a unique stationary point.
Such lambda_i can be found if the following holds: [condition shown as a figure in the deck]
The choice lambda_i = |alpha_i| then guarantees a unique stationary point.

Implication
If y* is in the convex hull of the anchors, then the alpha_i are nonnegative and the condition always holds.

Further Implication
The choice lambda_i = |alpha_i| guarantees a unique stationary point, but it is not the only choice.
If y* is close to x_i, then alpha_i is larger, giving a larger lambda_i, which accords with intuition.

Direct Characterization
Given x_i, find S_2 so that for every y* in S_2, one can find lambda_i for which J has a unique stationary point.
S_2 is a polygon containing the convex hull, obtained by solving a linear program.
Its interior admits many choices of lambda_i, with substantial regions where the same lambda_i satisfy the requirement.

Simulation Particulars
Five anchors: [1, 0, 0]^T, [0, 2, 0]^T, [-2, -1, 0]^T, [0, 0, 2]^T, [0, 0, -1]^T
– Example 1: y* = 0, in the convex hull
– Example 2: y* = [-1, 1, 1]^T, not in the convex hull
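A sketch reproducing this setup with the gradient-descent code from the new-algorithm slide (noiseless ranges and unit weights; per the analysis above, weights such as lambda_i = |alpha_i| matter when y* is outside the convex hull):

```python
anchors = np.array([[1, 0, 0], [0, 2, 0], [-2, -1, 0],
                    [0, 0, 2], [0, 0, -1]], dtype=float)
for y_star in (np.zeros(3), np.array([-1.0, 1.0, 1.0])):
    d = np.sqrt(np.sum((anchors - y_star)**2, axis=1))  # noiseless ranges
    y_hat = gradient_descent(np.full(3, 5.0), anchors, d, np.ones(5))
    print(y_star, "->", y_hat)
```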

Simulation 1

Simulation 2

Outline recap. Next: Estimating distance from RSS.

Log-Normal Shadowing
RSS: s = A/d^c, with A and c known and s measured.
In the far field: ln s = ln A - c ln d + w, with w ~ N(0, sigma^2).
The goal is to estimate d^m for some m. Reformulation: set a = m/c, z = (A/s)^a and p = d^m; then
ln z = ln p - a w, i.e., z = e^{-aw} p.

Efficient Estimation
Cramer-Rao lower bound (CRLB):
– Best achievable error variance among unbiased estimators
– Unbiased: mean of the estimate equals the parameter
An efficient estimator is unbiased and meets the CRLB. Here CRLB = p^2 sigma^2 a^2.
Does an efficient estimator exist? NO: ln z = ln p - a w is affine in the Gaussian noise but nonaffine in p.

Maximum Likelihood Estimator
ln z = ln p - a w, w ~ N(0, sigma^2), so z is log-normal with density
f(z|p) = exp(-(ln z - ln p)^2 / (2 sigma^2 a^2)) / (z (2 pi)^{1/2} sigma a)
The ML estimate is p_ML = z.
Bias: E[z] - p = (e^{sigma^2 a^2 / 2} - 1) p
Error variance: (e^{2 sigma^2 a^2} - 2 e^{sigma^2 a^2 / 2} + 1) p^2
Both grow exponentially with the noise variance.
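All the bias and variance expressions here and on the next two slides reduce to the Gaussian moment identity E[e^{tw}] = e^{t^2 sigma^2 / 2}; a worked version of that omitted step:

```latex
% With z = e^{-aw} p and w ~ N(0, \sigma^2), E[e^{tw}] = e^{t^2\sigma^2/2} gives
\begin{aligned}
\mathbb{E}[z]       &= p\,\mathbb{E}[e^{-aw}]    = p\,e^{\sigma^2 a^2/2},\\
\mathbb{E}[z^2]     &= p^2\,\mathbb{E}[e^{-2aw}] = p^2 e^{2\sigma^2 a^2},\\
\mathbb{E}[(z-p)^2] &= p^2\bigl(e^{2\sigma^2 a^2} - 2e^{\sigma^2 a^2/2} + 1\bigr).
\end{aligned}
```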

Unbiased Estimator
z = e^{-aw} p, w ~ N(0, sigma^2). Given z, a and sigma, find g(z, a, sigma) such that E[g(z, a, sigma)] = p for all p.
The unique unbiased estimator is linear in z:
p_U = e^{-sigma^2 a^2 / 2} z
Error variance: (e^{sigma^2 a^2} - 1) p^2, cf. (e^{2 sigma^2 a^2} - 2 e^{sigma^2 a^2 / 2} + 1) p^2 for ML.
Better, but still grows exponentially with the noise variance.

Another Estimate
Since the unique unbiased estimator is linear in z, find instead the linear estimator with the smallest error variance:
p_V = e^{-3 sigma^2 a^2 / 2} z
Bias: (e^{-sigma^2 a^2} - 1) p
Error variance: (1 - e^{-sigma^2 a^2}) p^2
Both are bounded in the noise variance, cf. CRLB = p^2 sigma^2 a^2. MMSE?
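A quick Monte Carlo sanity check of the three estimators (all parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p, a, sigma = 2.0, 0.5, 0.6                    # illustrative parameters
z = np.exp(-a * rng.normal(0.0, sigma, 1_000_000)) * p   # z = e^{-aw} p

v = sigma**2 * a**2
estimators = {
    "ML, p = z":                        z,
    "unbiased, p = e^{-v/2} z":         np.exp(-v / 2) * z,
    "min-variance linear, e^{-3v/2} z": np.exp(-3 * v / 2) * z,
}
for name, est in estimators.items():
    print(f"{name}: bias {est.mean() - p:+.4f}, MSE {np.mean((est - p)**2):.4f}")
# Theory: ML bias (e^{v/2} - 1) p; unbiased MSE (e^{v} - 1) p^2;
# min-variance linear MSE (1 - e^{-v}) p^2
```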

Conclusion
Localization from distances:
– Showed that linear algorithms are bad
– Proposed a new cost function
– Characterized conditions for exponential convergence, with implications for anchor/sensor deployment: practical convergence
Estimating distance from RSS under log-normal shadowing:
– Unbiased and ML estimation: large error variance
– New estimate: error variance and bias bounded in the noise variance