Some Results on Source Localization
Soura Dasgupta, U of Iowa
With: Baris Fidan, Brian Anderson, Shree Divya Chitte, and Zhi Ding
NICTA/ANU, August 8, 2008
Outline
Localization
–What
–Why
–How
Issues
Linear algorithm
–Conceptually simple
–Poor performance
Goal
New nonlinear algorithm
–Characterize conditions for convergence
Estimating Distance from RSS
What is Localization?
Source localization:
–Sensors with known position
–Source at unknown location
–Sensors must estimate the source location
Sensor localization:
–Anchors with known position
–Sensor at unknown location
–Sensor must estimate its own location
Both need some relative position information.
Why Localize?
To process a signal in a sensor network, sensors must locate the signal source
–Bioterrorism
Pervasive computing
–Locating printers/computers
A sensor in a sensor network must know its own position for
–Routing
–Rescue
–Target tracking
–Enhanced network coverage
Wireless Localization
Emerging multibillion-dollar market
E911
Mobile advertising
Asset tracking for advanced public safety
Fleet management: taxis, emergency vehicles
Location-based access authority for network security
Location-specific billing
Some Existing Technology
Manual configuration
–Infeasible in large-scale sensor networks
–Nodes move frequently
GPS
–Line-of-sight (LOS) problems
–Expensive hardware and power
–Ineffective for indoor problems
What Information?
Bearing
Power level
Time difference of arrival (TDOA)
Distance
How to Measure Distance?
Many methods. One example:
–Emit a signal
–Wait for the reflection to return
A second example:
–The source emits a signal
–Received signal strength s = A/d^c, where A = signal strength at d = 1, d = distance, and c is a constant
–This is the Received Signal Strength (RSS)
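A one-line inversion of this model gives a distance estimate from an RSS reading (a minimal sketch; the numeric values in the example call are arbitrary):

```python
def distance_from_rss(s, A, c):
    """Invert s = A / d**c, the RSS model with A = signal strength at d = 1."""
    return (A / s) ** (1.0 / c)

# Example: with A = 1 and c = 2, an RSS reading of 0.01 gives d = 10.
print(distance_from_rss(s=0.01, A=1.0, c=2.0))
```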
How to Localize from Distances?
One distance: a circle of possible locations
Two distances: two candidate points (flip ambiguity)
Three distances: the location is specified, unless the anchors are collinear
In 3-d: need 4 distances, from noncoplanar anchors
Issues
Sensor/anchor placement
Fast, efficient localization
Achieving large geographical coverage
Past work:
–Distance from an anchor is available if the sensor is within range
–Place anchors so that a sufficient number of distances is available to each sensor
Is it enough to have the right number of distances?
–With linear algorithms, yes
–But linear algorithms have problems
Linear Algorithm
Three anchors in 2-d at (x_i, y_i), sensor at (x, y):
(x - x_1)^2 + (y - y_1)^2 = d_1^2
(x - x_2)^2 + (y - y_2)^2 = d_2^2
(x - x_3)^2 + (y - y_3)^2 = d_3^2
Subtracting the first equation from the others removes the quadratic terms:
2(x_1 - x_2)x + 2(y_1 - y_2)y = d_2^2 - d_1^2 + x_1^2 - x_2^2 + y_1^2 - y_2^2
2(x_1 - x_3)x + 2(y_1 - y_3)y = d_3^2 - d_1^2 + x_1^2 - x_3^2 + y_1^2 - y_3^2
Solving this linear pair gives (x, y).
Linear Algorithm with Noise
With measurement noise the equations become
(x - x_1)^2 + (y - y_1)^2 = d_1^2 + n_1
(x - x_2)^2 + (y - y_2)^2 = d_2^2 + n_2
(x - x_3)^2 + (y - y_3)^2 = d_3^2 + n_3
but the same linear equations are solved:
2(x_1 - x_2)x + 2(y_1 - y_2)y = d_2^2 - d_1^2 + x_1^2 - x_2^2 + y_1^2 - y_2^2
2(x_1 - x_3)x + 2(y_1 - y_3)y = d_3^2 - d_1^2 + x_1^2 - x_3^2 + y_1^2 - y_3^2
The noise terms enter the right-hand sides directly, so the solution can have serious noise problems.
Example of a Bust
Three anchors: (0, 0), (43, 7), and (47, 0)
Sensor at (17.9719, -29.3227)
True distances: 34.392, 44.1106, 41.2608
Measured distances: 35, 42, 43
Linear estimate: (16.8617, -6.5076)
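The bust is easy to reproduce. A small NumPy sketch of the linear algorithm, using the anchors and distances from this slide:

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [43.0, 7.0], [47.0, 0.0]])
sensor = np.array([17.9719, -29.3227])

def linear_estimate(anchors, d):
    """Solve 2(x_1-x_i)x + 2(y_1-y_i)y = d_i^2 - d_1^2 + ||x_1||^2 - ||x_i||^2, i = 2, 3."""
    x1 = anchors[0]
    A = 2.0 * (x1 - anchors[1:])                                   # 2 x 2 system
    b = d[1:]**2 - d[0]**2 + x1 @ x1 - np.sum(anchors[1:]**2, axis=1)
    return np.linalg.solve(A, b)

d_true = np.linalg.norm(anchors - sensor, axis=1)
print(d_true)                                    # ~[34.392, 44.111, 41.261]
print(linear_estimate(anchors, d_true))          # exact distances recover the sensor
print(linear_estimate(anchors, np.array([35.0, 42.0, 43.0])))   # ~(16.86, -6.51)
```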
Goal
Nonlinear algorithms are needed
–Hero et al., Nowak et al., Rydstrom et al.
These require minimizing nonconvex cost functions
–Fast convergence cannot be guaranteed
We propose a new algorithm and characterize geographical regions around small numbers of sensors/anchors such that
–if the source/sensor lies in these regions,
–gradient descent is guaranteed to converge exponentially
Practical convergence
New Algorithm
Notation:
–x_i: vectors containing the anchor coordinates
–y*: vector containing the sensor coordinates
–d_i: distance between x_i and y*, i.e. ||x_i - y*||
Find y to minimize a weighted cost J(y).
Good News
Three anchors: (0, 0), (43, 7), and (47, 0)
Sensor at (17.9719, -29.3227)
True distances: 34.392, 44.1106, 41.2608
Measured distances: 35, 42, 43
Estimate from minimizing J: (18.2190, -29.2123)
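A minimal sketch of this minimization by gradient descent, assuming the weighted cost takes the squared-range form J(y) = sum_i λ_i (||y - x_i||^2 - d_i^2)^2 with equal weights λ_i = 1; the step size, iteration count, and initialization at the earlier linear estimate are illustrative choices, not taken from the talk:

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [43.0, 7.0], [47.0, 0.0]])
d = np.array([35.0, 42.0, 43.0])                 # measured (noisy) distances
lam = np.ones(len(anchors))                      # assumed equal weights

def grad_J(y):
    diff = y - anchors                           # rows: y - x_i
    err = np.sum(diff**2, axis=1) - d**2         # ||y - x_i||^2 - d_i^2
    return 4.0 * (lam * err) @ diff

y = np.array([16.8617, -6.5076])                 # start from the linear estimate
for _ in range(100_000):
    y -= 2e-6 * grad_J(y)
print(y)                       # close to the slide's estimate (18.2190, -29.2123)
```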
Bad News
J may have local minima.
Example
y* = 0, yet there is a false minimum at y = [3, 3]^T.
Level Surface (figure)
Goal: Anchor/Sensor Placement
How should anchors be distributed to achieve large geographical coverage?
Problem 1: Given the x_i and weights λ_i, find a region S_1 such that if y* is in S_1, J can be minimized easily.
Problem 2: Given the x_i, find a region S_2 such that for every y* in S_2, one can find weights λ_i for which J can be minimized easily.
When Is Convergence Easy?
When gradient descent minimization of J is globally convergent.
Necessary and Sufficient Condition
The gradient of J is zero iff y = y*, i.e. y* is the unique stationary point.
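Assuming, as in the sketch above, that the weighted cost has the squared-range form, its gradient is

```latex
\nabla J(y) \;=\; 4\sum_{i=1}^{n}\lambda_i\,\bigl(\|y-x_i\|^{2}-d_i^{2}\bigr)\,(y-x_i),
\qquad d_i=\|y^{*}-x_i\|,
```

which vanishes at y = y* when the distances are exact; the question is whether it can also vanish elsewhere.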
Refined Goal
Problem 1: Given the x_i and weights λ_i, find S_1 such that if y* is in S_1, J has a unique stationary point.
Problem 2: Given the x_i, find S_2 such that for every y* in S_2, one can find weights λ_i for which J has a unique stationary point.
In fact, under these conditions gradient descent minimization is exponentially convergent.
A Preliminary Setup / Problem 1
Assume the x_i are not collinear in 2-d and not coplanar in 3-d.
Then there exist coefficients ζ_i such that y* = Σ_i ζ_i x_i with Σ_i ζ_i = 1.
Observation
If the ζ_i are nonnegative, then y* lies in the convex hull of the x_i.
A Sufficient Condition
There is a unique stationary point if P + P^T > 0, where
P = diag{λ_1, ..., λ_n} (I - [1, ..., 1]^T [ζ_1, ..., ζ_n])
Only a Sufficient Condition
In fact, with E(y) dependent on y, the gradient is -E(y) P E^T(y) (y - y*)
So there is substantial slop in the condition
But it provides nontrivial regions of guaranteed exponential convergence, with
–Robustness to noise
–Convergence in distribution under noise
A More Precise Condition
No false stationary points exist if:
A Special Case
No false stationary points exist if n < 9, y* is in the convex hull of the anchors, and λ_i = 1 for all i.
Implications
For small n, this region is much larger than the convex hull
Recall that in 2-d we need n > 2 and in 3-d, n > 3
So wide coverage can be achieved with a small number of anchors
The condition is in terms of the ζ_i; a direct characterization in terms of y* is needed
That set is an ellipse determined from the x_i
–Via a simple matrix inversion (3x3 or 4x4)
Problem 2
Changing the weights λ_i always moves or removes false stationary points, unless one anchor has the same distance as the sensor from all of the other agents.
Problem 2
Given the x_i, find S_2 such that for every y* in S_2, one can find weights λ_i for which J has a unique stationary point.
Such λ_i can be found if the following condition holds, and λ_i = |ζ_i| guarantees a unique stationary point.
Implication
If y* is in the convex hull of the anchors, then the ζ_i are nonnegative, so the condition always holds.
Further Implication
The choice λ_i = |ζ_i| guarantees a unique stationary point
–It is not the only choice
If y* is close to x_i, then ζ_i is larger, giving a larger λ_i
–This accords with intuition
Direct Characterization
Given the x_i, find S_2 such that for every y* in S_2, one can find weights λ_i for which J has a unique stationary point.
S_2 is a polygon containing the convex hull of the anchors
–Obtained by solving a linear program
Its interior admits many choices of λ_i, with substantial regions over which the same λ_i satisfy the requirement.
Simulation Particulars
Five anchors: [1, 0, 0]^T, [0, 2, 0]^T, [-2, -1, 0]^T, [0, 0, 2]^T, [0, 0, -1]^T
Example 1: y* = 0, in the convex hull
Example 2: y* = [-1, 1, 1]^T, not in the convex hull
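As a quick sanity check of the two convex-hull statements (not part of the talk's algorithm), hull membership can be tested by a small linear program:

```python
import numpy as np
from scipy.optimize import linprog

anchors = np.array([[1.0, 0, 0], [0, 2, 0], [-2, -1, 0], [0, 0, 2], [0, 0, -1]])

def in_convex_hull(y):
    """Feasibility of sum_i w_i x_i = y, sum_i w_i = 1, w_i >= 0."""
    n = len(anchors)
    A_eq = np.vstack([anchors.T, np.ones(n)])
    b_eq = np.append(y, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.success

print(in_convex_hull(np.zeros(3)))               # Example 1: expected True
print(in_convex_hull(np.array([-1.0, 1, 1])))    # Example 2: expected False
```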
Simulation 1 (figure)
Simulation 2 (figure)
Log-Normal Shadowing
RSS: s = A/d^c
–A, c known
–s measured
In the far field:
–ln s = ln A - c ln d + w
–w ~ N(0, σ^2)
–Estimate p = d^m for some m
Reformulation:
–a = m/c, z = (A/s)^a, p = d^m
–ln z = ln p - a w
–z = e^(-aw) p
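For reference, a minimal derivation of the reformulation from the far-field model above:

```latex
\ln z = a\,\ln\frac{A}{s} = a\,(c\ln d - w) = m\ln d - a\,w = \ln p - a\,w,
\qquad z=\Bigl(\frac{A}{s}\Bigr)^{a},\quad p=d^{m},\quad a=\frac{m}{c},
```

so z = e^(-aw) p.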
Efficient Estimation
Cramér-Rao Lower Bound (CRLB)
–Best achievable error variance among unbiased estimators
–Unbiased: mean of the estimate = parameter
An efficient estimator is unbiased and meets the CRLB
Here CRLB = p^2 a^2 σ^2
Does an efficient estimator exist? No:
–ln z = ln p - a w is affine in the Gaussian noise, but not affine in p
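A short check of the CRLB value: since ln z ~ N(ln p, a^2 σ^2), the score and Fisher information for p are

```latex
\frac{\partial \ln f(z\mid p)}{\partial p}=\frac{\ln z-\ln p}{a^{2}\sigma^{2}\,p},
\qquad
I(p)=\mathbb{E}\!\left[\left(\frac{\ln z-\ln p}{a^{2}\sigma^{2}\,p}\right)^{2}\right]
=\frac{1}{a^{2}\sigma^{2}p^{2}},
\qquad
\mathrm{CRLB}=I(p)^{-1}=p^{2}a^{2}\sigma^{2}.
```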
Maximum Likelihood Estimator
ln z = ln p - a w, w ~ N(0, σ^2)
f(z|p) = exp(-(ln z - ln p)^2 / (2 a^2 σ^2)) / ((2π)^(1/2) a σ z)
p_ML = z
Bias: E[z] - p = (exp(a^2 σ^2 / 2) - 1) p
Error variance: (exp(2 a^2 σ^2) - 2 exp(a^2 σ^2 / 2) + 1) p^2
Both grow exponentially with the noise variance
Unbiased Estimator
z = e^(-aw) p, w ~ N(0, σ^2)
Given z, a, and σ, find g(z, a, σ) such that E[g(z, a, σ)] = p for all p
Unique unbiased estimator:
–p_U = exp(-a^2 σ^2 / 2) z
–Linear in z
Error variance: (exp(a^2 σ^2) - 1) p^2
–c.f. the ML error variance (exp(2 a^2 σ^2) - 2 exp(a^2 σ^2 / 2) + 1) p^2
Better, but still grows exponentially with the noise variance
Another Estimate
The unique unbiased estimator is linear in z, so instead find the linear estimator with the smallest error variance:
p_V = exp(-3 a^2 σ^2 / 2) z
Bias: (exp(-a^2 σ^2) - 1) p
Error variance: (1 - exp(-a^2 σ^2)) p^2
Both are bounded in the noise variance (c.f. CRLB = p^2 a^2 σ^2)
MMSE?
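A Monte Carlo check of these bias and error-variance expressions (the values of p, a, and σ below are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p, a, sigma = 2.0, 1.0, 0.5
t = a**2 * sigma**2                       # shorthand for a^2 sigma^2
N = 1_000_000

w = rng.normal(0.0, sigma, N)
z = np.exp(-a * w) * p                    # z = e^(-aw) p

estimators = {
    "ML: z": z,
    "unbiased: exp(-t/2) z": np.exp(-t / 2) * z,
    "min-var linear: exp(-3t/2) z": np.exp(-3 * t / 2) * z,
}
print("CRLB =", p**2 * t)
for name, est in estimators.items():
    bias = est.mean() - p
    err_var = np.mean((est - p) ** 2)
    print(f"{name:30s} bias {bias:+.4f}   error variance {err_var:.4f}")
# Compare with the slide formulas, e.g. (exp(t) - 1) p^2 for the unbiased
# estimator and (1 - exp(-t)) p^2 for the minimum-variance linear one.
```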
Conclusion
Localization from distances:
–Showed that linear algorithms perform poorly
–Proposed a new cost function
–Characterized conditions for exponential convergence
–Implications for anchor/sensor deployment and practical convergence
Estimating distance from RSS under log-normal shadowing:
–Unbiased and ML estimation have large error variance
–Proposed a new estimate whose error variance and bias are bounded in the noise variance