Slide 1: Cold-Start Heterogeneous-Device Wireless Localization
Vincent Zheng, Advanced Digital Sciences Center, Singapore. April 29, 2016. Joint work with Hong Cao, Shenghua Gao, Aditi Adhikari, Miao Lin, and Kevin Chen-Chuan Chang.
Slide 2: Signal-strength-based Localization

Offline Training (build the localization model):
A classification function f: X -> Y serves as the localization model, where X = (AP1, AP2, AP3) is a vector of received signal strengths and Y is the set of locations.

    X = (AP1, AP2, AP3)    Y = location
    (-30, -50, -70) dBm    y1
    (-40, -45, -75) dBm    y2
    ...                    ...

Online Testing (predict the location for a device):

    X = (AP1, AP2, AP3)    Y
    (-33, -47, -73) dBm    ?
    ...                    ...
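The offline/online split above can be sketched as follows. This is a minimal illustration with the hypothetical fingerprints from the slide; it uses 1-nearest-neighbor matching over RSS vectors as a stand-in for the learned classifier f: X -> Y, not the paper's actual model.

```python
import math

# Offline training: RSS fingerprints (AP1, AP2, AP3) collected per location
# (values taken from the slide's example table).
fingerprints = {
    "y1": (-30.0, -50.0, -70.0),
    "y2": (-40.0, -45.0, -75.0),
}

def localize(x):
    """Predict the location whose stored fingerprint is closest to observation x."""
    return min(fingerprints, key=lambda y: math.dist(fingerprints[y], x))

# Online testing: the slide's query reading is nearest to y1's fingerprint.
print(localize((-33.0, -47.0, -73.0)))  # -> y1
```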
Slide 3: Challenges

- Heterogeneous devices
- Cold start
- Traditional methods assume the test data has the same distribution as the training data, i.e., the same (or at least a very similar) device is used in training and testing. In practice, devices are heterogeneous, which hurts localization performance: the same location y produces different readings x on different devices.
- Cold start: there is no calibration data on the new device.
Slide 4: Problem Formulation

Transfer learning in an extreme setting: no training data in the target domain.

Training on the Surveyor Device (S):

    X = (AP1, AP2, AP3)    Y = location
    (-30, -50, -70) dBm    y1
    (-40, -45, -75) dBm    y2
    ...                    ...

Testing on the Target Device (T):

    X = (AP1, AP2, AP3)    Y = location
    (-36, -58, -79) dBm    ?
    (-34, -43, -65) dBm    ?
    ...                    ...

Approach: robust feature learning, so that 1) the same y gives the same x across devices, and 2) a localization model can be built on the robust features.
Slide 5: Previous Work

Prior methods (and ours) are compared by the training data they require: source-domain labeled data, target-domain labeled data, and target-domain unlabeled data.

- LFT (Haeberlen et al. 2004)
- KDEFT (Park et al. 2011)
- LatentMTL (Zheng et al. 2008)
- ULFT (Tsui et al. 2009)
- KLIEP (Sugiyama et al. 2008)
- KMM (Huang et al. 2007)
- SKM (Zhang et al. 2013)
- HLF (Kjaergaard & Munk 2008)
- ECL (Yedavalli et al. 2005)
- Ours
Slide 6: Our Insights: Robust Features

- Pairwise RSS comparison is robust across devices (e.g., AP1 - AP2 > 0); this phenomenon can be explained by radio propagation theory.
- However, a single pairwise comparison is not discriminative (e.g., it may fail to distinguish location y1 from y2).
- High-Order Pairwise (HOP) features are both robust and discriminative (e.g., AP3 - AP1 > AP1 - AP2).
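The robustness intuition above can be checked numerically. In this sketch, a hypothetical target device reads every AP a fixed 6 dBm lower than the surveyor device; the raw readings differ, but the signs of the pairwise and high-order pairwise comparisons are unchanged.

```python
def pairwise_signs(x):
    """Signs of all pairwise RSS comparisons AP_i - AP_j > 0."""
    n = len(x)
    return [x[i] - x[j] > 0 for i in range(n) for j in range(i + 1, n)]

def hop_sign(x):
    """One high-order pairwise (HOP) comparison from the slide: AP3 - AP1 > AP1 - AP2."""
    return (x[2] - x[0]) > (x[0] - x[1])

x_S = (-30.0, -50.0, -70.0)        # surveyor device at some location
x_T = tuple(v - 6.0 for v in x_S)  # same location, offset-shifted target device

# The additive device offset cancels inside every difference, so the
# comparison-based features agree even though the raw readings do not.
assert x_S != x_T
assert pairwise_signs(x_S) == pairwise_signs(x_T)
assert hop_sign(x_S) == hop_sign(x_T)
```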
Slide 7: Formulating HOP Features

- A pairwise comparison encodes the relative closeness of two APs to a location.
- Multiple pairwise comparisons are combined linearly.
- Randomness in signal-value variation is allowed for.
- An HOP feature therefore corresponds to a pair ({c_{k1,k2}}, b); Eq. (4) can be rewritten as a linear mapping from the numeric input x to a binary output h.
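The formulation above (a linear combination of pairwise differences with bias b, made probabilistic to absorb signal noise) can be sketched as below. The sigmoid activation and the example coefficients are illustrative assumptions, not the paper's exact Eq. (4).

```python
import math

def hop_feature(x, c, b):
    """Probability that the binary HOP feature h fires, modeled as a sigmoid
    over a linear combination of pairwise RSS differences plus a bias b.
    The coefficients c[(k1, k2)] and b here are hypothetical placeholders."""
    s = sum(ck * (x[k1] - x[k2]) for (k1, k2), ck in c.items()) + b
    return 1.0 / (1.0 + math.exp(-s))

# The slide's example  AP3 - AP1 > AP1 - AP2  rewritten as
# (AP3 - AP1) - (AP1 - AP2) > 0, i.e. coefficients +1 and -1.
c = {(2, 0): 1.0, (0, 1): -1.0}
p = hop_feature((-30.0, -50.0, -70.0), c, b=0.0)
# Here (AP3 - AP1) = -40 and (AP1 - AP2) = 20, so the comparison is far
# from firing and p is close to 0.
```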
Slide 8: Learning HOP Features: Robust-Feature-Constrained Restricted Boltzmann Machine

- A Gaussian-Bernoulli RBM is used as the feature-learning framework, and we extend it to incorporate the HOP constraint.
- Finally, feature learning and localization-model training are combined.
- P(x^(k)) is the likelihood that observation x^(k) is generated by the set of latent variables h_j.
- Objective L1 finds the robust HOP features; objective L2 ensures the HOP features are discriminative.
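For orientation, the hidden-unit inference step of a standard Gaussian-Bernoulli RBM can be sketched as follows: P(h_j = 1 | x) = sigmoid(c_j + sum_i W_ij x_i / sigma_i). This is the generic framework only; the parameters below are random placeholders, and the paper's HOP constraint and joint training objective are not reproduced here.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def hidden_probs(x, W, c, sigma):
    """Gaussian-Bernoulli RBM hidden activations:
    P(h_j = 1 | x) = sigmoid(c_j + sum_i W_ij * x_i / sigma_i).
    W, c, sigma here are illustrative, not learned parameters."""
    return [sigmoid(c[j] + sum(W[i][j] * x[i] / sigma[i] for i in range(len(x))))
            for j in range(len(c))]

random.seed(0)
x = (-30.0, -50.0, -70.0)                                  # one RSS observation
W = [[random.gauss(0, 0.01) for _ in range(4)] for _ in range(3)]  # 3 visible x 4 hidden
c = [0.0] * 4                                              # hidden biases
sigma = [1.0] * 3                                          # per-visible std. dev.
h = hidden_probs(x, W, c, sigma)                           # 4 activation probabilities
```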
Slide 9: Experimental Setup

- Public data sets: 2 environments, 6 heterogeneous devices, 4 pairs of surveyor-target (S-T) devices.
- Baselines:
  - Ignoring heterogeneity: SVM.
  - Transfer learning to handle heterogeneity: HLF (heuristic pairwise AP value ratio, limited to 2nd order) and ECL (pairwise AP value ranking, still limited to 2nd order).
- Evaluation metric: accuracy w.r.t. error distance. A prediction is "correct" if it is within K meters of the ground-truth location.
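The accuracy-within-K-meters metric is straightforward to compute; a small sketch with made-up 2-D coordinates:

```python
import math

def accuracy_within(preds, truths, K):
    """Fraction of predicted locations within K meters of the ground truth."""
    hits = sum(1 for p, t in zip(preds, truths) if math.dist(p, t) <= K)
    return hits / len(preds)

# Hypothetical predictions vs. ground-truth coordinates (in meters).
preds  = [(0.0, 0.0), (5.0, 0.0), (1.0, 1.0)]
truths = [(1.0, 0.0), (0.0, 0.0), (1.0, 1.0)]
acc = accuracy_within(preds, truths, K=2.0)  # 2 of 3 are within 2 m
```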
Slide 10: Results

Our model achieves a 23.1%-91.3% relative accuracy improvement over the best state-of-the-art baseline.