Computational Sensing = Modeling + Optimization
CENS seminar, Jan 28, 2005
Miodrag Potkonjak (miodrag@cs.ucla.edu)
Key Contributors: Bradley Bennet, Alberto Cerpa, Jessica Feng, Farinaz Koushanfar, Sasa Slijepcevic, Jennifer L. Wong
Goals
- Why Modeling?
- Why Non-parametric Statistical Modeling?
- Beyond Non-parametric Statistical Modeling?
- Do We Really Need Models?
- Tricks for Fame and Fun
- Applications: Calibration
Why Modeling: No OF, No Results
Location discovery errors: L_1: 1.272 m; L_2: 5.737 m; L: 8.365 m
Gaussian: 0.928 m; statistical error model: 1.662×10⁻³ m
Why Modeling: What to Optimize? Packet size
Why Modeling: What to Optimize? Receiver/Transmitter quality
Why Modeling: Localized vs. Centralized
Reception rate predictability
Why Modeling: Optimization Mechanism One unknown node Two unknown nodes Atomic localization
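Atomic localization with one unknown node can be sketched as a small least-squares problem: given beacons at known positions and measured distances to them, estimate the unknown node's position. The L2 objective, the Gauss-Newton solver, and all names and coordinates below are illustrative assumptions, not the deck's actual method.

```python
# Atomic localization sketch: place one unknown node by minimizing the
# mismatch between measured beacon distances and distances implied by a
# candidate position (Gauss-Newton on an L2 objective; all values are toys).
import numpy as np

def atomic_localize(beacons, measured, iters=50):
    """Estimate one unknown node's 2-D position from beacon distances."""
    beacons = np.asarray(beacons, dtype=float)
    measured = np.asarray(measured, dtype=float)
    pos = beacons.mean(axis=0)                  # start at the beacon centroid
    for _ in range(iters):
        diff = pos - beacons                    # (n, 2) offsets to beacons
        dist = np.linalg.norm(diff, axis=1)     # predicted distances
        r = dist - measured                     # residuals
        J = diff / dist[:, None]                # Jacobian of distance w.r.t. pos
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        pos = pos - step                        # Gauss-Newton update
    return pos

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
measured = [np.linalg.norm(true_pos - b) for b in np.asarray(beacons)]
est = atomic_localize(beacons, measured)
```

With two unknown nodes, the same idea extends to a joint least-squares problem over both positions, including the measured inter-node distance.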
Why Modeling: What paradigm to use? Maximum Likelihood: Distance measurements correlation
Why Modeling: Protocol Design Lagged autocorrelation
Why Modeling: Executive Summary
- Objective Function and Constraints: What to Optimize? - Consistency
- Problem Identification/Formulation - Variability
- Localized vs. Centralized - Time variability
- Optimization Mechanism - Topology of solution space: monotonicity, convexity
- Optimization Paradigm - Correlations
- Design of Protocols - High-impact features first
How to Model?
- Most likely value: regression
- Probability of a given value of the target variable for a predictor variable
- Validation, evaluation
- Parametric and non-parametric
- Exploratory and confirmatory
Model Construction: Samples of Techniques
- Independent of Distance (ID)
- Normalized Distance (ND)
- Kernel Smoothing (KS)
- Recursive Linear Regression (LR)
- Data Partitioning (DP)
Independent of Distance (ID)
Normalized Distance (ND)
Kernel Smoothing (KS)
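One common form of kernel smoothing is the Nadaraya-Watson estimator: predict at a query point by a kernel-weighted average of observed responses. The Gaussian kernel, bandwidth, and toy data below are illustrative assumptions, not the deck's specific KS model.

```python
# Kernel smoothing (KS) sketch: Nadaraya-Watson regression with a Gaussian
# kernel; bandwidth h and the toy data are illustrative assumptions.
import numpy as np

def kernel_smooth(x_train, y_train, x_query, h=1.0):
    """Predict E[Y | X = x] as a kernel-weighted average of observed y."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    out = []
    for x in np.atleast_1d(x_query):
        w = np.exp(-0.5 * ((x - x_train) / h) ** 2)  # Gaussian weights
        out.append(np.sum(w * y_train) / np.sum(w))  # weighted average
    return np.array(out)

x = np.linspace(0, 10, 101)
y = 2.0 * x + 1.0
yhat = kernel_smooth(x, y, [5.0], h=0.5)
```

The bandwidth h controls the bias/variance trade-off: small h tracks local structure, large h approaches the global mean.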
Recursive Linear Regression (LR)
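Recursive linear regression can be sketched with the standard recursive least squares (RLS) update, which refines the coefficient estimate one sample at a time and so suits streaming sensor data. The model form y = a·x + b, the prior scale, and the toy stream are illustrative assumptions.

```python
# Recursive linear regression (LR) sketch: recursive least squares (RLS)
# updates theta one sample at a time; model form and data are toy assumptions.
import numpy as np

def rls_fit(xs, ys, lam=1e6):
    """Recursive least squares for y ~ theta[0]*x + theta[1]."""
    theta = np.zeros(2)
    P = lam * np.eye(2)                      # large P = weak prior on theta
    for x, y in zip(xs, ys):
        phi = np.array([x, 1.0])             # regressor [x, 1]
        k = P @ phi / (1.0 + phi @ P @ phi)  # gain vector
        theta = theta + k * (y - phi @ theta)
        P = P - np.outer(k, phi @ P)         # covariance update
    return theta

xs = np.arange(20.0)
ys = 3.0 * xs + 2.0
theta = rls_fit(xs, ys)
```

A forgetting factor can be added to the P update to down-weight old samples when the underlying relationship drifts over time.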
Data Partitioning (DP)
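Data partitioning can be sketched as splitting the predictor range into segments and fitting a simple local model in each. The equal-width partition, per-segment linear fits, and toy data below are illustrative assumptions, not the deck's specific DP scheme.

```python
# Data partitioning (DP) sketch: split the predictor range into segments and
# fit a least-squares line per segment; partitioning rule and data are toys.
import numpy as np

def partitioned_fit(x, y, n_parts=4):
    """Return (edges, coefs): one (slope, intercept) pair per partition."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    edges = np.linspace(x.min(), x.max(), n_parts + 1)
    coefs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (x >= lo) & (x <= hi)            # points in this segment
        coefs.append(np.polyfit(x[m], y[m], 1))
    return edges, coefs

x = np.linspace(0, 8, 161)
y = np.abs(x - 4.0)                          # piecewise linear: slopes -1, +1
edges, coefs = partitioned_fit(x, y, n_parts=2)
```

With two partitions the fit recovers the two underlying slopes; in practice the partition boundaries themselves can be chosen by cross-validation.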
Statistical Evaluation of Models
Statistical Evaluation of OFs
Location Discovery: Experimental Results
Location Discovery: Performance Comparison
APS - D. Niculescu and B. Nath. Ad Hoc Positioning System (APS). GLOBECOM, 2001.
N-HOP - A. Savvides, C. Han, and M. B. Srivastava. Dynamic Fine-Grained Localization in Ad-Hoc Networks of Sensors. MOBICOM, pages 166-179, 2001.
ROBUST - C. Savarese, K. Langendoen, and J. Rabaey. Robust Positioning Algorithms for Distributed Ad-Hoc Wireless Sensor Networks. WSNA, pages 112-121, 2002.
K. Langendoen and N. Reijers. Distributed Localization in Wireless Sensor Networks: A Quantitative Comparison. Tech. Rep. PDS-2002-003, Delft University of Technology, 2002.
Combinatorial Isotonic Regression (CIR)
Statistical models using combinatorics; the hidden covariate problem
Univariate CIR - problem formulation:
- Given data (x_i, y_i, w_i), i = 1, ..., K, an error measure L_p, and x_1 < x_2 < ... < x_K
- The L_p isotonic regression is the set (x_i, ŷ_i), i = 1, ..., K, such that
- Objective function: minimize L_p(x_i, ŷ_i, w_i)
- Constraint: ŷ_1 ≤ ŷ_2 ≤ ... ≤ ŷ_K
Univariate CIR Approach
Step 1: From the histogram, build the error matrix E, with
e_ij = L_p(x_i, ŷ_j) = Σ_k |ŷ_j - y_k| · h_ik
where h_ik is the histogram count of observations with X = x_i and Y = y_k.
[The slide shows a 5×5 example histogram and the resulting error matrix.]
Univariate CIR Approach (continued)
Step 2: Build the cumulative error matrix CE:
CE(x_i, y_j) = E(x_i, y_j) + min_{k ≤ j} CE(x_{i-1}, y_k)
[The slide shows the error matrix and the resulting cumulative error matrix.]
Univariate CIR Approach (continued)
Step 3: Map the problem to a graph; the formulation becomes combinatorial, and the optimal monotone fit is a minimum-cost path through the cumulative error matrix.
[The slide shows the cumulative error matrix and the corresponding error graph.]
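The slides' error-matrix dynamic program can be sketched directly: build the L1 error matrix from the histogram, accumulate a cumulative error matrix with a running minimum over predecessor levels, and trace back the minimum-error monotone fit. The L1 error measure and the toy histogram are illustrative assumptions.

```python
# Univariate CIR sketch: error matrix E from the histogram, cumulative matrix
# CE via a running minimum over levels k <= j, then a monotone traceback.
# L1 cost and the toy histogram are illustrative assumptions.
import numpy as np

def cir_fit(hist, y_levels):
    """hist[i, k] = count of observations with X = x_i and Y = y_levels[k]."""
    hist = np.asarray(hist, dtype=float)
    y = np.asarray(y_levels, dtype=float)
    n, m = hist.shape
    # E[i, j]: L1 cost of predicting level y[j] at x_i
    E = np.array([[np.sum(np.abs(y[j] - y) * hist[i]) for j in range(m)]
                  for i in range(n)])
    CE = np.zeros_like(E)
    CE[0] = E[0]
    for i in range(1, n):
        CE[i] = E[i] + np.minimum.accumulate(CE[i - 1])  # min over k <= j
    # Traceback: best final level, then best feasible (<=) predecessor level.
    fit = np.empty(n, dtype=int)
    fit[-1] = int(np.argmin(CE[-1]))
    for i in range(n - 2, -1, -1):
        fit[i] = int(np.argmin(CE[i][: fit[i + 1] + 1]))
    return y[fit]

hist = [[5, 1, 0], [1, 5, 1], [0, 1, 5]]   # mass shifts upward with x
fit = cir_fit(hist, [0.0, 1.0, 2.0])
```

The returned fit is nondecreasing by construction, which is exactly the isotonicity constraint of the formulation.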
Multivariate CIR Approach - ILP
Given a response variable Y and two explanatory variables X_1, X_2; build a 3-D error matrix E.
Let the binary variable s_ijk = 1 if ŷ_k is the predicted value for X_1 = x_i and X_2 = x_j, and s_ijk = 0 otherwise.
Objective function: minimize Σ_{i,j,k} e_ijk · s_ijk
Constraints:
- C1: exactly one ŷ for each pair X_1 = x_i, X_2 = x_j (Σ_k s_ijk = 1)
- C2: ordering (isotonicity) constraints along each explanatory variable
CIR Prediction Error on Temperature Sensors at Intel Berkeley
- Prediction error over all nodes
- Limiting the number of parameters: AIC criterion
Combinatorial Regression: Flavors
- Minimal and maximal slope
- Number of segments
- Convexity
- Unimodality
- Local monotonicity
- Symmetry: y = f(x), x = g(y) ⇒ x = g(f(x))
- Transitivity: y = f(x), z = g(y), z = h(x) ⇒ h(x) = g(f(x))
Combinatorial Regression: Symmetry
Time-Dependent Models
Do We Really Need Models?
Modeling Without Modeling: Consistency
For inputs x_1, x_2: f(x_1) > f(x_2)
On-line Model Construction
Statistics for Sensor Networks: Executive Summary
- Large-scale, time-dependent modeling
- Hidden covariates: monotonicity, convexity, ...
- Go to discrete and graph domains
- Interaction: data collection - modeling
- Properties of networks
- Simulators
Tricks - Modeling and Sensor Fusion
- Hide Nodes
- Split Nodes
- Weight Nodes
- Additional Dimensions
- Additional Sources
Hiding Beacons
Splitting Nodes
Modeling Networks for Fame & Fun
Perfect Neighbors
Applications
- Calibration
- Location Discovery
- Data Integrity
- Sensor Network Compression
- Sensor Network Management
- Low-Power Wireless Ad-hoc Networks: Lossy Links
Calibration: error measures
- Minimal maximal error
- Minimal average (L_1) error: median
- Minimal L_2 error: average
- Most likely value
[The slide shows the error PDF with the LL and ML estimates marked.]
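The pairing of estimators and error measures on this slide can be checked numerically: over a sample of calibration errors, the median minimizes the average absolute (L1) error and the mean minimizes the squared (L2) error. The toy error sample and grid search below are illustrative.

```python
# Calibration sketch: over a toy error sample, a grid search confirms the
# median minimizes average L1 error and the mean minimizes L2 error.
import numpy as np

errors = np.array([0.1, 0.2, 0.2, 0.9, 3.0])   # skewed calibration errors

def avg_abs(c):
    """Average L1 cost of applying the constant correction c."""
    return np.mean(np.abs(errors - c))

def avg_sq(c):
    """Average L2 cost of applying the constant correction c."""
    return np.mean((errors - c) ** 2)

cands = np.linspace(0, 3, 3001)
best_l1 = cands[np.argmin([avg_abs(c) for c in cands])]
best_l2 = cands[np.argmin([avg_sq(c) for c in cands])]
```

With skewed errors the two estimators differ markedly, which is why the choice of error measure matters for calibration.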
Calibration Model for Light
Interval of Confidence
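An interval of confidence for a calibration quantity can be formed without distributional assumptions via a bootstrap percentile interval, in keeping with the deck's non-parametric theme. The toy data, resample count, 95% level, and the connection to this particular slide are all assumptions.

```python
# Interval-of-confidence sketch: a nonparametric bootstrap percentile
# interval for a mean calibration offset; data and settings are toys.
import numpy as np

rng = np.random.default_rng(0)
offsets = rng.normal(loc=0.5, scale=0.2, size=200)   # toy calibration offsets

boot_means = np.array([
    rng.choice(offsets, size=offsets.size, replace=True).mean()
    for _ in range(2000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])      # 95% percentile interval
```

The interval narrows roughly with the square root of the sample size, so it also indicates how many measurements a calibration procedure needs.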
Summary: Recipe for SN Research
- Collect Data
- Model Data
- Understand Data
- ...