
Near-Optimal Sensor Placements in Gaussian Processes


1 Near-Optimal Sensor Placements in Gaussian Processes
Carlos Guestrin, Andreas Krause, Ajit Singh (Carnegie Mellon University)

2 Sensor placement applications
Monitoring of spatial phenomena: temperature, precipitation, drilling oil wells, ... Also active learning, experimental design, ... The results today are not limited to two dimensions. [Figures: temperature data from a sensor network; precipitation data from the Pacific NW]

3 Deploying sensors
This deployment: evenly distributed sensors. Considered in: computer science (c.f., [Hochbaum & Maass '85]) and spatial statistics (c.f., [Cressie '91]). But what are the optimal placements, i.e., the solution of the combinatorial (non-myopic) optimization? Chicken-and-egg problem: with no data, we have no assumptions about the distribution; without assumptions, we don't know where to place the sensors.

4 Strong assumption – Sensing radius
Each node predicts the values of positions within some sensing radius, so placement becomes a covering problem. The problem is NP-complete, but there are good algorithms with (PTAS) ε-approximation guarantees [Hochbaum & Maass '85]. Unfortunately, this approach is usually not useful: the sensing-radius assumption is wrong on real data! For example…

5 Spatial correlation
Precipitation data from the Pacific NW shows non-local, non-circular correlations: complex positive and negative correlations.

6 Complex, noisy correlations
The sensing “region” is complex and uneven; in fact, what we observe are noisy correlations rather than a sensing region.

7 Combining multiple sources of information
What is the temperature here? Individually, sensors are bad predictors; combined, their information is more reliable. How do we combine information? This is the focus of spatial statistics.

8 Gaussian process (GP) - Intuition
GPs are non-parametric, represent uncertainty, and allow complex correlation functions (kernels). The uncertainty shrinks after observations are made. [Figure: temperature y vs. position x; the model is more sure near observations, less sure far from them]

9 Prediction after observing
Gaussian processes: prediction after observing a set of sensors A. Given the kernel function K(·,·) and readings x_A at the sensor set A, the posterior at an unsensed location y is Gaussian with
posterior mean: μ_{y|A} = Σ_{yA} Σ_{AA}⁻¹ x_A
posterior variance: σ²_{y|A} = K(y,y) − Σ_{yA} Σ_{AA}⁻¹ Σ_{Ay}
where Σ_{yA} = [K(y,u)]_{u∈A} and Σ_{AA} = [K(u,v)]_{u,v∈A}.
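A minimal numpy sketch of this prediction step, assuming a squared-exponential kernel and illustrative variable names (neither is specified on the slide):

```python
import numpy as np

def sq_exp_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D positions (an assumed choice)."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X_A, x_A, X_star, noise=1e-2):
    """Posterior mean/variance at positions X_star given readings x_A at sensor positions X_A."""
    K_AA = sq_exp_kernel(X_A, X_A) + noise * np.eye(len(X_A))   # Sigma_AA (+ sensor noise)
    K_sA = sq_exp_kernel(X_star, X_A)                            # Sigma_{yA}
    K_ss = sq_exp_kernel(X_star, X_star)
    mean = K_sA @ np.linalg.solve(K_AA, x_A)                     # Sigma_{yA} Sigma_AA^{-1} x_A
    sol = np.linalg.solve(K_AA, K_sA.T)                          # Sigma_AA^{-1} Sigma_{Ay}
    var = np.diag(K_ss) - np.sum(K_sA * sol.T, axis=1)           # K(y,y) - Sigma_{yA} Sigma_AA^{-1} Sigma_{Ay}
    return mean, var

# Example: three sensors on a line, predict along the corridor.
X_A = np.array([0.0, 2.0, 5.0])
x_A = np.array([21.0, 22.5, 20.0])      # observed temperatures
X_star = np.linspace(0.0, 6.0, 7)
mu, var = gp_posterior(X_A, x_A, X_star)
```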

10 Gaussian processes for sensor placement
Goal: find the sensor placement with the least uncertainty after observations. [Figures: posterior mean temperature; posterior variance] The problem is still NP-complete, so we need an approximation.

11 Non-myopic placements
Consider myopically selecting A1 as the most uncertain location, then A2 as the most uncertain given A1, ..., then Ak as the most uncertain given A1 ... Ak-1. This can be seen as an attempt to non-myopically maximize H(A1) + H(A2 | {A1}) + ... + H(Ak | {A1 ... Ak-1}), which is exactly the joint entropy H(A) = H({A1 ... Ak}).
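The chain-rule identity this relies on, written out (a standard information-theoretic fact):

```latex
H(\mathcal{A}) = H(\{A_1,\dots,A_k\})
              = H(A_1) + H(A_2 \mid A_1) + \dots + H(A_k \mid A_1,\dots,A_{k-1})
```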

12 Entropy criterion (c.f., [Cressie ’91])
Greedy entropy algorithm: A ← ∅; for i = 1 to k, add to A the location Xi with the highest uncertainty given the current set A (X is different). On the temperature data, the entropy criterion places sensors along the borders, wasting information, as observed by [O'Hagan '78]. [Figures: temperature data placements under the entropy criterion; uncertainty (entropy) plot] The entropy criterion wastes information [O'Hagan '78], is indirect (it doesn't consider the sensing region), and has no formal non-myopic guarantees.
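The greedy entropy rule written out; the closed form for the conditional entropy is the standard differential entropy of a Gaussian, added here for concreteness (it is not shown on the slide):

```latex
X^{*} = \arg\max_{X \in \mathcal{V} \setminus \mathcal{A}} H(X \mid \mathcal{A}),
\qquad
H(X \mid \mathcal{A}) = \tfrac{1}{2}\log\!\bigl(2\pi e\,\sigma^{2}_{X\mid\mathcal{A}}\bigr).
```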

13 Proposed objective function: Mutual information
Given the locations of interest V, find locations A ⊆ V maximizing the mutual information I(A; V\A): the uncertainty of the uninstrumented locations before sensing minus their uncertainty after sensing. Intuitive greedy rule: pick the location X with high uncertainty given A (X is different) and low uncertainty given the rest (X is informative). This is an intuitive criterion, choosing locations that are both different and informative, and we give formal non-myopic guarantees. [Figure: temperature data placements under the entropy and mutual information criteria]
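A sketch of the greedy mutual-information rule: at each step it picks the location maximizing H(X|A) − H(X|Ā), where Ā = V\(A∪{X}), which for Gaussians reduces to a ratio of conditional variances. This is a toy illustration under an assumed squared-exponential covariance; function and variable names are mine, not from the talk.

```python
import numpy as np

def cond_var(K, x, S):
    """Conditional variance of location x given the index set S under covariance K."""
    if len(S) == 0:
        return K[x, x]
    K_SS = K[np.ix_(S, S)]
    K_xS = K[x, S]
    return K[x, x] - K_xS @ np.linalg.solve(K_SS, K_xS)

def greedy_mi_placement(K, k):
    """Greedily place k sensors among the locations covered by covariance K,
    maximizing the mutual information I(A; V\\A)."""
    V = list(range(K.shape[0]))
    A = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for x in V:
            if x in A:
                continue
            A_bar = [v for v in V if v != x and v not in A]
            # Maximizing var(x|A) / var(x|A_bar) maximizes the MI gain
            # H(X|A) - H(X|A_bar) = 0.5 * log(var(x|A) / var(x|A_bar)).
            score = cond_var(K, x, A) / max(cond_var(K, x, A_bar), 1e-12)
            if score > best_score:
                best, best_score = x, score
        A.append(best)
    return A

# Toy example: 30 locations on a line, squared-exponential covariance plus sensor noise.
pos = np.linspace(0, 10, 30)
K = np.exp(-0.5 * (pos[:, None] - pos[None, :]) ** 2) + 1e-2 * np.eye(30)
print(greedy_mi_placement(K, 4))
```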

14 An important observation
Selecting T1 tells us something about T2 and T5; selecting T3 tells us something about T2 and T4. Now adding T2 would not help much. In many cases, new information is worth less if we already know more (diminishing returns)! [Figure: sensors T1 ... T5 along a corridor]

15 Submodular set functions
Submodular set functions are a natural formalism for this idea: f(A ∪ {X}) − f(A) ≥ f(B ∪ {X}) − f(B) for all A ⊆ B. Maximization of submodular functions is NP-hard. But… [Figure: nested sets A ⊆ B and the new element {X}]

16 How can we leverage submodularity?
Theorem [Nemhauser et al. '78]: the greedy algorithm guarantees a (1-1/e) OPT approximation for monotone submodular functions, i.e., ~63% of the optimum.
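Written as an inequality, for the greedy set of size k (the standard statement of the Nemhauser et al. result):

```latex
F(\mathcal{A}_{\text{greedy}}) \;\ge\; \Bigl(1 - \tfrac{1}{e}\Bigr)\max_{|\mathcal{A}| \le k} F(\mathcal{A})
\;\approx\; 0.63\,\mathrm{OPT}.
```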

18 Mutual information and submodularity
Mutual information is submodular: F(A) = I(A; V\A). So we should be able to use Nemhauser et al. But mutual information is not monotone! Initially, adding a sensor increases MI; later, adding sensors decreases it: F(∅) = I(∅; V) = 0, F(V) = I(V; ∅) = 0, and F(A) ≥ 0 in between. So even though MI is submodular, we can't apply Nemhauser et al. Or can we…  [Figure: mutual information vs. number of sensors, rising from A=∅ and falling back to zero at A=V]

19 Approximate monotonicity of mutual information
If H(X|A) − H(X|V\A) ≥ 0, then MI is monotonic; but when H(X|A) << H(X|V\A), MI is not monotonic. Solution: add a grid Z of unobservable locations. If H(X|A) − H(X|Z ∪ V\A) ≥ 0, then MI is monotonic, and for a sufficiently fine Z we have H(X|A) > H(X|Z ∪ V\A) − ε, so MI is approximately monotonic. [Figure: observed set A, remaining locations V\A, the unobservable grid Z, and a candidate location X]

20 Theorem: Mutual information sensor placement
The greedy MI algorithm provides a constant-factor approximation: placing k sensors, for all ε > 0, the result of our algorithm is at least a constant factor (1-1/e) times the optimal non-myopic solution, minus an ε term. Approximate monotonicity holds for a sufficiently fine discretization, poly(1/ε, k, σ, L, M), where σ² is the sensor noise, L the Lipschitz constant of the kernels, and M = max_X K(X,X).
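The guarantee written as an inequality, in the "(1-1/e) OPT-ε" form given on the summary slide (the dependence of the required discretization on ε is as described above):

```latex
\mathrm{MI}(\mathcal{A}_{\text{greedy}}) \;\ge\; \Bigl(1 - \tfrac{1}{e}\Bigr)\,\mathrm{OPT} \;-\; \varepsilon
\qquad \text{for all } \varepsilon > 0.
```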

21 Different costs for different placements
Theorem 1: constant-factor approximation of the optimal locations when selecting k sensors. Theorem 2 (cost-sensitive placements): in practice, different locations may have different costs, e.g., a corridor versus inside a wall. Given a budget B to spend on placing sensors, we get a constant-factor approximation with the same constant (1-1/e), using an algorithm slightly more complicated than the greedy one [Sviridenko / Krause, Guestrin].
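A simplified cost-benefit greedy sketch for the budgeted setting. The slide's algorithm is "slightly more complicated" than this (the stated guarantee needs more than plain cost-benefit greedy), so this is only an illustration of the idea; the gain function, names, and toy values are all assumed.

```python
def budgeted_greedy(gain, V, cost, budget):
    """Cost-benefit greedy: repeatedly add the affordable location with the best
    marginal gain per unit cost. A sketch, not the variant that achieves the
    (1 - 1/e) guarantee from the slide."""
    A, spent = [], 0.0
    while True:
        candidates = [x for x in V if x not in A and spent + cost[x] <= budget]
        if not candidates:
            return A
        best = max(candidates, key=lambda x: gain(A, x) / cost[x])
        if gain(A, best) <= 0:
            return A
        A.append(best)
        spent += cost[best]

# Toy example with a modular gain (each location has a fixed value).
values = {0: 3.0, 1: 2.0, 2: 5.0}   # e.g., corridor locations vs. inside-wall locations
costs = {0: 1.0, 1: 1.0, 2: 4.0}
print(budgeted_greedy(lambda A, x: values[x], list(values), costs, budget=3.0))
```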

22 Mutual information has 3 times less variance than entropy criterion
Deployment results: the model was learned from 54 sensors. [Figures: “true” temperature prediction; “true” temperature variance] We used the initial deployment to select 22 new sensors, then learned a new GP on test data using just these sensors. [Figures: posterior mean and variance under the entropy criterion and the mutual information criterion] Mutual information yields 3 times less variance than the entropy criterion.

23 Comparing to other heuristics
We compare: the greedy algorithm we analyze; random placements; and pairwise exchange (PE), which starts with some placement and swaps locations while the solution improves. Our bound also enables a posteriori analysis for any heuristic: assume an algorithm TUAFSPGP gives results 10% better than those obtained by the greedy algorithm; then we immediately know TUAFSPGP is within 70% of the optimum! [Figure: mutual information achieved by each heuristic; higher is better]
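The arithmetic behind the 70% claim, combining the 10% improvement with the greedy guarantee (ignoring the ε term):

```latex
F(\text{TUAFSPGP}) \;\ge\; 1.1 \cdot F(\mathcal{A}_{\text{greedy}})
\;\ge\; 1.1\,\Bigl(1 - \tfrac{1}{e}\Bigr)\,\mathrm{OPT}
\;\approx\; 0.70\,\mathrm{OPT}.
```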

24 Precipitation data
[Figures: placements on the precipitation data under the entropy criterion and the mutual information criterion; mutual information achieves better values]

25 Computing the greedy rule
At each iteration, for each candidate position i ∈ {1,…,N}, we must compute the greedy score H(X_i|A) − H(X_i|Ā). This requires inverting an N×N matrix, about O(N³), so the total running time for k sensors is O(kN⁴). Polynomial, but very slow in practice. Solution: exploit sparsity in the kernel matrix.

26 Usually, matrix is only almost sparse
Local kernels: the covariance matrix may have many zeros, since each sensor location is correlated with only a small number of other locations. Exploiting locality: if each location is correlated with at most d others, a sparse representation and a priority-queue trick reduce the complexity from O(kN⁴) to only about O(N log N). Usually, however, the matrix is only almost sparse.
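A sketch of the priority-queue idea: because the objective is submodular, marginal gains can only shrink as A grows, so cached gains can sit in a heap and be re-evaluated only when they reach the top (lazy greedy). This is a generic skeleton with an arbitrary gain function, not the full sparse-representation algorithm from the talk.

```python
import heapq

def lazy_greedy(gain, V, k):
    """Lazy greedy selection: keep possibly stale marginal gains in a max-heap and
    recompute a candidate's gain only when it surfaces. Correct because submodularity
    guarantees gains never increase as the selected set A grows."""
    A = []
    # Heap entries: (-gain, candidate, size of A when the gain was computed).
    heap = [(-gain([], x), x, 0) for x in V]
    heapq.heapify(heap)
    while len(A) < k and heap:
        neg_g, x, stamp = heapq.heappop(heap)
        if stamp == len(A):      # gain is up to date for the current A: take it
            A.append(x)
        else:                    # stale: recompute and push back
            heapq.heappush(heap, (-gain(A, x), x, len(A)))
    return A
```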

27 Approximately local kernels
The covariance matrix may have many elements close to zero (e.g., for a Gaussian kernel) while not being sparse. What if we set them to zero? We get a sparse matrix and an approximate solution. Theorem: truncating small entries has a small effect on solution quality. If |K(x,y)| ≤ ε, set it to 0; then the quality of the placements is only O(ε) worse.
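A minimal sketch of the truncation step (the scipy sparse format and the threshold name eps are my choices):

```python
import numpy as np
from scipy.sparse import csr_matrix

def truncate_kernel(K, eps):
    """Zero out near-zero covariances (|K(x,y)| <= eps) and store the result sparsely.
    Per the truncation theorem, placements computed on the result are only O(eps) worse."""
    K_trunc = np.where(np.abs(K) <= eps, 0.0, K)
    return csr_matrix(K_trunc)

# Example: Gaussian kernel on 1-D positions, truncated at eps = 1e-3.
pos = np.linspace(0, 10, 200)
K = np.exp(-0.5 * (pos[:, None] - pos[None, :]) ** 2)
K_sparse = truncate_kernel(K, eps=1e-3)
print(K_sparse.nnz, "nonzeros out of", K.size)
```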

28 Effect of truncated kernels on solution – Rain data
[Figures: improvement in running time; effect on solution quality] Truncation makes the algorithm about 3 times faster with minimal effect on solution quality.

29 Summary
Mutual information criterion for sensor placement in general GPs. Efficient algorithms with strong approximation guarantees: (1-1/e) OPT-ε. Exploiting local structure improves efficiency. Superior prediction accuracy for several real-world problems. Related ideas in discrete settings presented at UAI and IJCAI this year. An effective algorithm for sensor placement and experimental design, and a basis for active learning.

30 A note on maximizing entropy
Entropy is submodular [Ko et al. '95], but… A function F is monotonic iff adding X cannot hurt: F(A ∪ X) ≥ F(A). Remark: entropy in GPs is not monotonic (not even approximately): H(A ∪ X) − H(A) = H(X|A), and as the discretization becomes finer, H(X|A) → −∞. So the Nemhauser et al. analysis for submodular functions is not directly applicable to entropy.
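Why the conditional entropy diverges: for a GP it is a Gaussian differential entropy, and as the discretization becomes finer a nearby sensor pins X down almost exactly, driving the conditional variance, and with it the entropy, down without bound:

```latex
H(X \mid \mathcal{A}) = \tfrac{1}{2}\log\!\bigl(2\pi e\,\sigma^{2}_{X\mid\mathcal{A}}\bigr)
\;\longrightarrow\; -\infty
\quad\text{as}\quad \sigma^{2}_{X\mid\mathcal{A}} \to 0 .
```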

31 How do we predict temperatures at unsensed locations?
What about far-away points? Interpolation? It overfits. [Figure: temperature vs. position]

32 How do we predict temperatures at unsensed locations?
Regression: y = a + bx + cx² + dx³. Few parameters, so less overfitting. But how sure are we about the prediction? The regression function has no notion of uncertainty! [Figure: temperature y vs. position x; we are more sure near observations, less sure far from them]
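A tiny numpy sketch of the cubic fit from the slide; it returns only a point prediction, which is exactly the slide's complaint: there is no accompanying uncertainty estimate (the data values are illustrative).

```python
import numpy as np

# Sensor readings: positions x and temperatures y (illustrative values).
x = np.array([0.0, 1.0, 2.0, 4.0, 5.0])
y = np.array([21.0, 22.5, 22.0, 19.5, 20.0])

# Fit y = a + b*x + c*x^2 + d*x^3 by least squares.
coeffs = np.polyfit(x, y, deg=3)
predict = np.poly1d(coeffs)

# A single number comes back for an unsensed position: no variance, no error bars.
print(predict(3.0))
```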

