VII-1 Part VII: Foundations of Sensor Network Design Ack: Aman Kansal Copyright © 2005.

VII-2 The basic design problem [Figure: a phenomenon of interest observed by satellites, UAVs, robots, static nodes, and passive tags connected into a network]

VII-3 The basic design problem Design a network to extract desired information about a given discrete/continuous phenomenon with high fidelity, low energy, and low system cost Questions: –Where should the data be collected? –What data, and how much, should be collected? –How should the information be communicated? –How should the information be processed? –How can the network configuration be adapted in situ? No single tool answers all of these questions

VII-4 Multiple Tools for Specific Instances Problem broken down into smaller parts –Coverage and deployment: Geometric optimization –Data fusion: Information theory, estimation theory, Bayesian methods –Communication: network information theory, geometric methods, Bernoulli random graph properties –Energy performance: Network flow analysis, linear programming –Configuration adaptation with mobile nodes: adaptive sampling theory, information gain Not a comprehensive list

VII-5 Communication

VII-6 Network Capacity and Node Density A simple model for a wireless network: –Disk of area A –N nodes –Each node can transmit at W bps Transmission range: disk of radius r –r depends on transmit power –Interference range: (1+d)r Communication is successful if –Receiver is within r of its transmitter –No other interfering transmitter T′ with range r′ is within (1+d)r′ of the receiver [Figure: transmitter T with range r; interferer T′ with interference range (1+d)r′] ACK: Slide adapted from PR Kumar’s plenary talk at Sensys 2004. [Gupta and Kumar 2000]

VII-7 How many simultaneous transmissions? N nodes in area A ACK: Slide adapted from PR Kumar’s plenary talk at Sensys 2004.

VII-8 Maximum simultaneous transmissions [Figure: transmitters T1 and T2; shaded circles represent their interference ranges (1+d)r1 and (1+d)r2] ACK: Slide adapted from PR Kumar’s plenary talk at Sensys 2004.

VII-9 Maximum simultaneous transmissions Receiver R1 must lie outside the interference range of T2: |T2R1| ≥ (1+d)r2, while |T2R2| ≤ r2, so |R1R2| ≥ d·r2 ACK: Slide adapted from PR Kumar’s plenary talk at Sensys 2004.

VII-10 Maximum simultaneous transmissions Symmetrically, R2 must lie outside the interference range of T1: |T1R2| ≥ (1+d)r1, while |T1R1| ≤ r1, so |R1R2| ≥ d·r1 ACK: Slide adapted from PR Kumar’s plenary talk at Sensys 2004.

VII-11 Maximum simultaneous transmissions Add the two inequalities |R1R2| ≥ d·r1 and |R1R2| ≥ d·r2 to get |R1R2| ≥ d(r1 + r2)/2 ACK: Slide adapted from PR Kumar’s plenary talk at Sensys 2004.

VII-12 Maximum simultaneous transmissions At receiver R1, the disk of radius d·r1/2 is disjoint from the disk of radius d·r2/2 around any other reception R2 ACK: Slide adapted from PR Kumar’s plenary talk at Sensys 2004.

VII-13 Maximum simultaneous transmissions The disjoint disks of radius d·r_i/2 around the receivers each have at least a quarter of their area inside A: Σ_{i=1}^{N/2} (1/4)·π(d·r_i/2)² ≤ A ACK: Slide adapted from PR Kumar’s plenary talk at Sensys 2004.

VII-14 Maximum simultaneous transmissions Σ_{i=1}^{N/2} (1/4)·π(d·r_i/2)² ≤ A Rewrite: Σ_{i=1}^{N/2} r_i² ≤ 16A/(π d²) Convexity: f(x1)/2 + f(x2)/2 ≥ f(x1/2 + x2/2), and x² is a convex function, so [(2/N)·Σ_{i=1}^{N/2} r_i]² ≤ (2/N)·Σ_{i=1}^{N/2} r_i² ≤ 32A/(π N d²), giving Σ_{i=1}^{N/2} r_i ≤ √(8AN/(π d²)) ACK: Slide adapted from PR Kumar’s plenary talk at Sensys 2004.

VII-15 Transport Capacity Transport capacity per transmission is W·r_i (bit-meters/second) Total network transport capacity: Σ_{i=1}^{N/2} W·r_i ≤ W·[8AN/(π d²)]^{1/2} Per-node transport capacity: O(√(1/N)) It can be proved that this is achievable as well –Requires multi-hop operation ACK: Slide adapted from PR Kumar’s plenary talk at Sensys 2004.
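The scaling above can be checked numerically; a minimal sketch (the parameter values W, A, and d below are illustrative assumptions, not values from the slides):

```python
import math

def per_node_transport_capacity(W, A, N, d):
    """Per-node share of the transport-capacity upper bound:
    total <= W * sqrt(8*A*N / (pi*d^2)) bit-m/s, divided by N nodes,
    so the per-node value decays like 1/sqrt(N)."""
    total = W * math.sqrt(8 * A * N / (math.pi * d ** 2))
    return total / N

# Illustrative: W = 1 Mbps, A = 1 km^2, interference parameter d = 0.1.
for N in (100, 400, 1600):
    print(N, per_node_transport_capacity(1e6, 1e6, N, 0.1))
```

Quadrupling N halves the per-node bound, which is the O(√(1/N)) behavior the slide refers to.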

VII-16 Design Implications A large network with an arbitrary communication pattern is NOT scalable (per-node capacity is diminishing) –Multi-hop operation is more efficient How to build a large network? 1. Use multi-user decoding: interference is also information! Active area of research [Figure: transmitters T1 and T2 sending W1 and W2 to a common receiver R] 2. Transmit only a relevant function of the data: in-network processing 3. Exploit correlation in data at multiple transmitters 4. Large systems use hierarchy: add infrastructure

VII-17 In-Network Processing Random multi-hop network, one fusion node –a certain class of functions (mean, mode, std. deviation, max, k-th largest, etc.) can be extracted at rate O(1/log n) –Exponentially more capacity than without in-network processing, which gives O(1/n) Network protocol design –Tessellate –Fuse locally –Compute along a rooted tree of cells Fig. Ack: PR Kumar [Giridhar and Kumar 2005]

VII-18 Routing: Exploit Data Correlation Each sensor generates a measurement of data size R Due to correlation, the extra information at an additional sensor has a smaller data size r Fig. Ack: Huiyu Luo [panels (a), (b), (c)] [Luo and Pottie 2005]

VII-19 Data Fusion

VII-20 Fusing multiple sensors helps Camera 1 View of target in image Estimated target location

VII-21 Fusing multiple sensors helps Camera 2 Camera 1

VII-22 Fusing multiple sensors helps Camera 1 Camera 2 Camera 3 How much do additional sensors help?

VII-23 Resort to Information Theory Model the information content of each sensor Measure the combined information content of multiple sensors, assuming the best possible fusion methods are used –E.g., compute the distortion achieved for a given data size Can then determine whether the improvement in distortion is worth the extra sensors

VII-24 Measuring Information Information depends on randomness Two balls: P(red) = 0.5, P(blue) = 0.5 A message identifying the chosen ball carries 1 bit of information Encode: red ball chosen = 0, blue ball chosen = 1 Or encode: red ball chosen = RED, blue ball chosen = BLU Still only 1 bit of information

VII-25 Measuring Information Four balls: P(red) = 0.25, P(blue) = 0.25, P(green) = 0.25, P(yellow) = 0.25 Which ball is chosen = 2 bits of information Encode: red = 00, blue = 01, green = 10, yellow = 11

VII-26 Measuring Information Four balls, but unequal probabilities: P(red) = 0.5, P(blue) = 0.25, P(green) = 0.125, P(yellow) = 0.125 Which ball is chosen = 1.75 bits of information Save bits on the more likely cases: red = 0, blue = 10, green = 111, yellow = 110 Average number of bits to communicate: 0.5·1 + 0.25·2 + 0.125·3 + 0.125·3 = 1.75 (averaged over many draws)

VII-27 Measuring Information Number of bits is measured by ENTROPY: H = −Σ p_i·log₂(p_i) Average number of bits to communicate: 0.5·1 + 0.25·2 + 0.125·3 + 0.125·3 = 1.75 (averaged over many draws) Entropy: −[0.5·log₂(0.5) + 0.25·log₂(0.25) + 0.125·log₂(0.125) + 0.125·log₂(0.125)] = 1.75
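The entropy formula on this slide is easy to verify numerically; a minimal sketch:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum p_i * log2(p_i),
    with the convention 0 * log2(0) = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four balls with the unequal probabilities from the slide:
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits
```

The result matches the 1.75-bit average code length of the hand-built code above.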

VII-28 Example: English alphabet Ignoring case, we have 27 characters –Naively: need 5 bits to represent a character, since 2⁴ = 16 < 27 < 2⁵ = 32 Alternatively: measure the probabilities (exploiting context between letters) –the entropy comes to roughly H ≈ 1.5 bits per character Efficient methods to automatically learn the probabilities and compress a file are available –E.g., zip programs Fig Ack: Mackay 2003

VII-29 Conditional Entropy What if only incomplete information is available? Let P(red) = 0.25, P(blue) = 0.25, P(green) = 0.25, P(yellow) = 0.25 Given Box 1 is chosen: P(red|B1) = 0.5, P(blue|B1) = 0.5, P(green|B1) = 0, P(yellow|B1) = 0 [Figure: Box 1 and Box 2] Now, which ball is chosen = 1 bit of information

VII-30 Conditional Entropy Suppose variable y has partial information about x H(x|y=a) = −Σ p(x_i|y=a)·log₂(p(x_i|y=a)) Given Box 1 is chosen: P(red|B1) = 0.5, P(blue|B1) = 0.5, P(green|B1) = 0, P(yellow|B1) = 0 H(color|box=B1) = −[0.5·log₂(0.5) + 0.5·log₂(0.5)] = 1 (taking 0·log₂(0) = 0) [Figure: Box 1 and Box 2]

VII-31 Conditional Entropy Entropy can also be defined under conditional probabilities –Take the weighted average with the probabilities of y H(x|y) = Σ_a p(y=a)·H(x|y=a) With P(box=B1) = 0.75, P(box=B2) = 0.25: H(color|box) = 0.75·H(color|box=B1) + 0.25·H(color|box=B2) = 0.75·1 + 0.25·1 = 1 Conditional entropy = 1 bit of information [Figure: Box 1 and Box 2]
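The weighted-average computation can be sketched directly, reusing the box example (each box holds two equally likely colors):

```python
import math

def entropy(probs):
    """Shannon entropy in bits, with 0 * log2(0) = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def conditional_entropy(p_y, p_x_given_y):
    """H(X|Y) = sum_a p(y=a) * H(X | y=a)."""
    return sum(py * entropy(px) for py, px in zip(p_y, p_x_given_y))

# P(B1) = 0.75, P(B2) = 0.25; each box has two equally likely colors,
# so H(color|box) = 0.75*1 + 0.25*1 = 1 bit.
h = conditional_entropy([0.75, 0.25],
                        [[0.5, 0.5, 0.0, 0.0], [0.0, 0.0, 0.5, 0.5]])
print(h)  # 1.0
```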

VII-32 Example: Unreliable hard disks Conditional entropy helps model noise or channel error Given a hard disk with bit-error probability e [Figure: binary symmetric channel, bit preserved with probability 1−e, flipped with probability e] Given e = 0.1 Suppose we want a more reliable HDD Ack: Example adapted from Mackay 2003

VII-33 Example: Unreliable hard disks Two options: –Technology approach: invent better storage, use more expensive components –Systems approach: build a reliable system around the unreliable HDD Simple strategy (Redundant Array): store every bit 3 times – 000, 111 –if one bit is corrupted, a majority vote can still fix it –Error probability: 3·e²·(1−e) + e³ = 0.028 for e = 0.1 (2 bits in error, 3 possible ways, or all 3 bits in error)
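The error probability of the 3-copy majority vote can be checked directly:

```python
def triple_repetition_error(e):
    """Majority vote over 3 copies fails when exactly 2 bits are
    flipped (3 possible ways) or all 3 bits are flipped."""
    return 3 * e ** 2 * (1 - e) + e ** 3

print(triple_repetition_error(0.1))  # ≈ 0.028, the 2.8% on the next slide
```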

VII-34 Example: Unreliable hard disks (Really Redundant Array) Not happy with 2.8% error –Want a far smaller error probability –Using the simple repetition strategy, that needs about 60 disks Not good for embedded design! Alternative: use conditional entropy analysis

VII-35 Example: Unreliable hard disks (Not So Redundant Array) Recovered data has errors –It has partial information about the original data Denote –Y = recovered data –X = original data The HDD communicates: –Information in X minus the information still left in X given Y –H(X) − H(X|Y) Suppose the source is P(red) = P(blue) = 0.5, so H(X) = 1 bit From the error model we know P(X|Y), which gives H(X|Y) = 0.47 –The HDD returns 0.53 bits for each bit stored Two HDDs suffice! (can store 1.06 bits > 1 bit)

VII-36 Example: Unreliable hard disks (contd.) CAVEAT: This holds only when a very large amount of data is stored. The simple repetition strategy does not achieve it – better coding is needed.

VII-37 Mutual Information The quantity H(X) − H(X|Y) is known as MUTUAL INFORMATION Written as I(X;Y) = H(X) − H(X|Y) –Helpful for measuring “recovered information” in many problems –E.g., the capacity of a channel (such as a hard disk or a wireless channel): C = max_{P(X)} I(X;Y)
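For the HDD example, the channel is a binary symmetric channel, and for a uniform input I(X;Y) = H(X) − H(X|Y) reduces to 1 − H₂(e); a minimal sketch:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_information(e):
    """I(X;Y) for a uniform bit through a binary symmetric channel
    with flip probability e: H(X) - H(X|Y) = 1 - H2(e)."""
    return 1.0 - h2(e)

print(bsc_mutual_information(0.1))  # ≈ 0.531 bits recovered per stored bit
```

This reproduces the 0.47 / 0.53 split used on the previous slides.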

VII-38 Rate Distortion Suppose the HDD had zero error probability but we stored more data than its capacity –Can think of it as storing with loss: again characterized using mutual information –Using fewer bits than the information content of the original data E.g., JPEG, MP3 Given a distortion constraint D, how much data R must be stored: –R(D) = min I(X;X′) where X′ is a reconstruction that satisfies the distortion constraint and minimizes the mutual information

VII-39 Gaussian Phenomenon The rate distortion function has been calculated when X is a Gaussian random variable –X Gaussian with mean 0, std. dev. σ –R(D) = (1/2)·log(σ²/D) for D ≤ σ². If the tolerable distortion exceeds the phenomenon variance, there is no need to send any data [Figure: X encoded at rate R and reconstructed as X′] Proof: [Cover and Thomas]
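The Gaussian rate-distortion function can be sketched directly (rate in bits per sample, so log base 2):

```python
import math

def gaussian_rate_distortion(sigma2, D):
    """R(D) = 0.5 * log2(sigma^2 / D) bits/sample for D <= sigma^2;
    zero rate when tolerable distortion exceeds the variance."""
    return 0.5 * math.log2(sigma2 / D) if D < sigma2 else 0.0

print(gaussian_rate_distortion(1.0, 0.25))  # 1.0 bit per sample
print(gaussian_rate_distortion(1.0, 2.0))   # 0.0: no data needed
```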

VII-40 Effect of Sensing Noise For a given data rate, what distortion is achievable? –R(D) with measurement noise: –R(D) = (1/2)·log[σ⁴ / (σ²D + σ_N²D − σ²σ_N²)] when positive, else zero rate [Figure: phenomenon X plus Gaussian noise N gives measurement Y; X̂ is the estimate of X]

VII-41 Quantifying the Fusion Advantage Characterize the distortion achieved with a given rate when multiple sensors are used –The distortion advantage gives the benefit of additional sensors [Figure: phenomenon X observed through noises N1 … NL, giving sensor readings Y1 … YL; the encoded readings are fused into the estimate X̂]

VII-42 Quantifying the Fusion Advantage The total rate generated at multiple sensors is related to the distortion in estimation by the sum-rate–distortion function of the Gaussian CEO problem [Chen et al. 2004] –where L is the number of sensors, σ_X² is the variance of X, and σ_N² is the noise variance

VII-43 Design Implications When all sensors have the same noise and D is fixed Using multiple sensors helps –Returns diminish beyond a small number of sensors [Figure: total rate vs. number of sensors] This is for Gaussian noise and a Gaussian phenomenon

VII-44 Another application of Mutual Information Mobile sensor: having taken a measurement, where should the next measurement be taken? Among all possible motion steps, choose the one which maximizes the mutual information between phenomenon and measurement –Assume the density of the phenomenon and the noise model of the sensor are known Also extended to teams of mobile sensors [Grocholsky 2002]

VII-45 Coverage and Deployment

VII-46 What is Coverage? Given: –Field A –N sensors, specified by coordinates How well can the field be observed? A measure of the QoS of the sensor network [Figure: examples of bad coverage and good coverage]

VII-47 Ability to Detect a Breach Likelihood of detection depends on distance from sensor and time in range [Meguerdichian et al, 2001]

VII-48 Notions of Coverage Distance to closest sensor –Worst case coverage: Maximal Breach Path –Best case coverage: Maximal Support Path Exposure to sensors –Consider distance –Worst case coverage: Minimal Exposure Path Localized distributed algorithms –Query from user roaming in the sensor field –Computation done by the nodes themselves –Only relevant sensor nodes involved in the computation Advanced –Effect of target speed –Heterogeneous sensors –Terrain-specific measured or statistical exposure models –Probability of detection

VII-49 Problem Formulation Given: –Initial location (I) of an agent –Final location (F) of an agent Find: –Maximal Breach Path: Path along which the probability of detection is minimum Represents worst case coverage –Maximal Support Path: Path along which the probability of detection is maximum Represents best case coverage

VII-50 Closest Sensor Model: Maximal Breach Path Problem: find the path between I and F such that for any point p on the path, the distance to the closest sensor is maximized Observation: the maximal breach path lies on the Voronoi diagram lines –by construction, each line segment maximizes the distance from the nearest sensor Given: Voronoi diagram D with vertex set V and line segment set L, and sensors S Construct graph G(N,E): Each vertex v_i ∈ V corresponds to a node n_i ∈ N Each line segment l_i ∈ L corresponds to an edge e_i ∈ E For each edge e_i ∈ E, Weight(e_i) = distance of l_i from the closest sensor s_k ∈ S Search for P_B: Check for the existence of an I → F path using BFS Binary-search for the path with the maximal minimum edge weight Ref: based on slides by Seapahn Megerian
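The breach search above (binary search over a weight threshold, with a BFS connectivity test) can be illustrated on a tiny weighted graph; the graph and its weights below are hypothetical stand-ins for Voronoi edges weighted by distance to the closest sensor:

```python
from collections import deque

def maximal_breach_weight(edges, start, goal):
    """Max over start->goal paths of the minimum edge weight, found by
    binary-searching the sorted edge weights and testing reachability
    with BFS over edges at or above the threshold.
    `edges` maps (u, v) -> weight on an undirected graph."""
    weights = sorted(set(edges.values()))

    def reachable(threshold):
        adj = {}
        for (u, v), w in edges.items():
            if w >= threshold:
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
        seen, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            if u == goal:
                return True
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False

    best = None
    lo, hi = 0, len(weights) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if reachable(weights[mid]):
            best = weights[mid]   # feasible: try a larger threshold
            lo = mid + 1
        else:
            hi = mid - 1
    return best

# Hypothetical graph: path I-a-F has bottleneck 2, path I-b-F has 1.
g = {('I', 'a'): 4, ('a', 'F'): 2, ('I', 'b'): 1, ('b', 'F'): 5}
print(maximal_breach_weight(g, 'I', 'F'))  # 2
```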

VII-51 Example Result Example: Max Breach Path in a 50-node network Ref: based on slides by Seapahn Megerian

VII-52 Maximal Support Given: Field A instrumented with sensors; areas I and F. Problem: Identify P_S, the maximal support path in A, starting in I and ending in F. P_S is defined as a path with the property that for any point p on the path P_S, the distance from p to the closest sensor is minimized.

VII-53 Maximal Support Given: Delaunay triangulation of the sensor nodes Construct graph G(N,E): the graph is dual to the Voronoi graph previously described Formulation: what is the path along which the agent can best be observed while moving from I to F? (The path is embedded in the Delaunay graph of the sensors) Solution: similar to the max-breach algorithm, use BFS and binary search to find the path on the Delaunay graph that minimizes the maximum edge weight.

VII-54 Simulation Result: Critical Regions For both breach and support, lower values of breach_weight and support_weight indicate better coverage

VII-55 Exposure Model of Sensors Likelihood of detection by sensors is a function of time interval and distance from sensors. Minimal exposure paths indicate the worst case scenarios in a field: –Can be used as a metric for coverage Sensor detection coverage Also, for wireless (RF) transmission coverage Ref: based on slides by Seapahn Megerian

VII-56 Sensing Model Sensing model S at an arbitrary point p for a sensor s (a standard form): S(s,p) = λ / [d(s,p)]^K where d(s,p) is the distance between s and p, and the constants λ and K are technology- and environment-dependent [Slide adapted from Meguerdichian, Mobicom 2001]

VII-57 Intensity Model Effective sensing intensity at point p in field F: Aggregate of all sensors: I_A(F,p) = Σ_i S(s_i, p) Closest sensor: I_C(F,p) = S(s_min, p), where s_min is the sensor closest to p (Can generalize to the k closest sensors) (Can generalize to fusion-based metrics) [Slide adapted from Meguerdichian, Mobicom 2001]

VII-58 Definition: Exposure The exposure of an object O in the sensor field, moving along the path p(t) during the interval [t1, t2], is: E(p(t), t1, t2) = ∫_{t1}^{t2} I(F, p(t)) |dp(t)/dt| dt Methods are available to compute this for a given network (using grid approximations and graph search techniques) [Slide adapted from Meguerdichian, Mobicom 2001]
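The exposure integral can be approximated numerically by discretizing the path; a sketch under an assumed closest-sensor intensity I = λ/d^K (the model form from the earlier sensing-model slide, with illustrative constants):

```python
import math

def straight(p, q, n=200):
    """Polyline approximation of the straight segment from p to q."""
    return [(p[0] + (q[0] - p[0]) * i / n,
             p[1] + (q[1] - p[1]) * i / n) for i in range(n + 1)]

def exposure(path, sensors, lam=1.0, K=2):
    """Midpoint-rule approximation of E = ∫ I(F, p(t)) |dp/dt| dt,
    using the closest-sensor intensity I = lam / d^K."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        seg = math.hypot(x1 - x0, y1 - y0)
        d = min(math.hypot(mx - sx, my - sy) for sx, sy in sensors)
        total += lam / d ** K * seg
    return total

# A path passing close to the sensor accumulates more exposure
# than a parallel path passing farther away.
sensors = [(0.0, 0.0)]
near = exposure(straight((-1.0, 0.5), (1.0, 0.5)), sensors)
far = exposure(straight((-1.0, 2.0), (1.0, 2.0)), sensors)
print(near, far)
```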

VII-59 Average Coverage Quantify coverage with respect to the phenomenon distribution in the covered region –sensors close to the phenomenon contribute more to coverage [Cortes et al, 2005] [Figure: shade represents phenomenon density; dots are sensors]

VII-60 Average Coverage Region to cover: Q –q is a point in region Q Set of sensors: p_i, i = 1,…,n Coverage function f depends on the distance from the sensor –Coverage at point q due to sensor p_i is f(|q − p_i|) Average coverage then becomes ∫_Q max_i f(|q − p_i|) φ(q) dq –the coverage at each point q due to the best sensor, weighted by the phenomenon density φ(q), integrated over the region covered

VII-61 Example Suppose sensing quality declines with the square of distance –This gives f(x) = −x² The best sensor for a point q is thus the sensor closest to it –p_i is responsible for the points that are closer to p_i than to any other sensor –Such a region around p_i is called the VORONOI cell of p_i [Figure: Voronoi cells]

VII-62 Example Average coverage for this f can be written as H(P) = −Σ_i ∫_{V_i(P)} |q − p_i|² φ(q) dq –an integral over each Voronoi cell, summed over all cells –where V_i(P) is the Voronoi cell of p_i –The negative sign indicates that coverage degrades as distance from the sensors grows

VII-63 Planning Deployment Choose sensor locations to maximize coverage: –E.g. 1: Improve worst-case coverage by maximizing the exposure of the minimum exposure path –E.g. 2: Maximize the average coverage

VII-64 Adapting Deployment in Real Time Suppose the nodes are mobile Need a distributed algorithm that lets nodes change their locations to maximize coverage –The algorithm should use only local information, not a global view of the network One method to compute optima: gradient descent

VII-65 Gradient Descent Consider a function f of two variables x, y Find the (x,y) where f(x,y) is minimal –start at a random (x,y) –Compute the gradient at that (x,y): it gives the direction of maximum increase (steepest slope) –Change (x,y) in the opposite direction May reach a local minimum [Figure: surface f(x,y) with gradient arrow]
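A minimal gradient-descent sketch on f(x, y) = x² + y², whose gradient is (2x, 2y) and whose minimum is at the origin:

```python
def gradient_descent(grad, x0, y0, step=0.1, iters=100):
    """Minimize f by repeatedly stepping AGAINST the gradient
    (the gradient points uphill, so descent moves along -grad)."""
    x, y = x0, y0
    for _ in range(iters):
        gx, gy = grad(x, y)
        x, y = x - step * gx, y - step * gy
    return x, y

x, y = gradient_descent(lambda x, y: (2 * x, 2 * y), 3.0, -4.0)
print(x, y)  # both very close to 0
```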

VII-66 Distributed Method Take the gradient of the average coverage The gradient turns out to be distributed over the Voronoi cells: ∂H/∂p_i = 2·M_i·(CM_i − p_i) –where M_i is the mass of the phenomenon density over the Voronoi cell of p_i and CM_i is its center of mass.

VII-67 Distributed Method Distributed motion algorithm: at each node p_i –Find the Voronoi cell –Move towards its center of mass Automatically optimizes total network coverage –No central coordination needed [Figure: initial configuration, gradient-descent trajectories, final configuration] Fig. Ack: F Bullo, April 15, 2005 talk at UCB
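One step of the move-to-center-of-mass rule can be sketched on a discretized square with uniform phenomenon density (an assumption for simplicity; the slides allow a general density). A step of this kind cannot worsen the total squared-distance coverage cost:

```python
def coverage_cost(nodes, grid):
    """Sum over grid points of squared distance to the closest node
    (uniform density); lower cost means better average coverage."""
    return sum(min((qx - px) ** 2 + (qy - py) ** 2 for px, py in nodes)
               for qx, qy in grid)

def lloyd_step(nodes, grid):
    """One distributed step: each node collects the grid points in its
    Voronoi cell and moves to their center of mass."""
    cells = {i: [] for i in range(len(nodes))}
    for q in grid:
        j = min(range(len(nodes)),
                key=lambda k: (q[0] - nodes[k][0]) ** 2 +
                              (q[1] - nodes[k][1]) ** 2)
        cells[j].append(q)
    new_nodes = []
    for i, p in enumerate(nodes):
        pts = cells[i]
        if pts:
            new_nodes.append((sum(x for x, _ in pts) / len(pts),
                              sum(y for _, y in pts) / len(pts)))
        else:
            new_nodes.append(p)  # empty cell: stay put
    return new_nodes

grid = [(x / 10, y / 10) for x in range(11) for y in range(11)]
nodes = [(0.1, 0.1), (0.15, 0.2), (0.9, 0.1)]  # clustered start
print(coverage_cost(nodes, grid),
      coverage_cost(lloyd_step(nodes, grid), grid))
```

Iterating the step spreads the nodes over the region, mirroring the trajectories in the figure.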

VII-68 Parting Thoughts on Coverage Fundamental question: given an area A and the desired coverage –What is the minimum number of nodes required to achieve the desired coverage with a given deployment scheme? –What is the probability of achieving the desired coverage with a random deployment? Things to ponder –What is a good measure of coverage? –What is the mapping from probability of detection to coverage? –More sophisticated sensing models: Directional sensing Effect of obstacles More than one type of sensor Speed dependence –Deploying nodes to achieve coverage

VII-69 Energy Performance

VII-70 Maximizing the lifetime [Bhardwaj and Chandrakasan 2002] [Figure: sensors relaying data from region R to base station B] Fig Ack. M Bhardwaj, INFOCOM 2002

VII-71 Problem Formulation Given –Set of data sources and intended destinations –Fixed battery levels at each node Find the best routing strategy: maximum lifetime for supporting required data transfers –Nodes can change their power level and choose best multi-hop paths

VII-72 Energy Models Transmit energy per bit: E_tx = α11 + α2·d^n (n = path loss index) Receive energy per bit: E_rx = α12 Relay energy per bit: E_relay = α11 + α2·d^n + α12 = α1 + α2·d^n Sensing energy per bit: E_sense = α3 Slide adapted from: M Bhardwaj, INFOCOM 2002
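The relay model explains why multi-hop wins on long links: transmit energy grows as d^n while each relay adds only fixed receive-plus-transmit overhead. A sketch with illustrative constants (the α values below are assumptions for the example, not taken from the slide):

```python
def e_tx(d, a11=45e-9, a2=10e-12, n=2):
    """Transmit energy per bit: E_tx = a11 + a2 * d^n.
    Illustrative constants: a11 in J/bit, a2 in J/bit/m^n."""
    return a11 + a2 * d ** n

def e_relay(d, a12=135e-9, **kw):
    """Relay energy per bit: receive cost a12 plus retransmit cost."""
    return a12 + e_tx(d, **kw)

def direct_vs_two_hop(d):
    """Energy per bit of a direct hop over d vs. relaying at the midpoint."""
    direct = e_tx(d)
    two_hop = e_tx(d / 2) + e_relay(d / 2)
    return direct, two_hop

print(direct_vs_two_hop(50))    # short link: direct is cheaper
print(direct_vs_two_hop(200))   # long link: the relay is cheaper
```

The crossover distance is what the three-node example on the next slide exploits when splitting traffic across routes.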

VII-73 Example: a three node network Ignoring wasteful (loopy) routes, the route choices are: R0: 1 → B R1: 1 → 2 → B R2: 1 → 3 → B R3: 1 → 2 → 3 → B [Figure: nodes 1, 2, 3 and base station B, at spacings d_char and d_char/2] Min-hop puts all traffic on the direct route R0; min-energy puts all traffic on R2 (fraction 1.0); the lifetime-optimal strategy splits traffic across several routes Slide adapted from: M Bhardwaj, INFOCOM 2002

VII-74 How to find the optimal choice? The search over multiple route choices can be stated as a linear program (LP): maximize the lifetime Σ_i t_i subject to the energy each node spends across all routes not exceeding its battery, with t_i ≥ 0 –Efficient tools exist for LPs t_i = time for which route r_i is used, |F| = number of route choices

VII-75 Solving the LP for large networks The number of route choices, |F|, is exponential in number of nodes, N –LP becomes too complex for large N Map to a network flow problem which can be solved in polynomial time –Allows finding the optimal route choices for a given network in polynomial time

VII-76 Equivalence to Flow Problem Route choices view (the number of routes is exponential in N): R0: 0, R1: 3/11, R2: 3/11, R3: 5/11 (total 11/11) Network flow view (the number of possible edges is polynomial in N): f_{1→2}: 8/11, f_{1→3}: 3/11, f_{1→B}: 0, f_{2→3}: 3/11, f_{2→B}: 5/11, f_{3→B}: 6/11 Slide adapted from: M Bhardwaj, INFOCOM 2002

VII-77 Polynomial time LP: maximize lifetime subject to flow conservation at each node and per-node energy constraints on the edge flows f_{i→j}

VII-78 Sensing Lifetime If data sources not specified, but given –Phenomenon to be sensed –Distortion quality required Choose the relevant sensors and determine the routes for maximum lifetime B Phenomenon [Kansal et al, 2005]

VII-79 Choose closest sensor Small data size (since measurement quality is high) Large routing energy cost (possibly long route) B Phenomenon

VII-80 Choose sensor with smallest route Small routing energy cost Large data size (measurement quality is bad) B Phenomenon

VII-81 Collaborative sensing Use multiple sensors and fuse B Phenomenon

VII-82 Lifetime-Distortion Relationship How to determine the best choice? –Combine lifetime maximization tools with data fusion analysis Find the possible choices of sensors for required distortion performance –Using data fusion analysis For each choice, find maximum achievable lifetime –using network flow LP Choose the sensors and routes which maximize lifetime

VII-83 Sensor Selection All sensors within sensing range of the phenomenon are potential choices –How much data is generated at each sensor: the possible choices for a given distortion are given by the rate allocation region of the Gaussian CEO result [Figure: rate allocation region over the sensor rates R1, R2]

VII-84 Energy Constraints for Chosen Sensors Same as before, but written for the variable source sensor rates –Maximize t as before

VII-85 Conclusions Network design depends on multiple methods –Optimal methods sometimes yield distributed protocols –They provide insight into the performance of available distributed methods We saw some examples of analytical tools that can help network design Open problem: convergence of the separate solutions