1
Smart Sleeping Policies for Wireless Sensor Networks
Venu Veeravalli
ECE Department & Coordinated Science Lab
University of Illinois at Urbana-Champaign
http://www.ifp.uiuc.edu/~vvv
(with Jason Fuemmeler)
IPAM Workshop on Mathematical Challenges and Opportunities in Sensor Networks, Jan 10, 2007
2
Saving Energy in Sensor Networks
Efficient source coding
Efficient Tx/Rx design
Efficient processor design
Power control
Efficient routing
Switching nodes between active and sleep modes
3
Active/Sleep Transition: External Activation
Paging channel to wake up sensors when needed
But power for a paging channel is usually not negligible compared to the power consumed by an active sensor
Passive RF-ID technology?
4
Active/Sleep Transition: Practical Assumption
A sensor that is asleep cannot be communicated with or woken up prematurely ⇒ the sleep duration has to be chosen when the sensor goes into sleep mode
Having sleeping sensors could result in communication/sensing performance degradation
Design Problem: Find sleeping policies that optimize the tradeoff between energy consumption and performance
5
Sleeping Policies
Duty Cycle Policy: sensor sleeps with a deterministic or random (with predetermined statistics) duty cycle
Synchronous or asynchronous across sensors
Duty cycle chosen to provide the desired tradeoff between energy and performance
Simple to implement, generic
[Figure: active / sleep / active timeline illustrating a duty-cycle policy]
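As an illustration (not the specific scheme from the talk), a deterministic duty-cycle schedule in slotted time could look like the sketch below; the parameters t_active and t_sleep and the function name are hypothetical.

```python
def duty_cycle_awake(t: int, t_active: int, t_sleep: int, offset: int = 0) -> bool:
    """Return True if a sensor with the given phase offset is awake in time slot t."""
    period = t_active + t_sleep
    return (t + offset) % period < t_active

# Synchronous operation: all sensors use the same offset.
# Asynchronous operation: each sensor draws its own (e.g., random) offset.
schedule = [duty_cycle_awake(t, t_active=1, t_sleep=4) for t in range(10)]
print(schedule)  # awake 1 slot out of every 5 -> 20% duty cycle
```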
6
Smart (Adaptive) Policies
Use all available information about the state of the sensor system to set the sleep time of a sensor
Application specific ⇒ system-theoretic approach required
Potential energy savings over duty-cycle policies
7
Tracking in a Dense Sensor Network
A sensor detects the presence of an object within its close vicinity
Sensors switch between active and sleep modes to save energy
Sensors need to be awake in order to detect the object
8
Design Problem
Having sleeping sensors could result in tracking errors
Find sleeping policies that optimize the tradeoff between energy consumption and tracking error
9
General Problem Description
Sensors distributed in a two-dimensional field
A sensor that is awake can generate an observation
Object follows a random (Markov) path whose statistics are assumed to be known
10
General Problem Description
Central controller communicates with the sensors that are awake
A sensor that wakes up remains awake for one time unit, during which it:
  reports its observation to the central controller
  receives a new sleep time from the central controller
  sets its sleep timer to the new sleep time and enters sleep mode
11
Markov Decision Process
Markov model for object movement, with an absorbing terminal state when the object leaves the system
State consists of two parts:
  Position of the object
  Residual sleep times of the sensors
Control inputs: new sleep times
Exogenous input: Markov object movement
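As a concrete illustration of these ingredients, here is a small sketch for a line network with a unit random walk that is absorbed when the object leaves the field; the model and names are illustrative, not the exact formulation from the talk.

```python
import numpy as np

def make_transition_matrix(n: int) -> np.ndarray:
    """Unit random walk on cells 0..n-1 plus an absorbing 'exited' state n."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                P[i, j] += 0.5
            else:
                P[i, n] += 0.5   # stepping off the line: object leaves the system
    P[n, n] = 1.0                # absorbing terminal state
    return P

# Full MDP state: (object position, residual sleep times of all sensors)
# Control input: a new sleep time for each sensor that is currently awake
# Exogenous input: the object's Markov movement
```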
12
Partially Observable Markov Decision Process (POMDP)
The state of the system is only partially observable at each time step
Object position is not known -- we only have a distribution for where the object might be
Can reformulate the MDP problem in terms of this distribution (a sufficient statistic) and the residual sleep times
13
Sensing Model and Cost Structure
Sensing model: each sensor that is awake provides a noisy observation related to the object location
Energy cost: each sensor that is awake incurs a cost of c
Tracking cost: distance measure d(.,.) between the actual and estimated object location
14
Dynamic System Model
[Block diagram: sensor observations → nonlinear filter → posterior → (i) optimal location estimate w.r.t. the distortion metric, (ii) sleeping policy]
15
Simple Sensing, Object Movement, and Cost Model
Sensors distributed in a two-dimensional field
A sensor that is awake detects the object without error within its sensing range
Sensing ranges cover the field of interest without overlap
Object follows a Markov path from cell to cell
Tracking cost of 1 per unit time that the object is not seen
16
What Can Be Gained
[Plot: tracking errors per unit time vs. number of sensors awake per unit time, comparing the Always Track and Duty Cycle policies]
17
Always Track Policy
[Figure: central controller over a line network of sensors 1..n; unit random walk movement of the object]
18
Always Track Asymptotics
Line network of n sensors: E[# awake per unit time] ≈ O(log n)
Two-dimensional network of n sensors: E[# awake per unit time] ≈ O(n^0.5)
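The slide states only the scaling results; one heuristic way to see them (my reconstruction, assuming the always-track policy lets a sensor at distance d from the object's current cell sleep safely for about d time units, since a unit random walk cannot reach it sooner, so it is awake roughly a 1/d fraction of the time):

```latex
% Line network of n sensors:
E[\#\ \text{awake per unit time}] \approx \sum_{d=1}^{n} \frac{1}{d} = O(\log n)
% Two-dimensional network of n sensors (about d sensors at distance d, maximum distance about sqrt(n)):
E[\#\ \text{awake per unit time}] \approx \sum_{d=1}^{\sqrt{n}} d \cdot \frac{1}{d} = O(\sqrt{n})
```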
19
Dynamic System Model
[Block diagram: sensor observations → nonlinear filter → posterior → (i) optimal location estimate w.r.t. the distortion metric, (ii) sleeping policy]
20
Nonlinear filter (distribution update)
[Equation: recursive update of the posterior distribution of the object position from time k to time k+1]
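A minimal sketch of this update for the simple sensing model above (an awake sensor detects the object in its own cell without error); the function name and interface are illustrative, not from the talk.

```python
import numpy as np

def update_belief(belief, P, awake_cells, detected_at=None):
    """One step of the nonlinear filter, time k -> k+1 (simple sensing model sketch).

    belief       -- posterior over object cells (plus absorbing state) at time k
    P            -- Markov transition matrix for the object
    awake_cells  -- cells whose sensors are awake at time k+1
    detected_at  -- cell reporting a detection at time k+1, or None
    """
    pred = belief @ P                      # prediction step through the object dynamics
    if detected_at is not None:
        post = np.zeros_like(pred)
        post[detected_at] = 1.0            # error-free detection pins the object to one cell
    else:
        post = pred.copy()
        post[awake_cells] = 0.0            # no detection rules out the awake cells
        post = post / post.sum()           # renormalize the surviving mass
    return post
```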
21
Optimal Solution via DP
Can write down the dynamic programming (DP) / Bellman equations to solve the optimization problem
However, the state space grows exponentially with the number of sensors
The DP solution is not tractable even for relatively small networks
22
Separating the Problem
The problem separates into a set of simpler problems (one for each sensor) if:
  the cost can be written as a sum of costs under the control of each sensor (always true)
  other sensors' actions do not affect the state evolution in the future (only true if we make additional unrealistic assumptions)
We make the unrealistic assumptions only to generate a policy, which can then be applied to the actual system
23
FCR Solution
At the time a sensor is put to sleep, assume we will have no future observations of the object (even after the sensor comes awake)
The policy is to wake up at the first time that the expected tracking cost exceeds the expected energy cost
Thus termed the First Cost Reduction (FCR) solution
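Under the simple model, the expected tracking cost this sensor can save at a future step is the probability that the (unobserved) object is in its cell, so a hedged sketch of the FCR rule might look like the following: propagate the belief forward with no future observations and wake at the first step where that probability exceeds the energy cost c (helper names are illustrative).

```python
def fcr_sleep_time(belief, P, cell, c, t_max=10_000):
    """First Cost Reduction sleep time for the sensor covering `cell` (sketch)."""
    b = belief.copy()
    for t in range(1, t_max + 1):
        b = b @ P                  # belief at time k+t assuming no further observations
        if b[cell] > c:            # expected tracking cost saved > expected energy cost
            return t
    return t_max                   # cap the sleep time for this sketch
```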
24
QMDP Solution
At the time a sensor is put to sleep, assume we will know the location of the object perfectly in the future (after the sensor comes awake)
Can solve for the policy with low complexity
Assuming more information than is actually available yields a lower bound on the optimal cost!
25
Line Network Results
28
Two Dimensional Results
29
Offline Computation
Can compute policies online, but this requires sufficient processing power and could introduce delays
Policies need to be computed for each sensor location and each possible distribution for the object location
Storage requirements for offline computation may be immense for large networks
Offline computation is feasible if we replace the actual distribution with a point-mass distribution
Storage required is then n values per sensor
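A sketch of that lookup table, computed offline with the FCR sketch above and a point-mass belief; `build_sleep_table` is a hypothetical name, and each row holds the n stored values per sensor mentioned on the slide.

```python
import numpy as np

def build_sleep_table(P, n, c):
    """table[s, j] = sleep time for sensor s when the point mass is at cell j."""
    table = np.zeros((n, n), dtype=int)
    for s in range(n):                         # n stored values per sensor
        for j in range(n):
            point_mass = np.zeros(P.shape[0])
            point_mass[j] = 1.0                # point-mass approximation of the belief
            table[s, j] = fcr_sleep_time(point_mass, P, cell=s, c=c)
    return table
```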
30
Point Mass Approximations
Two options for placing the point mass:
  Centroid of the distribution
  Nearest point to the sensor on the support of the distribution
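For the line-network case, the two placements might be computed as in the sketch below (illustrative only; `positions` holds the cell coordinates and the belief is taken over the n cells).

```python
import numpy as np

def centroid_cell(belief, positions):
    """Point mass at the cell closest to the centroid (mean) of the belief."""
    mean_pos = belief @ positions
    return int(np.argmin(np.abs(positions - mean_pos)))

def nearest_support_cell(belief, positions, sensor_pos):
    """Point mass at the support point of the belief nearest to this sensor."""
    support = np.flatnonzero(belief > 0)
    return int(support[np.argmin(np.abs(positions[support] - sensor_pos))])
```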
31
Distributed Implementation
Offline computation also allows for a distributed implementation!
32
Partial Knowledge of Statistics
The support of the distribution of the object position can be updated using only the support of the conditional pdf of the Markov prior!
Thus the "nearest point" point-mass approximation is robust to imperfect knowledge of the prior
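A sketch of such a support-only update, assuming `neighbors[cell]` lists the cells reachable from `cell` in one step under the Markov kernel (names are illustrative).

```python
def propagate_support(support, neighbors):
    """One-step update of the set of cells the object could possibly occupy.

    Only which transitions are possible (the support of the Markov kernel) is
    needed, not the transition probabilities themselves.
    """
    reachable = set()
    for cell in support:
        reachable |= neighbors[cell]   # cells reachable in one step
    return reachable

# Cells covered by awake sensors that report no detection can then be removed from the set.
```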
33
Point Mass Approximation Results
35
Conclusions
The tradeoff between energy consumption and tracking errors can be considerably improved by using information about the location of the object
The optimal solution to the tradeoff problem is intractable, but good suboptimal solutions can be designed
The methodology can be applied to designing smart sleeping for other sensing applications, e.g., process monitoring, change detection, etc.
The methodology can also be applied to other control problems such as sensor selection
36
Future Work
More realistic sensing models
More realistic object movement models
Object localization using cooperation among all awake sensors at each time step
Joint optimization of sensor sleeping policies and nonlinear filtering for object tracking
Partially known or unknown statistics for object movement
Decentralized implementation
Tracking multiple objects simultaneously