1
Dr. Sudharman K. Jayaweera and Amila Kariyapperuma, ECE Department, University of New Mexico
Ankur Sharma, Department of ECE, Indian Institute of Technology, Roorkee
5th July 2007
Expand Your Engineering Skills (EYES), Summer Internship Program, 2007
2
Introduction
Wireless Sensor Networks (WSNs) consist of nodes that sense temperature, pressure, light, magnetic field, infrared, audio/video, etc.
Ad hoc WSNs may require inter-sensor communication.
3
Problem
Nodes have small physical dimensions and are battery operated, so the major concern is energy consumption.
Failure of nodes due to energy depletion can lead to partitioning of the sensor network and loss of critical information.
The application/system requires that every node knows the data of every other node.
4
Related Work
Energy-aware routing and efficient information processing [Shah and Rabaey, 2002].
Local compression and probabilistic estimation schemes [Luo, 2005].
Distributed compression and adaptive signal processing in sensor networks with a fusion center [Chou, 2003].
5
Our Approach
(Diagram: sensors 1, 2, 3, and 4 exchanging i-bit compressed messages.)
6
Proposed Algorithm
Sensor j predicts its own reading from its past readings and readings from other sensors.
Based on the error between the predicted and actual values, sensor j calculates the number of compressed bits i using either the Chebyshev's inequality method or the exact error method, as sketched below.
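As a rough illustration of one round at sensor j, here is a minimal Python sketch assuming a uniform root codebook and the exact-error choice of i; the spacing/tolerance model and all numbers are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def round_at_sensor_j(x_actual, x_pred, codebook, n_bits=12):
    """One illustrative round: compare the actual reading with the
    prediction, pick the smallest bit count i that tolerates that error
    (exact-error method), and emit the i-bit code.
    (Sketch only; the spacing/tolerance model is an assumption.)"""
    step = codebook[1] - codebook[0]
    err = abs(x_actual - x_pred)
    i = 1
    while i < n_bits and (2 ** i) * step / 2 <= err:    # grow i until the error is tolerable
        i += 1
    idx = int(np.argmin(np.abs(codebook - x_actual)))   # nearest root codeword
    return i, idx % (2 ** i)                            # bits actually sent

codebook = np.linspace(-128, 128, 2 ** 12)              # 12-bit A/D over [-128, 128]
print(round_at_sensor_j(x_actual=23.7, x_pred=23.2, codebook=codebook))
```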
7
Code Construction
A codebook encodes the data X into i bits.
A single underlying codebook is shared, unchanged, among the sensors.
The construction supports multiple compression rates.
8
A Tree-based Codebook
(Diagram: a binary tree with branches labeled 0 and 1 at each level.)
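A minimal sketch of how such a tree-structured codebook could be represented, assuming (my interpretation, consistent with the encoder and decoder slides) that the i-bit label at level i selects every root codeword whose index matches the label in its i least significant bits.

```python
import numpy as np

def root_codebook(n_bits=4, lo=-128.0, hi=128.0):
    """Root codebook: 2**n_bits uniformly spaced reproduction values."""
    return np.linspace(lo, hi, 2 ** n_bits)

def subcodebook(codebook, label, i):
    """Subcodebook at tree level i: codewords whose index agrees with the
    i-bit `label` in its least significant bits (index mod 2**i == label)."""
    idx = np.arange(len(codebook))
    return codebook[(idx % (2 ** i)) == label]

cb = root_codebook(n_bits=4)
print(subcodebook(cb, label=0b01, i=2))   # every 4th codeword, offset by 1
```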
9
Chebyshev's Inequality Method
To prevent decoding errors with i bits, the Chebyshev bound on the probability of decoding error is used to find the required error threshold, and from it the value of i (see the sketch below).
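A hedged sketch of how the Chebyshev bound might be turned into a bit count: Chebyshev's inequality gives P(|e| >= t) <= sigma_e^2 / t^2, so setting the bound equal to P_e yields the threshold t = sigma_e / sqrt(P_e). The final threshold-to-bits step below is my assumption about the subcodebook spacing, not the slide's exact formula.

```python
import math

def chebyshev_bits(sigma_e, p_e, step, n_bits=12):
    """Pick i so that, by Chebyshev's inequality, the prediction error
    exceeds the tolerable noise with probability at most p_e.

    sigma_e : standard deviation of the prediction error
    p_e     : allowed probability of decoding error
    step    : spacing of the root codebook (quantization step)
    """
    t = sigma_e / math.sqrt(p_e)          # P(|e| >= t) <= sigma_e**2 / t**2 = p_e
    # Assumption: an i-bit subcodebook has codewords 2**i * step apart,
    # so it tolerates a side-information error up to half that spacing.
    i = math.ceil(math.log2(2.0 * t / step)) if t > step / 2 else 1
    return min(max(i, 1), n_bits)

print(chebyshev_bits(sigma_e=0.8, p_e=0.01, step=256 / 4095))
```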
10
Exact Error Method
To prevent decoding errors using i bits: since the exact error in the prediction of the sensor data X is known, the required number of bits can be computed directly (see the sketch below).
Extra bits are also sent, specifying the number of bits in the message.
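A corresponding sketch of the exact error method under the same assumed spacing model: sensor j knows the actual prediction error, picks the smallest tolerable i, and spends a few extra bits announcing i.

```python
import math

def exact_error_bits(error, step, n_bits=12):
    """Smallest i such that a subcodebook with spacing 2**i * step still
    decodes correctly when the side information is off by `error`.
    (Assumed spacing/tolerance model; the slides do not spell it out.)"""
    e = abs(error)
    i = 1
    while i < n_bits and (2 ** i) * step / 2.0 <= e:
        i += 1
    return i

step = 256 / 4095                       # 12-bit A/D over [-128, 128]
i = exact_error_bits(error=0.5, step=step)
print(i, "code bits + 4 extra bits announcing i")   # 4 bits suffice for i <= 16
```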
11
Encoder
The sensor reading X is stored as its closest representation among the 2^n values in the root codebook (A/D converter).
X is then mapped to the i bits that specify the subcodebook at level i (see the sketch below).
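A sketch of the encoder under the assumption (standard coset-style binning, not spelled out on the slide) that the transmitted code sequence f(X) is the i least significant bits of the index of the quantized reading.

```python
import numpy as np

def encode(x, codebook, i):
    """Quantize x to the nearest root-codebook value and send the i least
    significant bits of its index as the code sequence f(x)."""
    idx = int(np.argmin(np.abs(codebook - x)))   # A/D: closest representation
    return idx % (2 ** i)                        # i-bit subcodebook label

codebook = np.linspace(-128, 128, 2 ** 12)       # 12-bit root codebook
print(encode(23.7, codebook, i=5))
```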
12
Decoder
The decoding sensors receive the i-bit code sequence f(X).
They traverse the tree starting from the LSB of the code sequence to find the appropriate subcodebook S.
Each decoder calculates its side information Y (its own prediction of X) and decodes to the value in S closest to Y (see the sketch below).
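A matching decoder sketch under the same assumptions: the received i-bit label selects the subcodebook S, and the decoder outputs the codeword of S closest to its side information Y.

```python
import numpy as np

def decode(label, i, y, codebook):
    """Pick subcodebook S from the i-bit label, then decode to the codeword
    of S nearest the side information y."""
    idx = np.arange(len(codebook))
    S = codebook[(idx % (2 ** i)) == label]      # traverse the tree via the LSBs
    return S[np.argmin(np.abs(S - y))]

codebook = np.linspace(-128, 128, 2 ** 12)
label = int(np.argmin(np.abs(codebook - 23.7))) % 2 ** 5
print(decode(label, i=5, y=23.1, codebook=codebook))   # recovers ~23.7
```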
13
Correlation Tracking
A linear prediction method is used: it is analytically tractable and optimal when readings can be modeled as i.i.d. Gaussian random variables.
The first sensor always sends its data compressed with respect to its own past data.
The prediction of X is a linear combination of past readings weighted by the filter coefficients (see the sketch below).
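A minimal sketch of the linear prediction step; the layout of the regressor (own past readings stacked with other sensors' decoded readings) and the coefficient values are illustrative assumptions.

```python
import numpy as np

def predict(a, own_past, others_latest):
    """Linear prediction X_hat = a^T u, where the regressor u stacks sensor
    j's own past readings and the other sensors' decoded readings."""
    u = np.concatenate([own_past, others_latest])
    return float(a @ u)

a = np.array([0.6, 0.3, 0.1])     # filter coefficients (example values only)
print(predict(a, own_past=[24.1, 24.0], others_latest=[23.8]))
```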
14
Least-Squares Parameter Estimation
The prediction error is the difference between the actual reading and the linear prediction.
The filter coefficients are chosen to minimize a weighted least-squares error over the samples observed so far; this yields the least-squares filter coefficient vector at time k (see the sketch below).
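A hedged sketch of the batch weighted least-squares solve; the exponential weighting with forgetting factor lambda is an assumption, since the slide's exact weights are not reproduced here.

```python
import numpy as np

def weighted_ls(U, x, lam=0.98):
    """a_k = argmin_a sum_n lam**(k-n) * (x_n - a^T u_n)**2.
    U: (k, m) matrix of regressors u_n; x: (k,) targets."""
    k = len(x)
    w = lam ** np.arange(k - 1, -1, -1)      # older samples weighted less
    W = np.diag(w)
    return np.linalg.solve(U.T @ W @ U, U.T @ W @ x)

rng = np.random.default_rng(0)
U = rng.normal(size=(50, 3))
x = U @ np.array([0.6, 0.3, 0.1]) + 0.01 * rng.normal(size=50)
print(weighted_ls(U, x))                     # close to [0.6, 0.3, 0.1]
```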
15
Recursive Least-Squares (RLS) Algorithm
The filter coefficient computation is performed adaptively using RLS (see the sketch below).
For initialization, each sensor sends uncoded data samples.
In our approach, the reference sensor updates the corresponding coefficients and sends them to all other sensors.
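A textbook RLS recursion, shown as an illustration rather than the slide's exact recursion: it computes the same exponentially weighted least-squares coefficients sample by sample, which is what lets the reference sensor refresh and broadcast the coefficients periodically.

```python
import numpy as np

def rls_step(a, P, u, x, lam=0.98):
    """One recursive least-squares update of the coefficients a and the
    inverse-correlation matrix P for a new regressor u and sample x."""
    u = u.reshape(-1, 1)
    k = (P @ u) / (lam + u.T @ P @ u)        # gain vector
    e = x - float(a @ u.flatten())           # a-priori prediction error
    a = a + e * k.flatten()
    P = (P - k @ u.T @ P) / lam
    return a, P

m = 3
a, P = np.zeros(m), 100.0 * np.eye(m)        # initialization (uncoded samples phase)
rng = np.random.default_rng(1)
for _ in range(200):
    u = rng.normal(size=m)
    x = u @ np.array([0.6, 0.3, 0.1]) + 0.01 * rng.normal()
    a, P = rls_step(a, P, u, x)
print(a)                                     # converges near [0.6, 0.3, 0.1]
```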
16
Decoding Errors
There are no decoding errors in the exact error method.
In Chebyshev's method, the number of encoding bits is chosen to meet a given probability of error and is updated every 100 samples.
This leads to a few decoding errors, but results in higher compression.
17
Implementation & Performance
Simulations were performed on humidity measurements.
We assumed a 12-bit A/D converter with a dynamic range of [-128, 128].
Results were simulated for about 18,000 samples per sensor (90,000 samples in total).
Sensor orderings are randomized every 500 samples.
For RLS training, the first 25 samples of each sensor are transmitted without any compression.
Coefficients are updated and shared every 500 samples.
18
Exact Error Implementation
With each code sequence, an extra 4 bits specifying the number of bits are also sent.
Decoding error = 0
Average energy saving = 43.34%
19
Tolerable Noise vs. Prediction Noise (plot)
22
Chebyshev's Inequality Method
Encoding bits are specified every 100 samples.
Case I: Probability of error (P_e) = 0.5%
Average decoding error = 0.07%
Average energy saving = 45.74%
23
Tolerable Noise vs. Prediction Noise (plot)
26
Chebyshev's Inequality Method
Case II: Probability of error (P_e) = 1.0%
Average decoding error = 0.13%
Average energy saving = 49.74%
27
Chebyshev's Inequality Method
Case III: Probability of error (P_e) = 1.5%
Average decoding error = 2.29%
Average energy saving = 52.27%
28
Comparison
Exact Error Method: zero probability of decoding error; compression is lower (due to the extra bit information); strict bound; an 'instantaneous' approach.
Chebyshev's Method: probability of decoding error within a required bound; higher compression can be achieved by varying the required probability of error; loose bound; an 'average' approach.
29
Probability of Error vs. Energy Savings (plot)
30
For Temperature Data
Exact error method: average energy savings = 56.66%, average decoding error = 0.
Chebyshev's method (P_e = 0.01): average energy savings = 66.98%, average decoding error = 0.61%.
31
For Light Data
Exact error method: average energy savings = 33.52%, average decoding error = 0.
Chebyshev's method (P_e = 0.01): average energy savings = 19.29%, average decoding error = 1.13%.
32
Conclusions
The energy savings achieved in our simulations are conservative estimates of what can be achieved in practice.
Further work can be done on better predictive models and tighter probability-of-error bounds.
The scheme can also be integrated with an energy-saving routing algorithm to increase the energy savings.
33
Thank You! Queries, please.