1
Techniques for Building Long-Lived Wireless Sensor Networks
Jeremy Elson and Deborah Estrin
UCLA Computer Science Department and USC/Information Sciences Institute
Collaborative work with R. Govindan, J. Heidemann, and SCADDS of other grad students
2
What might make systems long-lived?
- Consider energy the scarce system resource
  - Minimize communication (especially over long distances)
    - Computation costs much less, so:
    - In-network processing: aggregation, summarization (see the sketch after this list)
  - Adaptivity at fine and coarse granularity
    - Maximize the lifetime of the system, not of individual nodes
    - Exploit redundancy; design for low duty-cycle operation
  - Exploit non-uniformities when you have them
    - Tiered architecture
  - New metrics
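To make the in-network processing point concrete, here is a minimal sketch (hypothetical function and message names, not the SCADDS implementation) of a relay node that collapses its children's readings into one summary rather than forwarding each reading over the expensive long-haul link:

```python
# Sketch: in-network aggregation at a relay node (hypothetical API).
# Instead of forwarding N readings upstream, the relay sends one summary,
# trading cheap local computation for expensive radio transmissions.

def aggregate_and_forward(child_readings, send_upstream):
    """child_readings: list of (node_id, value) pairs from one epoch."""
    values = [v for _, v in child_readings]
    summary = {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }
    send_upstream(summary)   # one short packet replaces len(values) packets

# Example: four temperature readings collapse into one upstream message.
aggregate_and_forward([(1, 21.5), (2, 22.0), (3, 21.7), (4, 22.3)], print)
```

The point is the slide's tradeoff: the arithmetic here costs far less energy than the radio transmissions it eliminates.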
3
What might make systems long-lived?
- Robustness to dynamic conditions: make the system self-configuring and self-reconfiguring
  - Avoid manual configuration
  - Empirical adaptation (measure and act)
- Localized algorithms prevent single points of failure and help to isolate the scope of faults
  - Also crucial for scaling!
4
The Rest of the Talk
- Some of our initial building blocks for creating long-lived systems:
  - Directed diffusion: a new data dissemination paradigm
  - Adaptive fidelity
  - Use of small, randomized identifiers
  - Tiered architecture
  - Time synchronization
5
Directed Diffusion: A Paradigm for Data Dissemination
- Key features:
  - Name data, not nodes
  - Interactions are localized
  - Data can be aggregated or processed within the network
  - The network empirically adapts to the best distribution path, the correct duty cycle, etc. (see the sketch after this list)
[Diagram: 1. low data rate -> 2. reinforcement -> 3. high data rate]
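As a rough illustration of these features, the sketch below (simplified structures and illustrative rates, not the actual diffusion code) shows the named-data core: interests that name data by attributes diffuse hop by hop, each node sets up low-rate exploratory "gradients" toward the neighbors an interest came from, and the path that delivers data first is reinforced to carry the high-rate stream:

```python
# Sketch of directed diffusion's core structures (heavily simplified).
# Data is named by attributes; each node keeps per-interest "gradients"
# recording which neighbors want the data and at what rate.

LOW_RATE, HIGH_RATE = 1, 50   # exploratory vs. reinforced rates (illustrative)

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.gradients = {}   # interest key -> {neighbor: data rate}

    def receive_interest(self, interest, from_node):
        key = tuple(sorted(interest.items()))   # name data, not nodes
        first_time = key not in self.gradients
        # Set up (or refresh) a low-rate exploratory gradient to the sender.
        self.gradients.setdefault(key, {})[from_node] = LOW_RATE
        if first_time:   # localized interaction: talk only to neighbors
            for n in self.neighbors:
                if n is not from_node:
                    n.receive_interest(interest, self)

    def receive_reinforcement(self, key, from_node):
        # Empirical adaptation: a downstream neighbor that got our
        # exploratory data first asks us to send to it at the high rate.
        self.gradients[key][from_node] = HIGH_RATE

# Demo: an interest diffuses from node a to node b; a then reinforces b.
a, b = Node("a"), Node("b")
a.neighbors, b.neighbors = [b], [a]
interest = {"type": "vehicle", "region": "NE"}
a.receive_interest(interest, from_node=None)        # a sits next to the sink
b.receive_reinforcement(tuple(sorted(interest.items())), from_node=a)
```

Note that nodes never address each other globally; every interaction in the sketch is with an immediate neighbor, which is what keeps the algorithm localized.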
6
Diffusion: Key Results
- Directed diffusion
  - Can provide significantly longer network lifetimes than existing schemes
  - Keys to achieving this:
    - In-network aggregation
    - Empirical adaptation of the path
- Localized algorithms and adaptive fidelity
  - There exist simple, localized algorithms that can adapt their duty cycle...
  - ...and they can increase overall network lifetime
7
Adaptivity I: Robustness in Data Diffusion
A primary goal of data diffusion is robustness through empirical adaptation: measuring and reacting to the environment.
[Plot: mean latency under no failures, 10% node failure, and 20% node failure]
Because of this adaptation, mean latency (shown in the plot) degrades only mildly even with 10%-20% node failure.
8
Adaptivity II: Adaptive Fidelity
- Goal: extend system lifetime while maintaining accuracy
- Approach (see the sketch after this list):
  - Estimate the node density needed for the desired quality
  - Automatically adapt to variations in current density due to uneven deployment or node failure
  - Assumes dense initial deployment or additional node deployment
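A minimal sketch of the density-adaptation step (illustrative thresholds and probabilities; the real mechanism is probabilistic neighborhood estimation, per the status slide below): each node compares the active neighbors it can hear against the density required for the target quality, and sleeps when it is redundant:

```python
# Sketch: adaptive fidelity duty-cycle decision (illustrative values).
import random

def choose_state(active_neighbors_heard, required_density):
    """Return 'sleep' if enough neighbors already cover this area,
    otherwise 'active'. Randomization desynchronizes identical nodes
    so the same subset does not always stay awake."""
    if active_neighbors_heard >= required_density:
        # Sleep with high probability; a few redundant nodes stay up
        # as insurance against sudden neighbor failure.
        return "sleep" if random.random() < 0.9 else "active"
    return "active"

# A node hearing 6 active neighbors where 4 suffice usually sleeps:
print(choose_state(active_neighbors_heard=6, required_density=4))
```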
9
Adaptive Fidelity Status
- Applications:
  - Maintain consistent latency or bandwidth in multihop communication
  - Maintain consistent sensor vigilance
- Status:
  - Probabilistic neighborhood estimation for ad hoc routing
    - 30-55% longer lifetime, at the cost of 2-6 sec higher initial delay
  - Currently underway: location-aware neighborhood estimation
10
Small, Random Identifiers
- Sensor nets have many uses for unique identifiers (packet fragmentation, reinforcement, compression codebooks, ...)
- It is critical to maximize the usefulness of every bit transmitted; each bit reduces network lifetime (Pottie)
- Low data rates + high dynamics = no space to amortize large (guaranteed-unique) IDs or a claim/collide protocol
- So: use small, random, ephemeral transaction IDs? (see the sketch after this list)
  - Locality is key: random IDs can be much smaller than guaranteed-unique IDs when the total network is large but transaction density is small
  - ID collisions lead to occasional losses; persistent losses are avoided because the identifiers are constantly changing
  - The marginal cost of occasional losses is small compared to losses from dynamics, wireless conditions, collisions, ...
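To see why small random IDs suffice when transaction density is low, here is a worked calculation (a standard birthday-bound estimate, not a figure from the talk) of the probability that any two of n concurrent transactions in one neighborhood draw the same k-bit ID:

```python
# Sketch: collision probability for n concurrent k-bit random transaction
# IDs, using the birthday approximation P ~= 1 - exp(-n*(n-1) / 2^(k+1)).
import math

def collision_probability(n_transactions, id_bits):
    space = 2 ** id_bits
    return 1 - math.exp(-n_transactions * (n_transactions - 1) / (2 * space))

# Assumed neighborhood load of 8 concurrent transactions:
for bits in (8, 16, 32):
    print(bits, round(collision_probability(8, bits), 6))
```

With ~8 concurrent transactions, 16-bit random IDs collide with probability around 4e-4, while guaranteed-unique IDs sized for a large network would spend far more of every packet on identifier overhead.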
11
Address-Free Fragmentation (AFF)
AFF allows us to optimize the number of bits used for identifiers (see the worked model below):
- Fewer bits = fewer wasted bits per data bit, but a high collision rate; vs.
- More bits = less waste due to ID collisions, but many bits wasted on headers
[Chart: fragmentation overhead vs. identifier size, data size = 16 bits]
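The tradeoff can be made concrete with a back-of-the-envelope model (my own illustration, reusing the slide's 16-bit data size; the concurrency level is an assumption): expected useful payload per transmitted bit is the payload fraction times the probability the fragment survives an ID collision, and sweeping the ID width locates the sweet spot:

```python
# Sketch: choosing the ID width that maximizes useful bits per bit sent.
# Model (illustrative): each fragment carries DATA_BITS of payload plus an
# id_bits header; a fragment is wasted if its random ID collides with one
# of the other transactions active in the neighborhood.
import math

DATA_BITS = 16          # data size from the slide
N_CONCURRENT = 8        # assumed concurrent transactions (not from the talk)

def efficiency(id_bits):
    payload_fraction = DATA_BITS / (DATA_BITS + id_bits)
    p_no_collision = math.exp(-N_CONCURRENT / 2 ** id_bits)
    return payload_fraction * p_no_collision

best = max(range(1, 33), key=efficiency)
print(f"best ID width: {best} bits, efficiency {efficiency(best):.3f}")
```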
12
Exploit Non-Uniformities I: Tiered Architecture
- Consider a memory hierarchy: registers, cache, main memory, swap space on disk
- Due to locality, it provides the illusion of a flat memory with the speed of registers but the size and price of disk space
- Similar goal in sensor nets: we want a spectrum of hardware within a network, with the illusion of
  - the CPU/memory, range, and scaling properties of the largest nodes
  - the price, numbers, power consumption, and proximity to physical phenomena of the smallest
13
Exploit Non-Uniformities I: Tiered Architecture
- We are implementing a sensor net hierarchy: PC-104s, tags, motes, ephemeral one-shot sensors
- Save energy by (see the sketch after this list):
  - Running the lower-power, more numerous nodes at higher duty cycles than the larger ones
  - Having low-power "pre-processors" activate higher-power nodes or components (Sensoria approach)
- Components within a node can be tiered too
  - Our "tags" are a stack of loosely coupled boards
  - Interrupts activate high-energy assets only on demand
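A sketch of the pre-processor pattern (hypothetical threshold and device names): an always-on, low-power detector watches for a coarse trigger and powers up the expensive tier only when it fires:

```python
# Sketch: low-power tier gating a high-power tier (illustrative values).

THRESHOLD = 0.7   # assumed trigger level for the cheap detector

def tick(cheap_sensor_level, wake_high_power_node):
    """One iteration of the always-on low-power loop. The high-power
    tier (camera, DSP, long-range radio, ...) stays powered off until
    the cheap detector sees something worth a closer look."""
    if cheap_sensor_level > THRESHOLD:
        # Interrupt-style wakeup: spend big energy only on demand.
        wake_high_power_node(reason="trigger", level=cheap_sensor_level)

# Quiet readings leave the expensive tier asleep; a loud one wakes it.
for level in (0.1, 0.2, 0.9):
    tick(level, lambda **kw: print("waking high-power node:", kw))
```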
14
Exploit Non-Uniformities II: Time Synchronization
- Time sync is critical at many layers; some affect energy use and system lifetime
  - TDMA guard bands
  - Data aggregation & caching
  - Localization
- But time sync needs are non-uniform
  - Precision
  - Lifetime
  - Scope & availability
  - Cost and form factor
- No single method is optimal on all axes
15
Exploit Non-Uniformities II: Time Synchronization
- Use multiple modes (see the sketch after this list):
  - "Post-facto" synchronization pulse
  - NTP
  - GPS, WWVB
  - Relative time "chaining"
- Combinations may prove necessary and sufficient to minimize resource waste
  - Don't spend energy to get better sync than the application needs
  - Work in progress...
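A minimal sketch of the post-facto idea (simplified; variable names are mine): nodes timestamp an event with their free-running, unsynchronized local clocks, and only afterwards does a broadcast reference pulse let them place those stamps on a common timescale, so no energy is spent keeping clocks synchronized in advance:

```python
# Sketch: post-facto synchronization (simplified).
# Nodes record an event with unsynchronized local clocks. A third node
# then broadcasts a sync pulse; each node reports how long before the
# pulse its event occurred, which is comparable across nodes because
# the pulse arrives (nearly) simultaneously everywhere in range.

def event_offset(local_event_stamp, local_pulse_stamp):
    """Time from event to pulse, in the node's own clock units."""
    return local_pulse_stamp - local_event_stamp

# Two nodes with wildly different clock origins observe the same event:
node_a = event_offset(local_event_stamp=10_004, local_pulse_stamp=10_250)
node_b = event_offset(local_event_stamp=77_310, local_pulse_stamp=77_556)
# Offsets agree (246 ticks), so the nodes can compare event times
# without ever having synchronized their clocks beforehand.
print(node_a, node_b)
```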
16
Conclusions
- Many promising building blocks exist, but...
- Long-lived often means highly vertically integrated and application-specific
  - Traditional layering is often not possible
- The challenge is creating reusable components that are common across systems
- Create general-purpose tools for building networks, not general-purpose networks