Collection Tree Protocol
Omprakash Gnawali (Stanford University)
with Rodrigo Fonseca (Brown University), Kyle Jamieson (University College London), David Moss (People Power Company), Philip Levis (Stanford University)
ACM SenSys, November 4, 2009
Collection
Anycast route to the sink(s)
–Used to collect data from the network to a small number of sinks (roots, base stations)
–Network primitive for other protocols
A distance vector protocol
(diagram: nodes routing toward a sink)
Common Architecture
(diagram: Application atop Router, Forwarder, and Link Estimator over the Link Layer; the Router and Link Estimator form the control plane, the Forwarder and its Fwd Table the data plane)
Prior Work
–Control plane: ETX, MT, MultiHopLQI, EAR, LOF, AODV, DSR, BGP, RIP, OSPF, Babel
–Data plane: Flush, RMST, CODA, Fusion, IFRC, RCRT
Wireless Link Dynamics
Control and Data Rate Mismatch
Can lead to poor performance
–10 pkt/s data with 1 beacon/30 s control
–0 pkt/s data with 1 beacon/s control
CTP Noe
(diagram: Application, Router, Forwarder, Link Estimator, and Link Layer, with the control and data planes interacting)
CTP Noe's Approach
Enable control and data plane interaction
Two mechanisms for efficient and agile topology maintenance
–Datapath validation
–Adaptive beaconing
Summary of Results
High delivery ratio
–Across testbeds, configurations, and link layers
Compared to MultihopLQI:
–29% lower data delivery cost
–73% fewer routing beacons
–99.8% lower loop detection latency
Robust against disruption
Causes of packet loss vary across testbeds
Outline
Collection
Datapath validation
Adaptive beacons
Evaluation
Conclusion
Datapath Validation
Use data packets to validate the topology
–Inconsistencies
–Loops
Receiver checks for consistency on each hop
–Transmitter's cost is in the packet header
Same time-scale as data packets
–Validate only when necessary
Routing Loops
–Cost does not decrease along the path
(diagram: a loop among nodes A, B, C, D, and X)
Routing Consistency
Next hop should be closer to the destination
Maintain this consistency criterion along a path n_i → n_{i+1} → … → n_k
Inconsistency arises from stale state
Detecting Routing Loops
Datapath validation
–Cost carried in the packet
–Receiver checks it on reception
Inconsistency
–Receiver's cost is not lower than the cost in the packet (e.g., "4.6 < 5.8?" passes; a receiver at 5.8 seeing a packet stamped 4.6 fails)
On inconsistency
–Don't drop the packets
–Signal the control plane
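The per-hop check above can be sketched in a few lines. This is an illustrative sketch, not the TinyOS implementation; the function and parameter names (`validate_on_receive`, `trickle`) are assumptions.

```python
# Sketch of CTP's per-hop datapath validation (hypothetical helper names,
# not the actual TinyOS code). Each data packet carries the transmitter's
# routing cost (ETX); the receiver compares it against its own cost.

def validate_on_receive(packet_cost, my_cost, trickle):
    """Return True if the packet is consistent with the routing gradient.

    packet_cost: ETX the transmitter advertised in the data packet header.
    my_cost:     this node's current ETX to the sink.
    trickle:     beacon timer; reset on inconsistency to repair quickly.
    """
    if my_cost >= packet_cost:
        # Inconsistency (possible loop): cost did not decrease toward the
        # sink. Do NOT drop the packet; keep forwarding it, but signal the
        # control plane so fresh beacons go out at the smallest interval.
        trickle.reset()
        return False
    return True
```

Because the check rides on data packets, topology repair happens at the same time-scale as the traffic that needs it, with no extra probing.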
Outline
Collection
Datapath validation
Adaptive beacons
Evaluation
Conclusion
How Fast to Send Beacons?
A fixed beacon interval
–Can be too fast
–Can be too slow
–Agility-efficiency tradeoff
Is agile + efficient possible?
Routing as Consistency
Routing as a consistency problem
–Costs along a path must be consistent
Use a consistency protocol in routing
–Leverage research on consistency protocols
–Trickle
Trickle
Detecting inconsistency
–Code propagation: version number mismatch
–Does not work for routing: use path consistency instead
Controlling propagation rate
–Start with a small interval
–Double the interval up to some maximum
–Reset to the small interval when inconsistent
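The interval-control half of Trickle can be sketched as below. The class and the default interval bounds are illustrative assumptions, not the talk's or TinyOS's actual parameters.

```python
import random

class TrickleTimer:
    """Minimal sketch of Trickle's interval control.

    i_min / i_max are placeholders; real deployments pick these bounds
    per protocol (the talk reports long-run intervals of roughly 8 min).
    """

    def __init__(self, i_min=0.125, i_max=512.0):
        self.i_min = i_min        # smallest interval (seconds)
        self.i_max = i_max        # largest interval (seconds)
        self.interval = i_min

    def next_fire(self):
        """Pick a transmit time in [I/2, I], then double I up to i_max."""
        t = random.uniform(self.interval / 2, self.interval)
        self.interval = min(self.interval * 2, self.i_max)
        return t

    def reset(self):
        """On detecting an inconsistency, fall back to the smallest interval."""
        self.interval = self.i_min
```

When the network is consistent, the interval keeps doubling, so steady-state overhead decays; any inconsistency snaps it back to `i_min`, which is what makes the protocol both efficient and agile.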
Control Traffic Timing
Extend Trickle to time routing beacons
Reset the interval when:
–ETX(receiver) >= ETX(sender)
–Significant decrease in gradient
–"Pull" bit is set
(timeline: the interval doubles between beacon transmissions until a reset event)
Adaptive Beacon Timing (Tutornet)
Infrequent beacons in the long run: interval grows to ~8 min
Adaptive vs Periodic Beacons (Tutornet)
Less overhead compared to 30 s periodic beaconing
–Periodic: 1.87 beacons/s
–Adaptive: 0.65 beacons/s
(plot: total beacons per node over time in minutes)
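As a sanity check on the periodic figure: assuming the 56-node Tutornet deployment mentioned in the node-removal experiment later in the talk, a 30 s per-node beacon period works out to roughly the reported network-wide rate.

```python
# Network-wide beacon rate for periodic beaconing (assumes 56 nodes,
# the Tutornet size quoted in the node-removal experiment).
nodes = 56
beacon_period_s = 30.0
network_rate = nodes / beacon_period_s   # beacons/s across the whole network
print(round(network_rate, 2))            # ~1.87 beacons/s
```

The adaptive scheme's 0.65 beacons/s is therefore close to a 3x reduction in control traffic on this testbed.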
Node Discovery (Tutornet)
A new node introduced mid-experiment
–Path established in < 1 s
Efficient and agile at the same time
(plot: total beacons over time in minutes)
Outline
Collection
Datapath validation
Adaptive beacons
Evaluation
Conclusion
Experiments
–12 testbeds
–Varying node counts
–7 hardware platforms
–4 radio technologies
–6 link layers
Variations in hardware, software, RF environment, and topology
Evaluation Goals
Reliable?
–Packets delivered to the sink
Efficient?
–Transmissions required per packet delivered
Robust?
–Performance under disruption
CTP Noe Trees
(figures: routing trees on the Kansei, Twist, and Mirage testbeds)
Reliable, Efficient, and Robust
High end-to-end delivery ratio (but not on all the testbeds!)
(table: delivery ratio per testbed, with retransmit and false-ack loss causes, for Wymanpark, Vinelab, Tutornet, NetEye, Kansei, Mirage-MicaZ, Quanto, Blaze, Twist-Tmote, Mirage-Mica2dot, Twist-eyesIFXv, and Motelab)
Reliable, Efficient, and Robust (Tutornet)
High delivery ratio across time (short experiments can be misleading!)
(plot: delivery cost per packet vs. time in hours)
Reliable, Efficient, and Robust (Tutornet)
Low data and control cost for CTP Noe
(plot: cost comparison)
Reliable, Efficient, and Robust (Motelab, 1 pkt/5 min)
Low duty cycle with low-power MACs
(table: duty cycle per link layer)
Reliable, Efficient, and Robust (Tutornet)
10 out of 56 nodes removed at t = 60 min
No disruption in packet delivery
(plot: delivery ratio vs. time in minutes)
Reliable, Efficient, and Robust (Tutornet)
Nodes reboot every 5 min
Delivery ratio > 0.99
High delivery ratio despite serious network-wide disruption (most loss due to reboot while buffering a packet)
(plot: routing beacons over time, ~5 min reboot cycle)
CTP Noe Performance Summary
Reliability
–Delivery ratio > 90% in all cases
Efficiency
–Low cost and 5% duty cycle
Robustness
–Functional despite network disruptions
Acknowledgments
For testbed access and experiment help: Anish Arora, Geoffrey Werner Challen, Prabal Dutta, David Gay, Stephen Dawson-Haggerty, Timothy Hnat, Ki-Young Jang, Xi Ju, Andreas Köpke, Razvan Musaloiu-E., Vinayak Naik, Rajiv Ramnath, Mukundan Sridharan, Matt Welsh, Kamin Whitehouse, Hongwei Zhang
For bug reports, fixes, and discussions: Mehmet Akif Antepli, Juan Batiz-Benet, Jonathan Hui, Scott Moeller, Remi Ville, Alec Woo, and many others
Thank You!
Conclusion
"Hard" networks → good protocols
–Tutornet & Motelab
Wireless routing benefits from data and control plane interaction
Lessons applicable to distance vector routing
–Datapath validation & adaptive beaconing
Data traces from all the testbeds are available