1
Chase++: Fountain-Enabled Fast Flooding in Asynchronous Duty Cycle Networks Zhichao Cao, Jiliang Wang, Daibo Liu, Xin Miao, Qiang Ma and Xufei Mao INFOCOM 2018, Hawaii Thanks to the chair for the introduction. I will introduce our protocol Chase++, which enables fast flooding with fountain codes in asynchronous duty cycle networks.
2
Internet of Things (IoT)
Urban CO2 Monitoring Oilfield Monitoring In the Internet of Things, many devices are connected by wireless technologies such as Wi-Fi, ZigBee, Bluetooth, LoRa and RFID. This work grew out of two real ad-hoc wireless networks we have deployed for IoT applications. One is a sensor network with ZigBee radios: over 1200 sensors are deployed in a 1.12 km² area to monitor the urban environment of Wuxi city. The other is a mesh network with both Wi-Fi and ZigBee radios: over 80 mesh nodes cover 60 well pads. This system is deployed for PetroChina and used to collect real-time oil production data and security camera data. 1200+ sensor nodes cover over 1.12 km² in Wuxi, China. 80+ mesh nodes cover 60+ well pads, PetroChina. 12/4/2018 INFOCOM 2018, Hawaii
3
Asynchronous Duty Cycle (ADC)
In practice, since sensor nodes are energy constrained, one important problem is how to extend the lifetime of each node, saving the cost of battery recharging and replacement. Asynchronous duty cycling is an efficient way to reduce the radio's energy consumption. In asynchronous duty cycle mode, every node periodically turns on its radio and samples the signal strength to detect potential data traffic. Different nodes have different sleep schedules, so a sender keeps transmitting copies of the same data packet, called preamble packets, to wake up the receiver. In the example, sender S keeps transmitting preamble packets; A and B detect the transmission at different times and each stays awake for a while to receive a preamble packet. Without synchronization, the cost of state maintenance is low. A B Low-cost approach to achieve extremely low power consumption.
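The wake-up mechanism described above can be sketched as a toy discrete-time model (all names and timing constants here are illustrative, not taken from the paper or the TinyOS implementation):

```python
# Toy discrete-time model of asynchronous duty cycling (low-power listening).
# All timing constants are illustrative, not from the paper.

WAKE_PERIOD = 100    # each receiver wakes once every 100 time units
LISTEN_WINDOW = 6    # ...and listens for 6 units after waking
PREAMBLE_GAP = 5     # the sender repeats a preamble packet every 5 units

def first_reception(sender_start, receiver_offset, horizon=1000):
    """Return the first time a receiver's listen window overlaps a
    preamble transmission, or None within the horizon."""
    for t in range(sender_start, horizon):
        sending = (t - sender_start) % PREAMBLE_GAP == 0
        awake = (t - receiver_offset) % WAKE_PERIOD < LISTEN_WINDOW
        if sending and awake:
            return t
    return None

# Two unsynchronized receivers hear the same sender at different times.
print(first_reception(sender_start=10, receiver_offset=3))   # wakes at 103, hears preamble at 105
print(first_reception(sender_start=10, receiver_offset=60))  # wakes at 60, hears preamble at 60
```

The point mirrors the slide: because the schedules of A and B are unsynchronized, the sender must repeat the preamble until each receiver's next listen window arrives.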
4
Data Delivery Network flooding Parameter update Time Synchronization
Binary image dissemination Besides extending node lifetime, another important aspect is the efficiency of data delivery. In this work, we focus on network flooding, which is widely used in critical services such as parameter update, time synchronization and binary image dissemination. Each node rebroadcasts the received packets until all nodes have successfully received the data.
5
ADC Flooding Protocols
Trickle Timer Transmission Overhear Asynchronous duty cycle network flooding is not a new topic; we categorize existing designs by both control cost and concurrency. Deluge and Drip use a trickle timer and transmission overhearing, respectively, to increase the backoff time and reduce the probability of contention and collision. Although there is no control cost, the concurrency is almost zero because a long backoff is usually needed, especially in dense networks. Deluge, Sensys’04 Drip, EWSN’05
6
ADC Flooding Protocols
Probe Coordination Local Synchronization Link Quality Aware Link Correlation Aware ADB, Sensys’09 OFlood, Mobicom’09 CFlood, NSDI’10 ECD, ICNP’11 Further, ADB, OFlood, CFlood and ECD exploit explicit sleep schedules and link properties to efficiently reduce the backoff time. However, link estimation and local time synchronization incur extra cost. Deluge, Sensys’04 Drip, EWSN’05
7
ADC Flooding Protocols
Glossy, IPSN’11 Splash, NSDI’13 Pando, Sensys’15 Time Synchronization Constructive Interference ADB, Sensys’09 OFlood, Mobicom’09 CFlood, NSDI’10 ECD, ICNP’11 In recent years, Glossy, Splash and Pando have used constructive interference to let nodes forward data concurrently. However, due to the strict requirements on time synchronization, a large cost is needed to wake all nodes up from their asynchronous duty cycle sleep. Deluge, Sensys’04 Drip, EWSN’05
8
ADC Flooding Protocols
Glossy, IPSN’11 Splash, NSDI’13 Pando, Sensys’15 Chase++ Chase, ICNP’16 Random Capture Effect Adaptive Tail Extension ADB, Sensys’09 OFlood, Mobicom’09 CFlood, NSDI’10 ECD, ICNP’11 Based on the capture effect, Chase uses random inter-packet intervals and adaptive tail extension to achieve concurrent broadcasting with no control cost. However, the efficiency of concurrent broadcast degrades when packets are long. Our protocol Chase++ sits here: it provides more efficient concurrent broadcasting than Chase. Deluge, Sensys’04 Drip, EWSN’05
9
Observation of Chase++
In Chase, the tail time is long when the payload is long. Here is the key observation behind our protocol. We conducted experiments on a 50-node TelosB testbed to evaluate the performance of Chase under different payload lengths; the results are shown in this figure. The tail length indicates the efficiency of concurrent broadcasting: the longer the tail, the higher the delay. We can see that the listen tail length grows with the payload length. Specifically, when the payload length increases from 10 bytes to 100 bytes, the median tail length increases by 3.6x and the 75th-percentile tail length by 5.9x.
10
An Intuitive Solution Split a long packet into several short packets.
According to this observation, short packets have a lower tail time, so for a long packet an intuitive way to speed up network flooding is to split it into several short packets. In this example, if the three senders concurrently broadcast the long packet, the three receivers may experience a long delay.
11
An Intuitive Solution Split a long packet into several short packets.
If we split the long packet into three short packets (1, 2, 3), the three senders concurrently broadcast the three short packets one by one. The three receivers can then quickly collect all three short packets and recover the original long packet.
12
Challenges Long tail of short packet collection
The continuous loss of one short packet Channel utilization optimization Large packet size Crowded channel Small packet size Collect more packets However, this intuitive solution does not hold up in all situations; it faces two main challenges. First, considering packet loss, if a receiver repeatedly loses the same short packet, the delay becomes even longer, so the long tail problem remains. Second, optimizing channel utilization is not easy. If the short packets are still large, the channel becomes too crowded, channel utilization degrades and the delay grows. If the short packets are too small, a receiver has to collect more of them to recover the original long packet, which underutilizes the channel and may also make the delay high.
13
Basic Idea of Chase++ Rateless Coding Adaptive packet size setting.
Mitigating the negative influence of continuous packet loss. Design issue: How to adaptively partition a long packet and select coding schemes? Adaptive packet size setting. Improving channel utilization. Design issue: How to quantify the relationship between current channel utilization and packet size at low cost? The basic ideas of Chase++ are as follows. To address the long tail problem caused by single packet loss, we use rateless coding to mitigate the negative influence of packet loss; to make this practical, the design issue is how to adaptively partition a long packet and select coding schemes. To optimize channel utilization, each node adaptively sets its packet size; here the design issue is how to quantify, at low cost, the relationship between current channel utilization and packet size.
14
Payload Partition Model
This figure shows an example of our payload partition model. Given a long payload (70 bytes) and a payload block size (10 bytes), we first partition it into several small payload blocks. Then we use a fountain code to encode these small payload blocks into many rateless payload blocks. Finally, according to the current channel state, we can adjust the length of the preamble packet by batching a different number of rateless payload blocks (here, 3).
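The partition-and-batch step can be sketched as follows, using the slide's 70-byte payload / 10-byte block / 3-blocks-per-packet example (the function names are ours, for illustration only):

```python
def partition(payload: bytes, block_size: int):
    """Split a payload into fixed-size blocks, zero-padding the last one."""
    pad = (-len(payload)) % block_size
    payload += b"\x00" * pad
    return [payload[i:i + block_size] for i in range(0, len(payload), block_size)]

def batch(encoded_blocks, batch_size):
    """Group rateless blocks so each preamble packet carries batch_size of them."""
    return [encoded_blocks[i:i + batch_size]
            for i in range(0, len(encoded_blocks), batch_size)]

blocks = partition(b"x" * 70, 10)   # 70-byte payload, 10-byte blocks
assert len(blocks) == 7             # K = 7 source blocks
packets = batch(blocks, 3)          # e.g. 3 blocks per preamble packet
assert len(packets) == 3            # 3 + 3 + 1 blocks
```

In the actual protocol the batched blocks are the rateless (encoded) blocks rather than the source blocks, and the batch size is chosen per node from the channel state, as the following slides describe.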
15
Fountain Coding Schemes (Encoding)
Linear Random (LR) Coding LT Coding Regular Generator Matrix Irregular Generator Matrix The first problem is to choose a coding scheme with good encoding and decoding efficiency. We compare two widely used fountain coding schemes: Linear Random (LR) coding and LT coding. For encoding, LR coding uses a regular generator matrix. As shown in this figure, assume we have K short payload blocks in total and want to generate N rateless payload blocks; every encoded payload block contains each short payload block with probability 50%. In LT coding, by contrast, the generator matrix is irregular: it randomly chooses the short payload blocks for each coded packet, following a predetermined degree distribution.
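A minimal sketch of LR encoding as just described: each of the N rateless blocks XOR-combines a random half of the K source blocks. Packing the generator-matrix row into an integer bitmask is our representation choice, not the paper's:

```python
import random

def lr_encode(blocks, n, rng=random.Random(1)):
    """Generate n rateless blocks from k equal-size source blocks.
    Each source block is included with probability 1/2; the included
    blocks are XOR-combined. Bit j of the returned bitmask records
    whether source block j was included."""
    k, size = len(blocks), len(blocks[0])
    coded = []
    while len(coded) < n:
        bits = 0
        payload = bytearray(size)
        for j in range(k):
            if rng.random() < 0.5:
                bits |= 1 << j
                for i in range(size):
                    payload[i] ^= blocks[j][i]
        if bits:                     # an all-zero row carries no information
            coded.append((bits, bytes(payload)))
    return coded

coded = lr_encode([b"aaaa", b"bbbb", b"cccc"], n=5)
assert len(coded) == 5 and all(bits for bits, _ in coded)
```

An LT encoder would differ only in how the row is drawn: it would first sample a degree from the predetermined distribution, then pick that many blocks uniformly.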
16
Fountain Coding Schemes (Decoding)
Linear Random (LR) Coding LT Coding Gaussian Elimination High computation complexity Fewer rateless packets for full-rank decoding Belief Propagation Lightweight computation More rateless packets for decoding For decoding, LR coding uses Gaussian elimination, which has high computation complexity, but the receiver needs fewer rateless packets to construct a full-rank matrix. In comparison, thanks to the non-uniform encoding distribution, LT coding can use a lightweight belief propagation algorithm for decoding, but the receiver needs more rateless packets. We empirically study the performance of both coding schemes in the evaluation.
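Gaussian elimination over GF(2) for LR decoding can be sketched as below (a Gauss-Jordan variant that keeps pivot rows fully reduced; the (coefficient-bitmask, payload) row format is our illustrative representation):

```python
def lr_decode(coded, k):
    """Decode k source blocks from rateless rows via Gauss-Jordan
    elimination over GF(2). Each row is (coeff_bits, payload), where
    bit j of coeff_bits means source block j was XORed in. Returns the
    k source blocks, or None if the rows never reach full rank."""
    pivots = {}  # leading-bit position -> (coeff_bits, payload bytearray)
    for bits, payload in coded:
        payload = bytearray(payload)
        # Reduce the incoming row against every existing pivot row.
        for pbits, ppay in pivots.values():
            if bits >> (pbits.bit_length() - 1) & 1:
                bits ^= pbits
                for i in range(len(payload)):
                    payload[i] ^= ppay[i]
        if bits == 0:
            continue  # linearly dependent: carries no new information
        lead = bits.bit_length() - 1
        # Back-substitute so each pivot column stays unique to its row.
        for l2, (pbits, ppay) in list(pivots.items()):
            if pbits >> lead & 1:
                pivots[l2] = (pbits ^ bits,
                              bytearray(a ^ b for a, b in zip(ppay, payload)))
        pivots[lead] = (bits, payload)
        if len(pivots) == k:  # full rank: each row is now a unit vector
            return [bytes(pivots[j][1]) for j in range(k)]
    return None

blocks = [b"AAAA", b"BBBB", b"CCCC"]
rows = [(0b011, bytes(a ^ b for a, b in zip(blocks[0], blocks[1]))),
        (0b010, blocks[1]),
        (0b101, bytes(a ^ b for a, b in zip(blocks[0], blocks[2])))]
assert lr_decode(rows, 3) == blocks
```

The cost per received row is O(k) XORs of row-length payloads, which is why the slide labels this "high computation complexity" relative to LT's belief propagation, while the full-rank condition is typically met after only slightly more than k received rows.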
17
Channel Utilization Metric
Channel Capacity Redundancy (CCR) Reflects the remaining channel resources during concurrent broadcasting. The Maximum Inter Packet Interval The Maximum Packet On-air Time We empirically propose a metric called CCR (Channel Capacity Redundancy) to reflect the remaining channel resources during concurrent broadcasting. In the equation, delta indicates the spatial efficiency of the capture effect: the larger delta is, the more channel resource remains for sending. IPPI-max is the maximum inter-packet interval between adjacent preamble packets: the larger it is, the less frequently a node competes for the channel. Nc plus Nt is the number of concurrent senders: the larger it is, the less channel resource remains. T-max-on-air is the maximum packet on-air time, which reflects the maximum packet length. From the equation, an important input for computing CCR is an estimate of the number of concurrent senders. Spatial Efficiency of Capture Effect The Number of Concurrent Senders
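The slide describes the CCR equation only qualitatively, so the sketch below is one plausible reading of it (our assumption, not the paper's exact formula): the spare air time within the maximum inter-packet interval after the concurrent senders' longest packets are accounted for.

```python
def ccr(delta, ippi_max, n_concurrent, t_max_on_air):
    """Channel Capacity Redundancy, assumed form: spare air time within
    the maximum inter-packet interval. It grows with the spatial
    efficiency delta and with IPPI-max, and shrinks with the number of
    concurrent senders (Nc + Nt) and the maximum packet on-air time."""
    return delta * ippi_max - n_concurrent * t_max_on_air

# With delta = 1, a 100 ms interval, 4 senders and 4 ms packets,
# 84 ms of capacity per interval remains unclaimed.
print(ccr(1.0, 0.100, 4, 0.004))
```

Whatever its exact form, the monotonic dependencies match the slide's description, and all inputs except Nc + Nt are known locally; estimating the sender count is the subject of the next slide.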
18
Concurrent Sender Estimation
Locally sampled RSS (Received Signal Strength) Sequence Spatial RSS value clustering Temporal feature checking We estimate the number of concurrent senders from a locally sampled RSS sequence. The first step of our method is spatial RSS value clustering: as shown in the figure, different concurrent senders ideally yield different sampled RSS values, so we can estimate their number by clustering the RSS values. However, several concurrent senders may have similar RSS values, so we further use temporal features to calibrate the estimate. In this way, a node can locally estimate the CCR without any extra control cost.
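A minimal sketch of the spatial-clustering step: sort the sampled RSS values and start a new cluster whenever the gap to the previous value exceeds a threshold. The threshold value is an assumption of ours, and the temporal-feature calibration is omitted; the paper's algorithm is more involved.

```python
RSS_GAP_DBM = 3  # assumed clustering threshold, not from the paper

def estimate_senders(rss_samples, gap=RSS_GAP_DBM):
    """Estimate the number of concurrent senders as the number of
    distinct RSS clusters in the locally sampled sequence."""
    values = sorted(set(rss_samples))
    if not values:
        return 0
    clusters = 1
    for prev, cur in zip(values, values[1:]):
        if cur - prev > gap:
            clusters += 1
    return clusters

# Samples around -70, -60 and -45 dBm suggest three concurrent senders.
print(estimate_senders([-70, -69, -71, -60, -59, -45, -46]))
```

This is exactly where the failure mode in the evaluation comes from: two senders whose RSS values fall within one gap collapse into a single cluster, which is why the temporal check is needed as a second pass.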
19
Batch Size Calculation The number of payload blocks in each preamble packet MAC Header Size Radio Bandwidth Finally, from the estimated CCR, the payload block size lp, the MAC header size lmac, and the radio bandwidth B, we can calculate the batch size lambda for each individual node. The minimum batch size is 1. Payload Block Size
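Since the slide names only the inputs (CCR, lp, lmac, B) and the floor of 1, the following is a hedged guess at the calculation rather than the paper's exact expression: convert the spare channel time into spare bytes via the bandwidth, subtract the header, and fit as many blocks as possible.

```python
def batch_size(ccr_seconds, block_size, mac_header, bandwidth):
    """Batch size lambda: how many payload blocks one preamble packet
    carries. Assumed form: spare bytes of air time (CCR x bandwidth,
    minus the MAC header) divided by the block size, floored at 1."""
    spare_bytes = ccr_seconds * bandwidth - mac_header
    return max(1, int(spare_bytes // block_size))

# e.g. 3 ms of spare time at 31,250 B/s (250 kbps), an 11-byte MAC
# header and 10-byte blocks:
print(batch_size(0.003, 10, 11, 31250))  # fits 8 blocks per packet
```

The 250 kbps figure matches the TelosB's 802.15.4 radio; the 11-byte header is an illustrative assumption. When the channel is saturated (CCR near zero) the floor keeps every node sending at least one block per preamble packet.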
20
Implementation TelosB, TinyOS 2.1.2 Parameter Settings
We implement Chase++ with TinyOS on the TelosB platform. We empirically set the system parameters, such as the RSS threshold of the clustering algorithm.
21
Evaluation Indriya Testbed[1] Local Testbed Public Testbed 95 TelosB
Deployed across 3 floors at NUS. Local Testbed 50 TelosB [1] M. Doddavenkatappa, M.C. Chan, and A.L. Ananda, "Indriya: A Low-Cost, 3D Wireless Sensor Network Testbed," In TRIDENTCOM, 2011. We evaluate the performance of Chase++ on two testbeds. Indriya is a public testbed containing 95 TelosB nodes deployed across 3 floors at NUS. We also use a local testbed with 50 TelosB nodes, which is denser than Indriya.
22
Payload Partitioning and Coding
Payload Block Size Coding Schemes Local Testbed Local Testbed We first evaluate the influence of the payload block size and coding scheme on the Local Testbed. Setting the payload block size to 10 bytes best balances flexibility in adjusting the preamble packet size against the decoding overhead. As expected, LR coding has better decoding efficiency but worse encoding efficiency. Since the number of rateless payload blocks is limited, the encoding delay is small compared with the packet collection delay, so the overall performance of LR coding is better. LR is preferred.
23
Channel Utilization Estimation
Error of Concurrent Sender Estimation Spatial Efficiency of Capture Effect Local Testbed Both Testbeds Then we evaluate the efficiency of channel utilization estimation. The error of concurrent sender estimation grows with the number of concurrent senders: the average estimation error is 3 when there are 10 concurrent senders, because the overlapping of signals significantly degrades our algorithm. We also evaluate the influence of different delta values on both testbeds. On the Local Testbed, the best performance appears when delta is set to 1; in contrast, the best delta is 2 on the Indriya testbed. This verifies that Indriya has better spatial efficiency of the capture effect due to its wide-area deployment.
24
Testbed Experiments Average Completion Time
23.6% and 13.4% improvement compared with the state of the art on the Local and Indriya Testbeds. Finally, this figure shows the completion time distribution of the different flooding protocols in the testbed experiments. The results show that, compared with Chase, when the payload length is 100 bytes, Chase++ provides about 23.6% and 13.4% average improvement on the Local Testbed and the Indriya testbed, respectively.
25
Summary With low computation and zero communication cost, Chase++ speeds up network flooding in low duty cycle wireless networks. Key strategies: Fountain coding; Adaptive channel utilization estimation. Achieves better performance on two real testbeds. To conclude, we propose Chase++, which speeds up network flooding in low duty cycle wireless networks. The key strategies are fountain-coding-based payload partition and adaptive channel utilization estimation using RSS sequences. Experiments on real testbeds show better performance than the state of the art.
26
Thank you Q&A