Announcement: office hours
This week: today 4:30-5:30pm, Friday 10-11am
Next week: Friday 3-4pm
Leveraging Diversity
Antenna diversity: MIMO, MUP, MRD
Topology diversity: channel assignment, SSCH, partially overlapping channels, power control
Path diversity: routing metrics, opportunistic routing
Application diversity: delay-tolerant vs. delay-intolerant
A Multiple-Radio Unification Protocol for IEEE 802.11 Wireless Networks
Atul Adya, Paramvir Bahl, Jitendra Padhye, Alec Wolman, Lidong Zhou
Microsoft Research
Question: given multiple radios on each node, how can they be effectively utilized?
Proposed solution: MUP
"Unifies" the operation of multiple 802.11 radios tuned to different channels
Objective: use as much of the available bandwidth as possible for improved performance with existing 802.11 hardware/software
Assumption: nodes are equipped with multiple NICs with similar properties
4 Goals of MUP design
Must not require changes to existing hardware (though it does assume IEEE 802.11e-capable hardware)
Must not require changes to existing application, transport, or routing protocols
Must inter-operate with legacy hardware
Must not require knowledge of network topology
MUP high level architecture Single virtual MAC Periodically monitors “channel quality” across all NICs Prior to transmission, selects the channel with highest quality
Packet transmission using MUP: initialization
[Diagram: protocol stack — Application / Transport / Network / Logical Link Control with ARP / MUP with a periodically updated neighbor table / NIC1 ... NICn, each tuned to a distinct channel (C0, C1, ..., Ck) set at power-up]
Packet transmission using MUP
[Diagram: a packet flows down the stack to MUP, which looks up the neighbor's per-interface SRTT values (SRTT0, SRTT1, ..., SRTTk) in the neighbor table and selects the interface on which to transmit]
Which interface to use?
Option 1: MUP-Random — pick an interface at random; potentially reduces contention, or NOT: it could select a channel currently in use
Option 2: based on a "Channel Quality" metric — send periodic probe messages across all interfaces, measure the delay until an acknowledgment is received, and compute a smoothed round-trip time (SRTT)
Completely autonomous operation
2 Major components of MUP Neighbor discovery and classification Construct MUP Neighbor Table Communication protocol
Neighbor discovery
MUP intercepts ARP requests/responses
[Diagram: an ARP request ("Who has Dest_IP? Tell Src_IP") is broadcast across all interfaces NICa ... NICn, each with its own MAC address]
Neighbor discovery (continued)
[Diagram: ARP replies ("Dest_IP is at MAC_addr...") arrive on each interface; MUP records the neighbor's MAC address per interface in a neighbor table entry]
Neighbor classification
A MUP-enabled node broadcasts a CS (channel select) message across all resolved interfaces
ARP is initiated repeatedly, up to N times, to resolve unresolved interfaces
A MUP-enabled neighbor responds with a CS-ACK; legacy nodes ignore the CS message
Update the neighbor table entry accordingly
Delete entries that time out with no activity
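The discovery and classification steps above can be sketched as a small state machine over the neighbor table. This is an illustrative sketch only: the entry states, retry limit, and timeout value are assumptions, not values from the paper.

```python
import time

# Hypothetical sketch of the MUP neighbor table. Entry states, the N-retry
# limit, and the timeout are illustrative assumptions.
UNRESOLVED, LEGACY, MUP_ENABLED = "unresolved", "legacy", "mup"

class NeighborTable:
    def __init__(self, max_arp_retries=3, entry_timeout=30.0):
        self.entries = {}          # ip -> dict(state, macs, retries, last_seen)
        self.max_arp_retries = max_arp_retries
        self.entry_timeout = entry_timeout

    def on_arp_reply(self, ip, nic, mac):
        e = self.entries.setdefault(
            ip, {"state": UNRESOLVED, "macs": {}, "retries": 0,
                 "last_seen": time.time()})
        e["macs"][nic] = mac       # one neighbor MAC learned per interface
        e["last_seen"] = time.time()

    def on_cs_ack(self, ip):
        # A CS-ACK proves the neighbor runs MUP; legacy nodes ignore CS.
        if ip in self.entries:
            self.entries[ip]["state"] = MUP_ENABLED
            self.entries[ip]["last_seen"] = time.time()

    def retry_or_classify_legacy(self, ip):
        # ARP is re-initiated up to N times; after that, treat as legacy.
        e = self.entries[ip]
        if e["state"] == UNRESOLVED:
            e["retries"] += 1
            if e["retries"] >= self.max_arp_retries:
                e["state"] = LEGACY

    def expire(self):
        # Delete entries that time out with no activity.
        now = time.time()
        self.entries = {ip: e for ip, e in self.entries.items()
                        if now - e["last_seen"] < self.entry_timeout}
```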
MUP communication protocol Two MUP-enabled nodes periodically exchange probe packets Probe packets are CS messages Sent approximately every 500 msec Node immediately replies with CS-ACK Node measures packet latency Node computes “Channel Quality” as smoothed round-trip time (SRTT)
SRTT
Motivation for SRTT: heavily used channels typically experience a large delay to gain access to the medium, and interference from external devices can cause excessive packet delays or even loss
Problem: probe packets can experience very large queuing delay if the node is sending a large amount of data
Solution: require NICs to support priority queues
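The per-interface channel-quality metric can be sketched as an exponentially weighted moving average of probe round-trip times. The smoothing factor and the loss penalty below are assumptions; the slides say only that a smoothed RTT is kept per NIC and adjusted for lost CS/CS-ACK packets.

```python
# Minimal sketch of per-interface SRTT tracking from CS/CS-ACK probes.
# alpha and penalty_ms are assumed values, not taken from the paper.
class ChannelQuality:
    def __init__(self, alpha=0.125):
        self.alpha = alpha
        self.srtt = {}             # nic -> smoothed RTT in ms

    def on_probe_rtt(self, nic, rtt_ms):
        if nic not in self.srtt:
            self.srtt[nic] = rtt_ms            # first sample seeds SRTT
        else:
            s = self.srtt[nic]
            self.srtt[nic] = (1 - self.alpha) * s + self.alpha * rtt_ms

    def on_probe_loss(self, nic, penalty_ms=50.0):
        # Lost CS/CS-ACK packets inflate SRTT so busy channels look worse.
        if nic in self.srtt:
            self.srtt[nic] += penalty_ms

    def best_nic(self):
        return min(self.srtt, key=self.srtt.get)
```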
IEEE 802.11e overview
Relatively new standard adding Quality of Service support to 802.11
Hardware to be available "soon"
Provides 8 separate priority queues per station, ranging from best-effort to voice
Each priority level has specific backoff settings (arbitration interframe spacing)
Switching channels
A channel is chosen only if it provides at least a 10% improvement in SRTT (to avoid flapping); otherwise the current channel is kept
Randomize the time interval between channel changes, to avoid synchronized switching across nodes
SRTT is adjusted for lost CS and CS-ACK packets
Once the decision to switch is made, when should the switch actually happen?
Current policy: switch immediately
Possible problem: new packets will likely be transmitted before the old queue empties; if 3 packets arrive out of order (triggering 3 duplicate ACKs), TCP halves its congestion window
Possible solution: allow the old queue to drain before queuing on the new interface; however, this could introduce significant delay
Interference experiments
How many orthogonal channels are there really?
3 configurations: Netgear WAB501 cards in 802.11a and 802.11b modes; Cisco Aironet 340 cards in 802.11b mode
6" separation for Netgear NICs; 3" vertical separation for Cisco NICs
[Diagram: nodes A, B, C, D with two concurrent TCP transfers; each hop tuned per experiment]
Interference experiment results
Protocol evaluation
Through implementation and NS-2 simulations
NDIS driver under Windows XP
Channel-switching and queuing-impact experiments with CBR (50 Kbps) and TCP traffic
Channel switching results
[Timeline plot with annotations: "A switches to ch1", "start C to D transfer over ch11", "A switches to ch11", "end transfer", "start new D transfer over ch1"]
Impact of queuing A sends TCP traffic to B over channel 1 C sends TCP traffic to D over channel 11 IEEE 802.11e is needed to accurately measure channel quality.
Simulation setup Modified NS-2 to send CS and CS-ACK at high priority 12 wireless nodes, all in communication range Traffic pattern 2 of 12 send probe packets and measure SRTT Other 10 (5 pairs) send CBR traffic at 200 Kbps Repeated using Web-like traffic
Computed SRTT results
Benefits of intelligent channel selection
16-node grid using the AODV routing protocol
Traffic patterns: S-node sends FTP traffic to D-node over multiple hops; 4 UDP flows with source/destination pairs chosen randomly
On/Off times set randomly; traffic intensity is the ratio of mean On time to mean Off time
Intelligent channel selection results
[Figure: results as tunable parameters are varied]
Traffic striping
Packets are transmitted concurrently across multiple interfaces (instead of a single interface), potentially increasing throughput
Simple striping: if a source node can communicate with the destination node over multiple interfaces, packets are transmitted in round-robin fashion
Load-aware striping: same as above, but stripes only across interfaces whose measured SRTTs are within 10% of the best
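The two striping policies above can be sketched directly. This is an illustrative sketch: the paper describes the policies at this level but not this interface.

```python
from itertools import cycle

# Simple striping: round-robin packets over all usable interfaces.
def simple_stripe(packets, nics):
    rr = cycle(nics)
    return [(pkt, next(rr)) for pkt in packets]

# Load-aware striping: stripe only over interfaces whose SRTT is within
# 10% of the best-performing interface.
def load_aware_nics(srtt, margin=0.10):
    best = min(srtt.values())
    return [nic for nic, s in srtt.items() if s <= (1.0 + margin) * best]
```

Usage: `simple_stripe(queue, load_aware_nics(srtt_table))` combines the two, striping round-robin over only the near-best interfaces.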
Striping channel selection results
[Figure: percentage improvement over a single channel]
Web traffic in suburban topology Simulated topology Seattle suburban area 35 houses selected at random for Internet access 4 houses surfing the web Web server located at access point 3 Scenarios All legacy, half MUP, all MUP
Web traffic results
Discussion Other metrics for channel quality? Unicast or broadcast? MUP or striping?
Improving Loss Resilience with Multi-Radio Diversity in Wireless Networks Allen Miu, Hari Balakrishnan MIT Computer Science and Artificial Intelligence Laboratory C. Emre Koksal Computer and Communication Sciences, EPFL
Motivation Wireless channels are loss-prone Current solutions to cope with loss Automatic repeat request (ARQ) FEC Rate adaptation MIMO Pros & Cons?
Today’s wireless LAN (e.g., 802.11)
Uses only one communication path
[Diagram: client — AP1 — Internet]
Multi-Radio Diversity (MRD) – Uplink
Allow multiple APs to simultaneously receive transmissions from a single transmitter
[Diagram: client — AP1 (10% loss) and AP2 (20% loss) — MRDC — Internet]
Loss independence: simultaneous loss = 10% × 20% = 2%
Multi-Radio Diversity (MRD) – Downlink
Allow multiple client radios to simultaneously receive transmissions from a single transmitter
[Diagram: Internet — MRDC — AP1/AP2 — client]
Are losses independent among receivers? Broadcast 802.11 experiment at fixed bit-rate: 6 simultaneous receivers and 1 transmitter Compute loss rates for the 15 receiver-pair (R1, R2) combinations Frame loss rate FLR(R1), FLR(R2) vs. simultaneous frame loss rate FLR(R1 ∩ R2)
Individual FLR > Simultaneous FLR
[Scatter plot: simultaneous loss rate FLR(R1 ∩ R2) vs. individual loss rates for each receiver pair, with the y = x line and the independence prediction FLR(R1)·FLR(R2) shown; measured simultaneous loss lies well below individual loss]
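The independence check behind this plot is simple to state in code: for each receiver pair, compare the measured simultaneous loss rate FLR(R1 ∩ R2) with the product FLR(R1)·FLR(R2) expected under independent losses. A minimal sketch over per-receiver sets of lost sequence numbers:

```python
# Compute individual and simultaneous frame loss rates for a receiver pair.
# lost_r1 / lost_r2 are sets of sequence numbers lost at each receiver;
# total is the number of frames broadcast.
def pairwise_flr(lost_r1, lost_r2, total):
    flr1 = len(lost_r1) / total
    flr2 = len(lost_r2) / total
    joint = len(lost_r1 & lost_r2) / total   # frames lost at BOTH receivers
    return flr1, flr2, joint

# Under independence, joint ≈ flr1 * flr2 — e.g., 10% and 20% individual
# loss would give about 2% simultaneous loss, as in the uplink example.
```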
Challenges in developing MRD How to correct simultaneous frame errors? Frame combining How to handle retransmissions in MRD? Request-for-acknowledgment protocol How to adapt bit rates in MRD? MRD-aware rate adaptation
Bit-by-bit frame combining
1. Locate the bit positions where the two received copies disagree
2. Try every bit combination at the unmatched positions, checking the CRC each time; the candidate that passes the CRC is the corrected frame
Example: TX sends 1100 1010; R1 receives 1100 0000, R2 receives 1101 1010; candidates at the three unmatched positions are enumerated until 1100 1010 passes the CRC
Problem: the number of CRC checks is exponential in the number of unmatched bits
Block-based frame combining
Observation: bit errors occur in bursts
Divide the frame into NB blocks (e.g., NB = 6) and attempt recombination with all possible block patterns until the CRC passes
Number of CRC checks upper-bounded by 2^NB
Exploits bursty bit errors; the failure rate increases with NB when errors are uniformly distributed
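Block-based combining can be sketched as a brute-force search over block patterns. A minimal sketch: CRC-32 stands in here for the 802.11 frame check sequence, and the interface is illustrative, not the paper's implementation.

```python
import zlib
from itertools import product

# Split a payload into nb roughly equal blocks.
def split_blocks(data, nb):
    size = -(-len(data) // nb)     # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

# Try every way of picking each block from either received copy until a
# candidate passes the CRC; at most 2**nb checks.
def block_combine(copy1, copy2, crc, nb=6):
    b1, b2 = split_blocks(copy1, nb), split_blocks(copy2, nb)
    for pattern in product((0, 1), repeat=len(b1)):
        candidate = b"".join(b1[i] if bit == 0 else b2[i]
                             for i, bit in enumerate(pattern))
        if zlib.crc32(candidate) == crc:
            return candidate       # corrected frame
    return None                    # combining failed

# Usage: the same frame arrives with burst errors in different blocks.
frame = bytes(range(48))
crc = zlib.crc32(frame)
c1 = frame[:8] + b"\x00" * 8 + frame[16:]   # burst error in block 2
c2 = frame[:40] + b"\xff" * 8               # burst error in block 6
```

Combining succeeds here because each burst stays inside a single block; with uniformly scattered errors, every block of both copies may be corrupted and no pattern passes the CRC, which is why failure rises with uniform errors.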
Failure decreases with NB and burst size
[Plot: probability of combining failure vs. burst-error length parameter (frame size 1500 B); curves for NB = 2, 4, 6, …, 16 — failure probability drops as NB and the burst length grow]
How to Perform Error-Control?
Option 1: Directly use the 802.11 retransmission scheme
Conventional link-layer ACKs do not work: the final delivery status is known only to the MRDC
Option 2: Disable the 802.11 retransmission scheme
Problems: sending our own ACK is more expensive than sending an 802.11 ACK (why?); hard to do rate control
Retransmission in MRD
Two levels of ACKs
Use the 802.11 ACK for per-packet acknowledgment; the 802.11 ACK can be used directly for CSMA, so there is no need to contend for the medium
Send MRD-ACKs via ACK compression
The sender retransmits when no MRD-ACK is received before a timeout
Request-for-acknowledgment (RFA) for efficient feedback
[Diagram: DATA and per-frame link-layer ACKs are exchanged at the link layer; the sender issues an RFA at the MRD layer and the MRDC answers with an MRD-ACK]
MRD-aware rate adaptation Standard rate adaptation does not work Reacts only to link-layer losses from 1 receiver Uses sub-optimal bit-rates MRD-aware rate adaptation Reacts to losses at the MRD-layer Implication: First use multiple paths, then adapt bit rates.
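The key idea above — count a frame as lost for rate control only if the MRD layer could not recover it — can be sketched with a simple threshold-based adapter. The rate set is the standard 802.11a one; the thresholds and step policy are illustrative assumptions, not the paper's algorithm.

```python
RATES_MBPS = [6, 9, 12, 18, 24, 36, 48, 54]   # 802.11a rate set

# mrd_loss_rate is the fraction of frames unrecoverable at the MRD layer
# (missed by every receiver AND by frame combining) — not per-link loss.
# down_thresh / up_thresh are assumed values for illustration.
def adapt_rate(rate_idx, mrd_loss_rate, down_thresh=0.25, up_thresh=0.05):
    if mrd_loss_rate > down_thresh and rate_idx > 0:
        return rate_idx - 1                    # step down one rate
    if mrd_loss_rate < up_thresh and rate_idx < len(RATES_MBPS) - 1:
        return rate_idx + 1                    # probe a higher rate
    return rate_idx                            # hold the current rate
```

Because MRD-layer loss is far lower than any single link's loss, this adapter holds higher bit-rates than one driven by per-link losses, which is the effect the next slides measure.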
Experimental setup
802.11a/b/g implementation in Linux (MADWiFi)
[Diagram: transmitter L and receivers R1, R2, roughly 20 m apart]
L transmits 100,000 1,472-byte UDP packets with 7 retries
L is in motion at walking speed, > 1 minute per trial
Variants: R1, R2, MRD (5 trials each)
MRD improves throughput
[Bar chart: throughput (Mbps) for R1, R2, and MRD; each color shows a different trial]
MRD achieves 18.7 Mbps vs. 8.25 Mbps for a single receiver — a 2.3× improvement
MRD maintains high bit-rate
[Histogram: fraction of transmitted frames vs. selected bit rate (6, 9, 12, 18, 24, 36, 48, 54 Mbps)]
Frame recovery (as % of total losses at R1): 42.3% via R2, 7.3% via frame combining, 49.6% total
Delay Analysis
[CDF: fraction of delivered packets vs. one-way delay, log scale]
The user-space implementation caused high delay