Network-Aware Distributed Algorithms for Wireless Networks
Nitin Vaidya, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Multi-Channel Wireless Networks: Theory to Practice
Nitin Vaidya, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Wireless Networks
– Infrastructure-based networks
– Infrastructure-less (and hybrid) networks: mesh networks, ad hoc networks, sensor networks
What Makes Wireless Networks Interesting?
Broadcast channel: interference management is non-trivial
Signal and interference are relative notions
[Figure: nodes A, B, C, D; received power shown as signal vs. interference]
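To make the "signal and interference are relative" point concrete, here is a minimal Python sketch (my illustration, not from the talk): the node positions, transmit powers, and log-distance path-loss model are assumptions chosen only to show that the same transmission is signal at its intended receiver and interference elsewhere.

def rx_power(tx_power_mw, distance_m, path_loss_exp=3.0):
    # simple log-distance path-loss model (illustrative assumption)
    return tx_power_mw / (distance_m ** path_loss_exp)

noise_mw = 1e-9
# Link A->B (10 m apart) and link C->D (10 m apart); assumed cross
# distances: C->B = 25 m and A->D = 30 m.
sig_B = rx_power(100.0, 10.0)    # A's transmission at its receiver B
intf_B = rx_power(100.0, 25.0)   # C's transmission, seen as interference at B
sig_D = rx_power(100.0, 10.0)    # C's transmission at its receiver D
intf_D = rx_power(100.0, 30.0)   # A's transmission, seen as interference at D

print("SINR at B:", sig_B / (intf_B + noise_mw))
print("SINR at D:", sig_D / (intf_D + noise_mw))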
What Makes Wireless Networks Interesting?
Many forms of diversity: time, route, antenna, path, channel
What Makes Wireless Networks Interesting?
Antenna diversity
[Figure: directional antenna patterns between nodes A, B, C, D; sidelobes not shown]
What Makes Wireless Networks Interesting?
Path diversity
[Figure: multiple signal paths with inputs x1, x2 and outputs y1, y2]
What Makes Wireless Networks Interesting?
Channel diversity
[Figure: link A–B has low gain on one channel and high gain on another; links A–B and C–D see low interference on one channel and high interference on another]
Research Challenge
Dynamic adaptation to exploit available diversity
Net-X: Multi-Channel Wireless Mesh, Theory to Practice
– Capacity & scheduling: insights on protocol design
– Software architecture: multi-channel protocol and channel abstraction module between the IP stack, ARP, and the interface device drivers; user applications; OS improvements
– Net-X testbed
[Figure: capacity vs. number of channels; software stack; topology A–F with fixed and switchable interfaces]
The secret to happiness is to lower your expectations to the point where they're already met.
(with apologies to Bill Watterson, Calvin & Hobbes)
Network-Aware Distributed Algorithms for Wireless Networks
Nitin Vaidya, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Distributed Algorithms & Communications
[Figure: the Communications / Networking and Distributed Algorithms communities]
Distributed Algorithms & Communications
Problems with overlapping scope, but the cultures differ
[Figure: overlapping scope of Communications / Networking and Distributed Algorithms]
Distributed Algorithms & Communications
Distributed Algorithms: black-box networks; emphasis on order complexity; computation affects communication
Communications / Networking: emphasis on “exact” performance metrics; constants matter; information transfer (typically “raw” information)
Distributed Algorithms & Communications
[Figure: Communications / Networking and Distributed Algorithms]
Outline
Two distributed algorithms:
– Byzantine agreement
– Scheduling (CSMA)
[Figure: the rate region sits between Communications / Networking and Distributed Algorithms]
Rate Region
Defines the way links may share the channel
The interference links pose to each other determines whether a set of links should be active together
“Ethernet” Rate Region
Private channels from S to 1 and from S to 2, with a sum-rate constraint:
Rate_S1 + Rate_S2 ≤ C
[Figure: rate region in the (Rate_S1, Rate_S2) plane bounded by the sum-rate constraint]
Point-to-Point Network: Rate Region
Each directed link is independent of the other links: R_ij ≤ Capacity_ij
[Figure: source S and nodes 1, 2 connected by point-to-point links]
Wireless Network: Rate Region
Some links share the channel with each other while others don't
Example with three links (rates R1, R2, R3 and capacities C1, C2, C3):
max(R1/C1, R3/C3) + R2/C2 ≤ 1
[Figure: four-node topology with links carrying R1, R2, R3]
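A tiny Python sketch (illustrative, not from the talk) that checks whether a rate vector lies inside the rate region given by the constraint above; the capacities used below are assumed values.

def in_rate_region(R1, R2, R3, C1, C2, C3):
    # Constraint from the slide: links 1 and 3 can be active together,
    # but each of them shares the channel with link 2.
    return max(R1 / C1, R3 / C3) + R2 / C2 <= 1.0

# Assumed capacities of 10 (arbitrary units) on each link:
print(in_rate_region(4, 5, 4, 10, 10, 10))   # True:  0.4 + 0.5 <= 1
print(in_rate_region(6, 5, 4, 10, 10, 10))   # False: 0.6 + 0.5 > 1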
Broadcast Channel: Rate Region
R ≤ C1
[Figure: source S broadcasting to nodes 1, 2, 3]
Broadcast Channel: Rate Region
At a rate R with C1 < R ≤ C2: “range” varies inversely with rate
[Figure: source S with a smaller transmission range at the higher rate]
Broadcast Channel
Rates R1, R2, R12 share the channel:
R1/C1 + R2/C2 + R12/C12 ≤ 1
[Figure: source S broadcasting to nodes 1, 2, 3 at different rates]
Outline
Two distributed algorithms:
– Byzantine agreement
– Scheduling (CSMA)
Impact of Rate Region
The network rate region affects the ability to perform multi-party computation
Example: Byzantine agreement (broadcast)
Byzantine Agreement: Broadcast
Source S wants to send a message to n-1 receivers
– Agreement: fault-free receivers agree on the same value
– Validity: if S is fault-free, they agree on S's message
Up to f failures
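The two conditions can be written as a small predicate. The Python sketch below is only a checker for the conditions listed above (the names and structure are mine, not the talk's); it is a specification, not an agreement algorithm.

def satisfies_agreement(outputs, source_value, source_is_faulty):
    # outputs: values decided by the fault-free receivers
    # Agreement: all fault-free receivers decide the same value.
    agreement = len(set(outputs)) <= 1
    # Validity: if the source is fault-free, that value is the source's message.
    validity = source_is_faulty or all(v == source_value for v in outputs)
    return agreement and validity

print(satisfies_agreement(["m", "m", "m"], "m", source_is_faulty=False))  # True
print(satisfies_agreement(["m", "x", "m"], "m", source_is_faulty=True))   # False: no agreement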
Impact of Rate Region
How does the rate region affect broadcast performance?
How to quantify the impact?
Throughput of Agreement
Borrow the notion of throughput from the communications literature
b(t) = number of bits agreed upon in [0, t]
A long-timescale measure: throughput = lim b(t)/t as t → ∞
Capacity of Agreement
Supremum of achievable throughputs for a given rate region
Broadcast Channel
Rate region: R ≤ C
Agreement capacity = C
[Figure: source S broadcasting to receivers 1, 2, 3 at rate R]
“Ethernet” Rate Region
Sum of private link capacities ≤ C
Agreement capacity = C / (communication complexity per agreed bit)
[Figure: source S and receivers 1, 2, 3 sharing the medium]
“Ethernet” Rate Region
Communication complexity per agreed bit = (number of bits required to agree on L bits) / L
– L = 1: Ω(n²) for n nodes [Dolev-Reischuk] (deterministic algorithms)
– L → ∞: can be shown to be O(n) (multi-value agreement); specifically, n(n-1)/(n-f) bits per agreed bit
“Ethernet” Rate Region
Sum of private link capacities ≤ C
Agreement capacity ≥ C (n-f) / (n(n-1))
Conjecture: this bound is tight
[Figure: source S and receivers 1, 2, 3 sharing the medium]
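As a quick numeric illustration of the bound above, a minimal Python sketch; the values n = 4, f = 1, and C = 12 Mb/s are assumptions chosen for illustration, not figures from the talk.

def agreement_capacity_lower_bound(C, n, f):
    # Multi-value agreement uses n(n-1)/(n-f) bits of communication per
    # agreed bit, and the shared medium carries at most C bits/s in total,
    # so agreed bits flow at C divided by that per-bit cost.
    bits_per_agreed_bit = n * (n - 1) / (n - f)
    return C / bits_per_agreed_bit

print(agreement_capacity_lower_bound(C=12e6, n=4, f=1))  # 3000000.0 agreed bits/s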
Point-to-Point Network
Each link has its own capacity: Load_ij ≤ C_ij
[Figure: four-node network with source S and nodes A, B, C]
Point-to-Point Network
Each link has its own capacity C_ij, as shown in the figure
Agreement capacity?
[Figure: network S, A, B, C with link capacities 4, 2, 4, 3, 3, 4, 4, 3, 3]
Point-to-Point Network
C_ij as shown; agreement capacity = 2
[Figure: network S, A, B, C with the same link capacities]
Point-to-Point Network
C_ij as shown; agreement capacity = 6
[Figure: network S, A, B, C with the same link capacities]
Point-to-Point Network
Capacity-achieving scheme for arbitrary 4-node networks
Approach: upper bound based on min-cuts; lower bound using coding
Point-to-Point Network
Capacity-achieving scheme for arbitrary 4-node networks
The minimum number of rounds required depends on the link capacities
Point-to-Point Network
Capacity-achieving scheme for arbitrary 4-node networks
Open problem: everything else
Open Problems
Capacity-achieving agreement with general rate regions
Subset of nodes as “receivers”
Even the multicast problem with Byzantine nodes is unsolved
– For multicast, the source S is fault-free
Rich Problem Space
The broadcast channel allows overhearing
Transmit to node 2 at high rate, or at low rate?
– Low rate allows reception at node 1 (broadcast advantage)
[Figure: source S and nodes 1, 2, 3, shown with a low-rate transmission and with a high-rate transmission]
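One way to see the trade-off is to compare the time needed to deliver the same L bits to both receivers: one low-rate broadcast that both can decode versus two separate transmissions at each receiver's best rate. The sketch below is my illustration; all the rates are assumed values, not numbers from the talk.

def broadcast_time(L, C12):
    # one transmission at the low rate C12 that both receivers can decode
    return L / C12

def unicast_time(L, C1, C2):
    # two separate transmissions, each at the highest rate its receiver supports
    return L / C1 + L / C2

L = 1e6              # bits to deliver to both receivers
C1, C2 = 2e6, 10e6   # assumed per-receiver rates (node 1 is farther, hence slower)
C12 = 2e6            # assumed common broadcast rate, limited by the weaker receiver

print("broadcast:", broadcast_time(L, C12), "s")   # 0.5 s
print("unicasts: ", unicast_time(L, C1, C2), "s")  # 0.6 s, so broadcasting wins here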
Rich Problem Space
How to model and exploit reception with probability < 1?
– Need opportunistic algorithms
Use of available diversity affects the rate region
– How to dynamically adapt to channel variations?
Rich Problem Space
Similar questions are relevant for any multi-party computation
[Figure: Communications / Networking and Distributed Algorithms]
And Now for Something Completely Different*
*Monty Python
Outline
Two distributed algorithms:
– Byzantine agreement
– Scheduling (CSMA)
Scheduling
Objective: network stability
[Figure: four-node topology with links L0, L2, L3]
Scheduling
Arrival rate 1/2 on the links
[Figure: topology with links L0, L2, L3 and arrival rate 1/2]
[Figure: links L0, L2, L3; arrivals to some links occur in even slots and to the others in odd slots]
Example: end-of-slot queue lengths, slots 0 through 4, when the scheduler gives low priority to L2
By the end of slot 4 the traffic is not stabilized; giving high priority to L2 will stabilize it
[Figure: queue evolution over slots 0–4 under the low-priority-to-L2 schedule]
Throughput-Optimal Scheduler
A scheduler is throughput-optimal if it can serve all schedulable traffic [Tassiulas92]
Schedule = arg max ∑ ri qi over feasible schedules (queue-weighted rates)
[Figure: rate region over loads (Load 1, Load 2)]
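A small Python simulation (my sketch, not the talk's code) of the three-link example: L0 and L3 can be served together, L2 conflicts with both, and each link sees Bernoulli arrivals. It contrasts the static "low priority to L2" rule with the max-weight rule above. The arrival rate 0.45 is an assumption chosen to lie strictly inside the rate region max(R0, R3) + R2 ≤ 1.

import random

def simulate(policy, rate=0.45, slots=20000, seed=0):
    random.seed(seed)
    q = {"L0": 0, "L2": 0, "L3": 0}
    for _ in range(slots):
        for link in q:                      # Bernoulli(rate) arrivals on every link
            q[link] += random.random() < rate
        if policy == "low_priority_L2":     # serve {L0, L3} whenever either is nonempty
            serve = ["L0", "L3"] if (q["L0"] or q["L3"]) else ["L2"]
        else:                               # max-weight: compare q0 + q3 with q2
            serve = ["L0", "L3"] if q["L0"] + q["L3"] >= q["L2"] else ["L2"]
        for link in serve:
            q[link] = max(0, q[link] - 1)
    return q

print(simulate("low_priority_L2"))  # the L2 queue grows roughly linearly: not stabilized
print(simulate("max_weight"))       # all three queues remain small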
Throughput-Optimal CSMA (Carrier-Sense Multiple Access)
A continuous-time CSMA-like algorithm shown to achieve stability [Jiang-Walrand’08]
Extended to discrete-time CSMA-like algorithms in later work
CSMA model: a link can sense conflicting transmissions
CSMA model: a link can sense conflicting transmissions
[Figure: topology with links L0, L2, L3]
Imperfect Carrier Sensing
Conflicting transmissions may not always be sensed, potentially leading to collisions
Imperfect Carrier Sensing
Stability with imperfect carrier sensing? Yes, almost
Proposed CSMA Algorithm
Two access probabilities:
– a: probability with which a node attempts to transmit the first packet in a “train”
– p: probability with which a “train” is extended
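A minimal Python sketch of the two-probability idea above (my illustration, not the actual algorithm's implementation): one link's per-slot decision is to start a train with probability a when the channel is sensed idle, and to extend an ongoing train with probability p, where p may depend on the queue size; the default values and the queue-dependent p below are assumptions.

import random

def csma_decision(sensed_idle, in_train, queue_len, a=0.05, p_fn=None):
    """Return True if the link transmits in this slot (illustrative sketch)."""
    if queue_len == 0:
        return False
    if in_train:
        # extend the current "train" with probability p (possibly queue-dependent)
        p = p_fn(queue_len) if p_fn else 0.9   # 0.9 is an assumed default
        return random.random() < p
    if sensed_idle:
        # attempt the first packet of a new train with (small) probability a
        return random.random() < a
    return False

# Example of a queue-dependent continuation probability (an assumption):
p_from_queue = lambda q: min(0.99, 1 - 1.0 / (q + 2))
print(csma_decision(sensed_idle=True, in_train=False, queue_len=5))
print(csma_decision(sensed_idle=True, in_train=True, queue_len=5, p_fn=p_from_queue))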
Scheduling Example
[Figure: timeline of probe / ACK / DATA exchanges on links A, B, C: links access the channel with probability a, extend trains with probability p, sense each other busy or idle, and may be preempted by one another; A and C may transmit together]
With CSMA Failure
[Figure: the same timeline with a carrier-sensing (CSMA) failure at link B, which fails to sense an ongoing transmission; A and C may transmit together]
Stability with Sensing Failure
A small enough access probability a suffices to stabilize an arbitrarily large fraction of the rate region, with the continuation probability p being a function of queue size
Open Problems
Carrier sensing failures: correlation over time and space
Asymmetric collisions
Dynamic adaptation to a time-varying channel
What does this have to do with distributed algorithms?
Networking view: network stability; no semantics attached to bits; traffic patterns weakly constrained; distributed congestion control
Distributed algorithms: awareness of the algorithm's objective; traffic completely specified by the algorithm; distributed control?
Can the gap be bridged?
Multi-party algorithms that dynamically adapt to network characteristics
[Figure: Communications / Networking and Distributed Algorithms]
Can the gap be bridged?
Theory versus practice: how to exploit the diversity?
Unknowns in practice (unknown unknowns as well)
[Figure: Communications / Networking and Distributed Algorithms]
Thanks!
www.crhc.illinois.edu/wireless
Goal: agreement on a large file
The file is divided into messages; a separate instance of a “mini”-algorithm runs for each message
[Figure: a file split into messages]
Back-up slides
BA complexity for the sum-rate constraint
Goal: agreement on a large file, divided into messages
Each message of (n-f) data symbols is encoded with a (2n-2, n-f) code
[Figure: file split into messages, each encoded into a codeword]
n-1 receivers; a 2(n-1)-symbol codeword of dimension n-f
[Figure: allocation of codeword symbols across the nodes]
Algorithm Outline
[Figure: timeline from the initial machine M0 through M1 up to Mmax, with O(n) machines; once there are no more failures, the machine no longer changes]
CSMA
Scheduling
Objective: network stability
Rate region characterized by the conflict graph
[Figure: network topology with links L0, L2, L3 and the corresponding conflict graph]
Throughput-Optimal Scheduler
Schedule = arg max ∑ qi (for constant r): in this example, pick the larger of q0 + q3 and q2
Centralized scheduler
[Figure: topology with links L0, L2, L3]
Channel Access Model
The last α-duration of each time slot is used for carrier sensing
Access probability a; continuation probability p
Preemptive CSMA
Two access probabilities per link i: a_i and p_i
Carrier sense; ACK reception
u(t): preemption; x(t): transmission schedule; C_i: set of links that conflict with link i
Carrier Sense Failure: Main Result
By choosing a small enough access probability, it is possible to stabilize an arbitrarily large fraction of the capacity region
Proof complexity: the Markov chain is no longer reversible; use perturbation theory for Markov chains