Active Queue Management and Bandwidth Partitioning Algorithms
Balaji Prabhakar
Departments of EE and CS, Stanford University
balaji@stanford.edu
Overview of lecture
Active queue management
–Background
CHOKe: a randomized AQM algorithm
–The basic algorithm
–Enhancements
–A model and fluid analysis
AFD: a bandwidth partitioning algorithm
–Description of algorithm and performance
The Setup
In a congested network with many users
–whose quality-of-service (QoS) requirements are different
Problems:
–allocate bandwidth
–control queue size and hence delay
Approach 1: Network-centric
Network node: fair queueing
–we've seen weighted fair queueing (WFQ) in EE 384X
User traffic: any type
Problem: complex implementation
Approach 2: User-centric
Network node: simple FIFO + AQM schemes
–AQM: Active Queue Management (e.g. RED)
User traffic: congestion-aware (e.g. TCP)
Problem: requires user cooperation
Current trend
Network node:
–simple FIFO buffer
–AQM schemes with an enhancement to provide fairness: preferential dropping of packets
User traffic: any type
AQM and RED
Passive queue management: DropTail
–when the output buffer is full, simply drop an arriving packet
–this signals congestion to the sources
DropTail had a number of problems:
1. TCP sources send packets in bursts (due to ack compression). So drops also occur in bursts, causing TCP sources to Slow-Start.
2. Synchronization between the various sources could occur: they end up using bandwidth in periodic phases, leading to inefficiencies.
RED was designed to fix these problems
–it drops incoming packets randomly, based on the congestion level
–this signals the onset of congestion to the sources, who will back-off (if they are responsive)
–thus, RED avoids bursty dropping, synchronization, etc.
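As a concrete illustration, RED's drop decision can be sketched as below. This is a minimal sketch of the classic scheme only: `min_th`, `max_th`, and `max_p` are the usual RED parameters, the queue-averaging step and the "gentle" and marking variants are omitted, and the function names are mine.

```python
def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Classic RED: the drop probability is 0 below min_th, rises linearly
    from 0 to max_p between min_th and max_th, and forces a drop above max_th."""
    if avg_q <= min_th:
        return 0.0          # no congestion signal yet
    if avg_q >= max_th:
        return 1.0          # drop every arrival
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

The linear ramp is what avoids the bursty, synchronized drops of DropTail: drops are spread randomly over arrivals in proportion to the congestion level.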
AQM and RED
Although RED is simple to implement
–it can't prevent an unresponsive flow from eating up all the bandwidth
–because a source that simply doesn't back-off when there are packet drops will cause the others to back-off, and will eat up the available bandwidth
So, a lot of research was dedicated to finding AQM schemes that are simple to implement and partition the bandwidth fairly; or, at least, prevent a single flow from taking up all the bandwidth.
A Randomized Algorithm: First Cut
Consider a single link shared by 1 unresponsive (red) flow and k responsive (green) flows
Suppose the buffer gets congested
Observe: it is likely that there are more packets from the red (unresponsive) source
So if a randomly chosen packet is evicted, it will likely be a red packet
Therefore, one algorithm could be: when the buffer is congested, evict a randomly chosen packet
Comments
Unfortunately, this doesn't work because there is a small non-zero chance of evicting a green packet
Since green sources are responsive, they interpret the packet drop as a congestion signal and back-off
This only frees up more room for red packets
Next: suppose we choose two packets at random from the queue and compare their flow ids; then it is quite unlikely that both will be green
This suggests another algorithm: choose two packets at random and drop them both if their ids agree
This works: that is, it limits the maximum bandwidth the red source can consume
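The matched-pair intuition can be checked with a small Monte Carlo experiment. The buffer occupancies below are hypothetical numbers chosen for illustration (one unresponsive flow holding 60% of a 100-packet buffer, eight responsive flows with 5 packets each), not figures from the lecture:

```python
import random

def matched_drop_victims(queue, trials, rng):
    """Repeatedly pick two packets uniformly at random (with replacement)
    and, when their flow ids match, charge both drops to that flow."""
    drops = {}
    for _ in range(trials):
        a, b = rng.choice(queue), rng.choice(queue)
        if a == b:
            drops[a] = drops.get(a, 0) + 2   # both packets are dropped
    return drops

rng = random.Random(1)
# Flow 0 is the unresponsive flow with 60 of 100 buffer slots;
# flows 1..8 are responsive with 5 slots each.
queue = [0] * 60 + [f for f in range(1, 9) for _ in range(5)]
drops = matched_drop_victims(queue, 10_000, rng)
```

With these occupancies a match lands on flow 0 about 36% of the time (0.6 squared) but on any given green flow only 0.25% of the time, so the drops fall overwhelmingly on the unresponsive flow.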
The CHOKe Algorithm
Builds on the previous observation
Is a randomized algorithm
Turns out to have easily analyzable performance via fluid models
The last point is interesting, since it shows how surprisingly accurate fluid models are for modeling TCP- and UDP-type traffic
The CHOKe Algorithm
For each arriving packet:
1. If AvgQsize <= Min_th, admit the new packet.
2. Otherwise, draw a packet at random from the queue. If both packets are from the same flow, drop both.
3. Otherwise, if AvgQsize <= Max_th, admit the new packet with a probability p; else drop the new packet.
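The per-packet decision above can be sketched as a single function. This is my sketch of the flowchart, not the authors' code: the queue is modeled as a list of flow ids, and `p` is the RED-style admission probability used between the two thresholds.

```python
import random

def choke_decide(avg_q, min_th, max_th, queue, flow_id, p, rng):
    """One CHOKe decision for an arriving packet of flow `flow_id`.

    Returns (admit_new_packet, index_of_queued_victim_or_None):
    admit below Min_th; otherwise compare against one randomly drawn
    queued packet and drop both on a flow-id match; otherwise fall back
    to a RED-style probabilistic admit below Max_th, or drop above it.
    """
    if avg_q <= min_th:
        return True, None                  # lightly loaded: admit
    i = rng.randrange(len(queue))          # draw a packet at random
    if queue[i] == flow_id:
        return False, i                    # same flow: drop both packets
    if avg_q <= max_th:
        return rng.random() < p, None      # admit with probability p
    return False, None                     # severely congested: drop new packet
```

Note that no per-flow state is needed: the queue's own contents serve as an implicit estimate of each flow's arrival rate.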
Simulation Comparison: The Setup
[Topology: TCP sources S(1)…S(m) and UDP sources S(m+1)…S(m+n) feed router R1 over 10 Mbps links; R1 connects to R2 over a 1 Mbps bottleneck; R2 delivers to the corresponding TCP sinks D(1)…D(m) and UDP sinks D(m+1)…D(m+n) over 10 Mbps links.]
1 UDP source and 32 TCP sources
CHOKe: Bandwidth Shares
CHOKe: Drop Decomposition
CHOKe: Varying UDP Loadings
CHOKe: Varying UDP Loadings
CHOKe: 5 UDPs, 5 samples
Mechanizing Multiple Drops
Divide the region (Min_th, Max_th) into subregions R_1, R_2, …, R_k
If Q_avg ∈ R_m, drop d_m packets (e.g. d_m = 2m)
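The region-to-drop-count mapping can be written out directly. This is a sketch under the slide's example rule d_m = 2m with equal-width subregions (the slide does not specify how the subregions are sized, so equal widths are my assumption):

```python
def choke_drops(avg_q, min_th, max_th, k):
    """Split (min_th, max_th) into k equal-width subregions R_1..R_k and
    return the number of packets to drop, d_m = 2*m, for the subregion R_m
    containing avg_q (0 below min_th, 2*k at or above max_th)."""
    if avg_q <= min_th:
        return 0
    if avg_q >= max_th:
        return 2 * k
    width = (max_th - min_th) / k
    m = int((avg_q - min_th) // width) + 1   # index of subregion R_m
    return 2 * m
```

So the deeper the average queue sits inside the congested band, the more matched drops each arrival triggers.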
Self-adjusting CHOKe
A Fluid Analysis
[Model: the buffer is a permeable tube with leakage; packets are discarded from the queue as they traverse it.]
Some notation
N: total number of packets in the buffer
L_i(t): rate at which flow i's packets cross position t of the buffer (t = 0 is the entrance, t = D the exit)
p_i: fraction of flow i's packets dropped at the ingress = fraction of flow i's packets dropped inside the buffer (since drops occur in pairs)
λ_i: rate at which flow i's packets arrive
The Equation
L_i(t)Δt - L_i(t + Δt)Δt = λ_i L_i(t)Δt / N
=> -dL_i(t)/dt = λ_i L_i(t) / N
L_i(0) = λ_i (1 - p_i)
L_i(D) = λ_i (1 - 2p_i)
This first-order differential equation can be solved explicitly for L_i(t), 0 < t < D
Simulation Comparison: 1 UDP, 32 TCPs
Fluid Analysis of Multiple Samples
With M samples:
L_i(t)Δt - L_i(t + Δt)Δt = M λ_i L_i(t)Δt / N
=> -dL_i(t)/dt = M λ_i L_i(t) / N
L_i(0) = λ_i (1 - p_i)^M
L_i(D) = λ_i (1 - p_i)^M - M λ_i p_i
Comparison: 1 UDP, 2 Samples
Overview of packet dropping schemes
Size-based schemes
–drop decision based on the size of the FIFO queue
–e.g. RED
History-based schemes
–keep a history of packet arrivals/drops to guide the drop decision
–e.g. SRED, RED with penalty box
Content-based schemes
–drop decision based on the current content of the FIFO queue
–e.g. CHOKe
Unfortunately, while the above are simple, they can't perform like WFQ in terms of accurate bandwidth partitioning
–we need more information to drop packets better
AFD: Approximate Fair Dropping
Main idea of AFD
Track the last N arriving packets
Of these, let m_i come from flow i
–note: we're tracking all packets sent, not just those admitted
Use this to drop further flow i packets fairly
–that is, when a flow i packet arrives, find D_i such that m_i (1 - D_i) = m_fair (the fair share)
–if D_i is positive, drop the incoming packet with probability D_i; else just admit the packet
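Solving m_i (1 - D_i) = m_fair for the drop probability gives D_i = 1 - m_fair / m_i, clipped at zero for flows below their fair share. A minimal sketch (function name is mine):

```python
def afd_drop_prob(m_i, m_fair):
    """AFD per-packet drop probability for a flow with recent count m_i:
    choose D_i so that m_i * (1 - D_i) = m_fair, i.e. D_i = 1 - m_fair/m_i,
    and never drop flows at or below the fair share."""
    if m_i <= m_fair:
        return 0.0      # flow is within its fair share: admit
    return 1.0 - m_fair / m_i
```

A flow sending at twice the fair share thus sees half its packets dropped, which equalizes the admitted rates.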
Main idea of AFD (contd)
The fair share is estimated dynamically by looking at the size of the queue:
–m_fair = m_fair - a * (Q_len - Q_ref)
Q_len is the real queue length (measured)
Q_ref is the reference queue length (set by the operator)
a is the averaging parameter (a design parameter); it could be self-adjusting
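The update above is an integral control step: when the measured queue exceeds the reference, the fair share shrinks (so more packets get dropped), and vice versa. A sketch of one step (the non-negativity clamp is my addition, since a negative fair share is meaningless):

```python
def update_m_fair(m_fair, q_len, q_ref, a):
    """One step of the fair-share update m_fair <- m_fair - a*(Q_len - Q_ref):
    shrink the fair share when the queue sits above its reference length,
    grow it when the queue sits below."""
    return max(m_fair - a * (q_len - q_ref), 0.0)
```

In steady state the update drives Q_len toward Q_ref, so the operator's reference length indirectly sets the queueing delay.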
Key design issue
How to efficiently track m_i?
–If the number of flows is small (on the order of thousands): measure directly
–Otherwise (the number of flows is on the order of millions): use a flow table which holds summary information
Flow table: how many flows to track?
[Plot: fraction of flows vs. flow size; the state requirement is on the order of the number of large flows.]
Flow table
[Diagram: a data buffer and a flow table of per-flow counters m_i, with N = 11 packets tracked.]
Need to decrement the counters so that m_i accurately estimates the flow arrival rate
–simply decrementing a random counter won't work: too much bias against large flows
Decrementing
Choose a flow at random from the flow table
–Prob(flow i chosen) = 1/N_f (N_f: number of flows)
Once a flow is chosen, decrement it multiple times, proportionally to its count
–N_d = b * m_i
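One decrement step can be sketched as follows. This is my sketch of the scheme described above: rounding N_d = b * m_i to an integer (with a floor of one) and clamping counters at zero are my assumptions, since the slide leaves those details open.

```python
import random

def decrement_step(counts, b, rng):
    """One decrement step: pick a flow uniformly from the table
    (probability 1/N_f each) and decrement its counter by N_d = b * m_i,
    i.e. proportionally to its current count."""
    flow = rng.choice(list(counts))          # uniform over flows, not packets
    n_d = max(1, round(b * counts[flow]))    # N_d = b * m_i, at least 1
    counts[flow] = max(counts[flow] - n_d, 0)
    return flow, n_d
```

Because every flow is equally likely to be picked but the decrement scales with its count, large flows shed counts faster in absolute terms without being unfairly zeroed out, unlike the decrement-a-random-counter approach.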
AFD performance
All TCP maximum window sizes = 300
FIFO buffer size = 300 packets
All packet sizes = 1 Kbyte (variable packet sizes are also easy to implement)
N = 500, b = 0.06
7 groups with 5 flows each:
–3 groups with different AIMD parameters: 1.0/0.9, 0.75/0.30, 2.0/0.5
–2 groups with different binomial parameters
–1 group with a long RTT (70 ms)
–1 group of normal TCP (1.0/0.5)
Throughput vs time
Drop Probabilities (note the differential dropping)