QoS in The Internet: Scheduling Algorithms and Active Queue Management


1 QoS in The Internet: Scheduling Algorithms and Active Queue Management

2 Principles for QoS Guarantees
Consider a phone application at 1 Mbps and an FTP application sharing a 1.5 Mbps link. Bursts of FTP traffic can congest the router and cause audio packets to be dropped, so we want to give priority to audio over FTP. PRINCIPLE 1: packet marking is needed for the router to distinguish between different classes, along with a new router policy to treat packets accordingly.

3 Principles for QoS Guarantees (more)
Applications misbehave (e.g., audio sends packets at a rate higher than the 1 Mbps assumed above). PRINCIPLE 2: provide protection (isolation) for one class from other classes (fairness).

4 QoS Metrics What are we trying to control?
Four metrics describe a packet's transmission through a network: bandwidth, delay, jitter, and loss. Using a pipe analogy, for each packet: bandwidth is the perceived width of the pipe; delay is the perceived length of the pipe; jitter is the perceived variation in the length of the pipe; loss is the perceived leakiness of the pipe. [Figure: the path from A to B as perceived by a packet, with bandwidth as the pipe's width and delay as its length.]

5 Internet QoS Overview
Integrated Services, Differentiated Services, MPLS, Traffic Engineering

6 QoS: State Information
No state vs. soft state vs. hard state. [Figure: a spectrum from packet switching to circuit switching. IP keeps no state; DiffServ keeps no state inside the network, only flow information at the edges; IntServ/RSVP keeps soft state; ATM and dedicated circuits keep hard state.]

7 QoS Router
[Figure: block diagram of a QoS router. Arriving packets pass through a classifier and a policer, are placed into per-flow queues under queue management, and a scheduler and shaper serve the queues onto the output link.]

8 Class-Based Scheduling
Queuing disciplines: [Figure: first-come-first-served (a single FIFO for all traffic) contrasted with class-based scheduling, where a classifier sorts flows 1..n into per-class queues (Class 1-4), buffer management controls each queue, and a scheduler serves the classes.]

9 DiffServ
[Figure: a DiffServ domain. Classification and conditioning at the edge map traffic into Premium, Gold, Silver, and Bronze classes; interior routers apply per-hop behaviors (PHBs) such as LLQ/WRED.]

10 Functionality at DiffServ Routers

11 Differentiated Service (DS) Field
[Figure: IPv4 header layout (Version, HLen, TOS, Length, Identification, Flags, Fragment offset, TTL, Protocol, Header checksum, Source address, Destination address, Data), with the DS field occupying the former TOS byte.] The DS field reuses the first 6 bits of the former Type of Service (TOS) byte to determine the PHB.

12 Integrated Services RSVP and Traffic Flow Example
The PATH message travels from sender A toward receiver B (here via R1, R2, R3, R4) and leaves the IP address of the previous hop (Phop) in each router; it contains the Sender Tspec, Sender Template, and Adspec. A RESV message containing a flowspec and a filterspec must be sent back along the exact reverse path. The flowspec (Tspec/Rspec) defines the QoS and the traffic characteristics being requested. At each node, admission/policy control determines whether sufficient resources are available to handle the request; if the request is granted, bandwidth and buffer space are allocated. RSVP maintains soft-state information (DstAddr, Protocol, DstPort) in the routers. Data packets then receive MF (multi-field) classification and are put in the appropriate queue, which the scheduler serves.

13 IntServ: Per-flow classification
[Figure: per-flow classification at each router along the path from sender to receiver.]

14 Per-flow buffer management
[Figure: per-flow buffer management at each router along the path from sender to receiver.]

15 Per-flow scheduling
[Figure: per-flow scheduling at each router along the path from sender to receiver.]

16 Round Robin (RR)
RR avoids starvation. All sessions have the same weight and the same packet length. [Figure: flows A, B, and C served one packet each per round, over rounds #1, #2, ...]

17 RR with Variable Packet Length
[Figure: rounds #1 and #2 with variable packet lengths; the flow with longer packets receives more service, even though the weights are equal!]

18 Solution…
[Figure: flows A, B, and C served over rounds #1-#4.]

19 Weighted Round Robin (WRR)
With weights WA = 3, WB = 1, WC = 4, each round serves 3 packets from A, 1 from B, and 4 from C: round length = 8. [Figure: rounds #1 and #2.]

20 WRR with Non-Integer Weights
WA = 1.4, WB = 0.2, WC = 0.8. Normalize (divide by 0.2) to obtain integer weights WA = 7, WB = 1, WC = 4, giving a round length of 12.

21 Weighted Round Robin
Serve a packet from each non-empty queue in turn. WRR provides protection against starvation and is easy to implement in hardware, but it is unfair if packets have different lengths or weights are not equal. The solution: with different weights and fixed packet size, serve more than one packet per visit, after normalizing to obtain integer weights (see the sketch below).
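
As a rough illustration, here is a minimal Python sketch of one WRR round with integer weights and fixed-size packets (the queue names and round structure are illustrative, not from the slides):

    from collections import deque

    def wrr_round(queues, weights):
        """Serve up to weights[i] packets from queue i in each round."""
        served = []
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:                      # skip a queue once it is empty
                    served.append(q.popleft())
        return served

    # Weights 3, 1, 4 give the round length of 8 from slide 19:
    A = deque(['a1', 'a2', 'a3', 'a4'])
    B = deque(['b1', 'b2'])
    C = deque(['c1', 'c2', 'c3', 'c4', 'c5'])
    print(wrr_round([A, B, C], [3, 1, 4]))
    # ['a1', 'a2', 'a3', 'b1', 'c1', 'c2', 'c3', 'c4']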

22 Problems with Weighted Round Robin
With different weights and variable-size packets, normalize the weights by the mean packet size. E.g., weights {0.5, 0.75, 1.0} with mean packet sizes {50, 500, 1500} give normalized weights {0.5/50, 0.75/500, 1.0/1500} = {0.01, 0.0015, 0.000667}; scaling to integers (here by 6000) gives {60, 9, 4}. With variable-size packets, the mean packet size must therefore be known in advance, and fairness is only provided at time scales larger than one full schedule.
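
To make the normalization concrete, a small sketch using the values from this slide (the scale factor 6000 is just the smallest multiplier that makes all three weights integral):

    # Weights and mean packet sizes from the slide
    weights = [0.5, 0.75, 1.0]
    mean_sizes = [50, 500, 1500]

    # Normalize weights by mean packet size (service per packet)
    per_packet = [w / s for w, s in zip(weights, mean_sizes)]
    # -> [0.01, 0.0015, 0.000667]

    # Scale to integers with the same ratios
    scale = 6000
    int_weights = [round(p * scale) for p in per_packet]
    print(int_weights)   # [60, 9, 4]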

23 Max-Min Fairness An allocation is fair if it satisfies max-min fairness: each connection gets no more than what it wants, and the excess, if any, is equally shared.

24 Max-Min Fairness A common way to allocate flows
N flows share a link of rate C. Flow f wishes to send at rate W(f) and is allocated rate R(f). 1. Pick the flow f with the smallest requested rate. 2. If W(f) < C/N, set R(f) = W(f); otherwise set R(f) = C/N. 3. Set N = N - 1 and C = C - R(f). 4. If N > 0, go to step 1.
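
The algorithm translates directly into a few lines of Python (a sketch; flow ordering and tie-breaking are simplified). The test values are those of the example on the next slide:

    def max_min_allocation(demands, capacity):
        """Max-min fairness: satisfy the smallest demand if it is under
        the equal share, otherwise cap it at the share; repeat."""
        alloc = {}
        remaining = dict(demands)               # flow -> requested rate W(f)
        C, N = capacity, len(demands)
        while N > 0:
            f = min(remaining, key=remaining.get)   # smallest request
            share = C / N
            alloc[f] = min(remaining[f], share)     # R(f)
            C -= alloc[f]
            N -= 1
            del remaining[f]
        return alloc

    print(max_min_allocation({'f1': 0.1, 'f2': 0.5, 'f3': 10, 'f4': 5}, 1.0))
    # {'f1': 0.1, 'f2': 0.3, 'f4': 0.3, 'f3': 0.3}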

25 Max-Min Fairness An example
Four flows share a link of capacity C = 1, with demands W(f1) = 0.1, W(f2) = 0.5, W(f3) = 10, W(f4) = 5. Round 1: R(f1) = 0.1. Round 2: R(f2) = 0.9/3 = 0.3. Round 3: R(f4) = 0.6/2 = 0.3. Round 4: R(f3) = 0.3/1 = 0.3.

26 Fair Queueing
Packets belonging to a flow are placed in a FIFO; this is called "per-flow queueing". The FIFOs are scheduled one bit at a time, in round-robin fashion; this is called bit-by-bit fair queueing. [Figure: classification sorts arriving packets into per-flow FIFOs for flows 1..N, which a bit-by-bit round-robin scheduler serves.]

27 Weighted Bit-by-Bit Fair Queueing
Likewise, flows can be allocated different rates by servicing a different number of bits from each flow during each round. With rates R(f1) = 0.1 and R(f2) = R(f3) = R(f4) = 0.3 on a link of capacity 1, the order of service for the four queues is ... f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, ... This is also called Generalized Processor Sharing (GPS).

28 Understanding Bit-by-Bit WFQ
Four queues share 4 bits/sec of bandwidth with weights 3:2:2:1. Packet arrivals (lengths in bits): A1 = 4 and A2 = 2 in queue A; B1 = 3 in queue B; C1 = 1, C2 = 1, C3 = 2 in queue C; D1 = 1, D2 = 2 in queue D. [Figure: in round 1, service is proportional to the weights (3 bits for A, 2 for B, 2 for C, 1 for D), so D1, C2, and C1 depart at R = 1.]

29 Understanding Bit-by-Bit WFQ (cont.)
Same setup: 4 queues sharing 4 bits/sec with weights 3:2:2:1. [Figure: continuing the example, B1 and A1 depart at R = 2, then D2 and C3 depart at R = 2. The departure order for packet-by-packet WFQ is obtained by sorting the packets by their bit-by-bit finish times.]

30 Packetized Weighted Fair Queueing (WFQ)
Problem: we need to serve a whole packet at a time. Solution: determine the time at which a packet p would complete if we served the flows bit-by-bit; call this the packet's finishing time, Fp. Serve packets in order of increasing finishing time. This is also called Packetized Generalized Processor Sharing (PGPS).
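
A minimal sketch of the finish-time bookkeeping in Python. This simplifies real WFQ: the "virtual clock" here just advances to each served packet's finish time, whereas a full implementation tracks the GPS system virtual time through idle periods and backlog changes:

    import heapq

    class WFQ:
        """Simplified packetized WFQ: serve packets in order of
        F = max(F_prev_for_flow, V_now) + length / weight."""
        def __init__(self, weights):
            self.weights = weights                     # flow -> weight
            self.finish = {f: 0.0 for f in weights}    # last finish time
            self.vtime = 0.0                           # crude virtual clock
            self.heap = []                             # (F, seq, flow, pkt)
            self.seq = 0                               # FIFO tie-breaker

        def enqueue(self, flow, pkt, length):
            start = max(self.finish[flow], self.vtime)
            self.finish[flow] = start + length / self.weights[flow]
            heapq.heappush(self.heap, (self.finish[flow], self.seq, flow, pkt))
            self.seq += 1

        def dequeue(self):
            if not self.heap:
                return None
            f_time, _, flow, pkt = heapq.heappop(self.heap)  # smallest Fp
            self.vtime = f_time
            return pkt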

31 WFQ is complex There may be hundreds to millions of flows, and the linecard needs to manage a FIFO queue per flow. The finishing time must be calculated for each arriving packet, and packets must be sorted by their departure time. Most of the effort in QoS scheduling research goes into practical algorithms that can approximate WFQ. [Figure: an egress linecard with per-flow queues 1..N; for each arriving packet, calculate Fp, then find the smallest Fp to select the departing packet.]

32 When Can We Guarantee Delays?
Theorem: if flows are leaky-bucket constrained and all nodes employ GPS (WFQ), then the network can guarantee worst-case delay bounds to sessions.

33 Traffic Managers: Active Queue Management Algorithms

34 Queuing Disciplines Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space. Bandwidth: which packet to serve (transmit) next; this is scheduling. Buffer space: which packet to drop next (when required); this is buffer management. Queuing affects the delay of a packet (QoS).

35 Queuing Disciplines
[Figure: traffic sources are classified into classes A, B, and C; buffer management decides which packets to drop from each class queue, and scheduling decides which class to serve next.]

36 Active Queue Management
Advantages: reduce packet losses (due to queue overflow) and reduce queuing delay. [Figure: a router with TCP sources on the inbound link and a sink on the outbound link. Without AQM, the queue fills and overflows, dropping packets; with AQM, congestion notification is delivered early and the queue stays short.]

37 QoS Router (recap)
[Figure: block diagram of a QoS router, as on slide 7: classifier, policer, per-flow queues with queue management, scheduler, and shaper.]

38 Packet Drop Dimensions
Aggregation: single class, class-based queuing, or per-connection state. Drop position: tail, head, or random location. Drop timing: overflow drop or early drop.

39 Typical Internet Queuing
FIFO + drop-tail: the simplest choice, used widely in the Internet. FIFO (first-in-first-out) implies a single class of traffic. Drop-tail: arriving packets are dropped when the queue is full, regardless of flow or importance. Important distinction: FIFO is the scheduling discipline; drop-tail is the drop policy (buffer management). A drop-tail FIFO is sketched below.
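
For contrast with the AQM schemes that follow, drop-tail is just a bounded FIFO (a minimal sketch; the class name and interface are illustrative):

    from collections import deque

    class DropTailQueue:
        def __init__(self, capacity):
            self.q = deque()
            self.capacity = capacity

        def enqueue(self, pkt):
            if len(self.q) >= self.capacity:
                return False            # queue full: drop the arrival
            self.q.append(pkt)
            return True

        def dequeue(self):              # FIFO service
            return self.q.popleft() if self.q else None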

40 FIFO + Drop-Tail Problems
FIFO issues (irrespective of the aggregation level): no isolation between flows, so the full burden falls on end-to-end control (e.g., TCP); no policing: send more packets, get more service. Drop-tail issues: routers are forced to keep large queues to maintain high utilization, and larger buffers mean larger steady-state queues and delays. Synchronization: end hosts react to the same events because packets tend to be lost in bursts. Lock-out: a side effect of burstiness and synchronization is that a few flows can monopolize queue space.

41 Synchronization Problem
Caused by congestion avoidance in TCP. [Figure: TCP congestion window (cwnd) vs. time in RTTs: slow start doubles cwnd (1, 2, 4, ...) up to W*, then congestion avoidance grows it linearly (W, W+1, ...); on loss, cwnd is cut to W*/2.]

42 Synchronization Problem
All TCP connections reduce their transmission rate when the queue overflows, then increase it again through slow start and congestion avoidance, then reduce it again, and so on. This makes the network traffic fluctuate. [Figure: total queue size oscillating over time.]

43 Global Synchronization Problem
[Figure: queue length repeatedly hitting the maximum queue length and draining.] Can result in very low throughput during periods of congestion.

44 Global Synchronization Problem
TCP congestion-control synchronization leads to bandwidth under-utilization, and persistently full queues lead to large queueing delays. Drop-tail also cannot provide (weighted) fairness to traffic flows, since it inherently assumes responsive flows. [Figure: the rates of flows 1 and 2 oscillating in phase, with the aggregate load repeatedly dipping below the bottleneck rate.]

45 Lock-Out Problem Lock-out: in some situations, tail drop allows a single connection or a few (misbehaving, e.g. UDP) flows to monopolize queue space, preventing other connections from getting room in the queue. This "lock-out" phenomenon is often the result of synchronization. [Figure: queue at maximum length, occupied by a few flows.]

46 Bias Against Bursty Traffic
When the queue is full, bursty traffic is dropped in bunches, which is unfair to bursty connections. [Figure: a burst arriving at a queue at maximum length.]

47 Active Queue Management Goals
Solve the lock-out and full-queue problems: no lock-out behavior, no global synchronization, no bias against bursty flows. Provide better QoS at a router: low steady-state delay, lower packet dropping.

48 RED (Random Early Detection)
FIFO scheduling. Buffer management: probabilistically discard packets, with the probability computed as a function of the average queue length. [Figure: discard probability vs. average queue length, zero below min_th, rising between min_th and max_th, and reaching 1 beyond max_th.]

49 Random Early Detection (RED)

50 RED Operation
[Figure: P(drop) vs. average queue length: zero below minthresh, rising linearly to MaxP at maxthresh, then jumping to 1.0.]

51 RED (Random Early Detection)
FIFO scheduling. Make use of the average queue length and define two threshold values, min thresh and max thresh. Case 1: average queue length < min threshold → admit the new packet.

52 RED (Cont'd)
Case 2: average queue length between the min and max thresholds → drop the new packet with probability p, or admit it with probability 1 - p.

53 Random Early Detection Algorithm
For each packet arrival, update the average queue size: ave = (1 - wq) * ave + wq * q. If ave < minth, do nothing. Else if minth <= ave < maxth, calculate the drop probability P = maxP * (ave - minth) / (maxth - minth) and drop the arriving packet with probability P. Else (maxth <= ave), drop the arriving packet. A runnable sketch follows.
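
Putting the pieces together, a minimal Python sketch of RED's per-arrival logic. The thresholds match the simulation setup on slide 83; max_p and w_q are illustrative values, and refinements such as gentle RED and the idle-time correction of the average are omitted:

    import random

    class RED:
        def __init__(self, min_th=100, max_th=200, max_p=0.1, w_q=0.002):
            self.min_th, self.max_th = min_th, max_th
            self.max_p, self.w_q = max_p, w_q
            self.avg = 0.0

        def on_arrival(self, queue_len):
            """Return True if the arriving packet should be dropped."""
            # EWMA of the instantaneous queue length: ave = (1-wq)*ave + wq*q
            self.avg = (1 - self.w_q) * self.avg + self.w_q * queue_len
            if self.avg < self.min_th:
                return False                    # no drop
            if self.avg >= self.max_th:
                return True                     # forced drop
            # probabilistic early drop between the two thresholds
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            return random.random() < p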

54 Random Early Detection (RED) Packet Drop
[Figure: average queue length over time, with the drop regions: no drop below the min threshold, probabilistic early drop between the min and max thresholds, and forced drop between the max threshold and the max queue length.]

55 Active Queue Management: Random Early Detection (RED)
The weighted average accommodates bursty traffic, and probabilistic drops avoid consecutive drops. Drops are proportional to bandwidth utilization (the drop rate is equal for all flows). [Figure: the same drop-region plot as on the previous slide.]

56 RED Vulnerable to Misbehaving Flows
[Figure: TCP throughput (KBytes/sec, 0-1,400) over 100 seconds under FIFO and RED; during a UDP blast, TCP throughput collapses under both.]

57 Effectiveness of RED: Lock-Out and Global Synchronization
Packets are dropped randomly, so each flow has the same probability of having a packet discarded, which breaks lock-out and global synchronization.

58 Effectiveness of RED: Full Queue and Bias Against Bursty Traffic
Packets are dropped probabilistically in anticipation of congestion, not when the queue is full. Using qavg to decide the dropping probability allows instantaneous bursts.

59 What QoS Does RED Provide?
Lower buffer delay (good interactive service): qavg is controlled to be small. Given responsive flows, packet dropping is reduced: the early congestion indication allows traffic to throttle back before congestion sets in. RED provides small delay, small packet loss, and high throughput, when the flows are responsive.

60 Weighted RED (WRED) WRED provides separate thresholds and weights for different IP precedences, allowing us to provide a different quality of service to different traffic. Lower-priority traffic may be dropped more frequently than higher-priority traffic during periods of congestion.

61 WRED (Cont'd)
[Figure: random dropping applied separately to high-, medium-, and low-priority traffic.]

62 Congestion Avoidance: Weighted Random Early Detection (WRED)
Adds per-class queue thresholds for differential treatment. Two classes are shown; any number of classes can be defined. [Figure: probability of packet discard vs. average queue depth, with a standard minimum threshold, a premium minimum threshold, and a shared standard/premium maximum threshold.]

63 Problems with (W)RED – unresponsive flows

64 Vulnerability to Misbehaving Flows
TCP performance on a 10 Mbps link under RED in the face of a “UDP” blast

65 Vulnerability to Misbehaving Flows
Consider the following example network: TCP sources S(1)..S(m) and UDP sources S(m+1)..S(m+n) connect over 100 Mbps access links to router R1, which feeds a 10 Mbps bottleneck link to router R2.

66 Vulnerability to Misbehaving Flows

67 Vulnerability to Misbehaving Flows
[Figure: RED queue size versus time.] Delay is bounded; global synchronization is solved.

68 Unresponsive Flow (such as UDP)
Unfairness of RED: with 32 TCP flows and 1 UDP flow, the unresponsive UDP flow occupies over 95% of the bandwidth.

69 Scheduling & Queue Management
What do routers want to do? Isolate unresponsive flows (e.g., UDP) and provide quality of service to all users. Two ways to do it: scheduling algorithms (e.g., WFQ, WRR) and queue management algorithms (e.g., RED, FRED, SRED).

70 The Setup and Problems
In a congested network with many users whose QoS requirements differ, the problem is to allocate bandwidth fairly.

71 Approach 1: Network-Centric
Network node: Weighted Fair Queueing (WFQ). User traffic: any type. Problem: complex implementation, lots of work per flow.

72 Approach 2: User-Centric
Network node: simple FIFO buffer with active queue management (AQM), e.g. RED. User traffic: congestion-aware (e.g. TCP). Problem: requires user cooperation.

73 Current Trend
Network node: simple FIFO buffer, with AQM schemes enhanced to provide fairness by preferentially dropping packets. User traffic: any type.

74 Packet Dropping Schemes
Size-based schemes: drop decision based on the size of the FIFO queue (e.g. RED). Content-based schemes: drop decision based on the current contents of the FIFO queue (e.g. CHOKe). History-based schemes: keep a history of packet arrivals/drops to guide the drop decision (e.g. SRED, RED with penalty box, AFD).

75 CHOKe (no state information)

76 Random Sampling from the Queue
A randomly chosen packet is more likely to come from the unresponsive flow, so unresponsive flows cannot fool the system.

77 Comparison of Flow ID
Compare the flow ID of the randomly sampled packet with that of the incoming packet. This is more accurate and reduces the chance of dropping packets from a TCP-friendly flow.

78 Dropping Mechanism
Drop packets (both the incoming packet and the matching samples). More arrivals mean more drops, giving users a disincentive to send more.

79 CHOKe (Cont'd)
Case 1: average queue length < min threshold → admit the new packet.

80 CHOKe (Cont'd)
Case 2: average queue length between the min and max thresholds → a packet is randomly chosen from the queue and compared with the arriving packet. If they are from the same flow, both packets are dropped; if they are from different flows, the same logic as in RED applies.

81 CHOKe (Cont'd)
Case 3: average queue length > max threshold → a random packet is chosen for comparison. If they are from the same flow, both packets are dropped; if they are from different flows, the new packet is dropped. The three cases combine as in the sketch below.
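
A sketch of the combined per-arrival logic, reusing the RED sketch from slide 53 for the average and thresholds (the packet representation with a flow_id attribute is an assumption; this is the basic single-sample CHOKe, whereas the self-adjusting variant on slide 86 draws multiple samples):

    import random

    def choke_on_arrival(pkt, queue, red):
        """Return True if the arriving packet should be dropped. `queue` is
        a list of buffered packets, each with a .flow_id attribute."""
        # update the average queue length exactly as RED does
        red.avg = (1 - red.w_q) * red.avg + red.w_q * len(queue)
        if red.avg < red.min_th:
            return False                        # Case 1: admit
        if queue:
            victim = random.choice(queue)       # draw one packet at random
            if victim.flow_id == pkt.flow_id:   # same flow: drop both
                queue.remove(victim)
                return True
        if red.avg >= red.max_th:
            return True                         # Case 3: drop the arrival
        # Case 2, flows differ: fall back to RED's probabilistic early drop
        p = red.max_p * (red.avg - red.min_th) / (red.max_th - red.min_th)
        return random.random() < p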

82 Simulation Setup

83 Network Setup Parameters
32 TCP flows, 1 UDP flow. Every TCP's maximum window size = 300. All links have a propagation delay of 1 ms. FIFO buffer size = 300 packets. All packet sizes = 1 KByte. RED: (minth, maxth) = (100, 200) packets.

84 32 TCP, 1 UDP (one sample)

85 32 TCP, 5 UDP (5 samples)

86 How Many Samples to Take?
Use a different number of samples for different values of Qlenavg: fewer samples when Qlenavg is close to minth, more samples when Qlenavg is close to maxth.

87 32 TCP, 5 UDP (self-adjusting)

88 Two Problems of CHOKe
Problem I: unfairness among UDP flows of different rates. Problem II: difficulty in automatically choosing how many packets to drop.

89 SAC (Self-Adjustable CHOKe)
Tries to solve the two problems mentioned above.

90 SAC Problem 1: unfairness among UDP flows of different rates. E.g., when k = 1, UDP flow 31 (6 Mbps) gets 1/3 the throughput of UDP flow 32 (1 Mbps), and when k = 10, the throughput of UDP flow 31 is almost 0.

91 SAC Problem 2: difficulty in automatically choosing how many packets to drop. When k = 4, the UDP flows occupy most of the bandwidth; when k = 10, sharing is relatively fair; and when k = 20, the TCP flows get most of the bandwidth.

92 SAC Solutions: 1. Search from the tail of the queue for a packet with the same flow ID and drop that packet instead of dropping at random; the higher a flow's rate, the more likely its packets are to gather at the rear of the queue, so queue occupancy becomes more evenly distributed among the flows. 2. Automate the choice of k according to traffic status (the number of active flows and the number of UDP flows).

93 SAC When an incoming UDP packet is compared with a randomly selected packet: if they are of the same flow, P is updated as P ← (1 - wp) P + wp; if they are of different flows, P ← (1 - wp) P. If P is small, there are many competing flows, and we should increase the value of k. For every incoming packet: if it is a UDP packet, R is updated as R ← (1 - wr) R + wr; if it is a TCP packet, R ← (1 - wr) R. If R is large, there is a large amount of UDP traffic, and we should increase k to drop more UDP packets.
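
The two exponentially weighted estimators translate into a couple of lines of Python (a sketch; the smoothing weights wp and wr, and whatever thresholds on P and R trigger adjustments of k, are illustrative assumptions rather than values from the slides):

    def update_match_estimate(P, same_flow, w_p=0.1):
        """P <- (1 - wp) P + wp on a flow match, else P <- (1 - wp) P.
        A small P suggests many competing flows, so k should grow."""
        return (1 - w_p) * P + (w_p if same_flow else 0.0)

    def update_udp_share(R, is_udp, w_r=0.1):
        """R <- (1 - wr) R + wr for a UDP arrival, else R <- (1 - wr) R.
        A large R suggests heavy UDP traffic, so k should also grow."""
        return (1 - w_r) * R + (w_r if is_udp else 0.0)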

94 SAC simulation Throughput per flow (30 TCP flows and 2 UDP flows of different rates)

95 SAC simulation Throughput per flow (30 TCP flows and 4 UDP flows of the same rate).

96 SAC simulation Throughput per flow (20 TCP flows and 4 UDP flows of different rates)

97 AQM Using “Partial” state information

98 Congestion Management and Avoidance: Goal
Provide fair bandwidth allocation similar to WFQ while being as simple to implement as RED. [Figure: a design space with WFQ at the ideal-fairness end and RED at the simplicity end.]

99 AQM Based on Capture-Recapture
Objective: achieve fairness close to max-min fairness (if W(f) < C/N, then R(f) = W(f); if W(f) > C/N, then R(f) = C/N). Formulation: let Ri be the sending rate of flow i and Di its drop probability. Ideally we want Ri (1 - Di) = Rfair (the equal share), i.e. Di = (1 - Rfair/Ri)+, which drops exactly the excess.
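
In code, the target drop probability is just the clipped excess (a one-line sketch):

    def drop_prob(R_i, R_fair):
        """D_i = (1 - R_fair / R_i)^+ : drop just the excess, so that
        R_i * (1 - D_i) = R_fair whenever R_i > R_fair."""
        return max(0.0, 1.0 - R_fair / R_i)

    print(drop_prob(6.0, 0.3))   # 0.95: a 6 Mbps flow is trimmed to 0.3 Mbps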

100 AQM Based on Capture-Recapture
The key question is how to estimate the sending rate (Ri) and the fair share (Rfair). [Figure: incoming packets feed an AQM module that combines an estimator of the sending rate, an estimator of the fair share, and an adjustment mechanism to produce a fair allocation of bandwidth.]

101 Capture-Recapture Models
CR models were originally developed for estimating demographic parameters of animal populations (e.g., population size, number of species). They are extremely useful where inspecting the whole state space is infeasible or very costly. Numerous models have been developed for various situations, and CR models are used in many diverse fields, from software inspection to epidemiology. The key idea: animals are captured randomly, marked, released, and then recaptured randomly from the population.


104 Time is then allowed for the marked individuals to mix with the unmarked individuals.

106 Then another sample is captured.

108 Capture-Recapture Model
Unknown number of fish in a lake: catch a sample and mark them, let them loose, recapture a sample and look for marks, then estimate the population size. n1 = number in the first sample = 15; n2 = number in the second sample = 10; n12 = number in both samples = 5; N = total population size. Assume n1/N = n12/n2; therefore 15/N = 5/10, so N = (10 x 15) / 5 = 30.

109 Capture-Recapture Models
Simple model: estimate the size N of a homogeneous population of animals. n1 animals are captured and marked; later n2 animals are recaptured, and m2 of these turn out to be marked. Under this simple capture-recapture model (M0): m2/n2 = n1/N, so N = n1 n2 / m2.
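
This M0 estimate is the classical Lincoln-Petersen estimator; in code:

    def lincoln_petersen(n1, n2, m2):
        """Solve m2/n2 = n1/N for N under the M0 model: N = n1*n2/m2."""
        return n1 * n2 / m2

    print(lincoln_petersen(15, 10, 5))   # 30.0, matching the fish example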

110 Capture-Recapture Models
The capture probability is the chance that an individual animal gets caught. M0 assumes the capture probability is the same for all animals ('0' refers to a constant capture probability). Under the Mh model, capture probabilities vary by animal, for reasons such as differences in species, sex, or age ('h' refers to heterogeneity).

111 Capture-Recapture Models
Estimation of N under the Mh model is based on the capture-frequency data f1, f2, ..., ft (over t captures), where f1 is the number of animals caught exactly once, f2 the number caught exactly twice, and so on. The jackknife estimator of N is a linear combination of these capture frequencies: N = a1 f1 + a2 f2 + ... + at ft, where the coefficients ai are functions of t.
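
As one concrete instance of this family (an assumption on my part that a first-order estimator suffices here; higher-order jackknives use more frequencies), the first-order jackknife sets a1 = 1 + (t-1)/t and ai = 1 for i >= 2, i.e. N = S + f1 (t-1)/t, where S is the number of distinct animals caught:

    def jackknife_first_order(freqs, t):
        """freqs[i-1] = f_i, the number of animals caught exactly i times
        over t occasions. N = S + f1*(t-1)/t, with S = sum of all f_i."""
        S = sum(freqs)                    # distinct animals ever captured
        return S + freqs[0] * (t - 1) / t

    # e.g. 10 animals seen once, 6 twice, 4 three times over t = 5 captures:
    print(jackknife_first_order([10, 6, 4], 5))   # 20 + 10*4/5 = 28.0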

112 AQM Based on Capture-Recapture
The key question is: how to estimate the sending rate (Ri) and the fair share (Rfair) !!! We use an arrival buffer to store the recently arrived packet headers (we can have control over how large the buffer is, and is a better representation of the nature of the flows when compared to the sending buffer): We estimate Ri using the M0 capture-recapture model We estimate Rfair using the Mh capture-recapture model (by estimating the number of active flows).

113 AQM Based on Capture-Recapture
Ri is estimated for every arriving packet (accuracy can be increased with multiple captures, or decreased by capturing packets only periodically). If the arrival buffer is of size B and the number of captured packets of flow i is Ci, then Ri = R Ci/B, where R is the aggregate arrival rate. Rfair may not change every time slot, so the capturing and the calculation of the number of active flows can be done independently of each packet arrival: Rfair = R / (number of active flows). The capture-recapture model gives a lot of flexibility in trading accuracy against complexity, and the same captures can be used for calculating both Ri and Rfair.
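
A sketch of the per-arrival estimation, assuming arrival_buffer is a list of the last B flow IDs and num_active_flows comes from the Mh/jackknife estimate above (the names and interface are illustrative):

    def estimate_rates(pkt_flow_id, arrival_buffer, R, num_active_flows):
        """Estimate Ri = R * Ci / B, where Ci counts 'recaptures' of this
        flow among the last B headers, and Rfair = R / (active flows)."""
        B = len(arrival_buffer)
        C_i = sum(1 for fid in arrival_buffer if fid == pkt_flow_id)
        R_i = R * C_i / B if B else 0.0
        R_fair = R / num_active_flows if num_active_flows else R
        return R_i, R_fair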

114 AQM Based on Capture-Recapture
[Figure: incoming packets feed the AQM module, which estimates Ri with the M0 model and Rfair with the Mh CR model, then applies Di = (1 - Rfair/Ri)+ for a fair allocation of bandwidth.]

115 Performance Evaluation
This is a classical setup for evaluating AQM schemes (one can vary many parameters: responsive vs. non-responsive connections, the nature of the responsiveness, link delays, etc.). [Figure: TCP sources S(1)..S(m) and UDP sources S(m+1)..S(m+n), each on 100 Mbps access links, share a 10 Mbps bottleneck between routers R1 and R2.]

116 Performance evaluation
Estimation of the number of flows

117 Performance evaluation
Bandwidth allocation comparison between CAP and RED

118 Performance evaluation
Bandwidth allocation comparison between CAP and SRED

119 Performance evaluation
Bandwidth allocation comparison between CAP and RED-PD

120 Performance evaluation
Bandwidth allocation comparison between CAP and SFB

121 Normalized Measure of Performance
Fairness is compared with a single normalized value ||BW||, which measures the deviation of the bandwidth bj received by each flow from the ideal fair share bi (the formula itself was an image in the original slide; a Euclidean norm of the per-flow deviations is one consistent reading). Thus ||BW|| = 0 for ideal fair sharing.
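
Under that reading (an assumption, since the exact normalization is not recoverable from the transcript), the measure is a one-liner:

    import math

    def bw_norm(received, fair_share):
        """One reading of ||BW||: Euclidean distance between the received
        bandwidths b_j and the ideal fair share b_i across all flows."""
        return math.sqrt(sum((b - fair_share) ** 2 for b in received))

    print(bw_norm([0.3, 0.3, 0.3], 0.3))   # 0.0 for ideal fair sharing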

122 Normalized Measure of Performance

123 Performance Evaluation: Variable amount of unresponsiveness

