Slide 1: Impact of Bandwidth-Delay Product and Non-responsive Flows on the Performance of Queue Management Schemes
Zhili Zhao, A. L. Narasimha Reddy
Department of Electrical Engineering, Texas A&M University
reddy@ee.tamu.edu
ICC, June 23, 2004
Slide 2: Agenda
- Motivation
- Performance Evaluation
- Results & Analysis
- Discussion
Slide 3: Current Network Workload
- Traffic composition in the current network:
  - ~60% long-term TCP (LTRFs), ~30% short-term TCP (STFs), ~10% long-term UDP (LTNRFs)
- Non-responsive traffic (STF + LTNRF) is increasing
- Link capacities are increasing
- What is the consequence?
Slide 4: The Trends
- Long-term UDP traffic is increasing
  - Multimedia applications
  - Non-responsive UDP traffic impacts TCP applications
[Figure: UDP goodput and TCP goodput vs. UDP arrival rate]
Slide 5: The Trends (cont'd)
- Link capacity is increasing
  - Larger buffer memory is required if the current rule (buffer = bandwidth * delay product) is followed
  - Increasing queuing delay
  - Larger memories constrain router speeds
- What if smaller buffers are used in the future?
Slide 6: Overview of Paper
- Study buffer management policies in light of:
  - Increasing non-responsive loads
  - Increasing link speeds
- Policies studied:
  - Droptail
  - RED
  - RED with ECN
Slide 7: Queue Management Schemes
- RED
- RED-ECN (RED with ECN enabled)
- Droptail
[Figure: RED drop probability vs. average queue length: 0 below Min_th, rising linearly to P_max at Max_th, then jumping to 1]
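The RED drop-probability curve sketched on this slide can be written out as a small function. This is a minimal sketch of classic RED's piecewise-linear profile using the slide's parameter names (Min_th, Max_th, P_max); the function name and signature are my own, and RED's EWMA averaging of the queue length is omitted.

```python
def red_drop_prob(avg_qlen, min_th, max_th, p_max):
    """Classic RED drop/mark probability as a function of average queue length:
    0 below min_th, rising linearly to p_max at max_th, 1 at or beyond max_th."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return p_max * (avg_qlen - min_th) / (max_th - min_th)
```

With ECN enabled (RED-ECN), the same probability is used to mark packets rather than drop them, as long as the endpoints negotiate ECN.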
Slide 8: Agenda
- Motivation
- Performance Evaluation
- Results & Analysis
- Discussion
Slide 9: Performance Evaluation
- Different workloads with higher non-responsive loads: 60%
- Different link capacities: 5 Mb, 35 Mb, 100 Mb
- Different buffer sizes: 1/3, 1, or 3 * BWDP

Buffer sizes in packets (1 packet = 1000 bytes):

  Multiple of BWDP | 5 Mb | 35 Mb | 100 Mb
  1/3              |   25 |   200 |    500
  1                |   75 |   500 |   1500
  3                |  225 |  1500 |   4500
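The packet counts in the table follow from buffer = bandwidth * round-trip delay (120 ms, per the link-characteristics slide) with 1000-byte packets. A minimal sketch of that arithmetic (function name is my own); note the 35 Mb case works out to 525 packets, which the table rounds to 500:

```python
def bwdp_packets(capacity_bps, rtt_s, pkt_bytes=1000):
    """Bandwidth-delay product expressed in packets of pkt_bytes each."""
    return capacity_bps * rtt_s / (8 * pkt_bytes)

# 120 ms round-trip propagation delay, as in the simulation setup
for mbps in (5, 35, 100):
    print(mbps, "Mb:", bwdp_packets(mbps * 1e6, 0.120), "packets per BWDP")
```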
Slide 10: Workload Characteristics
- TCP (FTP): LTRFs
- UDP (CBR): LTNRFs
  - 60%, 55%, or 30% of link capacity
  - 1 Mbps or 0.5 Mbps per flow
- Short-term TCP: STFs
  - 0%, 5%, or 30% of link capacity
  - 10 packets / 10 s on average
Slide 11: Workload Characteristics (cont'd)
Number of flows under the 35 Mb link contributing to a 60% non-responsive load:

  STF Load | # of LTRFs | # of STFs | # of LTNRFs
  0%       |         55 |         0 |          22
  5%       |         55 |       250 |          22
  30%      |         55 |      1300 |          14

- Each LTNRF sends at 1 Mbps
- Flow counts under the 5 Mb and 100 Mb links are scaled accordingly
Slide 12: Performance Metrics
- Realized TCP throughput
- Average queuing delay
- Link utilization
- Standard deviation of queuing delay
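The two delay metrics on this slide (average queuing delay and its standard deviation, the latter serving as a jitter measure) can be computed directly from per-packet delay samples. A minimal sketch using Python's standard library; the function name and the sample values are my own illustration, not data from the paper:

```python
import statistics

def delay_stats(samples_ms):
    """Average queuing delay and its (population) standard deviation,
    computed over a list of per-packet queuing-delay samples in ms."""
    return statistics.mean(samples_ms), statistics.pstdev(samples_ms)

# Hypothetical delay samples: mean 5 ms, std dev 2 ms
avg, jitter = delay_stats([2, 4, 4, 4, 5, 5, 7, 9])
```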
Slide 13: Simulation Setup
[Figure: dumbbell simulation topology: TCP and CBR sources feed router R1, which connects to router R2 and on to the TCP and CBR sinks; RED/DT is deployed on the R1-R2 link, T_p = 50 ms]
Slide 14: Link Characteristics
- Capacities between R1 and R2: 5 Mb, 35 Mb, 100 Mb
- Total round-trip propagation delay: 120 ms
- Queue management schemes deployed between R1 and R2: RED / RED-ECN / Droptail
Slide 15: Agenda
- Motivation
- Performance Evaluation
- Simulation Setup
- Results & Analysis
- Discussion
Slide 16: Sets of Simulations
- Changing buffer sizes
- Changing link capacities
- Changing STF loads
Slide 17: Set 1: Changing Buffer Sizes
- Correlation between average queuing delay and BWDP
[Figures: DropTail; RED/RED-ECN]
Slide 18: Realized TCP Throughput
- 30% STF load
- Changing buffer size from 1/3 to 3 BWDPs
[Figures: 5 Mb link; 100 Mb link]
Slide 19: Realized TCP Throughput (cont'd)
- TCP throughput is higher with DropTail
- The difference decreases with larger buffer sizes
- Average queuing delay with the REDs is much smaller than with Droptail
- RED-ECN marginally improves throughput over RED
Slide 20: Link Utilization
- 30% STF load
- Droptail has higher utilization with smaller buffers
- The difference decreases with larger buffers

  Multiple | ----- 5 Mb Link ----- | ----- 35 Mb Link ---- | ---- 100 Mb Link ----
  of BWDP  | RED   RED-ECN  DT     | RED   RED-ECN  DT     | RED   RED-ECN  DT
  1/3      | .943  .947     .974   | .961  .955     .968   | .967  .959     .971
  1        | .963  .965     .975   | .967  .971     .972   |
  3        | .973  .976     .969   | .970  .972     .973   |

  (The 100 Mb entries for the 1 and 3 BWDP rows did not survive in this copy.)
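The utilization figures in the table are fractions of link capacity actually carrying traffic. A minimal sketch of how such a number is obtained from a simulation trace (function name and arguments are my own):

```python
def link_utilization(bytes_delivered, capacity_bps, interval_s):
    """Fraction of link capacity used: bits delivered over the measurement
    interval, divided by the bits the link could have carried."""
    return (8 * bytes_delivered) / (capacity_bps * interval_s)

# Hypothetical: 612,500 bytes delivered in 1 s on a 5 Mb link -> 0.98
u = link_utilization(612_500, 5e6, 1.0)
```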
Slide 21: Std. Dev. of Queuing Delay
- 30% STF + 30% ON/OFF LTNRF load
[Figures: 5 Mb link; 100 Mb link]
Slide 22: Std. Dev. of Queuing Delay (cont'd)
- Droptail has comparable deviation at 5 Mb link capacity
- The REDs have less deviation at larger buffer sizes and higher bandwidths
- The REDs are more suitable for jitter-sensitive applications
Slide 23: Set 2: Changing Link Capacities
- 30% STF load
- Relative avg queuing delay = avg queuing delay / round-trip propagation delay
[Figures: ECN disabled; ECN enabled]
Slide 24: Relative Avg Queuing Delay
- Droptail's relative avg queuing delay is close to the buffer size (x * BWDP)
- The REDs have significantly smaller avg queuing delay (~1/3 of DropTail's)
- Changing link capacities has almost no impact
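The first observation above follows from the buffer draining at line rate: a full Droptail buffer of x BWDP holds x * capacity * RTT bits, which take x * RTT to drain, so the relative delay is about x regardless of capacity. A sketch of that arithmetic (function and argument names are my own):

```python
def droptail_rel_qdelay(bwdp_multiple, capacity_bps, rtt_s, pkt_bytes=1000):
    """Queuing delay of a full Droptail buffer sized at x BWDP, expressed
    relative to the round-trip propagation delay.  The capacity cancels,
    leaving ~x independent of link speed."""
    buffer_pkts = bwdp_multiple * capacity_bps * rtt_s / (8 * pkt_bytes)
    qdelay_s = buffer_pkts * 8 * pkt_bytes / capacity_bps  # drain at line rate
    return qdelay_s / rtt_s
```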
Slide 25: Drop/Marking Rate
- 30% STF load, 1 BWDP

  QM       | Flow  | 5 Mb          | 35 Mb     | 100 Mb
  RED      | LTRF  | .03627        | .03112    | .02503      (1)
           | LTNRF | .03681        | .03891    | .02814
  RED-ECN  | LTRF  | .00352/.04256 | 0/.04123  | 0/.03036    (2)
           | LTNRF | .04688        | .05352    | .03406
  DT       | LTRF  | .01787        | .01992    | .01662      (1)
           | LTNRF | .10229        | .09954    | .12189

  (1) Format: drop rate
  (2) Format: drop rate / marking rate
Slide 26: Set 3: Changing STF Loads
- 1 BWDP
- Normalized TCP throughput = TCP throughput / (UDP + TCP) throughput
[Figures: ECN disabled; ECN enabled]
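The normalization defined on this slide is a simple ratio; a one-line sketch (function name is my own) makes the definition concrete:

```python
def normalized_tcp_throughput(tcp_bps, udp_bps):
    """TCP's share of the combined TCP + UDP throughput, per the slide's
    definition: TCP throughput / (UDP + TCP) throughput."""
    return tcp_bps / (tcp_bps + udp_bps)

# Hypothetical: 3 Mbps of TCP alongside 1 Mbps of UDP -> 0.75
share = normalized_tcp_throughput(3e6, 1e6)
```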
Slide 27: Comparison of Throughputs
- STF throughputs are almost constant across the 3 queue management schemes
- The difference in TCP throughputs decreases as the STF load increases

  STF   | ------- RED ------- | ----- RED-ECN ----- | -------- DT --------
  Load  | LTRF  STF   LTNRF   | LTRF  STF   LTNRF   | LTRF  STF   LTNRF
  0%    | .505  0     .461    | .507  0     .458    | .730  0     .238
  5%    | .457  .051          | .460  .051  .456    | .729  .051  .190
  30%   | .454  .272  .244    | .457  .271  .242    | .478  .272  .220

  (The RED LTNRF entry for the 5% row did not survive in this copy.)
Slide 28: Agenda
- Motivation
- Performance Evaluation
- Simulation Setup
- Results & Analysis
- Discussion
Slide 29: Discussion
- With STF load present and in high-BWDP cases, the REDs' performance metrics are comparable to or better than DropTail's
- RED-ECN with TCP-Sack yields only a marginal improvement in long-term TCP throughput over RED
Slide 30: Discussion (cont'd)
- Changing either link capacities or STF loads has only minor impact on avg queuing delay or TCP throughput
- With STFs present:

  BWDP                                         | Choose      | TCP Throughput | Avg QDelay & Jitter
  << 1 BWDP (small bw/buffer, low-delay link)  | Droptail    | Better         | Comparable
  >= 1 BWDP (large bw/buffer, high-delay link) | RED/RED-ECN | Comparable     | Significantly lower
Slide 31: Thank You
June 2004
Slide 32: Related Work
- S. Floyd et al., "Internet Needs Better Models"
- C. Diot et al., "Aggregated Traffic Performance with Active Queue Management and Drop from Tail" and "Reasons Not to Deploy RED"
- K. Jeffay et al., "Tuning RED for Web Traffic"