The War Between Mice and Elephants

The War Between Mice and Elephants
Liang Guo, Ibrahim Matta
Computer Science Department, Boston University, Boston, MA 02215 USA
Presented by: Daniel Courcy (daniel.courcy@hp.com) and Nathan Salemme (nate.salemme@hp.com)

Outline
- Introduction
- Analyzing Short Flow Performance
- Scheme Architecture and Mechanisms
- Simulation
- Discussion
- Conclusion

References
Most figures, tables and graphs in this presentation were gathered from the paper "The War Between Mice and Elephants" (Guo and Matta, ICNP 2001).

Mice and Elephants?
- Most connections are short (mice)
- However, long connections (elephants) dominate Internet bandwidth
- Problem: elephants hinder the performance of short mice connections, so long delays ensue

TCP Factors
The TCP transport layer has properties that cause this problem:
- The sending window is ramped up slowly from a minimum value (slow start), regardless of the available bandwidth
- For short flows, packet loss is almost always detected by timeout, since there is not enough data in flight to trigger duplicate ACKs
- A conservative initial timeout (ITO) can have devastating effects if the initial control packets (SYN, SYN-ACK) are lost
(A back-of-envelope sketch follows.)
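A small Python sketch of why these factors hurt mice (my own illustration, not from the paper; the 20-packet flow size and the single lost SYN are assumptions): slow start needs several round trips even when bandwidth is plentiful, and one lost SYN adds a full 3-second initial timeout:

def slow_start_rtts(num_packets, init_cwnd=1):
    """Round trips needed to deliver num_packets when cwnd doubles each RTT."""
    sent, cwnd, rtts = 0, init_cwnd, 0
    while sent < num_packets:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

RTT, ITO = 0.1, 3.0            # seconds; 3 s is the conservative default ITO
flow = 20                      # packets in a typical "mouse"

base = (1 + slow_start_rtts(flow)) * RTT    # handshake round trip + data rounds
with_syn_loss = base + ITO                  # a single lost SYN adds a full ITO

print(f"no loss : {base:.1f} s")
print(f"lost SYN: {with_syn_loss:.1f} s (dominated by the {ITO:.0f} s ITO)")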

Proposed Solution (note: flow = connection = session)
- These factors cause short flows to receive less than their fair share of bandwidth
- Proposed plan: give 'preferential treatment' to short flows
- Differentiated Services (Diffserv) architecture with core and edge routers (similar to CSFQ)
- Active Queue Management (AQM): core routers use a RIO-PS queue, edge routers use threshold-based classification
- No packet reordering
- Simulations show better fairness and response time when short flows get this treatment, while goodput remains the same or improves

Related Work
- Guo & Matta, "Differentiated Predictive Fair Service for TCP Flows" (2000): a study of the interaction between short and long flows; it proposes isolating short and long flows, which would improve response time and fairness for short flows. Results show that class-based flow isolation with threshold-based classification at the edge may cause packet reordering and thus degrade TCP performance. Difference in Mice & Elephants: bandwidth (load) control is performed at the edge
- Cardwell et al. show in "Modeling TCP Latency" that short TCP flows are vulnerable
- Seddigh et al. show the negative impact of the initial timeout value (ITO) on short TCP flow latency and propose reducing it (G & M test this later on…)
- Many proposed solutions attempt to revise the TCP protocol; this study instead places control inside the network (fair to all)
- Crovella et al. and Bansal et al. claim that size-aware job scheduling enhances the response time of short jobs while not really hurting the performance of long jobs

Related Work
- G & M propose an alternative use of the Diffserv architecture
- Diffserv is a framework that allows for classification and differentiated treatment
- G & M want to provide a new "better-than-best-effort" TCP service that enhances the competitiveness of short TCP flows, creating a fairer environment for Web traffic
- They use In/Out packet marking to distinguish short from long flows

Analyzing Short Flow Performance
- The packet loss rate needs to be low for short connections to perform well
- Simulation of connections of different sizes (in packets):
  - RTT 0.1 seconds, RTO 0.4 seconds, default ITO 3 seconds
  - Fixed size for each connection; vary the loss rate
- Goal: show how short flows become very sensitive as the packet loss rate increases (see the sketch that follows)
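To make the sensitivity claim concrete, here is a rough Monte Carlo sketch in Python (my own illustration under the slide's parameters, not the paper's analytical model); it assumes every lost data packet of a mouse is recovered by a retransmission timeout, since short flows rarely have enough packets in flight to trigger duplicate ACKs:

import math
import random

RTT, RTO, ITO = 0.1, 0.4, 3.0   # seconds, as on the slide

def transfer_time(num_packets, loss_rate, rng):
    t = RTT                                      # SYN / SYN-ACK handshake
    while rng.random() < loss_rate:              # a lost SYN costs a full ITO
        t += ITO
    t += math.ceil(math.log2(num_packets + 1)) * RTT   # loss-free slow-start rounds
    for _ in range(num_packets):
        while rng.random() < loss_rate:          # each lost data packet costs an RTO
            t += RTO
    return t

rng = random.Random(1)
for p in (0.01, 0.05, 0.10, 0.20):
    trials = [transfer_time(20, p, rng) for _ in range(5000)]
    print(f"loss {p:4.0%}: mean time for a 20-packet mouse ~ {sum(trials) / len(trials):.2f} s")

Under this crude model the mean completion time stays near the loss-free value at 1% loss but grows several-fold by 20% loss, dominated by timeouts.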

Sensitivity of Short Flows
- Short flows are not sensitive at low loss rates, but become very sensitive at high loss rates
- For long flows, transmission time simply grows as the loss rate increases
[Figure: transmission time vs. packet loss rate; the graph callouts contrast "exponential growth as loss rate increases" with "linear growth as loss rate increases"]

Sensitivity of Short Flows
- Coefficient of variation (C.O.V.): the ratio between the standard deviation and the mean of a random variable
- The C.O.V. of transfer time for short flows increases as the loss rate increases
- The C.O.V. for long flows decreases as the loss rate increases
(A small computation example follows.)
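A minimal Python sketch of the C.O.V. computation used here (the sample transfer times are invented, purely to illustrate why a few timeouts make short flows look erratic while long flows average their losses out):

from statistics import mean, pstdev

def cov(samples):
    """Coefficient of variation = standard deviation / mean."""
    return pstdev(samples) / mean(samples)

short_flow_times = [0.6, 0.7, 3.6, 0.6, 4.0, 0.7]     # a couple of timeouts dominate
long_flow_times = [95.0, 102.0, 99.0, 101.0, 98.0]    # many packets smooth out losses

print(f"C.O.V. short flows: {cov(short_flow_times):.2f}")   # high: erratic
print(f"C.O.V. long flows:  {cov(long_flow_times):.2f}")    # low: predictable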

Sensitivity of Short Flows
Possible reasons for the high C.O.V. of short flows:
- High congestion puts TCP into exponential backoff
- Resending packets in slow start vs. congestion avoidance yields high variance (slow start is aggressive, congestion avoidance is conservative)
- Law of Large Numbers: with fewer packets, short connections show more variance

Preferential Treatment of Short TCP Flows
- Simple simulations show that preferential treatment of short TCP flows can significantly improve short-flow transmission time, with no major effect on long flows
- G & M use ns to set up 10 long (10000-packet) and 10 short (100-packet) TCP flows competing for bandwidth over a 1.25 Mbps link
- Queue management is applied at the bottleneck; they measure instantaneous bandwidth to show the effects of preferential treatment

Impact of Preferential Treatment
- Compare Drop Tail vs. RED vs. RIO-PS (RIO with Preferential treatment to Short flows)
- Drop Tail fails to give fair treatment; it favors aggressive flows with larger windows
- RED gives almost fair treatment
- The RIO-PS queue gives short flows more than their fair share
- The graph shows that short flows under RIO-PS can temporarily "steal" bandwidth, but in the long run a short flow's early completion returns resources, i.e. long flows can better adapt to the network state
- Giving short flows preferential treatment does not hurt long-term goodput

Additional Notes on Preferential Treatment
- Preferential treatment might even help long flows (as seen on the previous slide): the threshold lets a long flow's initial (control) packets through
- Fewer drops for short flows improves fairness among the short flows themselves
- Table I: when load is low, RIO-PS and RED give slightly lower goodput; when load is higher, RIO-PS gives slightly higher goodput
- Therefore G & M propose a Diffserv-like architecture

Proposed Architecture
Diffserv architecture:
- Edge routers: classify flows into short and long, and mark packets accordingly
- Core routers: actively manage flows based on their class; the AQM policy is implemented at the core

Edge Router
- The edge router sits at the edge of the network and is responsible for distinguishing short and long flows
- A simple threshold counter Lt is used: a connection that has sent fewer than Lt packets is a short flow, one that has sent more than Lt packets is a long flow
- All connections start off classified as short
- Lt is dynamic; a connection is considered active until an inactivity timer Tu expires
- An SLR (short-to-long ratio) parameter is used to balance short vs. long traffic; it is updated every Tc time period through additive increase/decrease
- Chosen values: Tu = 1 second and Tc = 10 seconds
(A classification sketch follows.)
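A hedged Python sketch of this edge classification as I read it from the slide (not the authors' code; the initial threshold of 20 packets, the target SLR of 3 and the one-packet adjustment step are illustrative assumptions):

import time

class EdgeClassifier:
    def __init__(self, lt=20, tu=1.0, tc=10.0, target_slr=3.0):
        self.lt, self.tu, self.tc = lt, tu, tc        # threshold Lt, timers Tu/Tc (s)
        self.target_slr = target_slr                  # desired short-to-long ratio
        self.flows = {}                               # flow_id -> (pkt_count, last_seen)
        self.short_pkts = self.long_pkts = 0
        self.last_adjust = time.time()

    def classify(self, flow_id, now=None):
        if now is None:
            now = time.time()
        count, last = self.flows.get(flow_id, (0, now))
        if now - last > self.tu:                      # idle longer than Tu: a new flow
            count = 0
        count += 1
        self.flows[flow_id] = (count, now)
        label = "short" if count <= self.lt else "long"   # every flow starts off short
        if label == "short":
            self.short_pkts += 1
        else:
            self.long_pkts += 1
        if now - self.last_adjust >= self.tc:         # every Tc: additive +/- step on Lt
            observed = self.short_pkts / max(self.long_pkts, 1)
            self.lt += 1 if observed < self.target_slr else -1
            self.lt = max(self.lt, 1)
            self.short_pkts = self.long_pkts = 0
            self.last_adjust = now
        return label                                  # the edge marks the packet In/Out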

Core Router: Preferential Treatment to Short Flows
- G & M require core routers to give preferential treatment to short flows
- Many queue policies are available; they picked RIO (RED with In and Out)
- RIO conforms to Diffserv and has other advantages (it lets bursty traffic through)

RIO
Operation of the queue:
- In (short) packets are not affected by Out (long) packets
- The dropping/marking probability for short packets is based only on the average backlog of short packets (Qshort)
- The dropping/marking probability for long packets is based on the total average queue size (Qtotal)
(A drop-decision sketch follows.)
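A minimal Python sketch of the drop/mark decision described above (an illustration, not the ns-2 RIO code; the thresholds and maximum probabilities are made-up parameters):

import random

def red_prob(avg_q, min_th, max_th, max_p):
    """Classic RED: linear drop probability between min_th and max_th."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def rio_drop(packet_class, q_short, q_total, rng=random):
    if packet_class == "short":     # In packets: judged on the short backlog only
        p = red_prob(q_short, min_th=10, max_th=30, max_p=0.05)
    else:                           # Out packets: judged on the whole average queue
        p = red_prob(q_total, min_th=5, max_th=15, max_p=0.10)
    return rng.random() < p         # True means drop (or mark, with ECN)

# A queue filled mostly by long flows barely touches the short packets:
print(rio_drop("short", q_short=4, q_total=40))   # False
print(rio_drop("long",  q_short=4, q_total=40))   # True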

Design Features of RIO
- Only a single FIFO queue is used for all packets, so reordering will not happen (reordering can lead to over-estimation of congestion)
- RIO inherits the features of RED: protection of bursty flows, fairness within traffic classes, detection of incipient congestion
- RIO-PS (RIO with Preferential treatment for Short flows): the bandwidth share for short flows is determined by the RIO parameters; G & M choose about 75% of the total link bandwidth in times of congestion

Simulation
G & M use the ns simulator to study the performance of their scheme against Drop Tail and RED.

Simulation Setup
- Assume network traffic is dominated by Web traffic, modeled as follows:
  - Randomly selected clients start sessions that surf to random Web pages (of different sizes)
  - Each page may contain several objects, and each object requires its own TCP connection (HTTP 1.0 is assumed)
  - Clients send requests and servers respond with an acknowledgement and the remainder of the data (the Web page)
- The load is carefully tuned to be close to the bottleneck link capacity

Simulation Topology
- Traffic largely flows right to left; node 0 is thus the entry edge router, and nodes 1, 2, 3 are core routers
- The bottlenecks toward the clients are links 1-3 and 2-3
- Bottleneck buffer size and queue management are set to maximize power, the ratio between throughput and latency (high power means low delay and high throughput); see the formula below
- Again, for the RIO-PS queue, short flows are set to get about 75% of the total bandwidth
- ECN is turned on (this aids RED with short flows); with ECN turned off, even larger performance gains were observed
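For reference, the "power" metric mentioned above is the standard network figure of merit (a general definition, not a quote from the paper):

    \mathrm{Power} = \frac{\text{Throughput}}{\text{Average Delay}}

so the buffer size that maximizes power is the one that keeps throughput high without letting queueing delay grow.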

Experiment 1: Single Client
- Only one client set is used in the previous topology
- Parameters: the experiment runs for 4000 seconds (the first 2000 are discarded as warm-up); SLR is set to 3
- Drop Tail, RED and RIO-PS are compared
- Response time is recorded after each successful object download

Experiment 1: Single Client Response Time (ITO = 3)
- ITO set to 3 seconds
- RIO-PS average response time is 25-30% lower than the competition: a solid gain
- It is argued that a 3-second ITO is too conservative, so a 1-second timer is tried next

Experiment 1: Single Client Response Time (ITO = 1)
- ITO set to 1 second: a potentially unsafe setting in environments with long, slow links and long propagation delays
- Results show a smaller gap (15-25%) between RIO-PS and the competition
- Still good performance, but it might not be worth the risk

Experiment 1: Single Client Queue Size (ITO = 3)
- Instantaneous queue size over the last 20 seconds using the 3-second ITO
- High variation is due to the file size distribution
- The RIO-PS queue stays relatively low compared to Drop Tail and RED

Experiment 1: Single Client Drop/Mark Rate (ITO = 3)
- With RIO-PS, the overall drop/mark rate of the entire network is reduced
- Short connections rarely lose packets

Study of Foreground Traffic
- To better show each queue management policy's effect on the fairness of TCP connections, G & M injected 10 short and 10 long foreground TCP connections and recorded their response times
- The fairness index of the response times is then computed (a sketch of the computation follows)
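A hedged Python sketch of a fairness-index computation over response times; the transcript does not name the index, so Jain's fairness index (the usual choice) is assumed, and the sample response times are invented:

def jain_fairness(values):
    """Jain's index: 1.0 is perfectly fair, 1/n is maximally unfair."""
    n = len(values)
    return sum(values) ** 2 / (n * sum(v * v for v in values))

short_flow_response = [0.6, 0.7, 0.65, 2.1, 0.7, 0.6, 0.72, 0.68, 0.66, 3.4]
print(f"fairness index over 10 short flows: {jain_fairness(short_flow_response):.3f}")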

Fairness – Short Flows
- RIO-PS is the most fair policy for short flows

Fairness – Long Flows
- Long flows do well across the board

Transmission Time – Short Connections

Transmission Time – Long Connections

Table IV: Summary of Overall Goodput
- The proposed scheme does not hurt, and slightly improves, overall goodput relative to Drop Tail

Experiment 2: Unbalanced Requests
- Simulation where file requests on different routes are unbalanced: small requests on one route, large requests on another
- The proposed scheme then reduces to traditional unclassified traffic plus RED, i.e. it behaves no differently than RED
- Still, getting the first few packets across with preferential treatment helps reduce the chance of retransmission and lets short connections finish quickly
- The remainder of the results is omitted

Discussion
Simulation model:
- Only "dumbbell and dancehall" topologies with one-way traffic; different propagation delays are not considered
Queue management:
- RIO does not necessarily guarantee class-based isolation of flows
- PI-controlled RED may be a better solution (http://www.cse.iitk.ac.in/users/braman)

Discussion
Deployment issues:
- The scheme requires edge devices to perform per-flow state maintenance and per-packet processing
- G & M cite previous work [31] stating that this does not really impact end-to-end functionality
- Incrementally deployable? Only edge routers need to be configured

Flow Classification
- Threshold-based classification is used because the edge node cannot predict a flow's length in advance
- This allows the beginning of a long flow to look like a short flow
- This "mistake" actually helps performance: the first few packets of a long TCP flow are treated like those of a short flow, which makes the system fair to all TCP connections

Discussion
Controller design:
- Edge load control may not be very dependent upon the value of SLR
- More important are the values of Tc and Tu (small values may be more accurate but increase overhead)
Malicious users:
- Users might try to break long TCP connections into smaller segments to game the classifier
- The dynamic nature of the edge routers should prevent such behavior

Conclusions
- TCP carries the majority of bytes flowing over the Internet
- A Diffserv-like architecture: edge routers classify flows by size, core routers implement simple RIO to give preference to short flows
- Mice get better response time and fairness
- Elephants improve slightly, or are at least only minimally affected
- Goodput improves, or at least is not degraded
- The architecture is flexible (it needs only edge tuning)
- Size-aware traffic management holds considerable promise