The Effects of Systemic Packet Loss on Aggregate TCP Flows Thomas J. Hacker May 8, 2002 Internet2 Member Meeting


Problem  High performance computing community is making use of parallel TCP sockets to increase end-to-end throughput  There are concerns about the effectiveness, fairness and efficiency of parallel flows  This research uses simulation to investigate the effectiveness, fairness and efficiency questions  Based on simulations with empirically based loss model, parallel TCP is effective and efficient, but not always fair  May be possible to improve fairness

Outline  Introduction  Motivation  Background  Simulation  Evaluation  Conclusion

Introduction  HPC community needs high speed bulk throughput  Using parallel TCP flows to increase throughput  Examples  Bbcp - Stanford Linear Accelerator (SLAC)  Globus - Argonne National Lab  GridFTP – Grid Forum and ANL  Storage Resource Broker – San Diego Supercomputer Center  PSockets Library – University of Illinois at Chicago  SLAC has extensive measurements that demonstrates successful use

Introduction  Actual end-to-end network throughput is much less than expected  Host and Network tuning helps a little  Infrastructure upgrades help a little  But after tuning, throughput still much less than expected  Network measurements gathered from infrastructure show available unused bandwidth (“head room”)  Observed packet loss rate from transfers are too high to support high throughput bulk data transfers

Introduction  Networking community discourages use of Parallel TCP flows  May cause congestion collapse at worst  Unfair to single stream flows at best  This is based on the belief that packet losses are due exclusively to network overload

Motivation  This research examines the use of parallel TCP flows on shared networks  Goals of the research are to determine if parallel TCP is  Effective  Fair  Efficient

Motivation  Effective  Does the use of parallel TCP flows increase aggregate throughput?  Fair  Does the use of parallel TCP flows steal bandwidth from competing TCP flows?  Efficient  Does the use of parallel TCP flows improve the overall efficiency of the network bottleneck?

Outline  Introduction  Motivation  Background  Simulation  Evaluation  Conclusion

Background  Factors that affect TCP throughput  Maximum Segment Size (MSS)  Maximum TCP segment size  Limited by maximum frame size supported by network  Round Trip Time (RTT)  Depends on  Length of network  Load on network (queueing delays)  Packet Loss Rate  Number of packets dropped / Number of packets transmitted  Packet losses considered a sign of overload

Background  Packet Loss  Most dynamic factor of the three  High rates of packet loss limits throughput  Cause assumed to be exclusively from overload  Statistical distribution of packet loss is important

Background  Sources of Packet Loss  Network bottleneck overload  Other sources  Hardware and Software Bugs  Faulty Hardware  Others…

Background  Implication  When there is no congestion, packet loss from other sources limits throughput  Evidence of non congestion packet loss  Lack of recorded drops in routers  Underutilized network links  Packet drops present in TCP sessions that are not due to overload

Background  Parallel TCP flows  Overcomes effects of packet loss on throughput  Recovers from loss faster than single stream  Averages out effects of non-congestion related packet losses

Outline  Introduction  Motivation  Background  Simulation  Evaluation  Conclusion

Simulation  NS2 simulation built to investigate the effectiveness, fairness, and efficiency of parallel TCP flows

Simulation  Loss Model in simulator is critical  Measurements from real transfers used to build loss model  153 data transfers from U-M to Caltech  Performed over 3 days  Packet traces from experiments analyzed to extract losses  Source of Loss  Network operations centers certified no router drops during test  Bandwidth graph for network bottleneck showed underutilization

Simulation  Observed Loss Characteristics

Simulation  Right hand side of histogram

Simulation  Left Hand Side of Histogram  Intraburst Losses  Collection of exponential distributions  Between 61% and 78% of analyzed intrabursts fit an exponential distribution  Right Hand Side of Histogram  Interburst Losses  Fits a normal distribution

Simulation  Loss Models Considered  Constant Loss Probability  Random I.I.D.  Poisson Loss Arrival  Unconditional and Conditional Loss  A.k.a 2-state Markov or Gilbert  Kth Order Markov Loss Model  Extended Gilbert Model

Simulation  6-state Markov Model selected  6 states were enough to simulate throughput equivalent to observed  Markov chain used to drive a Markov Modulated Poisson Process (MMPP)  1 state is the loss state, 5 states no-loss  Sojourn time and transition probabilities from observed data  Poisson Loss Model used for the Loss State

Simulation  MultiState Loss Model in ns2 used to implement MMPP loss model  Extension made to ns2 to support MultiState Loss Model on multiple links in the simulator  Each simulation instance was run 10 times with different random seeds for the Loss Model  Total number of all simulations was over 3000

Outline  Introduction  Motivation  Background  Simulation  Evaluation  Conclusion

Evaluation  Effectiveness  Fairness  Efficiency

Evaluation  Effectiveness Question  Does the use of parallel TCP flows increase aggregate throughput?  Addressing the Question  Between 1 and 6 parallel flows simulated  No Cross Traffic

Evaluation  Effectiveness Results

Evaluation  Effectiveness Conclusion  Parallel flows improve aggregate throughput in the presence of systemic non-congestion related packet loss  Corroboration of simulation results with observed results

Evaluation  Effectiveness  Fairness  Efficiency

Evaluation  Fairness Question  Does the use of parallel TCP flows steal bandwidth from competing TCP flows?  Addressing the Question  Between 1 to 12 parallel flows  Between 1 to 5 cross streams of competing single stream traffic

Evaluation  Reading the Graphs Total Parallel Flow Throughput Total Single Stream Flow Throughput Network Bottleneck is 100 Mb/sec

Evaluation

 Fairness Conclusions  Fair when there is more than approximately 10% unused bandwidth  Unfair when there is no available bandwidth  Parallel TCP flows steal bandwidth from competing single-stream flows to increase throughput when no unused bandwidth remains

Evaluation  Improving Fairness  Parallel flow aggressiveness due to  Increased recovery rate over single stream  Fractional response to packet drops  If we could make parallel flows only as aggressive as a single stream, can we preserve effectiveness and efficiency while improving fairness?

Evaluation  Slight modification to the TCP congestion avoidance algorithm  If n parallel flows are used, increase congestion window one packet for every n packets successfully transmitted, rather than one packet for every one packet successfully transmitted  Overall aggressiveness of n parallel flows is then the same as one single TCP flow  Simulation for 1 and 5 cross streams run with 1 to 20 parallel streams to investigate boundries

Evaluation

 Parallel flows with modification are about ½ as aggressive as parallel flows with no modification  Also found some asymptotic behavior as the number of parallel flows increased

Evaluation  Asymptotic behavior  Derived aggregate throughput of parallel flow with modified TCP

Evaluation

 Fairness Conclusions  Fair when there is more than 10% available bandwidth in bottleneck  Parallel flows steal from single stream flows when bottleneck is over 90% utilized  TCP modification  Reduces aggressiveness  Curbs ability of parallel flows to steal bandwidth as the number of flows increases

Evaluation  Effectiveness  Fairness  Efficiency

Evaluation  Efficiency Results  Efficiency is increased when parallel flows used if there is unused bandwidth in bottleneck  When all nodes use same number of parallel flows  Efficiency maintained  Fairness maintained

Outline  Introduction  Motivation  Background  Simulation  Evaluation  Conclusion

Conclusions  Parallel flows are  Effective  Fair when bottleneck is utilized less than 90%  Unfair when bottleneck is near saturation  Efficient  TCP congestion avoidance algorithm can be modified to  Reduce aggressiveness by approximately 1/2  Maintain effectiveness and efficiency

Future Work  Implement modified algorithm for assessment  Further investigate loss models  Parameterization of loss models  Assessment of end-to-end networks loss characteristics  Investigate optimal TCP response to observed loss characteristics  Investigate stochastic analysis of parallel TCP over wide area networks