One More Bit Is Enough
Yong Xia (RPI), Lakshminarayanan Subramanian (UCB), Ion Stoica (UCB), Shivkumar Kalyanaraman (RPI)
SIGCOMM '05, Philadelphia, PA, August 23, 2005

Motivation #1: TCP does not perform well in high b/w
[Figure: bottleneck utilization vs. bandwidth (Mbps) and vs. round-trip delay (ms); TCP's utilization falls from near 100% toward 50-70% as bandwidth or delay increases.]

Motivation #1: Why TCP does not scale
- TCP uses binary congestion signals, such as packet loss or the one-bit Explicit Congestion Notification (ECN)
- Additive Increase (AI) with a fixed step size can be very slow to fill a large bandwidth-delay product (a rough calculation follows below)
[Figure: TCP congestion window over time; slow start, then the congestion-avoidance sawtooth of Additive Increase (AI) between congestion events and Multiplicative Decrease (MD) at each congestion event.]
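To make the slowness of fixed-step AI concrete, here is a back-of-the-envelope calculation; the link speed, RTT, and packet size below are illustrative assumptions, not numbers taken from the slides.

```python
# Rough illustration (assumed numbers): how long AI at 1 packet per RTT takes
# to recover from a single window halving on a fast, long path.
link_bps = 10e9          # assumed 10 Gb/s bottleneck
rtt_s = 0.2              # assumed 200 ms round-trip time
pkt_bytes = 1500         # assumed packet size

bdp_pkts = link_bps * rtt_s / (pkt_bytes * 8)   # bandwidth-delay product in packets
rtts_to_recover = bdp_pkts / 2                  # AI adds ~1 packet per RTT after MD halves cwnd
hours = rtts_to_recover * rtt_s / 3600

print(f"BDP ~= {bdp_pkts:,.0f} packets")                                # ~166,667 packets
print(f"Recovery ~= {rtts_to_recover:,.0f} RTTs ~= {hours:.1f} hours")  # ~83,333 RTTs, ~4.6 h
```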

Motivation #2: XCP scales
- XCP decouples efficiency control from fairness control: routers allocate spare bandwidth with MIMD and apportion it among flows with AIMD
- The sender carries its rate and RTT in the packet header; the router's explicit rate feedback is returned to the sender in the ACK
- But XCP needs multiple bits (128 bits in its current IETF draft) to carry the congestion-related information to and from the network
[Figure: sender, router, and receiver exchanging DATA and ACK packets that carry the XCP congestion header.]

Goal
Design a TCP-like scheme that:
- requires only a small amount of congestion information (e.g., 2 bits)
- scales across a wide range of network scenarios

Key observation
Fairness is not critical in the low-utilization region
- Use Multiplicative Increase (MI) for fast convergence onto efficiency in this region
- Handle fairness in the high-utilization region

Variable-structure congestion Control Protocol (VCP)
- Routers signal the level of congestion: the load factor (traffic rate / link capacity) is quantized into a scale-free 2-bit ECN code and echoed back to the sender in the ACK
- End-hosts adapt the control algorithm according to the signaled region (a minimal sketch of this switch follows below):

  code | load-factor region | end-host control
  (01) | low-load           | Multiplicative Increase (MI)
  (10) | high-load          | Additive Increase (AI)
  (11) | overload           | Multiplicative Decrease (MD)
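The following is a minimal sketch of the end-host side of this switch. It is illustrative only: the parameter values XI, ALPHA, and BETA are assumptions (see the parameter-setting slide later), and the update is applied once per RTT for simplicity.

```python
# Hedged sketch of VCP's end-host window update, driven by the 2-bit
# load-factor code echoed in ACKs. Parameter values are placeholders.
LOW_LOAD, HIGH_LOAD, OVERLOAD = 0b01, 0b10, 0b11

XI = 0.0625     # MI gain (assumed)
ALPHA = 1.0     # AI step in packets per RTT (assumed)
BETA = 0.875    # MD factor (assumed)

def update_cwnd(cwnd: float, ecn_code: int) -> float:
    """Apply one per-RTT window update based on the signaled load region."""
    if ecn_code == LOW_LOAD:       # plenty of spare capacity: ramp up multiplicatively
        return cwnd * (1.0 + XI)
    if ecn_code == HIGH_LOAD:      # near full utilization: probe gently toward fairness
        return cwnd + ALPHA
    if ecn_code == OVERLOAD:       # congestion: back off multiplicatively
        return max(1.0, cwnd * BETA)
    return cwnd                    # unknown/unused code: leave the window unchanged

# Example: a flow moving from low-load through high-load into overload
w = 10.0
for code in [LOW_LOAD, LOW_LOAD, HIGH_LOAD, OVERLOAD]:
    w = update_cwnd(w, code)
    print(bin(code), round(w, 2))
```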

VCP vs. ECN
- TCP + AQM/ECN uses a 1-bit signal: (0) underload → Additive Increase (AI), (1) overload → Multiplicative Decrease (MD)
- ECN therefore does not differentiate between the low-load and high-load regions
- VCP uses a 2-bit signal: (01) low-load → Multiplicative Increase (MI), (10) high-load → Additive Increase (AI), (11) overload → Multiplicative Decrease (MD)

An illustrative example
- MI tracks the available bandwidth exponentially fast
- After high utilization is attained, AIMD provides fairness
[Figure: link utilization and per-flow cwnd (packets) over time (sec) as 9 flows join and then leave, showing the MI phase followed by the AIMD phase.]

VCP vs. ECN
[Figure: cwnd and utilization over time (sec) for TCP+RED/ECN and for VCP.]

VCP key ideas and properties
- Achieves high efficiency, low loss, and a small queue
- Fairness model is similar to TCP's:
  - Long flows get lower bandwidth than under XCP (proportional vs. max-min fairness)
  - Fairness convergence is much slower than XCP's
- Uses the network link load factor as the congestion signal
- Decouples efficiency and fairness control across load regions: routers classify the load (low-load / high-load / overload); end-hosts run efficiency control (MI) in low-load and fairness control (AIMD) in high-load/overload

Major design issues
- At the router:
  - How to measure and encode the load factor?
- At the end-host:
  - When to switch from MI to AI?
  - What MI / AI / MD parameters to use?
  - How to handle heterogeneous RTTs?
[Table repeated from the VCP slide: code (01)/(10)/(11) = low-load/high-load/overload = MI/AI/MD.]

Design issue #1: measuring and encoding the load factor
- Calculate the link load factor over a measurement interval tρ:

  load_factor = demand / capacity = (arrival_traffic + queue_size) / (link_bandwidth × tρ),
  with tρ = 200 ms ≥ most RTTs

- The load factor is quantized and encoded into the two ECN bits; this information loss affects fairness (a sketch of the router-side computation follows below)
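A minimal sketch of what a router might do each measurement interval, purely for illustration; the quantization thresholds and the simple per-interval accounting below are assumptions, not the exact algorithm from the paper.

```python
# Illustrative router-side sketch: compute the load factor over an interval t_rho
# and quantize it into the 2-bit ECN code. Thresholds are assumed.
T_RHO = 0.200           # measurement interval in seconds (slide: 200 ms, >= most RTTs)
RHO_STAR = 0.80         # assumed low-load / high-load transition point (80%)

def load_factor(arrival_bytes: float, queue_bytes: float,
                link_bps: float, t_rho: float = T_RHO) -> float:
    """load_factor = demand / capacity, as on the slide."""
    demand = arrival_bytes + queue_bytes     # bytes that need service this interval
    capacity = link_bps / 8.0 * t_rho        # bytes the link can serve in t_rho
    return demand / capacity

def encode_2bit(rho: float) -> int:
    """Quantize the load factor into the three signaled regions."""
    if rho < RHO_STAR:
        return 0b01     # low-load  -> end-host does MI
    if rho < 1.0:
        return 0b10     # high-load -> end-host does AI
    return 0b11         # overload  -> end-host does MD

# Example: 1.6 MB arrived and 50 KB queued on a 100 Mb/s link over 200 ms
rho = load_factor(1.6e6, 5e4, 100e6)
print(round(rho, 2), bin(encode_2bit(rho)))   # 0.66, 0b1 (low-load)
```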

Design issue #2: setting the MI / AI / MD parameters (ξ, α, β)
[Figure: the three load regions (low-load, high-load, overload) mapped to MI and AIMD control.]

Design issue #2: setting the MI / AI / MD parameters (ξ, α, β) (cont'd)
- Q: at what load-factor transition point ρ* should the end-host switch from MI to AI?
- Choose ρ* = 80%, leaving a 20% safety margin below 100% load
- Set the MI gain ξ = k (1 − ρ*) / ρ* with k = 0.25 (for stability), which gives ξ ≈ 0.06 (worked out below)
- For comparison, per-RTT increase parameters: TCP α = 1.0, VCP ξ = 0.06, STCP 0.01
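For concreteness, the MI-gain arithmetic implied by the slide (with k = 0.25 and ρ* = 0.8) works out as follows.

```latex
\xi \;=\; k\,\frac{1-\rho^{*}}{\rho^{*}}
    \;=\; 0.25 \times \frac{1-0.8}{0.8}
    \;=\; 0.25 \times 0.25
    \;=\; 0.0625 \;\approx\; 0.06
```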

Design issue #3: handling RTT heterogeneity for MI/AI
- Scale ξ to prevent MI from overshooting the capacity when the RTT is small: a flow with rtt < tρ applies its multiplicative increase more often per tρ and would otherwise overshoot
- Use a scaled gain ξs (with ξs < ξ when rtt < tρ) so that the compounded increase over one tρ matches that of a flow with rtt = tρ (a hedged sketch of one such scaling follows below)
[Figure: spare capacity over time, showing overshoot when rtt < tρ versus the intended behavior when rtt = tρ.]
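One way to realize this scaling is sketched below; the specific formula is my reading of the idea and should be treated as an assumption rather than a quote from the slides, and XI and T_RHO are the illustrative values used earlier.

```python
# Hedged sketch of RTT-scaling the MI gain: choose xi_s so that
# (1 + xi_s) ** (t_rho / rtt) == 1 + xi, i.e. short-RTT flows do not
# compound faster per t_rho than a flow whose RTT equals t_rho.
XI = 0.0625      # nominal MI gain per t_rho (assumed, from the parameter slide)
T_RHO = 0.200    # load-factor measurement interval in seconds

def scaled_mi_gain(rtt: float, xi: float = XI, t_rho: float = T_RHO) -> float:
    """Per-RTT MI gain for a flow with the given rtt (capped at t_rho)."""
    return (1.0 + xi) ** (min(rtt, t_rho) / t_rho) - 1.0

for rtt in (0.02, 0.05, 0.1, 0.2):
    print(f"rtt={rtt*1000:.0f} ms -> per-RTT MI gain {scaled_mi_gain(rtt):.4f}")
```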

VCP scales across bandwidth, RTT, and number of flows
- Evaluation using extensive ns-2 simulations
- 150 Mbps bottleneck, 80 ms RTT, 50 forward flows and 50 reverse flows

VCP achieves high efficiency
[Figure: bottleneck utilization for TCP, VCP, and XCP, vs. bandwidth (Mbps) and vs. round-trip delay (ms).]

VCP minimizes the packet loss rate
[Figure: packet loss rate for TCP, VCP, and XCP, vs. bandwidth (Mbps) and vs. round-trip delay (ms).]

VCP comparisons
- Compared to TCP + AQM/ECN:
  - Same architecture (end-hosts control, routers signal)
  - Router congestion detection: queue-based → load-based
  - Router congestion signaling: 1-bit → 2-bit ECN
  - End-host adapts (MI/AI/MD) according to the ECN feedback
  - End-host scales its MI/AI parameters with its RTT
- Compared to XCP:
  - Decouples efficiency/fairness control across load regions
  - Functionality primarily placed at end-hosts, not in routers

Theoretical results
- Assumptions:
  - A single bottleneck with infinite buffer space is shared by synchronous flows with identical RTTs
  - The exact value of the load factor is echoed back
- Theorem for the VCP fluid model:
  - It is globally stable with a unique and fair equilibrium if k ≤ 0.5
  - The equilibrium is max-min fair for general topologies
  - The equilibrium is optimal, achieving all the design goals
- The VCP protocol differs from the fluid model in fairness

Conclusions
With a few minor changes over TCP + AQM/ECN, VCP is able to approximate the performance of XCP:
- High efficiency
- Low persistent bottleneck queue
- Negligible congestion-caused packet loss
- Reasonable (i.e., TCP-like) fairness

Future work
- How do we get there, incrementally?
  - End-to-end VCP
  - TCP-friendliness
  - Incentives
- Extensions
  - Applications: short-lived data traffic, real-time traffic
  - Environment: wireless channels
- Security
  - Robust signaling, e.g., ECN nonce

The end. Thanks!

Design issue #3: handling RTT heterogeneity for MI/AI
- TCP throughput is biased against flows with a large RTT:
  rate ∝ packet_size / (rtt × sqrt(loss_rate))
- VCP scales its increase parameter so that rate sharing is fair regardless of RTT (note that rate = cwnd / rtt)

VCP keeps a small bottleneck queue
[Figure: queue length as a percentage of buffer size for TCP, VCP, and XCP, vs. bandwidth (Mbps) and vs. round-trip delay (ms); an annotation marks the regime where the per-flow bandwidth-delay product is < 1 packet.]

Vary the number of flows
[Figure: bottleneck utilization and queue length (% of buffer size) vs. number of flows, for TCP, VCP, and XCP.]

Influence of RTT on fairness
- To some extent, VCP distributes bandwidth reasonably fairly across RTTs

Influence of RTT on fairness (cont'd)

VCP converges onto fairness

VCP converges onto fairness faster with more bits

Responsiveness
[Figure: responsiveness as the number of flows changes between 50 and 150.]