When to use and when not to use BBR:

When to use and when not to use BBR: An empirical analysis and evaluation study
Yi Cao, Arpit Jain, Kriti Sharma, Aruna Balasubramanian, and Anshul Gandhi
Department of Computer Science, Stony Brook University

Introduction of BBR (1/2)
- Traditional TCP algorithms are loss-based.
- CUBIC, Linux's default TCP algorithm, reduces its throughput by 30% when a loss occurs.
- Loss may not be a good congestion signal:
  - Shallow buffer: losses are misinterpreted as congestion
  - Deep buffer: bufferbloat
[Figure: evolution of CUBIC's throughput, dropping 30% after each packet loss; shallow-buffer vs. deep-buffer illustrations]
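CUBIC's 30% cut corresponds to its multiplicative-decrease factor (roughly 0.7 in the Linux implementation); a minimal sketch of the window reduction on loss, with illustrative numbers:

```python
# Sketch of CUBIC's reaction to a packet loss: the congestion window is
# multiplied by beta ~ 0.7 (Linux uses 717/1024), i.e. a ~30% reduction.
CUBIC_BETA = 0.7

def on_loss(cwnd: float) -> float:
    """Return the congestion window (in packets) after one loss event."""
    return cwnd * CUBIC_BETA

cwnd = 100.0          # packets before the loss
cwnd = on_loss(cwnd)  # ~70 packets after a single loss
```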

Introduction of BBR (2/2)
- Google proposed BBR in 2016.
- Instead of using packet losses as congestion signals, BBR relies on measured bandwidth and RTT.
- cwnd (congestion window size) = 2 x BDP (bandwidth-delay product), where BDP = max BW x min RTT.
- TCP pacing: BBR controls the inter-packet spacing between consecutive packets.
- BBR is already used in Google Cloud and YouTube.
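The cwnd rule above can be sketched numerically; the function names and example values below are illustrative, not BBR's actual kernel variables:

```python
# Illustrative computation of the bandwidth-delay product and BBR's cwnd cap.
# BBR sets cwnd to 2x the estimated BDP (max filtered BW x min filtered RTT).

def bdp_bytes(bw_bits_per_s: float, rtt_s: float) -> float:
    """Bandwidth-delay product in bytes."""
    return bw_bits_per_s * rtt_s / 8

def bbr_cwnd_bytes(max_bw_bits_per_s: float, min_rtt_s: float) -> float:
    """BBR caps inflight data at 2 x BDP."""
    return 2 * bdp_bytes(max_bw_bits_per_s, min_rtt_s)

# Example: 100 Mbps bottleneck, 25 ms RTT (the Mininet setting used later)
bdp = bdp_bytes(100e6, 0.025)        # ~312,500 bytes
cwnd = bbr_cwnd_bytes(100e6, 0.025)  # ~625,000 bytes
```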

Motivation for BBR Evaluation
Before adopting BBR, it is important to answer:
- When is BBR useful?
- What is the downside of ignoring packet losses?
- Is BBR unfair to loss-based algorithms?
Prior works:
- Unfairness issue: ICNP 2017 (Hock et al.), but evaluated with very few buffer sizes, or on a single testbed.
- High packet retransmissions: IFIP 2018 (Scholz et al.), ITC 2018 (Hurtig et al.), but with no reasoning behind the huge loss.
Goal of our work: show when BBR is useful, and reveal the root causes.

Experimental Setup
- We deploy a traffic controller on the router for fine-grained control: it sets the bandwidth, delay, and buffer size.
- Linux Traffic Control (tc): NetEm and the Token Bucket Filter (TBF).
- Experimental networks:
  - Mininet, a virtual network (40 µs RTT)
  - LAN (40 µs RTT) and WAN (7 ms RTT)
[Topology: Sender -> router running Linux Traffic Control (Token Bucket Filter) -> Receiver]
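As a hedged sketch (the paper does not list its exact commands, and the interface name and parameter values below are assumptions), such a bottleneck can be emulated by combining a TBF rate limiter with a NetEm delay qdisc:

```python
# Build tc commands for a TBF rate limiter with a NetEm delay child qdisc.
# The interface name (eth0) and all values are illustrative placeholders.

def tc_commands(dev: str, rate_mbit: int, delay_ms: int, limit_bytes: int):
    return [
        # Token Bucket Filter: caps bandwidth; 'limit' bounds the queue,
        # which effectively sets the bottleneck buffer size
        f"tc qdisc add dev {dev} root handle 1: "
        f"tbf rate {rate_mbit}mbit burst 32kbit limit {limit_bytes}",
        # NetEm child qdisc: adds propagation delay
        f"tc qdisc add dev {dev} parent 1:1 handle 10: netem delay {delay_ms}ms",
    ]

for cmd in tc_commands("eth0", rate_mbit=100, delay_ms=25, limit_bytes=100_000):
    print(cmd)
```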

Network Configuration Decision Tree
- Goodput, BBR vs. CUBIC: 640 iPerf3 transfers (1 min each) in the LAN network.
- We generalize the goodput values with a decision tree over the network configuration, where BDP = bandwidth x delay.
- Small BDP, deep buffer: CUBIC > BBR.
- Large BDP, shallow buffer: BBR > CUBIC.
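The two leaves of the decision tree above can be sketched as a toy classifier; the split thresholds below are made-up placeholders, not the paper's learned values:

```python
# Toy version of the goodput decision tree: given BDP and buffer size,
# predict which algorithm wins. Thresholds are illustrative assumptions.

def predict_winner(bdp_bytes: float, buffer_bytes: float,
                   bdp_split: float = 250_000,
                   buf_split: float = 1_000_000) -> str:
    small_bdp = bdp_bytes < bdp_split
    deep_buffer = buffer_bytes >= buf_split
    if small_bdp and deep_buffer:
        return "CUBIC"            # small BDP, deep buffer: CUBIC wins
    if not small_bdp and not deep_buffer:
        return "BBR"              # large BDP, shallow buffer: BBR wins
    return "comparable"           # remaining leaves: no clear winner

print(predict_winner(bdp_bytes=100_000, buffer_bytes=10_000_000))  # CUBIC
print(predict_winner(bdp_bytes=2_000_000, buffer_bytes=100_000))   # BBR
```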

Q1) When is BBR Useful?
- BBR has significantly higher goodput than CUBIC in shallow buffers.
- We focus on a shallow buffer (100 KB) and define the goodput improvement of BBR over CUBIC.
- Congestion signals: BBR relies on BW and RTT and ignores losses; CUBIC reacts to loss.
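The slide does not spell out the goodput-improvement formula; a natural reading, treated here as an assumption, is the relative improvement of BBR's goodput over CUBIC's:

```python
def goodput_improvement(bbr_goodput: float, cubic_goodput: float) -> float:
    """Relative goodput improvement of BBR over CUBIC (assumed definition)."""
    return (bbr_goodput - cubic_goodput) / cubic_goodput

# e.g. BBR at 90 Mbps vs. CUBIC at 60 Mbps -> 0.5, i.e. a 50% improvement
print(goodput_improvement(90.0, 60.0))
```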

Q2) What is the Downside of Ignoring Losses?
- BBR incurs huge packet losses in shallow buffers.
- BBR keeps 2 BDP of data in the network.
- BBR provides high goodput, but at the expense of high packet retransmissions.
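Why 2 BDP of inflight data overflows a shallow buffer: the path can hold about BDP bytes in flight plus the buffer's capacity queued, and the rest is dropped. A back-of-the-envelope sketch with illustrative numbers:

```python
def excess_inflight(bdp: float, buffer_size: float,
                    inflight_factor: float = 2.0) -> float:
    """Bytes that overflow when a sender keeps inflight_factor x BDP in
    flight. The network holds BDP (in the pipe) + buffer_size (queued)."""
    inflight = inflight_factor * bdp
    return max(0.0, inflight - (bdp + buffer_size))

# Shallow 100 KB buffer with a 312.5 KB BDP: ~212.5 KB has nowhere to go
print(excess_inflight(bdp=312_500, buffer_size=100_000))
# Deep 10 MB buffer: everything fits, no overflow
print(excess_inflight(bdp=312_500, buffer_size=10_000_000))
```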

Q3) Is BBR Unfair to Loss-based Algorithms?
- BBR and CUBIC are unfair to each other at different buffer sizes.
- We run BBR and CUBIC concurrently in Mininet (BW: 1 Gbps, RTT: 20 ms) across different buffer sizes:
  - Shallow buffer: BBR takes the higher share.
  - Deep buffer: CUBIC takes the higher share.
- The WAN network behaves differently: the bandwidth share stabilizes at a 20 KB buffer, which indicates the bottleneck buffer size in the wild.
Takeaway: the unfairness between BBR and CUBIC depends on the bottleneck buffer size.

Cliff Point Root-Cause Analysis
- Goodput drops abruptly when the loss rate reaches 20% (Mininet: 100 Mbps BW, 25 ms RTT, 10 MB buffer).
- Our analysis: max_pacing_gain is the culprit. It is the scaling factor BBR uses to probe for more bandwidth.
- With max_pacing_gain = 1.25 and an actual BW of 100 Mbps, BBR sends at 125 Mbps; once the success rate falls below 80%, the delivered throughput drops below the 100 Mbps actual BW.
- Cliff point = 1 - 1/max_pacing_gain.
Takeaway: the max_pacing_gain parameter is the root cause of the goodput "cliff point".
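The cliff-point formula can be checked directly; for BBR's default max_pacing_gain of 1.25 it predicts exactly the 20% loss rate observed in the experiments:

```python
def cliff_point(max_pacing_gain: float) -> float:
    """Loss rate at which goodput collapses: the delivered rate,
    max_pacing_gain * BW * (1 - loss), falls below the actual BW
    once loss exceeds 1 - 1/max_pacing_gain."""
    return 1 - 1 / max_pacing_gain

print(cliff_point(1.25))  # ~0.2, the 20% loss rate seen in the experiments
```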

BBR Evaluation Summary
- BBR is an emerging TCP variant: it is not loss-based, and it has so far received insufficient evaluation.
Takeaways:
- In terms of goodput, BBR is well suited for networks with shallow buffers, despite its high retransmissions.
- Unfairness between BBR and CUBIC depends on the bottleneck buffer size.
- The max_pacing_gain parameter is the root cause of the goodput "cliff point".
Future work:
- Enable auto-tuning of BBR parameters to avoid high retransmissions and maintain fairness.
- We are actively investigating BBR v2.