Reducing the Buffer Size in Backbone Routers
Yashar Ganjali, High Performance Networking Group, Stanford University
February 23, 2005

23 February 2005 High Performance Networking Group 2 Motivation
Problem
– Internet traffic doubles every year
– Disparity between traffic growth and router growth (space, power, cost)
Possible solution
– All-optical networking
Consequences
– Large capacity → large traffic
– Very small buffers

23 February 2005 High Performance Networking Group 3 Outline of the Talk
Buffer sizes in today’s Internet
From huge to small (Guido’s results)
– 2–3 orders of magnitude reduction
From small to tiny
– Constant buffer sizes?

23 February 2005 High Performance Networking Group 4 Backbone Router Buffers
Universally applied rule-of-thumb
– A router needs a buffer size: B = 2T × C
– 2T is the two-way propagation delay
– C is the capacity of the bottleneck link
Known to the inventors of TCP
Mandated in backbone routers
Appears in RFPs and IETF architectural guidelines
[Figure: source → router → destination over a bottleneck link of capacity C and two-way delay 2T]

23 February 2005 High Performance Networking Group 5 Review: TCP Congestion Control
Only W packets may be outstanding
Rule for adjusting W
– If an ACK is received: W ← W + 1/W
– If a packet is lost: W ← W/2
[Figure: source–destination packet exchange and the resulting sawtooth of window size over time]
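As a rough illustration (not from the slides), a minimal Python sketch of this additive-increase/multiplicative-decrease update; the fixed loss threshold of 40 packets is a made-up placeholder just to produce the familiar sawtooth.

```python
def aimd_step(w, ack_received):
    """One TCP congestion-avoidance update for congestion window w (in packets)."""
    if ack_received:
        return w + 1.0 / w          # additive increase: +1 packet per RTT overall
    return max(w / 2.0, 1.0)        # multiplicative decrease on loss

# Toy sawtooth: assume (hypothetically) a loss whenever w reaches 40 packets.
w = 1.0
trace = []
for _ in range(2000):
    w = aimd_step(w, ack_received=(w < 40))
    trace.append(w)
print(f"window oscillates between ~{min(trace[-200:]):.0f} and ~{max(trace[-200:]):.0f} packets")
```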

23 February 2005 High Performance Networking Group 6 Multiplexing Effect in the Core
[Figure: probability distribution of buffer occupancy, from 0 up to the buffer size B]

23 February 2005 High Performance Networking Group 7 Backbone router buffers
It turns out that
– The rule of thumb is wrong for core routers today
– The required buffer is 2T × C / √n instead of 2T × C, where n is the number of long-lived flows

23 February 2005 High Performance Networking Group 8 Required Buffer Size Simulation

23 February 2005 High Performance Networking Group 9 Impact on Router Design
10Gb/s linecard with 200,000 x 56kb/s flows
– Rule-of-thumb: Buffer = 2.5Gbits; requires external, slow DRAM
– Becomes: Buffer = 6Mbits; can use on-chip, fast SRAM; completion time halved for short flows
40Gb/s linecard with 40,000 x 1Mb/s flows
– Rule-of-thumb: Buffer = 10Gbits
– Becomes: Buffer = 50Mbits
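A back-of-the-envelope check of these figures, assuming 2T = 250 ms (my assumption; it is the round-trip delay that reproduces the numbers above):

```python
import math

def rule_of_thumb_bits(capacity_bps, rtt_s):
    """Classic rule of thumb: B = 2T x C."""
    return capacity_bps * rtt_s

def small_buffer_bits(capacity_bps, rtt_s, n_flows):
    """Revised sizing: B = 2T x C / sqrt(n), n = number of long-lived flows."""
    return capacity_bps * rtt_s / math.sqrt(n_flows)

rtt = 0.25  # assumed 2T = 250 ms
for cap, n in [(10e9, 200_000), (40e9, 40_000)]:
    print(f"{cap/1e9:.0f} Gb/s, {n:,} flows: "
          f"rule of thumb = {rule_of_thumb_bits(cap, rtt)/1e9:.1f} Gbit, "
          f"revised = {small_buffer_bits(cap, rtt, n)/1e6:.0f} Mbit")
```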

23 February 2005 High Performance Networking Group 10 How small can buffers be?
Imagine you want to build an all-optical router for a backbone network…
…and you can buffer only a few dozen packets in delay lines.
Conventional wisdom: it’s a routing problem (hence deflection routing, burst switching, etc.)
Our belief: first, think about congestion control.

23 February 2005 High Performance Networking Group 11 TCP with ALMOST No Buffers
Utilization of bottleneck link = 75%

23 February 2005 High Performance Networking Group 12 Problem Solved?
75% utilization with only one unit of buffering
More flows → less buffer
Therefore, one unit of buffering is enough

23 February 2005 High Performance Networking Group 13 TCP Throughput with Small Buffers

23 February 2005 High Performance Networking Group 14 TCP Reno Performance

23 February 2005 High Performance Networking Group 15 Two Concurrent TCP Flows

23 February 2005 High Performance Networking Group 16 Simplified Model
Flow 1 sends W packets during each RTT
Bottleneck capacity = C packets per RTT (example: C = 15, W = 5)
Flow 2 sends two consecutive packets during each RTT
Drop probability increases with W
[Figure: packet placement within one RTT]

23 February 2005 High Performance Networking Group 17 Simplified Model (Cont’d)
W(t+1) = p(t)·[W(t)/2] + [1 − p(t)]·[W(t) + 1]
But p grows linearly with W, so E[W] = O(C^{1/2})
Link utilization = W/C
As C increases, link utilization goes to zero.
Snow model!!!
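A rough numerical sketch of this model (my own illustration, not from the talk). The drop probability is taken as p = W/C, one hypothetical way to make it "grow linearly with W", and the recursion above is iterated until it settles:

```python
def steady_state_window(C, iters=10_000):
    """Iterate W(t+1) = p*W/2 + (1-p)*(W+1) with the assumed p = min(W/C, 1)."""
    w = 1.0
    for _ in range(iters):
        p = min(w / C, 1.0)
        w = p * (w / 2.0) + (1.0 - p) * (w + 1.0)
    return w

for C in [10, 100, 1000, 10000]:
    w = steady_state_window(C)
    # E[W] grows like sqrt(C), so utilization W/C shrinks like 1/sqrt(C).
    print(f"C = {C:6d}: W ~ {w:7.1f}  ({w / C**0.5:.2f} * sqrt(C)),  utilization W/C ~ {w / C:.3f}")
```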

23 February 2005 High Performance Networking Group 18 Q&A
Q. What happens if flow 2 never sends any consecutive packets?
A. No packet drops unless utilization = 100%.
Q. How much space do we need between the two packets?
A. At least the size of a packet.
Q. What if we have more than two flows?

23 February 2005 High Performance Networking Group 19 Per-flow Queueing
Let us assume we have a queue for each flow, and
serve those queues in a round-robin manner.
Does this solve the problem?

23 February 2005 High Performance Networking Group 20 Per-flow Buffering

23 February 2005 High Performance Networking Group 21 Per-Flow Buffering
Flow 3 does not have a packet at time t; flows 1 and 2 do.
At time t + RTT we will see a drop.
[Figure: queue occupancy at time t (flow 3 temporarily idle) and at time t + RTT]

23 February 2005 High Performance Networking Group 22 Ideal Solution
If packets are spaced out perfectly, and
the starting times of flows are chosen randomly,
then we only need a small buffer for contention resolution.

23 February 2005 High Performance Networking Group 23 Randomization
Mimic an M/M/1 queue.
Under low load, the queue size is small with high probability.
Loss can be bounded: for an M/M/1 queue with load ρ < 1, P(Q > b) decays geometrically in b (roughly ρ^b), so a buffer of size B bounds the packet loss.
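As an illustrative calculation (mine, not from the slides): if the queue tail behaves like ρ^b, as for an M/M/1 queue with load ρ, then the buffer needed to keep the overflow probability below a target ε grows only logarithmically in 1/ε.

```python
import math

def mm1_buffer_for_target(rho, epsilon):
    """Smallest buffer b (packets) with rho**b <= epsilon, using the M/M/1 tail P(Q >= b) = rho**b."""
    assert 0 < rho < 1 and 0 < epsilon < 1
    return math.ceil(math.log(epsilon) / math.log(rho))

for rho in (0.5, 0.8):
    for eps in (1e-3, 1e-6):
        b = mm1_buffer_for_target(rho, eps)
        print(f"load {rho}: a buffer of {b:3d} packets keeps P(overflow) <= {eps}")
```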

23 February 2005 High Performance Networking Group 24 TCP Pacing
Current TCP:
– Send packets when an ACK is received.
Paced TCP:
– Send one packet every RTT/W time units (i.e., W packets per RTT).
– Update W and RTT as in regular TCP.
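A minimal sketch of the pacing idea in Python (my own illustration, not the talk's implementation): instead of releasing a burst of packets when ACKs arrive, departures are spread RTT/W apart.

```python
def paced_send_times(start, window, rtt, num_packets):
    """Departure times for num_packets, spaced rtt/window apart (paced TCP)."""
    interval = rtt / window
    return [start + i * interval for i in range(num_packets)]

def ack_clocked_send_times(start, window, num_packets):
    """Plain-TCP approximation: a whole window leaves back-to-back when ACKs arrive."""
    return [start] * min(window, num_packets)

# One RTT's worth of traffic with W = 10 packets and RTT = 100 ms:
print("paced :", [f"{t*1000:.0f}ms" for t in paced_send_times(0.0, 10, 0.1, 10)])
print("bursty:", [f"{t*1000:.0f}ms" for t in ack_clocked_send_times(0.0, 10, 10)])
```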

23 February 2005 High Performance Networking Group 25 CWND: Reno vs. Paced TCP

23 February 2005 High Performance Networking Group 26 TCP Reno: Throughput vs. Buffer Size

23 February 2005 High Performance Networking Group 27 Paced TCP: Throughput vs. Buffer Size

23 February 2005 High Performance Networking Group 28 Early Results
Congested core router with 10 packet buffers.
Average offered load = 80%
RTT = 100 ms; each flow limited to 2.5 Mb/s
[Figure: sources → core router → server; 10 Gb/s bottleneck link fed by >10 Gb/s of offered traffic]

23 February 2005 High Performance Networking Group 29 What We Know
Arbitrary injection process — Theory: any rate > 0 needs unbounded buffers.
Poisson process with load < 1 — Theory: need a buffer size of approx. O(log D + log W) packets, where D = # of hops and W = window size [Goel 2004]. Experiment: TCP pacing gives results as good as or better than Poisson.
Complete centralized control — Theory: constant-fraction throughput with constant buffers [Leighton].

23 February 2005 High Performance Networking Group 30 Limited Congestion Window
[Figure: panels comparing Reno vs. Pacing with limited vs. unlimited congestion windows]

23 February 2005 High Performance Networking Group 31 Slow Access Links
Congested core router with 10 packet buffers.
RTT = 100 ms; each flow limited to 2.5 Mb/s
[Figure: sources behind 5 Mb/s access links feeding a 10 Gb/s core router link to the server]

23 February 2005 High Performance Networking Group 32 Conclusion
We can reduce 1,000,000 packet buffers to 10,000 today.
We can “probably” reduce to far fewer packet buffers:
– With many small flows, no change is needed.
– With some large flows, we need pacing in the access routers or at the edge devices.
Need more work!

23 February 2005 High Performance Networking Group 33 Extra Slides

23 February 2005 High Performance Networking Group 34 Pathological Example
Flow 1: S1 → D; load = 50%
Flow 2: S2 → D; load = 50%
If S1 sends a packet at time t, S2 cannot send any packets at times t and t+1.
To achieve 100% throughput we need at least one unit of buffering.
[Figure: line topology with nodes 1–2–3–4; S1 and S2 feed into it and share the path to D]
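A toy slotted-time simulation (my own construction, loosely in the spirit of this example rather than a faithful model of the slide's topology): two 50%-load flows occasionally contend for the same output slot; with zero buffering a packet is lost at each collision, while a single buffer slot is enough for 100% throughput.

```python
def run(num_slots, buffer_slots):
    """Two deterministic 50%-load flows merging onto a link that serves one packet per slot."""
    sent = dropped = queued = 0
    for t in range(num_slots):
        # Flow A sends on even slots; flow B sends on slots 0,1 of every group of 4 (both 50% load).
        arrivals = (1 if t % 2 == 0 else 0) + (1 if t % 4 < 2 else 0)
        queued += arrivals
        if queued > 0:
            queued -= 1       # the link serves one packet this slot
            sent += 1
        if queued > buffer_slots:
            dropped += queued - buffer_slots
            queued = buffer_slots
    return sent, dropped

for b in (0, 1):
    sent, dropped = run(1000, b)
    print(f"buffer = {b} packet(s): delivered {sent}, dropped {dropped}")
```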