Sizing Router Buffers
Sachin Katti, CS244. Slides courtesy: Nick McKeown.

Routers need Packet Buffers
It is well known that routers need packet buffers; it is less clear why, or how much. The goal of this work is to answer the question: how much buffering do routers need? Given that queueing delay is the only variable part of packet delay in the Internet, you would think we would know the answer already.

How much Buffer does a Router need?
[Figure: source and destination connected through a router over a bottleneck link of capacity C, with two-way propagation delay 2T.]
Universally applied rule-of-thumb: a router needs a buffer of size B = 2T × C, where 2T is the two-way propagation delay (or just 250 ms) and C is the capacity of the bottleneck link.
Context: mandated in backbone and edge routers; appears in RFPs and IETF architectural guidelines; usually referenced to Villamizar and Song, "High Performance TCP in ANSNET", CCR, 1994; already known by the inventors of TCP [Van Jacobson, 1988]. Has major consequences for router design.
Notes: this is the widely used rule-of-thumb that a router needs one bandwidth-delay product of buffering per interface; a router serving a link of capacity C needs a buffer of 2T × C for that interface. The IETF guidelines are RFC 3439, "Some Internet Architectural Guidelines and Philosophy".
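To make the rule concrete, here is a minimal sketch (mine, not from the deck) that evaluates B = 2T × C for an assumed 10 Gb/s link with a 250 ms two-way propagation delay:

```python
# Rule-of-thumb buffer sizing: B = 2T * C (one bandwidth-delay product).
# The link parameters below are illustrative assumptions, not from the slides.

def rule_of_thumb_buffer_bits(capacity_bps: float, two_t_s: float) -> float:
    """Return the rule-of-thumb buffer size in bits: B = 2T * C."""
    return capacity_bps * two_t_s

if __name__ == "__main__":
    capacity_bps = 10e9   # assumed 10 Gb/s bottleneck link
    two_t_s = 0.25        # 2T = 250 ms two-way propagation delay
    b = rule_of_thumb_buffer_bits(capacity_bps, two_t_s)
    print(f"Rule-of-thumb buffer: {b / 1e9:.2f} Gbits ({b / 8 / 1e6:.1f} MB)")
```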

TCP
[Figure: source and destination connected through a router; access link capacity C' > C, bottleneck link capacity C. In the example shown, only W = 2 packets may be outstanding.]
The TCP congestion window controls the sending rate: the sender sends packets, the receiver sends ACKs, and the sending rate is controlled by the window W. At any time, only W unacknowledged packets may be outstanding, so the sending rate of TCP is R = W / RTT.
Storyline: TCP controls the sending rate using the congestion window W; the rule is that you never have more than W packets outstanding, and these outstanding packets can be in one of three places: in the buffer, on the wire, or dropped. Here is a practical example. The rate of TCP is W/RTT: send W packets, then wait one RTT before you can send the next batch.
Don't: mention congestion or the outstanding-packet argument.
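A worked instance of that rate relation, with numbers chosen purely for illustration (they are not from the deck):

```latex
% R = W / RTT, with assumed example numbers.
R = \frac{W}{\mathrm{RTT}}, \qquad
W = 100\ \text{pkts} \times 1500\ \text{B} \times 8\ \tfrac{\text{b}}{\text{B}} = 1.2\ \text{Mb},\quad
\mathrm{RTT} = 100\ \text{ms}
\;\Rightarrow\; R = \frac{1.2\ \text{Mb}}{0.1\ \text{s}} = 12\ \text{Mb/s}.
```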

Single TCP Flow
Router with large enough buffers for full link utilization: for every W ACKs received, the sender sends W+1 packets.
[Figure: source and destination through a router with buffer B; access link C' > C, bottleneck link C. Plot of window size versus time t, with the RTT marked.]
Storyline: with no buffer we cannot get full utilization, so assume we have enough buffer for full utilization and work backwards to how much that is. If utilization is full, the bottleneck link is always busy: ACKs arrive clocked at C (in packets/s), so sends are clocked too. Surprising: TCP then sends at a constant rate, it does not vary. It might stop occasionally (e.g. when the window scales down), but it either sends at C or not at all. Occasionally the sender increases W; what happens to those extra packets? They go into the buffer, and if the buffer is full they are dropped and the window size is reduced.
Summary, if there is enough buffering: all links are filled (almost) all of the time; when the sender increases its window, the extra packets are absorbed by the buffer; the buffer compensates for changes in window size. The buffer occupancy follows the same sawtooth pattern as the window, and so does the RTT: with R constant and W a sawtooth, the RTT is the same sawtooth. When the number of outstanding packets is reduced (W -> W/2), the buffer has to compensate for the scale-down.
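The following toy model (my own sketch, not part of the deck) mimics this behaviour: one long-lived flow in congestion avoidance, a pipe that holds BDP packets in flight, and a buffer that absorbs whatever exceeds the BDP. With the buffer set to one bandwidth-delay product the link never goes idle across the sawtooth.

```python
# Toy model (not from the slides) of a single long-lived TCP flow over a
# bottleneck link: each RTT the window grows by one packet, packets beyond
# the bandwidth-delay product queue in the router buffer, and a buffer
# overflow halves the window.

BDP = 100      # bandwidth-delay product in packets (2T * C), assumed
BUFFER = 100   # buffer size in packets; BUFFER == BDP is the rule of thumb

def simulate(rounds: int = 400):
    cwnd = BDP                                 # start with the pipe just full
    trace = []
    for _ in range(rounds):
        cwnd += 1                              # additive increase, one packet per RTT
        if cwnd > BDP + BUFFER:                # buffer overflow -> packet loss
            cwnd //= 2                         # multiplicative decrease
        queue = max(0, cwnd - BDP)             # packets sitting in the buffer
        link_busy = cwnd >= BDP                # link stays busy iff cwnd >= BDP
        trace.append((cwnd, queue, link_busy))
    return trace

if __name__ == "__main__":
    trace = simulate()
    print("link utilized in every round:", all(busy for _, _, busy in trace))
    print("max queue occupancy:", max(q for _, q, _ in trace), "packets")
```

Setting BUFFER below BDP in this sketch makes link_busy go false right after each loss, which is the under-buffered case shown a few slides later.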

Required buffer is the height of the sawtooth
[Figure: buffer occupancy versus time t, following the window sawtooth.]

Buffer = rule of thumb
With B = 2T × C the required buffer equals the rule-of-thumb value. The rate R stays constant because the RTT follows the same sawtooth as the window (R = W/RTT).

Over-buffered Link
Story: too much buffering. When the window size drops, the buffer occupancy is reduced but the buffer never empties. The link is always fully utilized, but with additional latency; more buffering would only mean more latency.

Under-buffered Link
Story: again the buffer empties when the window is decreased, but now there are not enough packets outstanding to fill the link, so the bottleneck link is underutilized.

Origin of rule-of-thumb
Before and after reducing the window size, the sending rate of the TCP sender is the same. Inserting the rate equation R = W/RTT on both sides relates the buffer to the link parameters: the RTT is part two-way propagation delay 2T and part queueing delay B/C, and we know that just after reducing the window the queueing delay is zero. Solving gives B = 2T × C.
Don't: mention that this is only one of several ways to derive it; don't spend too much time here.
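A compact reconstruction of the algebra this slide alludes to (the standard bandwidth-delay-product argument; the equations themselves were lost in the transcript):

```latex
% Equate the sending rate just before and just after a loss.
% Before: W_max packets outstanding, RTT = 2T + B/C (queue full).
% After:  W_max/2 outstanding, RTT = 2T (queue empty).
\frac{W_{\max}}{2T + B/C} \;=\; \frac{W_{\max}/2}{2T}
\quad\Longrightarrow\quad
2T + \frac{B}{C} = 4T
\quad\Longrightarrow\quad
B = 2T \times C .
```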

Rule-of-thumb
The rule-of-thumb makes sense for one flow, but a typical backbone link carries more than 20,000 flows. Does the rule-of-thumb still hold? Answer: if the flows are perfectly synchronized, then yes; if the flows are desynchronized, then no.
Comment: today the rule-of-thumb is applied everywhere.

Outline
The Rule of Thumb
Buffer requirements for a congested router: synchronized flows; desynchronized flows; the 2T×C/sqrt(n) rule
Buffer requirements for short flows (slow-start)
Experimental Verification
Conclusion

If flows are synchronized
Story: with several flows, the role of the buffer is to absorb fluctuations in the sum of the window sizes. In this example there are three flows, each with 1/3 of the window W of a single flow saturating the router; if they are synchronized, they add up to one big sawtooth.
[Figure: per-flow and aggregate window versus time t.]
The aggregate window has the same dynamics as a single flow, therefore the buffer occupancy has the same dynamics, and the rule-of-thumb still holds.

When are Flows Synchronized?
Small numbers of flows tend to synchronize; large aggregates of flows do not. For more than 200 flows, synchronization disappears, and measurements in the core give no indication of synchronization.

If flows are not synchronized
[Figure: probability distribution of the aggregate window, with the buffer size B marked on the axis.]
Story: what happens if flows are not synchronized? By "not synchronized" I mean that the congestion windows W(t) evolve independently of each other. What can we say about the sum of the congestion windows? We would expect the fluctuation to be smaller: statistical multiplexing. There is a simple argument: (a) same distribution: the congestion windows all have the same distribution because they see the same loss probability; (b) independence: by the central limit theorem, the sum of independent random variables is approximately Gaussian. Measurements confirm that this is actually the case (parameters: 800 Mb/s, 2000 flows, each window size in the low tens).

Quantitative Model
Model the congestion window of a flow as a random variable W_i. For many desynchronized flows we assume the congestion windows are independent and that all congestion windows have the same probability distribution. The central limit theorem then gives us the distribution of the sum of the window sizes.
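The formulas on this slide did not survive extraction; here is a hedged reconstruction in my own notation (a standard central-limit-theorem statement):

```latex
% Aggregate window of n desynchronized flows, each W_i i.i.d. with
% mean \overline{W} and standard deviation \sigma_W (notation assumed).
W(t) \;=\; \sum_{i=1}^{n} W_i(t)
\;\approx\;
\mathcal{N}\!\bigl(n\,\overline{W},\; n\,\sigma_W^{2}\bigr)
\quad (\text{CLT, large } n),
\qquad
\operatorname{std}\bigl[W(t)\bigr] = \sqrt{n}\,\sigma_W .
```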

Buffer vs. Number of Flows for a given Bandwidth
If for a single flow saturating the link we know the mean window and its standard deviation, then for a given C each of the n flows has a window that scales with 1/n, and thus the standard deviation of the sum of the windows decreases as 1/sqrt(n).
Story: how does the required buffer depend on the number of flows? Assume we know the average window and the standard deviation for a single flow. With twice as many flows, each flow has half the mean and half the standard deviation: both scale with 1/n. Plugging this into the formula from the last page, the standard deviation of the sum of the windows decreases as 1/sqrt(n), so we expect the required buffer to decrease as 1/sqrt(n): as n increases, the buffer size should decrease.
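A small simulation sketch (my own, not from the deck) of that scaling: each per-flow window is modelled, purely for illustration, as an independent Gaussian with mean and standard deviation scaled by 1/n, and we measure the spread of the sum.

```python
# Illustration (not from the slides) of the 1/sqrt(n) scaling of the
# aggregate-window fluctuation when per-flow mean and std both scale as 1/n.
import random
import statistics

AGG_MEAN = 10_000.0   # assumed aggregate mean window
SINGLE_STD = 2_000.0  # assumed std of a single flow saturating the link
TRIALS = 5_000

def aggregate_window_std(n_flows: int) -> float:
    """Std-dev of the sum of n i.i.d. per-flow windows, each scaled by 1/n."""
    per_flow_mean = AGG_MEAN / n_flows
    per_flow_std = SINGLE_STD / n_flows
    sums = [
        sum(random.gauss(per_flow_mean, per_flow_std) for _ in range(n_flows))
        for _ in range(TRIALS)
    ]
    return statistics.stdev(sums)

for n in (1, 4, 16, 64):
    # Expect roughly SINGLE_STD / sqrt(n): about 2000, 1000, 500, 250.
    print(f"n={n:3d}  std of aggregate window ~ {aggregate_window_std(n):7.1f}")
```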

Required buffer size
[Figure: simulation results for the required buffer size.]

Summary
Flows in the core are desynchronized. For desynchronized flows, routers need only buffers of B = 2T×C/sqrt(n).
Notes: mention that this is contrary to what was previously assumed.

Experimental Evaluation Overview
Simulation with ns2: over 10,000 simulations covering a range of settings; simulation time 30 s to 5 minutes; bandwidth 10 Mb/s to 1 Gb/s; latency 20 ms to 250 ms.
Physical router: Cisco GSR with an OC3 line card, in collaboration with the University of Wisconsin.
Experimental results presented here: long flows (utilization); mixes of flows (flow completion time, FCT); mixes of flows (heavy-tailed flow distribution); short flows (queue distribution).

Long Flows - Utilization (I)
Small buffers are sufficient: OC3 line, ~100 ms RTT.
[Figure: link utilization versus buffer size; annotations 99.9%, 99.5%, 2×, 98.0%.]

Impact on Router Design
10 Gb/s linecard with 200,000 x 56 kb/s flows: rule-of-thumb buffer = 2.5 Gbits, which requires external, slow DRAM; it becomes 6 Mbits, which can use on-chip, fast SRAM. Completion time is halved for short flows.
40 Gb/s linecard with 40,000 x 1 Mb/s flows: rule-of-thumb buffer = 10 Gbits; it becomes 50 Mbits.
For more details: "Sizing Router Buffers", Guido Appenzeller, Isaac Keslassy and Nick McKeown, SIGCOMM 2004.
Notes: Flavio Bonomi estimates 10%-40% less cost, power, and space on the board, and that it allows building line cards at higher rates. If we can move the buffer on chip, it radically changes line card design.
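As a sanity check on these figures, here is a minimal sketch (mine; it assumes 2T = 250 ms as on the rule-of-thumb slide) that recomputes both linecard examples:

```python
# Recompute the linecard buffer numbers using B = 2T*C (rule of thumb)
# and B = 2T*C / sqrt(n) (small-buffer rule). 2T = 250 ms is assumed.
from math import sqrt

TWO_T = 0.25  # seconds, two-way propagation delay assumed on earlier slides

def buffers_bits(capacity_bps: float, n_flows: int) -> tuple[float, float]:
    """Return (rule-of-thumb, small-buffer) sizes in bits."""
    rot = TWO_T * capacity_bps
    return rot, rot / sqrt(n_flows)

for cap, n in [(10e9, 200_000), (40e9, 40_000)]:
    rot, small = buffers_bits(cap, n)
    print(f"{cap / 1e9:.0f} Gb/s, {n} flows: "
          f"rule-of-thumb = {rot / 1e9:.1f} Gbits, new rule = {small / 1e6:.0f} Mbits")
```

The output reproduces the 2.5 Gbits / 6 Mbits and 10 Gbits / 50 Mbits figures above (5.6 Mbits rounds to 6).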