Routers with Very Small Buffers

Presentation on theme: "Routers with Very Small Buffers"— Presentation transcript:

1 Routers with Very Small Buffers
Yashar Ganjali, Stanford University. Joint work with Mihaela Enachescu, Ashish Goel, Nick McKeown, and Tim Roughgarden. Presented by Arjumand Younus.
Good afternoon. For the next 20 minutes or so, I'm going to talk about routers with very small buffers. The goal is to find out how small we can make buffers in Internet routers without a major degradation in network performance. This is joint work with …

2 Outline (1/2)
Background and Problem Statement; Motivation; The Router Buffer Story; How Much Buffering Do We Need? (Single TCP Flow, Many TCP Flows); Buffer Size – Theory vs. Practice; Small Buffers Scenario

3 Outline (2/2)
Intuitive Explanation of O(log W) Buffer Size (Leaky Bucket, TCP Reno, Paced TCP); Simulations with O(log W) Buffers; Conclusion

4 Background and Problem Statement
Congestion Control. Buffering is the first component of any congestion control solution; buffers ensure that the link is 100% utilized. The problem: how much buffering do we need? This question has sparked much debate recently.

5 Motivation - Networks with Little or No Buffers (1/2)
Problem: Internet traffic doubles every year, creating a disparity between traffic growth and router growth (space, power, cost). Possible solution: all-optical networking. Consequences: large capacity → large traffic, and little or no buffers.

6 Motivation - Why Does Buffer Size Matter? (2/2)
End-to-end latency has three components: transmission delay, propagation delay, and queuing delay (the only variable component of latency). Buffers are costly: they take up 1/2 of router board space and 1/3 of power consumption. Small buffers: on chip → higher density, lower cost.
Let me start with why buffer sizing matters at all in the first place. As we all know, end-to-end latency is a very important performance metric which quantifies the behavior of the network as seen by the end users. The end-to-end latency of a given packet consists of three main components: transmission delay, which is the time it takes for a packet to be transmitted by the source host and by the intermediate routers on its path; propagation delay, which is the time it takes for the packet to traverse the links connecting the routers; and queueing delay, which is the time the packet sits in a buffer and waits for some resource to be released. Of these three components, the first two (the transmission delay and the propagation delay) are fixed. Queueing delay is the only variable component of the end-to-end latency, and therefore it is what causes the variation in performance observed by the end users. Since queueing delay and its variation are directly related to buffer sizes, we need to understand the buffer sizing problem if we want to control the queueing delay of packets.
There is another reason why the buffer sizing problem is important. Buffers are a major part of today's routers. Billions of dollars are spent each year on router/switch buffers. In fact, about 75% of all fast SRAMs are bought by Cisco, and almost all of this goes into routers and switches. Also, today almost half of the board space of any router is occupied by buffers, and 1/3 of the power used by routers is consumed by the buffers. Given that routers have a very high power consumption, this can be very costly for an Internet service provider.
Now, if we are somehow able to reduce the buffer sizes, we might be able to push the buffers inside the processor which performs the switching. This eliminates a lot of board space and significantly reduces the complexity of the router architecture. We will have higher density and therefore higher throughput. Other side effects are lowering the cost, making the system more scalable, and, more importantly, less delay and jitter for the user flows. As you can see, the buffer sizing problem has a very significant impact on the performance of the network, as well as on the cost and complexity of routers.

7 The Story
[Slide table summarizing the three buffer-sizing rules: roughly 1,000,000, 10,000, and 20 packets at 10 Gb/s; the corresponding intuitions (sawtooth peak-to-trough, smoothing of many sawtooths, non-bursty arrivals); the supporting evidence (intuition and proofs, simulations with many TCP flows); and the assumptions (a large number of desynchronized flows with 100% utilization for the second rule, a large number of flows with <100% utilization for the third).]
After this relatively long introduction, let me give an overview of the rest of my presentation. I'll talk about three different rules for sizing router buffers. The first rule is the rule-of-thumb which I just described. As I mentioned, this rule is based on the assumption that we want to have 100% link utilization at the core links. The second rule is a more recent result proposed by Appenzeller, Keslassy, and McKeown which basically challenges the original rule-of-thumb. Based on this rule, if we have N flows going through the router, we can reduce the buffer size by a factor of sqrt(N). The underlying assumption is that we have a large number of flows, and the flows are desynchronized. Finally, the third rule, which I'll talk about today, says that if we are willing to sacrifice a very small amount of throughput, i.e. if having a throughput less than 100% is acceptable, we might be able to reduce the buffer sizes significantly, to just O(log W) packets. Here W is the maximum congestion window size. If we apply each of these rules to a 10 Gb/s link, we will need to buffer 1,000,000 packets based on the first rule, about 10,000 packets based on the second one, and only 20 packets based on the third rule. For the rest of this presentation I'll show you the intuition behind each of these rules and provide some evidence that validates it. Let's start with the rule-of-thumb.

8 How Much Buffering Do We Need?
Universally applied rule-of-thumb: a router needs a buffer of size B ≥ 2T × C, where 2T is the two-way propagation delay (or just 250 ms) and C is the capacity of the bottleneck link. Context: mandated in backbone and edge routers; appears in RFPs and IETF architectural guidelines; usually referenced to Villamizar and Song, "High Performance TCP in ANSNET", CCR, 1994; already known by the inventors of TCP [Van Jacobson, 1988]; has major consequences for router design. [Figure: a source and destination connected through a bottleneck link of capacity C, with round-trip time 2T.]
Having said that, one might think the buffer sizing problem must be very well understood. After all, we are equipped with tools like queueing theory, large deviations theory, and mean field theory which are focused on solving exactly this type of problem. You would think this is simply a matter of understanding the random process that describes the queue occupancy over time. Unfortunately, this is not the case. The closed-loop nature of the flows, and the fact that flows react to the state of the system, makes it necessary to use control-theoretic tools, but those tools emphasize the equilibrium state of the system and fail to describe transient delays.
So if the problem is not easy, what do people do in practice? Buffer sizes in today's Internet routers are set based on a rule-of-thumb which says that if we want the core routers to have 100% utilization, the buffer size should be greater than or equal to 2T × C. Here 2T is the two-way propagation delay of packets going through the router, and C is the capacity of the target link. Note that if the capacity of the network is increased, based on this rule we need to increase the buffer size linearly with capacity. We don't expect the propagation delay to change much over time, but we do expect the capacity to grow very rapidly. Therefore, this rule can have major consequences for router design, and that's exactly why today's routers have so much buffering, as I showed you a few moments ago.
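A back-of-the-envelope check of the three buffer sizes quoted on the previous slide, written as a small Python sketch. Only the 10 Gb/s rate and the 250 ms delay come from the slides; the average packet size, flow count, and maximum window below are assumed example values.

import math

C = 10e9              # link capacity: 10 Gb/s (from the slides)
two_T = 0.25          # two-way propagation delay: 250 ms (from the slides)
pkt_bits = 300 * 8    # assumed average packet size of ~300 bytes
N = 10_000            # assumed number of long-lived flows
W = 1_000_000         # assumed maximum congestion window, in packets

rule_of_thumb = two_T * C / pkt_bits        # B = 2T x C
sqrt_rule = rule_of_thumb / math.sqrt(N)    # B = 2T x C / sqrt(N)
log_rule = math.log2(W)                     # B = O(log W), constant factor ignored

print(f"2T x C rule:   ~{rule_of_thumb:,.0f} packets")
print(f"sqrt(N) rule:  ~{sqrt_rule:,.0f} packets")
print(f"O(log W) rule: ~{log_rule:.0f} packets (up to a constant factor)")

With these assumed values the three rules land near the 1,000,000, 10,000, and 20 packet figures quoted on the slide.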

9 Single TCP Flow
Rule for adjusting W: if an ACK is received, W ← W + 1/W; if a packet is lost, W ← W/2. Only W packets may be outstanding.

10 Single TCP Flow
Rule for adjusting W: if an ACK is received, W ← W + 1/W; if a packet is lost, W ← W/2. Only W packets may be outstanding. [Figure: window size vs. time for a single flow from source to destination.]
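A minimal sketch of this window rule, to make the sawtooth on the slide concrete. The loss point W_max below is an assumed stand-in for the window at which the bottleneck buffer overflows; it is not a value from the slides.

W_max = 64        # assumed window size at which a loss occurs (packets)
w = 1.0           # congestion window, in packets
trace = []

for _ in range(20_000):          # one iteration per ACK arrival
    if w >= W_max:               # a packet was lost
        w = w / 2                # multiplicative decrease: W <- W/2
    else:
        w = w + 1.0 / w          # additive increase: W <- W + 1/W per ACK
    trace.append(w)

steady = trace[5_000:]           # skip the initial ramp-up
print(f"window oscillates between ~{min(steady):.0f} and ~{max(steady):.0f} packets")

This per-flow peak-to-trough swing is exactly what the rule-of-thumb buffer of 2T × C has to absorb to keep the link busy.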

11 Many TCP Flows
[Figure: probability distribution of the buffer occupancy when many flows share the link, with the required buffer size B marked.]
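A rough illustration of the intuition behind this slide and the sqrt(N) rule: summing many desynchronized sawtooth windows smooths the aggregate, so the fluctuation the buffer must absorb shrinks relative to the mean roughly as 1/sqrt(N). The window range, period, and flow counts are assumed example values, not the paper's analysis.

import random

def sawtooth(phase, t, lo=32, hi=64):
    # window of one flow at time t: linear ramp from lo to hi, then a drop
    return lo + (t + phase) % (hi - lo)

random.seed(0)
for n_flows in (1, 100, 10_000):
    phases = [random.uniform(0, 32) for _ in range(n_flows)]
    totals = [sum(sawtooth(p, t) for p in phases) for t in range(200)]
    mean = sum(totals) / len(totals)
    peak_excess = max(totals) - mean
    print(f"{n_flows:>6} flows: peak excess over the mean ~ {100 * peak_excess / mean:.1f}%")

With one flow the peak sits far above the mean; with thousands of desynchronized flows the aggregate barely moves, which is why a buffer of 2T × C / sqrt(N) is enough.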

12 Smooth Traffic - Theory
Theory: for smooth traffic, very small buffers are enough. With Poisson traffic, the loss rate is independent of link rate, RTT, number of flows, etc. [Figure: M/D/1 queue with Poisson arrivals, buffer of size B, and deterministic service.] Can we make traffic look "Poisson-enough" when it arrives at the routers…?
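A minimal simulation sketch of this M/D/1 picture: Poisson arrivals into a single server with deterministic (unit) service time and a small finite buffer. Note that only the load appears, not the absolute link rate, and the measured loss falls off quickly as the buffer grows. The load of 0.8 and the buffer sizes are assumed example values.

import random
from collections import deque

def md1_loss(load, buf_pkts, n_arrivals=500_000, seed=1):
    # Estimate the drop probability of an M/D/1 queue that holds at most
    # buf_pkts packets (including the one in service). Service time is 1.
    random.seed(seed)
    departures = deque()   # departure times of packets still in the system
    t, lost = 0.0, 0
    for _ in range(n_arrivals):
        t += random.expovariate(load)             # Poisson arrivals at rate = load
        while departures and departures[0] <= t:  # remove packets already served
            departures.popleft()
        if len(departures) >= buf_pkts:           # buffer full: drop the arrival
            lost += 1
        else:                                     # served after the previous packet
            start = departures[-1] if departures else t
            departures.append(start + 1.0)
    return lost / n_arrivals

for buf in (5, 10, 20):
    print(f"load 0.8, buffer {buf:>2} packets: loss ~ {md1_loss(0.8, buf):.1e}")

The estimated drop probability depends only on the load and the buffer size in packets, which is the sense in which the theory says very small buffers suffice for smooth traffic.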

13 Large Buffers - Practice
A typical OC192 router linecard buffers over 1,000,000 packets.

14 Small Buffers with Paced Injections
Assume: a small buffer; the distance between consecutive packets of a single flow is at least S (a limited injection rate); flows are not synchronized: start times are picked randomly and independently. Intuition + consequences (example). [Figure: window size vs. time.]

15 Small Buffers – Realistic Scenario
Assumptions: the Internet core is over-provisioned (example: load < 80%), and there is spacing between packets of the same flow, either natural (slow access links) or artificial (Paced TCP). Result: traffic is very smooth and the loss rate is very low, independent of RTT and the number of flows. With a buffer of about 20 packets we can achieve high throughput.
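To make the two sources of spacing concrete, here is a small sketch comparing the gap a slow access link imposes, and the gap Paced TCP creates, with the time one packet occupies the core link. The access rate, packet size, RTT, and window are assumed example values.

PKT = 1500 * 8        # packet size in bits (assumed)
access_rate = 1e6     # 1 Mb/s access link (assumed)
core_rate = 10e9      # 10 Gb/s core link
rtt, W = 0.1, 50      # assumed RTT in seconds and congestion window in packets

natural_gap = PKT / access_rate   # a slow access link spaces packets out by itself
core_slot = PKT / core_rate       # time one packet occupies the core link
paced_gap = rtt / W               # Paced TCP spreads W packets evenly over one RTT

print(f"natural gap from the access link: {natural_gap * 1e3:.1f} ms")
print(f"Paced TCP gap (RTT / W):          {paced_gap * 1e3:.1f} ms")
print(f"packet time on the core link:     {core_slot * 1e6:.2f} us")

Either way, a flow's packets reach the core milliseconds apart while each one occupies the core link for only about a microsecond, so the aggregate arriving at a core router looks smooth.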

16 Leaky Bucket – Paced vs. Reno
The bucket drains at a constant rate. The load is 90% in both cases.

17 TCP Reno
TCP Reno sends packets in a burst → high drop rate.

18 Paced TCP
Spacing packets → much lower drop rate.
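A minimal sketch of the comparison on these three slides: the same 90% average load is offered to a bucket that drains at a constant rate and holds only a few packets, first as back-to-back Reno-like window bursts and then evenly paced. The window size, bucket size, and slot granularity are assumed example values.

def leaky_bucket_drop_rate(arrivals, buf_pkts, drain_per_slot=1):
    # arrivals[i] = packets offered in slot i; the bucket drains at a constant rate
    level, dropped = 0, 0
    for a in arrivals:
        level = max(0, level - drain_per_slot)   # constant-rate drain
        accepted = min(a, buf_pkts - level)      # whatever still fits in the bucket
        dropped += a - accepted
        level += accepted
    return dropped / sum(arrivals)

SLOTS, WINDOW, PERIOD, BUF = 10_000, 45, 50, 10  # 45 packets per 50 slots -> 90% load

bursty = [WINDOW if t % PERIOD == 0 else 0 for t in range(SLOTS)]   # Reno-like burst
paced = [1 if t % PERIOD < WINDOW else 0 for t in range(SLOTS)]     # one packet per slot

print(f"bursty (Reno-like) drop rate: {leaky_bucket_drop_rate(bursty, BUF):.0%}")
print(f"paced drop rate:              {leaky_bucket_drop_rate(paced, BUF):.0%}")

With the same load and the same tiny bucket, the bursty source loses most of each window while the paced source loses essentially nothing, which is the point of the leaky-bucket comparison.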

19 Simulations with O(log W) Buffers
[Figure: simulation results comparing Regular TCP and TCP with Pacing.]

20 Simulations with O(log W) Buffers

21 Simulations with O(log W) Buffers

22 Conclusion
Very small buffers are OK if we sacrifice 10-20% of the throughput and use pacing, either natural or via a TCP modification. This has major consequences for electronic routers: board space reduction, power reduction, and increased density. It also opens the door to all-optical networking. Experimental validation is in progress.
We have only scratched the surface of this problem. Recently, there has been a lot of interest in this problem, and hopefully that will lead to a better understanding of it, and better networks for the future.

23 Thank You!
Questions?

