Data Center TCP (DCTCP)
Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye, Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, Murari Sridharan
Microsoft Research / Stanford University
Presented by Shaddi Hasan

Case Study: Microsoft Bing
Measurements from a 6,000-server production cluster.
Instrumentation passively collects logs:
– Application-level
– Socket-level
– Selected packet-level
More than 150 TB of compressed data over a month.

Partition/Aggregate Application Structure
A query (the slide's example: "Picasso") fans out from a Top-Level Aggregator (TLA) to Mid-Level Aggregators (MLAs) to Worker Nodes; each worker returns partial answers (example results: Picasso quotations such as "Art is a lie that makes us realize the truth"), which are aggregated back up the tree.
Time is money → strict deadlines (SLAs); missed deadline → lower-quality result.
Deadlines per level: 250 ms (TLA), 50 ms (MLA), 10 ms (worker).

Workloads
– Partition/Aggregate (query): delay-sensitive.
– Short messages [50 KB–1 MB] (coordination, control state): delay-sensitive.
– Large flows [1 MB–50 MB] (data update): throughput-sensitive.

Impairments
– Incast
– Queue Buildup
– Buffer Pressure

Incast
Synchronized mice collide: Workers 1–4 answer the Aggregator at the same instant, their responses overflow the switch buffer, and the losses trigger a TCP timeout (RTOmin = 300 ms).
Caused by Partition/Aggregate.
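As a rough illustration (worker count, response size, and buffer size are assumed for this example, not taken from the slides): 40 workers each sending a 10 KB response at the same moment present a 400 KB burst to one switch port; a shallow shared ToR buffer of roughly 100 KB drops much of it, and because each response is only a handful of packets, the losses are often recovered only by a retransmission timeout of RTOmin = 300 ms, far beyond the 10–250 ms query deadlines.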

Incast Really Happens
Requests are jittered over a 10 ms window; jittering was switched off around 8:30 am.
Jittering trades off the median against high percentiles; the 99.9th percentile is being tracked here.
(Figure: MLA query completion time in ms over the course of the day.)

Queue Buildup
Big flows build up queues at the switch → increased latency for short flows sharing the port. (Figure: Sender 1 and Sender 2 sending through one switch to the Receiver.)
Measurements in the Bing cluster:
– For 90% of packets: RTT < 1 ms
– For 10% of packets: 1 ms < RTT < 15 ms

Data Center Transport Requirements
1. High burst tolerance – incast due to Partition/Aggregate is common.
2. Low latency – short flows, queries.
3. High throughput – continuous data updates, large file transfers.
The challenge is to achieve all three together.

Tension Between Requirements
The goals pull in different directions:
– Deep buffers: queuing delays increase latency.
– Shallow buffers: bad for bursts & throughput.
– Reduced RTOmin (SIGCOMM '09): doesn't help latency.
– AQM (RED): the average queue is not fast enough for incast.
Objective: low queue occupancy & high throughput. DCTCP targets all three: high burst tolerance, low latency, and high throughput.

The DCTCP Algorithm

Nugget time
1. TCP causes problems because it tries to fill buffers: congestion is signaled by loss, i.e., buffer overflow.
2. Use ECN to signal congestion, not loss: keeps buffers short and reduces retransmissions.

Review: The TCP/ECN Control Loop
ECN = Explicit Congestion Notification. The switch sets a 1-bit ECN mark on packets when it is congested; the receiver echoes the mark back to the sender, which reduces its window. (Figure: Sender 1 and Sender 2 through a marking switch to the Receiver.)
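For contrast with DCTCP's reaction later, here is a minimal sketch of the conventional TCP+ECN response (the function name and framing are assumptions, not from the slides): one or more marks in a window of data is treated like a loss event.

def classic_ecn_window_update(cwnd, any_ecn_echo_seen):
    # Conventional TCP with ECN (RFC 3168): any mark in a window of data
    # triggers the same multiplicative decrease as a packet loss.
    return max(1, cwnd // 2) if any_ecn_echo_seen else cwnd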

Small Queues & TCP Throughput: The Buffer Sizing Story
Bandwidth-delay product rule of thumb: a single flow needs C × RTT of buffering for 100% throughput.
Appenzeller rule of thumb (SIGCOMM '04): with a large number N of flows, C × RTT / √N is enough.
But we can't rely on this statistical-multiplexing benefit in the data center: measurements show typically 1–2 big flows at each server, at most 4.
Real rule of thumb: low variance in sending rate → small buffers suffice.
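To make the gap concrete (the link speed and RTT are assumed for illustration, not from the slides): on a 10 Gbps link with a 100 µs RTT, C × RTT ≈ 125 KB. With N = 10,000 flows, the Appenzeller rule needs only 125 KB / √10,000 ≈ 1.25 KB of buffering; with the 1–4 large flows per server actually seen in the data center, √N is roughly 1–2, so under the classic rules nearly the full bandwidth-delay product would still be required.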

Two Key Ideas
1. React in proportion to the extent of congestion, not its presence. Reduces variance in sending rates, lowering queuing requirements.
2. Mark based on instantaneous queue length. Fast feedback to better deal with bursts.

ECN marks in a window    TCP                  DCTCP
Most packets marked      Cut window by 50%    Cut window by 40%
One packet marked        Cut window by 50%    Cut window by 5%

Data Center TCP Algorithm
Switch side:
– Mark packets when queue length > K. (Figure: a queue of size B with marking threshold K; don't mark below K, mark above K.)
Sender side:
– Maintain a running average of the fraction of packets marked (α). In each RTT: α ← (1 − g) × α + g × F, where F is the fraction of packets marked in the last window of data.
– Adaptive window decrease: W ← W × (1 − α/2).
– Note: the decrease factor is between 1 and 2.
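A minimal sketch of the sender-side bookkeeping described above, assuming per-ACK hooks, an end-of-RTT callback, g = 1/16 as the EWMA gain, and a 1460-byte MSS floor (the class and method names are illustrative, not from the paper):

def switch_should_mark(queue_bytes, k_bytes):
    # Switch side: set the ECN CE mark when the instantaneous queue exceeds K.
    return queue_bytes > k_bytes

class DctcpSender:
    def __init__(self, cwnd_bytes, g=1.0 / 16):
        self.cwnd = cwnd_bytes     # congestion window, in bytes
        self.alpha = 0.0           # running estimate of the fraction of marked packets
        self.g = g                 # EWMA gain (0 < g < 1)
        self.bytes_acked = 0       # bytes acked in the current window of data
        self.bytes_marked = 0      # of those, bytes whose ACKs carried ECN-Echo

    def on_ack(self, acked_bytes, ecn_echo):
        self.bytes_acked += acked_bytes
        if ecn_echo:
            self.bytes_marked += acked_bytes

    def on_rtt_end(self):
        # F = fraction of packets (approximated here by bytes) marked in the last window
        f = self.bytes_marked / self.bytes_acked if self.bytes_acked else 0.0
        # alpha <- (1 - g) * alpha + g * F
        self.alpha = (1 - self.g) * self.alpha + self.g * f
        # W <- W * (1 - alpha / 2): cut once per RTT, and only if anything was marked
        if self.bytes_marked:
            self.cwnd = max(1460, int(self.cwnd * (1 - self.alpha / 2)))
        self.bytes_acked = self.bytes_marked = 0

When every packet in a window is marked, α tends toward 1 and the cut approaches TCP's 50%; when marks are rare, the cut stays small. That proportional reaction is what keeps the sending-rate variance, and hence the required buffering, low.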

DCTCP in Action
Setup: Windows 7 hosts, Broadcom 1 Gbps switch.
Scenario: 2 long-lived flows, K = 30 KB. (Figure: instantaneous queue length over time.)

Why it Works
1. High burst tolerance: large buffer headroom → bursts fit; aggressive marking → sources react before packets are dropped.
2. Low latency: small buffer occupancies → low queuing delay.
3. High throughput: ECN averaging → smooth rate adjustments, low variance.

Baseline
(Figures: completion times for background flows and for query flows, TCP vs. DCTCP.)
– Low latency for short flows.
– High throughput for long flows.
– High burst tolerance for query flows.

Conclusions
DCTCP satisfies all our requirements for data center packet transport:
– Handles bursts well
– Keeps queuing delays low
– Achieves high throughput
Features: a very simple change to TCP and a single switch parameter, based on mechanisms already available in silicon.

Thoughts
– Convergence times?
– Actually requires persistent connections. Section 4.2.2: "All communication is over long-lived connections, so there is no three-way handshake for each request."
– Conventional TCP will blow DCTCP away if they share a buffer.

Thoughts
This is basically the right answer for handling incast and meeting deadlines:
– Use ECN to keep queues short, rather than filling up your buffers and waiting for loss.
– D³ incorporates explicit knowledge of deadlines, but this complicates the API and necessitates more changes.
– Complementary to MPTCP (or any other TCP variant, for that matter).

Scaled Background & Query
10x background, 10x query.

Thoughts
"The key contribution here is […] the act of deriving multi-bit feedback from the information present in the single-bit sequence of marks."
– Calculating a moving average is not a contribution by itself.
– Using ECN plus a clever scheme for adjusting cwnd is!
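To make the single-bit-to-multi-bit point concrete (g = 1/16 is the gain suggested in the DCTCP paper, not stated on this slide): starting from α = 0, one fully marked RTT gives α = 1/16 ≈ 0.06, so the window is cut by only α/2 ≈ 3%; if every subsequent RTT is also fully marked, α climbs toward 1 and the cut approaches TCP's 50%. The sequence of 1-bit marks is thus integrated into a multi-bit estimate of congestion.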

Analysis
How low can DCTCP keep queues without losing throughput? How do we set the DCTCP parameters?
→ Need to quantify the queue size oscillations (stability).
Result: 85% less buffer than TCP.
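For a sense of scale (the K > C × RTT / 7 guideline is from the published DCTCP analysis rather than this slide, and the link numbers are assumed): on a 1 Gbps link with a 100 µs RTT, C × RTT ≈ 12.5 KB, so a marking threshold of only a couple of packets already keeps throughput, which is how DCTCP can run with roughly 85% less buffering than TCP.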