Data Center Networking with Multipath TCP
Costin Raiciu, University College London & Universitatea Politehnica Bucuresti
Christopher Pluntke, UCL
Adam Greenhalgh, UCL
Sebastien Barre, Université Catholique de Louvain
Damon Wischik, UCL
Mark Handley, UCL
Data Center Networking Today
- Resource Allocation: TCP
- Path Selection: Random Load Balancing
- Routing: OSPF, VLANs, TRILL
- Topology: FatTree, VL2, BCube, multi-rooted tree
Data Center Networking Tomorrow
- Resource Allocation: Multipath TCP (replacing TCP)
- Path Selection: Random Load Balancing
- Routing: OSPF, VLANs, TRILL
- Topology: FatTree, VL2, BCube, multi-rooted tree
Data Centers are Important
- Cloud computing: economies of scale, with networks of tens of thousands of hosts.
- Cool apps: web search, GFS, BigTable, DryadLINQ, MapReduce.
- Dense traffic patterns.
Flexibility is Important in Data Centers
- Apps are distributed across thousands of machines.
- Flexibility: we want any machine to be able to play any role.
- But traditional data center topologies are tree-based and don't cope well with non-local traffic patterns.
- Many recent proposals for better topologies.
Traditional Data Center Topology
[Figure: core switch → aggregation switches (10Gbps links) → top-of-rack switches → racks of servers (1Gbps links)]
Fat Tree Topology [Al-Fares et al., 2008; Clos, 1953]
[Figure: K=4 fat tree — core and aggregation switches, K pods with K switches each, 1Gbps links down to racks of servers]
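For concreteness, the size of a K-ary fat tree follows directly from K. The sketch below is not from the slides, just the standard fat-tree arithmetic: (K/2)² core switches, K pods each with K/2 aggregation and K/2 edge switches, and K³/4 hosts.

```python
def fat_tree_size(k):
    """Switch and host counts for a k-ary fat tree (k must be even)."""
    assert k % 2 == 0
    core = (k // 2) ** 2   # core switches
    agg = k * (k // 2)     # k pods, k/2 aggregation switches each
    edge = k * (k // 2)    # k pods, k/2 edge (ToR) switches each
    hosts = k ** 3 // 4    # k/2 hosts on each edge switch
    return {"core": core, "agg": agg, "edge": edge, "hosts": hosts}
```

The K=4 example on this slide has 4 core, 8 aggregation, and 8 edge switches with 16 hosts; the ~8000-host simulations later plausibly correspond to k=32, which gives 8192 hosts.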
VL2 Topology [Greenberg et al., 2009; a Clos topology]
[Figure: intermediate and aggregation switches connected by 10Gbps links; 20 hosts per top-of-rack switch]
BCube Topology [Guo et al., 2009]
How Do We Use this Capacity?
- Need to distribute flows across paths.
- Basic solution: Random Load Balancing (RLB).
  - Use Equal-Cost Multipath (ECMP) routing: hash each flow to a path at random.
  - Or use many differently rooted VLANs: the end host hashes each flow to a VLAN, which determines its path.
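The ECMP idea can be sketched in a few lines. Real switches use vendor-specific hardware hash functions, so this is only illustrative of the mechanism, not any particular implementation:

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    """Pick one equal-cost next hop by hashing the flow 5-tuple.

    All packets of a flow hash identically (so no reordering within a
    flow), while different flows spread pseudo-randomly across paths.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return next_hops[digest % len(next_hops)]
```

Because the choice depends only on the 5-tuple, two large flows can hash to the same path, which is exactly the collision problem the next slides discuss.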
Collisions
[Figure: fat tree with racks of servers; two randomly load-balanced flows collide on the same 1Gbps link while other links sit idle]
Can MPTCP Self-Optimize Data Center Traffic?
- With Multipath TCP we can explore many paths:
  - Instead of using one random path, use many random paths.
  - Don't worry about collisions: just don't send (much) traffic on colliding paths.
Simulation Setup
- ~8000 hosts, long-lived flows.
- Permutation traffic matrix: each host sends to and receives from a single other, randomly chosen host.
- This is the smallest amount of traffic that can fill the network.
Multipath TCP in the Fat Tree Topology
[Figure: throughput allocation across hosts]
Performance Depends on Topology
[Figures: results for VL2 and BCube]
Overloaded Fat Tree: better fairness with Multipath TCP
Centralized Scheduling
- With RLB, it's really hard to utilize a fat tree.
- Hedera [Al-Fares et al., 2010] uses a centralized scheduler and flow switching:
  - Start by using RLB.
  - Measure the throughput of all flows periodically.
  - Any flow using more than 10% of its interface rate is explicitly scheduled onto an unloaded link.
- How does centralized scheduling compare with MPTCP?
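The measurement-and-threshold step above can be sketched as follows. This is only the detection half of a Hedera-style scheduler; the actual placement of the selected flows onto unloaded links (Hedera's demand estimation and placement heuristics) is omitted:

```python
def flows_to_reschedule(measured_rates, interface_rate=1e9, threshold=0.10):
    """One measurement interval of a Hedera-style scheduler (sketch).

    `measured_rates` maps flow id -> measured throughput in bit/s.
    Flows above `threshold` of their interface rate become candidates
    for explicit placement on an unloaded link; everything else stays
    on its randomly hashed (RLB) path.
    """
    return {fid for fid, rate in measured_rates.items()
            if rate > threshold * interface_rate}
```

The sketch makes the two weaknesses discussed later visible: the scheduler only acts once per measurement interval (lag), and it only sees flows that have already exceeded the threshold (limited knowledge).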
MPTCP vs. Centralized Dynamic Scheduling
[Figure: throughput of centralized scheduling at various scheduling intervals vs. MPTCP; labels: Centralized Scheduling, MPTCP, Infinite Scheduling Interval]
Can't We Just Use Many TCP Connections?
[Figures: loss rate of MPTCP ("linked") vs. multiple uncoupled TCP flows; retransmit timeouts with MPTCP ("linked") vs. uncoupled TCP flows]
MPTCP Linked Increases in Data Centers
- Better fairness, and less aggressive than uncoupled TCP flows.
- Improves throughput under dense traffic in BCube (by 25%).
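The "linked" coupling referred to here is MPTCP's Linked Increases congestion control (later standardized as RFC 6356). A simplified per-ACK sketch, with windows in packets and the aggressiveness parameter `alpha` taken as an input (the real algorithm recomputes `alpha` from per-subflow windows and RTTs):

```python
def linked_increase(windows, r, alpha):
    """Per-ACK congestion-window increase on subflow r (sketch).

    Coupling the increase to the total window across all subflows
    keeps the aggregate no more aggressive than one regular TCP flow,
    which is why linked MPTCP avoids the loss rates and timeouts that
    many uncoupled TCP connections would cause.
    """
    total = sum(windows)
    # Cap at 1/w_r: never increase faster than an uncoupled TCP flow.
    windows[r] += min(alpha / total, 1.0 / windows[r])
    return windows[r]
```

With two subflows of window 10 and alpha=1, each ACK adds min(1/20, 1/10) = 0.05 packets, half of what an uncoupled TCP flow would add on that subflow.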
The Bigger Picture
- Resource Allocation + Path Selection: Multipath TCP
- Routing: OSPF, VLANs, etc.
- Topology: FatTree, VL2, BCube, multi-rooted tree... or something new?
Multipath TCP Can Utilize Topologies TCP Can't
- Requirement: a subset of hosts should be able to communicate at 10Gb/s.
[Figure: topology mixing 1Gb/s and 10Gb/s links]
Multipath TCP Can Utilize Topologies TCP Can't [2]
- Problem: ToR switch failures wipe out tens of hosts, and repair time is on the order of days.
- Solution: use two ToR switches per rack and multi-home the servers.
- With single-path TCP: single flows still get the same maximum throughput, and each flow must decide which interface to use.
- With Multipath TCP: flows double their maximum throughput, and path selection is automatic.
Summary
- Data center networks offer many paths between end hosts. Yet:
  - Random Load Balancing does a poor job of utilizing them.
  - Centralized scheduling is laggy and has inherently limited knowledge.
- Multipath TCP naturally optimizes data center networks:
  - Improves throughput.
  - Improves fairness.
  - More robust than centralized scheduling.
- Open question: what topologies does Multipath TCP enable?
Backup Slides
Centralized Scheduling: Setting the Threshold
[Figures: throughput of centralized scheduling with different thresholds (500Mbps, 100Mbps) for 1Gbps and app-limited flows; centralized scheduling is 17%, 21%, 45%, and 51% worse than Multipath TCP depending on the threshold and flow rate]