Published by Bertram Stephens. Modified over 9 years ago.
1
Presenter: Po-Chun Wu (r00945020@ntu.edu.tw)
2
Outline
– Introduction
– BCube Structure
– BCube Source Routing (BSR)
– Other Design Issues
– Graceful degradation
– Implementation and Evaluation
– Conclusion
3
Introduction
4
Container-based modular DC: 1000–2000 servers in a single container.
Core benefits of shipping-container DCs:
– Easy deployment: high mobility; just plug in power, network, & chilled water
– Increased cooling efficiency
– Manufacturing & hardware administration savings
5
BCube design goals
High network capacity for:
– One-to-one unicast
– One-to-all and one-to-several reliable groupcast
– All-to-all data shuffling
Only use low-end, commodity switches.
Graceful performance degradation: performance degrades gracefully as server/switch failures increase.
6
BCube Structure
7
(Figure: a BCube1 with n = 4 — a level-0 and a level-1 layer of 4-port switches connecting servers 00–33.)
Connecting rule: the i-th server in the j-th BCube0 connects to the j-th port of the i-th level-1 switch. For example, server "13" is connected to one level-0 switch (inside its BCube0) and one level-1 switch.
A BCube_k has:
– k+1 levels: 0 through k
– n-port switches, the same count (n^k) at each level
– n^(k+1) total servers and (k+1)·n^k total switches
– (n=8, k=3: 4 levels connecting 4096 servers using 512 8-port switches at each layer)
A server is assigned a BCube address (a_k, a_(k-1), …, a_0), where each digit a_i ∈ [0, n−1]. Neighboring server addresses differ in only one digit. Switches only connect to servers.
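The connecting rule above can be sketched in code — a hedged illustration, not the paper's implementation; here an address is a digit tuple with `addr[i]` holding digit a_i:

```python
# Illustrative sketch of BCube neighbor enumeration (not the paper's code).
# A server address is a tuple of k+1 digits, addr[i] = a_i, each in [0, n-1].
# The level-i neighbors are the servers whose address differs only in digit a_i.
def neighbors(addr, n):
    """Return {level: [neighbor addresses]} for a server in a BCube built from n-port switches."""
    result = {}
    for level in range(len(addr)):
        result[level] = [
            addr[:level] + (v,) + addr[level + 1:]
            for v in range(n)
            if v != addr[level]
        ]
    return result

# Server "13" in the BCube1 figure (n=4) is (a1, a0) = (1, 3), i.e. addr = (3, 1):
# its level-0 neighbors are 10, 11, 12 and its level-1 neighbors are 03, 23, 33.
```

Each server therefore has (k+1)·(n−1) one-digit neighbors, which is exactly the size of the per-server neighbor status table used for packet forwarding.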
8
Bigger BCube: 3 levels (k=2)
(Figure: a BCube2 built from BCube1s.)
9
BCube: server-centric network
(Figure: a packet between servers 20 and 03 is relayed by server 23; at each server hop the Ethernet header is rewritten to the next hop's MAC address, while each switch just performs ordinary MAC-table forwarding to its attached servers.)
Server-centric BCube:
– Switches never connect to other switches; they only connect to servers
– Servers control routing, load balancing, and fault tolerance
10
Bandwidth-intensive application support
One-to-one: one server moves data to another server (disk backup)
One-to-several: one server transfers the same copy of data to several receivers (distributed file systems)
One-to-all: a server transfers the same copy of data to all the other servers in the cluster (broadcast)
All-to-all: every server transmits data to all the other servers (MapReduce)
11
Multi-paths for one-to-one traffic
THEOREM 1. The diameter (longest shortest path) of a BCube_k is k+1.
THEOREM 3. There are k+1 parallel paths between any two servers in a BCube_k.
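Theorem 3's parallel paths come from correcting the address digits in different orders. A minimal sketch (illustrative, not the paper's BuildPathSet; addresses are digit tuples with `addr[i] = a_i`, and all digits are assumed to differ):

```python
# Hedged sketch of digit-correction routing (the BCubeRouting idea):
# hop from src toward dst by fixing one differing address digit per hop.
# When all digits differ, rotations of the correction order yield paths
# that share no intermediate servers, giving the k+1 parallel paths.
def route(src, dst, order):
    path = [src]
    cur = list(src)
    for digit in order:
        if cur[digit] != dst[digit]:
            cur[digit] = dst[digit]
            path.append(tuple(cur))
    return path

# In the BCube1 figure: 00 = (0, 0), 13 = (3, 1).
# Correcting digit a0 first gives 00 -> 03 -> 13;
# correcting digit a1 first gives 00 -> 10 -> 13.
```

The two orders produce node-disjoint intermediate hops, which is what lets BSR probe them independently.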
12
Speedup for one-to-several traffic
THEOREM 4. Server A and the set of servers {d_i | d_i is A's level-i neighbor} form an edge-disjoint complete graph of diameter 2.
(Figure: two edge-disjoint paths P1, P2 striping data to the receivers.)
Writing to r servers is r times faster than pipelined replication.
13
Speedup for one-to-all traffic
THEOREM 5. There are k+1 edge-disjoint spanning trees in a BCube_k.
(Figure: edge-disjoint spanning trees rooted at the src server 00.)
The one-to-all and one-to-several spanning trees can be implemented with TCP unicast to achieve reliability.
14
Aggregate bottleneck throughput for all-to-all traffic
The flows that receive the smallest throughput are called the bottleneck flows.
Aggregate bottleneck throughput (ABT) = (throughput of the bottleneck flow) × (number of total flows in the all-to-all traffic). Larger ABT means shorter all-to-all job finish time.
THEOREM 6. The ABT for a BCube network is n/(n−1) · N, where n is the switch port number and N is the total server number.
In BCube there are no bottleneck links, since all links are used equally.
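A worked example of the ABT figure, assuming Theorem 6's formula ABT = n/(n−1) · N with n the switch port count and N = n^(k+1) the total server count:

```python
# ABT per Theorem 6 (assumed formula: n/(n-1) * N), in units of one
# server link's bandwidth.
def abt(n, k):
    N = n ** (k + 1)          # total number of servers in a BCube_k
    return n / (n - 1) * N

# For the n=8, k=3 container (4096 servers), ABT is about 4681 link-units,
# i.e. slightly more than one full link of throughput per server.
```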
15
BCube Source Routing (BSR)
16
BCube Source Routing (BSR)
Server-centric source routing:
– The source server decides the best path for a flow by probing a set of k+1 parallel paths
– The source server adapts to network conditions by re-probing periodically or upon failures
– Intermediate servers only forward packets based on the packet header
17
BSR Path Selection
Source server:
1. Constructs k+1 parallel paths using BuildPathSet
2. Probes all these paths (no link-state broadcasting)
3. If a path is not found, uses BFS to find an alternative (after removing the links of the other parallel paths)
4. Uses a metric to select the best path (maximum available bandwidth, or end-to-end delay)
Intermediate servers:
– Update the probe's bandwidth: min(PacketBW, InBW, OutBW)
– If the next hop is not found, return a failure message to the source
Destination server:
– Updates the probe's bandwidth: min(PacketBW, InBW)
– Sends the probe response to the source on the reverse path
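The probe bookkeeping above can be sketched as follows (an illustrative sketch, not the paper's code): each hop lowers the probe's carried value to the minimum it has seen, so the source learns every path's bottleneck bandwidth and picks the maximum.

```python
# Each server the probe traverses applies min(carried, link bandwidth),
# so the value returned to the source is the path's bottleneck
# (available) bandwidth.
def probe_path(link_bandwidths):
    carried = float("inf")
    for bw in link_bandwidths:
        carried = min(carried, bw)
    return carried

def select_best_path(probed):
    # probed: {path_id: [available bandwidth observed at each hop]}
    return max(probed, key=lambda p: probe_path(probed[p]))
```

A path with uniformly moderate links can thus beat one with a single congested hop, which is the point of probing rather than broadcasting link state.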
18
Path Adaptation
The source performs path selection periodically (every 10 seconds) to adapt to failures and network-condition changes. When a failure message is received, the source switches to an available path but waits for the next timer to expire before running the next selection round, rather than re-selecting immediately. Randomness is added to the timer to avoid path oscillation.
19
Packet Forwarding
Each server has two components:
– Neighbor status table: (k+1)×(n−1) entries
Maintained by the neighbor maintenance protocol (updated upon probing / packet forwarding). Neighbors are indexed by a next-hop index (NHI) encoding <DP:DV>: DP is the position of the differing digit (2 bits) and DV is the value of the differing digit (6 bits); the NHA array is 8 bytes (maximum diameter = 8). The table is almost static (except the status field).
– Packet forwarding procedure:
An intermediate server updates the next-hop MAC address in the packet if the next hop is alive, updates neighbor status from received packets, and needs only one table lookup per packet.
20
Path compression and fast packet forwarding
A traditional address array needs 16 bytes: Path(00,13) = {02, 22, 23, 13}.
The Next Hop Index (NHI) array needs only 4 bytes: Path(00,13) = {0:2, 1:2, 0:3, 1:1}.
Forwarding table of server 23:

NHI   Output port   MAC
0:0   0             MAC20
0:1   0             MAC21
0:2   0             MAC22
1:0   1             MAC03
1:1   1             MAC13
1:3   1             MAC33
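The <DP:DV> one-byte layout (2-bit digit position, 6-bit digit value) can be sketched like this — a hedged illustration, with addresses as digit tuples where `addr[i] = a_i`:

```python
# Illustrative encoding of a Next Hop Index byte: DP in the top 2 bits
# (which address digit differs), DV in the low 6 bits (its new value).
def encode_nhi(dp, dv):
    assert 0 <= dp < 4 and 0 <= dv < 64
    return (dp << 6) | dv

def decode_nhi(byte):
    return byte >> 6, byte & 0x3F

def next_hop(addr, nhi):
    # addr is a digit tuple with addr[i] = a_i; the next hop differs
    # from the current server in exactly the digit named by DP.
    dp, dv = decode_nhi(nhi)
    out = list(addr)
    out[dp] = dv
    return tuple(out)

# Replaying Path(00,13) = {0:2, 1:2, 0:3, 1:1} from server 00 = (0, 0)
# visits 02 = (2, 0), 22 = (2, 2), 23 = (3, 2), and finally 13 = (3, 1).
```

Because each hop changes exactly one digit, a 1-byte NHI per hop is enough, which is where the 16-byte-to-4-byte compression comes from.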
21
Other Design Issues
22
Partial BCube_k
(Figure: a partial BCube1 containing only servers 00–13.)
How to build a partial BCube_k? (1) Build the needed BCube_(k−1)s; (2) use only a partial layer of level-k switches?
Solution: connect the BCube_(k−1)s using a full layer of level-k switches.
Advantage: BCubeRouting performs just as in a complete BCube, and BSR works as before.
Disadvantage: switches in layer k are not fully utilized.
23
Packing and Wiring (1/2)
2048 servers and 1280 8-port switches: a partial BCube with n = 8 and k = 3.
A 40-foot container (12 m × 2.35 m × 2.38 m) holds 32 racks.
24
Packing and Wiring (2/2)
One rack = one BCube1. Each rack has 44 units (1U = 2 servers or 4 switches):
– 64 servers occupy 32 units
– 40 switches occupy 10 units
A super-rack (8 racks) = one BCube2.
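The packing arithmetic above checks out with the slide's own numbers (assumptions: 1U = 2 servers or 4 switches, 44U per rack, 64 servers and 40 switches per rack):

```python
# Sanity check of the rack packing, using only the slide's numbers.
import math

server_units = math.ceil(64 / 2)   # 64 servers at 2 per 1U -> 32U
switch_units = math.ceil(40 / 4)   # 40 switches at 4 per 1U -> 10U
used = server_units + switch_units  # 42U of the 44U rack, leaving 2U spare
```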
25
Routing to external networks (1/2)
Ethernet has a two-level link-rate hierarchy: 1G for end hosts and 10G for the uplink aggregator/gateway.
(Figure: gateway servers in the BCube1 with 10G uplinks to the external network.)
26
Routing to external networks (2/2)
When an internal server sends a packet to an external IP address:
1) It chooses one of the gateways.
2) The packet is routed to the gateway using BSR (BCube Source Routing).
3) After the gateway receives the packet, it strips the BCube protocol header and forwards the packet to the external network via the 10G uplink.
27
Graceful degradation
28
(Figures: DCell and fat-tree topologies, shown for comparison.)
29
Graceful degradation
Graceful degradation: as server or switch failures increase, ABT decreases slowly and there are no dramatic performance drops. (Simulation-based figures: ABT under increasing server failures and switch failures, comparing BCube, DCell, and fat-tree.)
30
Implementation and Evaluation
31
Implementation
(Figure: the BCube stack.) BCube is implemented as a kernel intermediate driver sitting between the TCP/IP protocol driver and the Ethernet miniport drivers for interfaces IF 0 … IF k. The BCube driver hosts BSR path probing & selection, a flow-path cache, neighbor maintenance, available-bandwidth calculation, and packet send/receive and forwarding; packet forwarding, neighbor maintenance, and available-bandwidth calculation can also be offloaded to hardware.
32
Testbed
A BCube testbed:
– 16 servers (Dell Precision 490 workstations with an Intel 2.00 GHz dual-core CPU, 4 GB DRAM, 160 GB disk)
– 8 8-port mini-switches (D-Link DGS-1008D 8-port Gigabit switches)
NICs:
– Intel PRO/1000 PT quad-port server adapter
– NetFPGA
33
Bandwidth-intensive application support
(Figure: per-server throughput.)
34
Support for all-to-all traffic
(Figure: total throughput for all-to-all traffic.)
35
Related work
(Figure: speedup comparison with related architectures.)
36
Conclusion
BCube is a novel network architecture for shipping-container-based modular data centers (MDCs). It forms a server-centric network architecture, uses mini-switches instead of 24-port switches, and its BSR routing enables graceful degradation and meets the special requirements of MDCs.