1
Exploring Efficient and Scalable Multicast Routing in Future Data Center Networks
Dan Li, Jiangwei Yu, Junbiao Yu, Jianping Wu, Tsinghua University
Presented by DENG Xiang
2
Outline
I. Introduction and background
II. Build an efficient Multicast tree
III. Make Multicast routing scalable
IV. Evaluation
V. Conclusion
3
Introduction and background
Data centers: the core of cloud services
- online cloud applications
- back-end infrastructural computations
- servers and switches
- popularity of group communication
4
Multicast
- save network traffic
- improve application throughput
5
Internet-oriented Multicast is successful.
6
When Multicast meets data center networks...
Problem A: Data center topologies usually expose high link density, and traditional technologies can result in severe link waste.
Problem B: Low-end commodity switches are largely used in most data center designs for economic and scalability considerations.
7
Build an efficient Multicast tree
Data center network architectures: BCube, PortLand, VL2 (similar to PortLand)
8
BCube
- constructed recursively: BCube(n,0), BCube(n,1), ..., BCube(n,k)
- each server has k+1 ports
- each switch has n ports
- number of servers: n^(k+1)
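Not in the original slides: a minimal Python sketch of these parameters, assuming the standard BCube addressing in which a server is identified by a (k+1)-digit base-n address and a level-i switch connects the n servers whose addresses differ only in digit i.

def bcube_size(n, k):
    """Server/switch counts for BCube(n, k)."""
    return {
        "servers": n ** (k + 1),          # each server has k+1 ports
        "switch_levels": k + 1,
        "switches_per_level": n ** k,     # each switch has n ports
        "switches_total": (k + 1) * n ** k,
    }

def level_i_neighbors(addr, i, n):
    """Servers reachable from `addr` through its level-i switch:
    all addresses differing from `addr` only in digit i."""
    return [addr[:i] + (d,) + addr[i + 1:] for d in range(n) if d != addr[i]]

print(bcube_size(8, 3))                       # BCube(8,3): 4096 servers
print(level_i_neighbors((0, 1, 2, 3), 1, 4))  # neighbors via the level-1 switch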
9
PortLand
- three levels of switches and n pods
- aggregation level and edge level: n/2 switches with n ports in each pod
- core level: (n/2)^2 switches with n ports
- number of servers: n^3/4
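Not in the original slides: the same kind of sketch for an n-port fat-tree, the topology underlying PortLand; the counts below follow from the fat-tree construction.

def fat_tree_size(n):
    """Switch and server counts for a fat-tree built from n-port switches."""
    assert n % 2 == 0, "port count must be even"
    return {
        "pods": n,
        "edge_switches": n * (n // 2),         # n/2 per pod
        "aggregation_switches": n * (n // 2),  # n/2 per pod
        "core_switches": (n // 2) ** 2,
        "servers": n ** 3 // 4,                # n/2 servers per edge switch
    }

print(fat_tree_size(48))   # 48-port switches -> 27,648 servers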
10
Consistent themes lie in them:
- low-end switches are used to reduce expense
- high link density exists
- the data center structure is built in a hierarchical and regular way
11
In order to save network traffic, how to build an efficient Multicast tree?
- traditional receiver-driven Multicast routing protocols, originally designed for the Internet, such as PIM
- approximation algorithms for the Steiner tree (Steiner tree problem: build a Multicast tree with the lowest cost covering the given nodes)
- source-driven tree building algorithm: the proposed algorithm
12
Group spanning graph
- each hop is a stage
- stage 0 includes the sender only
- stage d is composed of the receivers, where d is the diameter of the data center topology
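Not in the original slides: a rough Python sketch of staging, assuming the group spanning graph simply layers nodes by hop distance from the sender (the paper's exact construction may differ).

from collections import deque

def group_spanning_stages(adj, sender, receivers):
    """Layer the topology into stages: stage i holds the nodes i hops from the sender."""
    dist = {sender: 0}                 # plain BFS from the sender
    q = deque([sender])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    depth = max(dist[r] for r in receivers)   # deep enough to reach every receiver
    stages = [set() for _ in range(depth + 1)]
    for node, d in dist.items():
        if d <= depth:
            stages[d].add(node)
    return stages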
15
Build the Multicast tree in a source-to-receiver expansion way upon the group spanning graph, with the tree node set from each stage strictly covering the downstream receivers.
Definition of cover: A covers B if and only if for each node in B there exists a directed path from some node in A. A strictly covers B when A covers B and no proper subset of A covers B.
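Not in the original slides: a small sketch of the cover test as defined above, with minimality checked by removing one node at a time (equivalent to "no proper subset covers B", since covering is monotone).

def reachable_from(adj, src):
    """All nodes reachable from `src` by directed paths."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def covers(adj, A, B):
    reach = set()
    for a in A:
        reach |= reachable_from(adj, a)
    return set(B) <= reach

def strictly_covers(adj, A, B):
    A = set(A)
    if not covers(adj, A, B):
        return False
    # removing any single node from A must break coverage
    return all(not covers(adj, A - {a}, B) for a in A)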
16
Algorithm details in BCube:
a) Select the set of servers (call it E) in stage 2 that are covered by the sender s through a single switch W in stage 1.
b) Each of the |E| BCube(n,k-1)s containing a server in E uses that server as its source p, with the receivers in stage 2*(k+1) covered by p as its receiver set.
c) The remaining BCube(n,k-1) uses s as its source, with the receivers in stage 2*k that are covered by s but not by W as its receiver set.
17
Algorithm details in PortLand:
a) From the first stage to the stage of core-level switches, any single path can be chosen, because any single core-level switch can cover the downstream receivers.
b) From the stage of core-level switches to the final stage of receivers, the paths are fixed due to the interconnection rule in PortLand.
18
Make Multicast routing scalable
A packet forwarding mechanism that supports massive numbers of Multicast groups is necessary:
- in-packet Bloom Filter: with only the in-packet Bloom Filter, bandwidth waste is significant for large groups.
- in-switch forwarding table: with only the in-switch forwarding table, very large memory space is needed.
19
The bandwidth waste of the in-packet Bloom Filter comes from:
- the Bloom Filter field in the packet brings network bandwidth cost;
- false-positive forwarding by the Bloom Filter causes traffic leakage;
- switches receiving packets by false-positive forwarding may further forward packets to other switches, incurring not only additional traffic leakage but also possible loops.
20
We define the Bandwidth Overhead Ratio r to describe the in-packet Bloom Filter:
- p: the packet length (including the Bloom Filter field)
- f: the length of the in-packet Bloom Filter field
- t: the number of links in the Multicast tree
- c: the number of actual links covered by Bloom Filter based forwarding
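Not in the original slides, and the formula below is an assumption since the slide lists only the symbols: a sketch that treats r as the extra traffic of Bloom-Filter forwarding relative to ideal tree forwarding, where ideal forwarding sends a (p - f)-byte packet over the t tree links and in-packet Bloom Filter forwarding sends the full p-byte packet over the c covered links (c >= t because of false positives).

def bandwidth_overhead_ratio(p, f, t, c):
    """Assumed definition: extra traffic relative to ideal tree forwarding."""
    ideal = (p - f) * t     # bytes on the wire without the Bloom Filter field
    actual = p * c          # bytes on the wire with the in-packet Bloom Filter
    return actual / ideal - 1.0

# 1500-byte packets, a 32-byte Bloom Filter field, a 100-link tree,
# 10 extra links hit by false positives (illustrative numbers only)
print(bandwidth_overhead_ratio(p=1500, f=32, t=100, c=110))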
21
With the packet size as 1500 bytes, the relation among r, f and the group size:
[Figures: BCube(8,3); PortLand with 48-port switches]
22
The in-packet Bloom Filter does not accommodate large-sized groups, so a combination routing scheme is proposed:
a) In-packet Bloom Filters are used for small-sized groups to save routing space in switches, while routing entries are installed into switches for large groups to alleviate bandwidth overhead.
b) Intermediate switches/servers receiving the Multicast packet check a special TAG in the packet to determine whether to forward the packet via the in-packet Bloom Filter or by looking up the in-switch forwarding table.
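Not in the original slides: a minimal sketch of the TAG check in b), with hypothetical field names; a plain Python set of encoded node ids stands in for the real Bloom Filter.

IN_PACKET_BF = 0      # small groups: routing state travels in the packet
IN_SWITCH_TABLE = 1   # large groups: routing entries installed in switches

def forward(packet, neighbors, forwarding_table):
    """Return the next hops chosen by an intermediate switch/server."""
    if packet["tag"] == IN_PACKET_BF:
        # in-packet mode: forward on neighbors the (stand-in) Bloom Filter matches
        return [n for n in neighbors if n in packet["bloom_ids"]]
    # in-switch mode: look up the group in the locally installed forwarding table
    return forwarding_table.get(packet["group"], [])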
23
Two ways of in-packet Bloom Filter encoding:
- node-based encoding: the elements are the tree nodes, including switches and servers; this is the option chosen.
- link-based encoding: the elements are the directed physical links.
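Not in the original slides: a toy Bloom Filter with node-based encoding (the chosen option), inserting the ids of on-tree switches and servers; the size and hash choices here are arbitrary.

import hashlib

class NodeBloomFilter:
    def __init__(self, size_bits=256, num_hashes=4):
        self.m, self.k, self.bits = size_bits, num_hashes, 0

    def _positions(self, node_id):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{node_id}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, node_id):
        for pos in self._positions(node_id):
            self.bits |= 1 << pos

    def maybe_contains(self, node_id):
        # may return a false positive, never a false negative
        return all((self.bits >> pos) & 1 for pos in self._positions(node_id))

bf = NodeBloomFilter()
for node in ["server0", "switch1", "server2"]:   # encode the Multicast tree nodes
    bf.add(node)
assert bf.maybe_contains("switch1")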
24
False-positive forwarding caused by the in-packet Bloom Filter may result in loops.
The solution: a node only forwards the packet to its neighboring nodes (within the Bloom Filter) whose distances to the source are larger than its own.
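Not in the original slides: the distance rule from this slide as a one-function sketch; dist_from_source would come from the topology (e.g., the BFS sketch earlier).

def safe_next_hops(node, neighbors, dist_from_source, bf_match):
    """Forward only to Bloom-Filter-matched neighbors farther from the source,
    so packets always move away from the source and cannot loop back."""
    my_dist = dist_from_source[node]
    return [n for n in neighbors if bf_match(n) and dist_from_source[n] > my_dist]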
25
Evaluation
Evaluation of the source-driven tree building algorithm: BCube(8,3) and 48-port-switch PortLand; 1 Gbps link speed; 200 random-sized groups; metrics: number of links in the tree, computation time.
26
[Figures: BCube; PortLand]
27
[Figures: BCube; PortLand]
28
Evaluation of the combination forwarding scheme with a 32-byte Bloom Filter:
[Figure]
29
Conclusion
Efficient and Scalable Multicast Routing in Future Data Center Networks:
- an efficient Multicast tree building algorithm
- a combination forwarding scheme for scalable Multicast routing