
1 FairCloud: Sharing the Network in Cloud Computing. Computer Communication Review (2012). Authors: Lucian Popa, Arvind Krishnamurthy, Sylvia Ratnasamy, Ion Stoica. Presenter: 段雲鵬

2/20 Outline – Introduction – Challenges in sharing networks – Properties for network sharing – Mechanism – Conclusion

3/20 Some concepts. Bisection bandwidth – each node has a unit weight – each link has a unit weight. Flow definition – the standard five-tuple in the packet headers. Notation: B denotes bandwidth, T denotes traffic, W denotes the weight of a VM.
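The five-tuple flow definition above can be sketched as a small data structure; the field names and the packet representation here are illustrative assumptions, not from the paper:

```python
from collections import namedtuple

# A flow is identified by the standard five-tuple from the packet
# headers. Field names are illustrative, not from the paper.
FiveTuple = namedtuple(
    "FiveTuple", ["src_ip", "dst_ip", "src_port", "dst_port", "protocol"]
)

def flow_key(packet):
    """Map a parsed packet (here just a dict) to its flow identifier."""
    return FiveTuple(packet["src_ip"], packet["dst_ip"],
                     packet["src_port"], packet["dst_port"],
                     packet["proto"])

pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
       "src_port": 4321, "dst_port": 80, "proto": "TCP"}
print(flow_key(pkt))
```

Packets that share all five fields belong to the same flow, which is exactly the granularity that per-flow fairness (criticized later in the talk) operates on.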

4/20 Background. Resources in cloud computing – network, CPU, memory. Network allocation is more difficult – it involves sources, destinations, and cross traffic. Tradeoff: payment proportionality vs. bandwidth guarantees.

5/20 Introduction. Network allocation – unknown to users, poor predictability. Fairness issues – across flows, source-destination pairs, sources alone, or destinations alone. Differences from other resources – interdependent users – interdependent resources.

6/20 Assumptions. Take a per-VM viewpoint. Be agnostic to VM placement and routing algorithms. Work within a single datacenter. Be largely orthogonal to work on network topologies that improve bisection bandwidth.

7/20 Traditional mechanisms. Per-flow fairness – unfair when a VM simply instantiates more flows. Per source-destination pair – unfair when one VM communicates with more VMs. Per source – unfair to destinations. Asymmetric – fair to either sources or destinations, but not both.
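The per-flow unfairness above can be illustrated with a toy model: a single link whose capacity is split equally per flow, so a tenant gains bandwidth just by opening more flows. The tenant names and numbers here are made up for illustration:

```python
def per_flow_share(capacity, flows_per_tenant):
    """Split link capacity equally per flow (per-flow fairness),
    then sum up each tenant's flow shares."""
    total_flows = sum(flows_per_tenant.values())
    per_flow = capacity / total_flows
    return {t: n * per_flow for t, n in flows_per_tenant.items()}

# Tenant A games the allocation by opening 9 flows to B's 1
# on a 1000 Mbps link.
shares = per_flow_share(1000.0, {"A": 9, "B": 1})
print(shares)  # A gets 900.0 Mbps, B only 100.0 Mbps
```

This is exactly why the slide calls per-flow fairness unfair: the split tracks flow count, not tenant weight or payment.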

8/20 Examples. Per source-destination pair: if there is little traffic on A-F and B-E, then B(A) = B(B) = B(E) = B(F) = 2·B(C) = 2·B(D) = B(G) = B(H). Per source: B(E) = B(F) = 0.25·B(D), and in the opposite direction B(A) = B(B) = 0.25·B(C).

9/20 Properties for network sharing (1). Strategy-proofness – a VM cannot increase its bandwidth by modifying its behavior at the application level. Pareto efficiency – if X and Y are bottlenecked, then when B(X-Y) increases, B(A-B) must decrease; otherwise congestion will be worse. (figure: hosts A and B with 1 M and 10 M links)

10/20 Properties for network sharing (2). Non-zero flow allocation – a strictly positive bandwidth allocation between each communicating pair is expected. Independence – when T2 increases, B1 should not be affected. Symmetry – if the directions of all flows are switched, the allocation should stay the same. (figure: links L1, L2)

11/20 Network weight and the user's payment. Weight fidelity (provides incentive): Strict monotonicity (monotonicity) – if W(VM) increases, then all of its traffic must increase (not decrease). Proportionality. Guaranteed bandwidth – admission control. These properties conflict, so a tradeoff is required. (figure: subset P with weight 2/3, subset Q with weight 1/3, no communication between P and Q)

12/20 Per Endpoint Sharing (PES). Can explicitly trade between weight fidelity and guaranteed bandwidth. N_A denotes the number of VMs that A is communicating with. W_A-B = f(W_A, W_B), with W_A-B = W_B-A. Pair weights are normalized by L1 normalization. Drawback: it is a static method (out of the scope of this discussion).

13/20 Example. W_A-D = W_A/N_A + W_D/N_D = 1/2 + 1/2 = 1. W_A-C = W_B-D = 1/2 + 1/1 = 1.5. Total weight = 4 (4 VMs with unit weight). So, after L1 normalization, W_A-D = 1/4 = 0.25 and W_A-C = W_B-D = 1.5/4 = 0.375.
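The example's arithmetic can be checked with a short sketch, assuming unit VM weights and the communicating pairs A-C, A-D, B-D that the slide's numbers imply:

```python
# PES pair weights for the slide's example: pairs A-C, A-D, B-D,
# all VMs with unit weight; N_X = number of VMs X communicates with.
pairs = [("A", "C"), ("A", "D"), ("B", "D")]
w = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0}

# Count each VM's communication partners.
n = {vm: sum(vm in p for p in pairs) for vm in w}

# W_{S-D} = W_S / N_S + W_D / N_D
pair_w = {p: w[p[0]] / n[p[0]] + w[p[1]] / n[p[1]] for p in pairs}

# L1 normalization: divide by the sum of all pair weights (= 4 here).
total = sum(pair_w.values())
normalized = {p: pw / total for p, pw in pair_w.items()}
print(normalized)  # A-D -> 0.25, A-C and B-D -> 0.375 each
```

Note that the normalized pair weights sum to 1, so 1.5/4 is 0.375, matching the corrected figure on the slide.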

14/20 Comparison (comparison table of the mechanisms; not transcribed)

15/20 PES. For one host, bandwidth is allocated in proportion to the closer VMs rather than the remote VMs, giving higher guarantees in the worst case. W_A-B = W_B-A = α·W_A/N_A + β·W_B/N_B, where α and β can be chosen to trade off bandwidth guarantees against weight fidelity.

16/20 One-Sided PES (OSPES). Designed for tree-based topologies. W_A-B = W_B-A = α·W_A/N_A + β·W_B/N_B. On links closer to A, α = 1 and β = 0; on links closer to B, α = 0 and β = 1.

17/20 OSPES. Fair sharing for the traffic towards or from the tree root – the resource allocation depends on the link's position relative to the root. Non-strict monotonicity. When W(A) = W(B): if the access link is 1 Gbps, then each VM is guaranteed 500 Mbps; W_A-VM1 = 1/1, W_B-VMi = 1/10 (i = 2, 3, …, 11).
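Slide 17's numbers can be reproduced with a small sketch of the OSPES weight rule (α = 1, β = 0 on links near the source). The link model here (capacity split in proportion to total weight) is an assumed simplification:

```python
def ospes_pair_weight(w_src, n_src, w_dst, n_dst, near_source):
    """One-sided PES: alpha=1, beta=0 on links near the source VM,
    alpha=0, beta=1 on links near the destination."""
    alpha, beta = (1.0, 0.0) if near_source else (0.0, 1.0)
    return alpha * w_src / n_src + beta * w_dst / n_dst

# Slide 17's setup on the shared access link (near the sources):
# A sends to 1 VM, B sends to 10 VMs, W(A) = W(B) = 1.
w_a_flow = ospes_pair_weight(1.0, 1, 1.0, 1, near_source=True)   # 1.0
w_b_flow = ospes_pair_weight(1.0, 10, 1.0, 1, near_source=True)  # 0.1 each

# Split the link in proportion to total per-VM weight.
link = 1000.0  # 1 Gbps, in Mbps
total = w_a_flow + 10 * w_b_flow
print(link * w_a_flow / total)       # A is guaranteed 500.0 Mbps
print(link * 10 * w_b_flow / total)  # B's ten flows share 500.0 Mbps
```

Because B's per-pair weights are divided by N_B = 10, opening more flows does not increase B's aggregate share, which is the guarantee the slide is making.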

18/20 Max-Min Fairness. The minimum data rate that a flow achieves is maximized – the bottleneck link is fully utilized. Max-min fairness can be applied on top of these weights.

19/20 Conclusion. Problem: sharing the network within a cloud computing datacenter. There is a tradeoff between payment proportionality and bandwidth guarantees. The paper proposes a mechanism to trade off between these conflicting requirements.


21/20 Thanks!

