
Distributed Load Balancing for Key-Value Storage Systems Imranul Hoque Michael Spreitzer Malgorzata Steinder.




1 Distributed Load Balancing for Key-Value Storage Systems
Imranul Hoque, Michael Spreitzer, Malgorzata Steinder


3 Key-Value Storage Systems
Usage: session state, tags, comments, etc.
Requirements:
– Scalability
– Fast response time
– High availability & fault tolerance
– Relaxed consistency guarantees
Examples: Cassandra, Dynamo, PNUTS, etc.

4 Load Balancing in K-V Storage
Hash partitioned vs. range partitioned:
– Range-partitioned data enables efficient range scans/searches
– Hash-partitioned data gives an even distribution
(Figure: a table of day-of-week keys split into tablets across Servers 1-4; hashing scatters the keys, while range partitioning keeps MON-SUN in contiguous runs.)
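The trade-off above can be sketched in a few lines. This is an illustration, not the paper's code; the server names and range boundaries are made up. Hash partitioning spreads keys evenly but destroys key locality; range partitioning keeps contiguous key ranges on one server, so a range scan touches few servers.

```python
import bisect
import hashlib

SERVERS = ["server1", "server2", "server3", "server4"]

def hash_owner(key: str) -> str:
    """Hash partitioning: keys spread evenly, but a range scan hits every server."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return SERVERS[h % len(SERVERS)]

# Range partitioning: sorted lower bounds; server i owns keys in [LOWS[i], LOWS[i+1]).
LOWS = ["a", "h", "o", "u"]

def range_owner(key: str) -> str:
    """Range partitioning: contiguous ranges, so a scan touches few servers."""
    return SERVERS[max(bisect.bisect_right(LOWS, key) - 1, 0)]
```

With this layout, a scan of keys "m".."n" touches only `server2` under range partitioning, but potentially all four servers under hashing.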

5 Issues with Load Balancing
– Uneven space distribution due to range partitioning. Solution: split the tablets and move them around.
– A small number of very popular records.
(Figure: Servers 1-4 holding unevenly sized day-of-week tablets.)

6 Contribution
Algorithms for solving the load balancing problem:
– Load = space and bandwidth
– Evenly distribute the spare capacity
– Distributed algorithm, not a centralized one
– Reduce the number of moves
Previous solutions were one-dimensional (key-space redistribution, bulk loading).

7 Outline
– Motivation
– System modeling and assumptions
– Algorithms: one-to-one, one-to-n, move suppression
– Design decisions
– Experimental results
– Emulation of the proposed distributed algorithms
– Future work

8 System Modeling and Assumptions
A table is split into tablets spread over servers (A, B, C). Tablet i has a load vector (B_i, S_i) of bandwidth and space; each server's aggregate load, e.g. (B_A, S_A) for Server A, is the sum over its tablets.
Assumptions:
1. Each tablet is <= 0.01 of the total load in both dimensions
2. # of tablets >> # of nodes
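A minimal sketch of this model (the class and function names are mine, not the paper's): each tablet carries a bandwidth/space vector (B_i, S_i), and a server's load is the vector sum of its tablets.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tablet:
    band: float   # bandwidth load of this tablet (B_i)
    space: float  # space load of this tablet (S_i)

def server_load(tablets):
    """A server's load vector (B, S) is the sum of its tablets' vectors."""
    return (sum(t.band for t in tablets), sum(t.space for t in tablets))
```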

9 System State
Each server is a point in the (B, S) plane. The target point is the per-server share of the total load; a target zone around it helps achieve convergence.
Goal: move tablets around so that every server is within the target zone.
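The convergence condition is simple to state; a sketch (the per-coordinate formulation is my assumption, and the zone width is the tunable parameter discussed on slide 21):

```python
def in_target_zone(load, target, width):
    """A server has converged once each coordinate of its (B, S) load
    vector is within `width` of the corresponding target coordinate."""
    return all(abs(l - t) <= width for l, t in zip(load, target))
```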

10 Load Balancing Algorithms
Phase 1:
– Global averaging scheme
– The variance of the approximation of the average decreases exponentially fast
Phase 2:
– One-to-one gossip
– One-to-n gossip
– Move suppression
(Figure: phases 1 and 2 alternate over time t.)
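Phase 1 can be sketched as pairwise averaging gossip; this generic scheme is an assumption on my part, and the paper's exact protocol may differ. Each node repeatedly averages its estimate with a random peer's: the total is conserved and the spread never grows, so estimates converge to the global average (which serves as the target point).

```python
import random

def gossip_average(values, rounds, seed=0):
    """Pairwise averaging gossip: per round, every node averages its
    estimate with a random peer's.  The sum is conserved and the spread
    is non-increasing, so all estimates approach the global mean."""
    rng = random.Random(seed)
    est = list(values)
    n = len(est)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)
            est[i] = est[j] = (est[i] + est[j]) / 2
    return est
```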

11 One-to-One Gossip
Point selection strategy:
– Midpoint strategy
– Greedy strategy
Tablet transfer strategy:
– Move to the selected point with minimum cost (space transferred)

12 Tablet Transfer Strategy
(Figure: in the (B, S) plane, Server 1 sheds tablets toward its target point while gossiping with Server 2.)

13 Tablet Transfer Strategy (2)
Start with an empty bag. Goal: take vectors from the server so that they add up to the target vector.
– If slope(bag + left + right) < slope(target): add right to the bag and move right
– Otherwise: add left to the bag and move left
(Figure: Server 1's tablet vectors with Left and Right pointers at the two ends.)
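The two-pointer rule above can be sketched as follows, under two assumptions of mine: tablets are (B, S) vectors sorted by slope B/S, and the loop stops once the bag's space reaches the target's (the slide does not state the stopping rule).

```python
def slope(v):
    b, s = v
    return b / s if s else float("inf")

def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def pick_tablets(tablets, target):
    """Grow a 'bag' of tablet vectors from both ends of the slope-sorted
    list: if adding both ends would leave the bag's slope below the
    target's, take the steep (right) end to raise it; otherwise take the
    shallow (left) end to lower it."""
    ts = sorted(tablets, key=slope)
    lo, hi = 0, len(ts) - 1
    bag, chosen = (0.0, 0.0), []
    while lo <= hi and bag[1] < target[1]:
        probe = vadd(bag, ts[lo]) if lo == hi else vadd(vadd(bag, ts[lo]), ts[hi])
        if slope(probe) < slope(target):
            take, hi = ts[hi], hi - 1   # steep end raises the bag's slope
        else:
            take, lo = ts[lo], lo + 1   # shallow end lowers it
        chosen.append(take)
        bag = vadd(bag, take)
    return bag, chosen
```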

14 Initial Configurations
– Uniform
– Two extreme
– Mid quadrant

15 Point Selection Strategy: Midpoint
+ Guaranteed convergence
+ No need to run phase 1
– Lots of extra movement
Visualization demos: uniform, two extreme, mid quadrant
(Figure: Servers 1 and 2 meeting at the midpoint in the (B, S) plane.)

16 Point Selection Strategy (2): Greedy
– Take the point closer to the target
– Move it to the target if that improves its position and does not worsen the other point by more than δ
– Reduces movement
– Takes a long time to converge in some cases (visualization demo)
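A sketch of the greedy rule. One assumption is mine: load is conserved between the gossiping pair, so whatever the closer server sheds to reach the target is absorbed by its partner.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_exchange(p1, p2, target, delta):
    """Greedy point selection: move the closer of the two load points
    onto the target, provided the induced shift of the other point
    (which absorbs the difference) worsens its distance by at most delta."""
    near, far = (p1, p2) if dist(p1, target) <= dist(p2, target) else (p2, p1)
    shift = (near[0] - target[0], near[1] - target[1])
    new_far = (far[0] + shift[0], far[1] + shift[1])
    if dist(new_far, target) - dist(far, target) <= delta:
        return target, new_far
    return near, far  # exchange rejected: no move
```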

17 DHT-based Location Directory

18 DHT + Midpoint
Greedy + fallback to DHT:
– A convergence problem exists for some configurations (visualization demo)
Solution:
– Greedy + fallback to DHT with Midpoint (demos: uniform, two extreme, mid quadrant)
Alternate approach:
– Greedy + fallback to Midpoint
– Trade-off: movement cost vs. DHT overhead

19 Experimental Evaluation
Uniform configuration:
– Greedy + DHT (Midpoint)
– Midpoint
– Greedy + Midpoint (no DHT)
Effect of varying the target zone
Effect of the failed-gossip count
Metrics:
– Amount of space moved
– # of gossip rounds
– Multiple tablet moves

20 Uniform Configuration: Results

21 Effect of Varying Target Zone
A larger target zone gives faster convergence but less accuracy. The target zone width should depend on the target point value.

22 Effect of Failed Gossip Count (Greedy)
A larger failed-gossip count means more time spent in greedy mode and more unproductive gossip at the end.

23 One-to-N Gossip
Contact a few random nodes (locked/unlocked mode) and pick the most profitable one, i.e., the one that minimizes the distance from the target.
Advantage: better choices.
Initial results:
– Locked mode may lead to deadlock
– Unlocked mode: in most cases other nodes start a transfer first
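The "most profitable" choice can be sketched as below. Using the midpoint exchange as the profit model is my assumption; the paper may score candidates differently.

```python
import math

def pick_most_profitable(my_load, candidates, target):
    """One-to-N gossip sketch: probe several random peers and gossip with
    the one whose midpoint with this server lands closest to the target."""
    def d(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    def midpoint(p, q):
        return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return min(candidates, key=lambda c: d(midpoint(my_load, c), target))
```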

24 Move Suppression
Two global stages:
– Stage 1: one-to-one gossip, but moves are hypothetical
– Stage 2: change to the chosen placement
Advantage: a tablet is not moved multiple times.
Challenge: when to switch from Stage 1 to Stage 2.
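A bookkeeping sketch of the two stages (the class and method names are mine): Stage 1 only records hypothetical placements; Stage 2 performs at most one physical move per tablet, straight to its final server.

```python
class MoveSuppressor:
    """Move suppression: gossip updates a planned placement; commit()
    then moves each tablet once, directly to its final owner."""

    def __init__(self, placement):
        self.actual = dict(placement)   # tablet -> current server
        self.planned = dict(placement)  # tablet -> hypothetical server

    def hypothetical_move(self, tablet, server):
        """Stage 1: bookkeeping only, no data transfer."""
        self.planned[tablet] = server

    def commit(self):
        """Stage 2: one physical move per tablet whose owner changed."""
        moves = [(t, s) for t, s in self.planned.items()
                 if self.actual[t] != s]
        self.actual = dict(self.planned)
        return moves
```

Even if a tablet is reassigned several times during Stage 1, `commit()` emits a single move for it.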

25 Future Work
– Handling initial placement
– Frequency of running the placement algorithm
– Considering the network hierarchy
– Handling failures
– Extending to heterogeneous resources
Questions?

