1 Efficient, Proximity-Aware Load Balancing for DHT-Based P2P Systems
Yingwu Zhu, Yiming Hu
Appeared in IEEE Trans. on Parallel and Distributed Systems, vol. 16, no. 4, April 2005
Presented by Ki
2 Outline
Introduction
System Design
Proximity-Aware Load Balancing
Experimental Evaluations
Conclusions
3 Introduction
DHT-based P2P systems such as Chord, Pastry, Tapestry, and CAN provide a distributed hash table abstraction for object storage and retrieval.
They typically assume nodes are homogeneous.
Two main drawbacks: load imbalance and node heterogeneity.
4 Introduction
Existing solutions for DHT load balancing either ignore node heterogeneity or ignore proximity information.
This work proposes a proximity-aware load balancing scheme that addresses both aspects.
5 System Design - Basics
Virtual Server (VS)
Acts like an autonomous peer.
Responsible for a contiguous portion of the DHT's identifier space, and hence for a certain amount of load.
Physical Peer
Can host multiple virtual servers.
A heavily loaded physical peer moves some of its virtual servers to lightly loaded physical peers to achieve load balancing.
Moving a virtual server can be viewed as a leave operation followed by a join operation.
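The virtual-server abstraction lends itself to a simple data model. Below is a minimal, illustrative sketch (not the paper's code) of a physical peer hosting several virtual servers, where moving a VS is modeled as removing it from the source peer (leave) and adding it to the target peer (join); all class and method names are hypothetical.

```python
# Illustrative sketch: physical peers hosting virtual servers; a VS move is
# modeled as a leave at the source peer followed by a join at the target peer.

class VirtualServer:
    def __init__(self, region, load):
        self.region = region   # contiguous portion of the identifier space
        self.load = load       # load currently served by this VS

class PhysicalPeer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.virtual_servers = []

    def total_load(self):
        return sum(vs.load for vs in self.virtual_servers)

    def move_vs(self, vs, target_peer):
        # "Leave" at this peer, then "join" at the target peer.
        self.virtual_servers.remove(vs)
        target_peer.virtual_servers.append(vs)
```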
6 System Design – Four Phases
The load balancing scheme proceeds in four phases:
Load balancing information (LBI) aggregation: collect load and capacity information for the whole system.
Node classification: classify nodes as HEAVY, LIGHT, or NEUTRAL.
Virtual server assignment (VSA): determine a VS assignment that makes HEAVY nodes become LIGHT (proximity-aware).
Virtual server transferring (VST).
7 System Design – k-ary Tree on DHT
A k-ary tree (KT) is built on top of the DHT, occupying the same identifier space.
The KT root node is responsible for the entire identifier space.
Each child node is responsible for a portion of its parent's identifier space.
8 System Design – k-ary Tree on DHT
A KT node is responsible for an identifier space region; its key and host are obtained by the procedure plant_KT_node().
It keeps track of its parent and children.
A KT node X is planted into the virtual server that is responsible for X.key.
Example (KT node X, VS S): X.region = (3, 5], X.key = 4, S's region = (3, 6], so X is planted in S.
9 System Design – k-ary Tree on DHT
For a KT node X, its region is further divided into k parts taken by its k children, until X's region is completely covered by its hosting VS.
Each KT node periodically checks whether its region is completely covered by its VS:
If yes, delete its existing children.
If no, keep k children.
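The sketch below illustrates one possible reading of plant_KT_node() and the periodic coverage check. It assumes a KT node's key is a point (here, the midpoint) inside its region and that lookup_vs() stands in for the DHT lookup; these details are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative sketch of KT node planting and child management (assumed details).

def plant_kt_node(region, lookup_vs):
    lo, hi = region
    key = (lo + hi) / 2                  # hypothetical choice of a key inside the region
    host_vs = lookup_vs(key)             # DHT lookup: the VS responsible for this key
    return {"region": region, "key": key, "host": host_vs, "children": []}

def refresh_children(kt_node, k, lookup_vs):
    lo, hi = kt_node["region"]
    vs_lo, vs_hi = kt_node["host"].region
    covered = vs_lo <= lo and hi <= vs_hi    # region fully covered by the hosting VS?
    if covered:
        kt_node["children"] = []             # leaf: drop any existing children
    elif not kt_node["children"]:
        step = (hi - lo) / k                 # otherwise split the region into k parts
        kt_node["children"] = [
            plant_kt_node((lo + i * step, lo + (i + 1) * step), lookup_vs)
            for i in range(k)
        ]
```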
10 Load Balancing Information Aggregation
Load balancing information (LBI) of node i:
L_i: total load of the virtual servers in node i.
C_i: capacity of node i.
L_i,min: minimum load among the virtual servers in node i.
X.host responds to only one of its virtual servers, chosen at random (so its LBI is reported only once).
11 Load Balancing Information Aggregation
The KT root node obtains the system-wide LBI:
L: total load.
C: total capacity.
L_min: minimum load among all virtual servers in the system.
The KT root node then distributes the system-wide LBI along the tree, back to the leaf nodes, the virtual servers, and finally the DHT nodes.
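A small sketch of how per-node LBI could be combined bottom-up and the system-wide result pushed back down; the dictionary fields and recursive traversal are assumptions made for illustration.

```python
# Illustrative sketch of LBI aggregation over the k-ary tree (assumed structure).

def combine(reports):
    # Combine child reports: sum loads and capacities, keep the smallest VS load.
    return {
        "load": sum(r["load"] for r in reports),
        "capacity": sum(r["capacity"] for r in reports),
        "min_vs_load": min(r["min_vs_load"] for r in reports),
    }

def aggregate(kt_node):
    # Leaves report the LBI of their hosting DHT node; internal nodes combine children.
    if not kt_node["children"]:
        return kt_node["local_lbi"]
    return combine([aggregate(child) for child in kt_node["children"]])

def distribute(kt_node, system_lbi):
    # Push the system-wide (L, C, L_min) back down to every node in the tree.
    kt_node["system_lbi"] = system_lbi
    for child in kt_node["children"]:
        distribute(child, system_lbi)
```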
12 Node Classification
System-wide utilization = L / C; utilization of node i = L_i / C_i.
Define T_i = (L / C + ε) * C_i, where ε is a parameter trading off the amount of load movement against the quality of load balance.
Classification:
HEAVY node if L_i > T_i.
LIGHT node if (T_i - L_i) >= L_min.
NEUTRAL node if 0 <= (T_i - L_i) < L_min.
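The classification rule translates directly into code; the sketch below follows the definitions on this slide (eps is the ε parameter):

```python
# Node classification per the rule on this slide.
def classify(load_i, capacity_i, total_load, total_capacity, l_min, eps):
    target_i = (total_load / total_capacity + eps) * capacity_i   # T_i
    if load_i > target_i:
        return "HEAVY"
    if target_i - load_i >= l_min:
        return "LIGHT"
    return "NEUTRAL"    # 0 <= T_i - L_i < L_min

# Hypothetical example: classify(load_i=120, capacity_i=100, total_load=1000,
# total_capacity=1200, l_min=10, eps=0.05) -> "HEAVY" (T_i ~= 88.3 < 120).
```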
13 Virtual Server Assignment
Each HEAVY DHT node i randomly chooses a subset of its virtual servers that minimizes the amount of load moved, subject to the node no longer being HEAVY once they are removed.
Its VSA info.: …
Each LIGHT DHT node j's VSA info.: …
This VSA information propagates upward along the KT.
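Since the exact formulas are omitted on the slide, here is one plausible, hedged reading of the HEAVY node's choice: pick the subset of virtual servers whose removal brings the node's load down to at most T_i while moving as little load as possible. With only a few virtual servers per node, a brute-force search over subsets suffices for illustration; the function name and interface are hypothetical.

```python
from itertools import combinations

def choose_vs_to_shed(vs_loads, load_i, target_i):
    # At least (L_i - T_i) of load has to leave the node; among all subsets
    # that achieve this, return the one moving the least total load.
    excess = load_i - target_i
    best = None
    for r in range(1, len(vs_loads) + 1):
        for subset in combinations(range(len(vs_loads)), r):
            moved = sum(vs_loads[i] for i in subset)
            if moved >= excess and (best is None or moved < best[0]):
                best = (moved, subset)
    return best   # (total load moved, indices of the chosen VSs), or None
```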
14 Virtual Server Assignment – Proximity Ignorant
Each KT node i collects VSA information until a pairing_threshold is reached, then uses a best-fit heuristic to reassign virtual servers:
Reassign virtual servers from HEAVY nodes to LIGHT nodes while minimizing load movement.
The DHT nodes of reassigned virtual servers are notified, while the remaining VSA information propagates to i's parent.
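A minimal sketch of a best-fit pairing between shed virtual servers and LIGHT nodes, assuming the VSA entries carry (VS id, load) for HEAVY nodes and spare capacity for LIGHT nodes; the heaviest-first order and tie-breaking are assumptions, not the paper's exact heuristic.

```python
def best_fit_assign(shed_vs, light_spare):
    # shed_vs: list of (vs_id, load) from HEAVY nodes.
    # light_spare: dict mapping LIGHT node id -> spare capacity (T_j - L_j).
    assignment = {}
    for vs_id, load in sorted(shed_vs, key=lambda item: -item[1]):   # heaviest first
        fits = [(spare, node_id) for node_id, spare in light_spare.items()
                if spare >= load]
        if not fits:
            continue                  # unmatched entries propagate to the parent KT node
        spare, node_id = min(fits)    # tightest (best) fit
        assignment[vs_id] = node_id
        light_spare[node_id] -= load
    return assignment
```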
15 Virtual Server Assignment – Proximity Aware
Use landmark clustering: each node measures its distance to a number of landmark nodes and obtains a landmark vector, which represents a point in an m-dimensional space.
Nodes with close landmark vectors are, in general, physically close.
The m-dimensional landmark space is transformed into the DHT identifier space to obtain a DHT key LM_i, using a Hilbert space-filling curve (a proximity-preserving mapping N^m -> N).
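To make the Hilbert-curve step concrete, here is a sketch restricted to m = 2 landmarks (the scheme itself uses m landmarks and an m-dimensional curve): RTTs to the landmarks are quantized onto an n x n grid and mapped to a one-dimensional key with the standard 2-D Hilbert-curve index, which keeps nearby points close. The grid size, max_rtt normalization, and function names are assumptions.

```python
def hilbert_xy_to_d(n, x, y):
    # Standard 2-D Hilbert curve: map grid point (x, y), 0 <= x, y < n, to index d.
    # n must be a power of two.
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def landmark_key(rtts, max_rtt, n=256):
    # Quantize a 2-landmark RTT vector onto the grid, then map it to a 1-D key (LM_i).
    x, y = (min(n - 1, int(r / max_rtt * (n - 1))) for r in rtts)
    return hilbert_xy_to_d(n, x, y)
```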
16 Virtual Server Assignment – Proximity Aware
Each node i independently determines its landmark vector and the corresponding DHT key LM_i.
The node publishes its VSA information in the DHT network under the key LM_i.
The node j that is responsible for the region containing LM_i receives the VSA information and propagates it into the KT.
17 Virtual Server Transferring
Upon receiving the reassigned VSA information, a HEAVY node transfers the reassigned virtual servers to the corresponding LIGHT nodes.
Transferring virtual servers would cause the KT to be reconstructed.
Lazy migration: reconstruct the KT only after all transfers have completed.
18 Experimental Evaluations
Load balancing information (LBI) aggregation and virtual server assignment latency (figure).
19 Experimental Evaluations
Underlying network: generated by GT-ITM, with about 5,000 nodes.
Underlying DHT overlay: Chord with 4,096 nodes; 5 virtual servers per node; exponential identifier space.
k-ary tree: k = 2.
Pairing threshold: 50.
Landmark node count: 15.
20 Experimental Evaluations
By reassigning virtual servers, nodes come to carry loads proportional to their capacities.
21 Experimental Evaluations
Cumulative distribution of moved load:
Proximity-aware: 36% of the moved load travels within 2 hops, 57% within 10 hops.
Proximity-ignorant: 17% within 10 hops.
Proximity-aware assignment reduces load balancing cost and yields fast, efficient load balancing.
22 Experimental Evaluations
Effect of node churn (joins and leaves).
Overhead = (M_d - M_s) / M_s, where M_d is the number of VSA messages with node churn and M_s is the number of VSA messages without node churn.
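As a worked example with hypothetical counts: if M_s = 1,000 VSA messages without churn and M_d = 1,200 with churn, the overhead is (1200 - 1000) / 1000 = 0.2, i.e. 20% extra VSA messages caused by churn.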
23 Conclusions
This work presents an efficient, proximity-aware load balancing scheme that:
Aligns load distribution with node capacity.
Uses proximity information to guide load reassignment and transferring.