1
Scalable Load-Distance Balancing
Edward Bortnikov, Israel Cidon, Idit Keidar
EE Department, The Technion, Haifa, Israel
2
Service Assignment
Assign (many) users to (a few) servers
Applications: content/game servers; Internet gateways in a wireless mesh network (WMN)
The growing demand for real-time access is served by multiple service points (servers), across many technologies
This gives rise to the service assignment problem: associating each user with a server so that QoS is improved
3
Load-Distance Balancing (LDB)
Two sources of service delay:
Network delay – due to user-server distance (e.g., depends on the number of network hops)
Congestion delay – due to server load (a general monotonic non-decreasing function)
Total delay = network delay + congestion delay
The Load-Distance Balancing (LDB) problem: minimize the maximum total delay (cost)
NP-complete (a 2-approximation exists)
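A possible formalization of this objective (notation not taken from the slides: U is the user set, A an assignment of users to servers, D(u, s) the network delay between user u and server s, λ_s the load on server s under A, and δ_s its non-decreasing congestion-delay function):

```latex
\[
\operatorname{cost}(\mathcal{A}) \;=\; \max_{u \in U}
  \Big( D\big(u,\mathcal{A}(u)\big) + \delta_{\mathcal{A}(u)}\big(\lambda_{\mathcal{A}(u)}\big) \Big),
\qquad
\text{LDB: find } \mathcal{A}^{*} \in \arg\min_{\mathcal{A}} \operatorname{cost}(\mathcal{A}).
\]
```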
4
LDB versus LB
(figure contrasting assignments: one with network distance OK but congestion high, one with network distance high but congestion OK, and one with both network distance and congestion OK)
5
Distributed LDB
Distributed assignment computation:
Initially, users report their locations to the closest servers; at startup every user is assigned to its nearest server
Servers communicate and may change the assignment
Synchronous, failure-free communication model
Requirements – eventually, the following conditions must hold:
Quiescence: inter-server communication stops
Stability: the assignment stops changing
A constant α-approximation of the optimal cost is computed; α ≥ 2 is a parameter (trading communication/time for cost)
6
What About Locality?
Extreme global solution: collect all data and compute the assignment centrally
Guarantees a 2-approximation of the optimal cost
Excessive communication / network latency
Extreme local solution: nearest-server assignment
No communication
No approximation guarantee (cannot handle crowds)
In addition to cost, in the distributed case we are interested in locality
7
Workload-Sensitive Locality
The cost function is distance-sensitive:
Most assignments can go to the near servers …
… except for dissipating congestion peaks
Distributed solution structure:
Start from the nearest-server assignment
Offload congestion to near servers
Workload-sensitive locality: go as far as needed …
… to achieve the desired approximation α
8
Example: Light Uniform Workload
9
Example: Peaky Workload
(figure: LDB-approximation assignment for a peaky workload)
10
Iterative Clustering
Partition servers into clusters; assign each user within its cluster
Choose one leader per cluster: the leader collects local data and computes the within-cluster assignment
Clusters may merge: a cluster tries to merge as long as its cost is ε-improvable, i.e., it could be decreased by a factor of ≥ 1+ε if all servers were harnessed (ε is a slack factor)
No ε-improvable cluster ⇒ the desired approximation is achieved (α = 2(1+ε))
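A minimal sketch of the ε-improvability test, under assumed interfaces that are not from the paper: distance(u, s) and congestion(load) encode the two delay components, and solve_ldb(users, servers, ...) is a hypothetical helper returning the (approximately) best achievable cost when the given server set is harnessed, together with the corresponding assignment:

```python
from collections import Counter

def cluster_cost(users, assignment, distance, congestion):
    """Maximum total delay (network + congestion) over the cluster's users."""
    load = Counter(assignment[u] for u in users)
    return max(distance(u, assignment[u]) + congestion(load[assignment[u]])
               for u in users)

def eps_improvable(users, assignment, harnessed_servers,
                   distance, congestion, eps, solve_ldb):
    """True if harnessing the given server set could cut the cluster's
    current cost by a factor of at least (1 + eps)."""
    current = cluster_cost(users, assignment, distance, congestion)
    best_cost, _ = solve_ldb(users, harnessed_servers, distance, congestion)
    return current >= (1 + eps) * best_cost
```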
11
Tree (Structured) Clustering
Maintain a hierarchy of clusters; employ clusters of size 2^i
While some cluster is ε-improvable: double it (merge it with its hierarchy neighbor)
Simple, O(log N) convergence
But: requires hierarchy maintenance, may not adapt well, and misses cross-boundary optimization
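A sequential sketch of the doubling loop (the actual algorithm runs distributedly, with cluster leaders exchanging messages; improvable is a hypothetical callback such as the ε-test sketched above, and the number of servers is assumed to be a power of two):

```python
def tree_clustering(num_servers, improvable):
    """Sequential stand-in for the distributed Tree protocol.
    Clusters are aligned ranges of size 2**i over the server order;
    improvable(start, size) decides whether the cluster covering servers
    [start, start + size) is eps-improvable. Returns (start, size) clusters."""
    clusters = {start: 1 for start in range(num_servers)}   # start -> size
    merged = True
    while merged:
        merged = False
        for start, size in sorted(clusters.items()):
            if size < num_servers and improvable(start, size):
                new_size = 2 * size
                new_start = start - start % new_size         # parent range
                # Absorb every cluster that lies inside the parent range.
                for s in [s for s in clusters
                          if new_start <= s < new_start + new_size]:
                    del clusters[s]
                clusters[new_start] = new_size
                merged = True
                break   # rescan after the hierarchy changed
    return sorted(clusters.items())
```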
12
Ripple (Unstructured) Clustering
Define a linear order among the servers
While a cluster is improvable: merge it with its smaller-cost left/right neighbor
Adaptive merging
Conflicts are possible between neighboring clusters (e.g., A, B, C); randomized tie-breaking resolves them
Many race conditions (we love it)
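A synchronous, single-round sketch of the propose/accept pattern with randomized tie-breaking (cost, improvable, and merge are hypothetical callbacks; this only illustrates the pattern, not the full protocol's message exchange and conflict handling):

```python
import random

def ripple_round(clusters, cost, improvable, merge):
    """One synchronous round of Ripple-style merging. clusters is ordered by
    the servers' linear order; cost(c) and improvable(c) query a cluster,
    merge(a, b) returns the merged cluster. Returns the next round's list."""
    # 1. Every improvable cluster proposes to its smaller-cost neighbor.
    proposals = {}                                    # proposer -> target
    for i, c in enumerate(clusters):
        if not improvable(c):
            continue
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(clusters)]
        if neighbors:
            proposals[i] = min(neighbors, key=lambda j: cost(clusters[j]))
    # 2. A target hit by several proposals accepts one, chosen by coin toss.
    accepted = {}                                     # target -> proposer
    for target in set(proposals.values()):
        candidates = [i for i, t in proposals.items() if t == target]
        accepted[target] = random.choice(candidates)
    # 3. Perform the accepted merges; untouched clusters survive as they are.
    consumed, merged = set(), []
    for target, proposer in accepted.items():
        if target in consumed or proposer in consumed:
            continue
        merged.append((min(target, proposer),
                       merge(clusters[target], clusters[proposer])))
        consumed.update((target, proposer))
    survivors = [(i, c) for i, c in enumerate(clusters) if i not in consumed]
    return [c for _, c in sorted(survivors + merged, key=lambda p: p[0])]
```

The coin toss in step 2 stands in for the randomized tie-breaking above: when two neighbors both propose to the same low-cost cluster (as in the A, B, C conflict on the next slides), exactly one proposal wins in a round.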
13
Ripple Example: Merging Without Conflicts
(figure: a high-cost, improvable cluster proposes a merge; a low-cost neighbor accepts the proposal)
14
Ripple Example: Conflict Resolution
(figure: neighboring clusters A, B, C; two merge proposals conflict and one is accepted)
15
Ripple's Properties
Near-optimal cost: an α-approximation of the optimal cost
Convergence: communication quiescence + stability of assignment; N rounds in the worst case (despite coin-tossing), much faster in practice
Locality: for isolated load peaks, the final clusters are at most twice as large as the minimum required to obtain the cost
16
Sensitivity to Required Approximation: Cost
Setting: urban WMN, 64 servers (grid) acting as Internet gateways, 12800 users, mixed distribution (50% uniform / % peaky); a 2-approximation algorithm is applied within each cluster
(chart: cost vs. α = 2(1+ε) for Nearest Server, Tree, Ripple, and the theoretical worst-case bound)
Tree and Ripple outperform Nearest Server, which can go beyond the upper bound; Ripple is better
17
Sensitivity to Required Approximation: Locality
Setting: urban WMN, 64 servers acting as Internet gateways, 12800 users, mixed distribution (50% uniform / % peaky)
(chart: Ripple's maximum and average cluster size vs. α)
Most clusters built by Ripple are smaller than the maximum
18
Scalability with Network Size: Cost
Setting: urban WMN, 64 to 1024 servers, 12800 to … users
(chart: cost vs. network size for Nearest Server, Tree, and Ripple)
19
Scalability with Network Size: Locality
Setting: urban WMN, 64 to 1024 servers, 12800 to … users
(chart: locality vs. network size for Tree and Ripple)
20
Summary
Load-Distance Balancing: a novel optimization problem
Distributed, workload-sensitive solutions: the Tree and Ripple algorithms
Ripple is more complex but performs better:
No infrastructure required
Scales better in practice
Achieves lower costs in practice
21
Thank you