Presentation transcript: Load Balancing

1 Load Balancing

How?
–Partition the computation into units of work (tasks or jobs)
–Assign tasks to different processors

Load Balancing Categories
–Static (load assigned before the application runs)
–Dynamic (load assigned as the application runs)
   o Centralized (tasks assigned by the master or root process)
   o Decentralized (tasks reassigned among slaves)
–Semi-dynamic (application periodically suspended and load balanced)

Load balancing algorithms are:
–Adaptive if they adapt to system load levels using thresholds
–Stable if load-balancing traffic is independent of load levels
–Symmetric if both senders and receivers initiate action
–Effective if load-balancing overhead is minimal

Definition: A load is balanced if no processes are idle.
Note: Load balancing is an NP-complete problem.

2 Improving the Load Balance

By realigning processing work among processors, we improve speedup.

3 Static Load Balancing

Done prior to executing the parallel application.

Round Robin
–Tasks are assigned sequentially to processors
–If tasks > processors, the allocation wraps around

Randomized: tasks are assigned randomly to processors

Partitioning: tasks are represented by a graph
–Recursive Bisection
–Simulated Annealing
–Genetic Algorithms
–Multi-level Contraction and Refinement

Advantages
–Simple to implement
–Minimal run-time overhead

Disadvantages
–Execution times are often impossible to predict
–The effect of communication dynamics is often ignored
–The number of iterations processors need to converge on a solution is often indeterminate

Note: The Random algorithm is a popular benchmark for comparison.
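The round-robin rule above can be sketched in a few lines of Java (a minimal illustration, not from the slides; the class name `RoundRobin` is ours):

```java
import java.util.Arrays;

public class RoundRobin {
    // Returns assignment[t] = the processor that task t is mapped to.
    // When there are more tasks than processors, the mapping wraps around.
    static int[] assign(int numTasks, int numProcs) {
        int[] assignment = new int[numTasks];
        for (int t = 0; t < numTasks; t++) {
            assignment[t] = t % numProcs;   // sequential, wrapping allocation
        }
        return assignment;
    }

    public static void main(String[] args) {
        // 7 tasks over 3 processors: 0 1 2 0 1 2 0
        System.out.println(Arrays.toString(assign(7, 3)));
    }
}
```

Because the mapping ignores per-task cost, it balances well only when tasks have similar execution times.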

4 A Load Balancing Partitioning Graph

–Nodes represent tasks
–Edges represent communication cost
–Node values represent processing cost
–A second node value could represent reassignment cost

5 Simulated Annealing

Overview
–A heuristic used for many NP-complete problems; more efficient than exhaustively searching the graph
–Metallurgy analogy: heating and cooling lets some atoms randomly move through higher-energy states before settling into a more stable state

Algorithm
WHILE balance is not good enough
   FOR each node in the graph
      Compute its energy (computation/communication requirements)
      IF a probabilistic calculation indicates the node should move
         Pick a set of neighbors assigned to other processors
         FOR each neighbor in the set
            Compute the impact of reassigning the node
         Move the node to the best neighbor partition
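A compact sketch of the annealing idea applied to two-way graph partitioning (our own Java, under assumptions not in the slides: a geometric cooling schedule, a single random node per temperature step, and the standard e^(-delta/T) acceptance rule):

```java
import java.util.Random;

public class AnnealPartition {
    // Cut cost: total weight of edges whose endpoints sit in different partitions.
    // edges[i][j] = communication cost between tasks i and j (0 if no edge).
    static int cutCost(int[][] edges, int[] part) {
        int cost = 0;
        for (int i = 0; i < edges.length; i++)
            for (int j = i + 1; j < edges.length; j++)
                if (part[i] != part[j]) cost += edges[i][j];
        return cost;
    }

    // Anneal: tentatively move a random node; keep uphill (worse) moves
    // only with probability e^(-delta/T), which shrinks as T cools.
    static int[] anneal(int[][] edges, long seed) {
        Random rnd = new Random(seed);
        int n = edges.length;
        int[] part = new int[n];
        for (int i = 0; i < n; i++) part[i] = rnd.nextInt(2);
        for (double t = 10.0; t > 0.01; t *= 0.95) {
            int v = rnd.nextInt(n);
            int before = cutCost(edges, part);
            part[v] ^= 1;                                  // tentative move
            int delta = cutCost(edges, part) - before;
            if (delta > 0 && rnd.nextDouble() >= Math.exp(-delta / t))
                part[v] ^= 1;                              // reject: move back
        }
        return part;
    }
}
```

A production partitioner would also weigh per-node processing cost; this sketch only minimizes the communication cut.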

6 Simplifying the Problem

Recursive Bisection
Overview: dividing the graph in two is an easier problem than partitioning the whole graph at once
Algorithm
–IF the graph is too large
   o Split the graph in two, minimizing computation and communication requirements
   o Color each side
   o Recursively continue splitting

Multi-level Approaches
Overview: similar to Recursive Bisection, but refines the approach; a contraction phase shrinks the graph, and a refinement phase restores it to its original size
Algorithm
–WHILE the graph is too large, find a bisection point and contract the graph
–WHILE the graph is smaller than the original, refine (expand) it and reassign nodes if improvements are possible
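The recursive-splitting skeleton can be sketched as follows (our own Java, not from the slides; it balances only processing weight with a greedy split, whereas real bisection also minimizes the communication cut, and it assumes the processor count is a power of two):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RecursiveBisection {
    // Recursively split a list of task weights into halves of roughly equal
    // total weight until each part maps to one processor. `out` collects
    // one task list per processor.
    static void bisect(List<Integer> tasks, int procs, List<List<Integer>> out) {
        if (procs == 1) { out.add(tasks); return; }
        List<Integer> sorted = new ArrayList<>(tasks);
        sorted.sort(Collections.reverseOrder());
        List<Integer> left = new ArrayList<>(), right = new ArrayList<>();
        int lw = 0, rw = 0;
        for (int t : sorted) {                  // greedy: add to the lighter side
            if (lw <= rw) { left.add(t);  lw += t; }
            else          { right.add(t); rw += t; }
        }
        bisect(left,  procs / 2, out);          // recursively continue splitting
        bisect(right, procs / 2, out);
    }
}
```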

7 Dynamic Load Balancing

Done as the parallel application executes.

Centralized
–A single process hands out tasks
–Processes ask for more work when their processing completes
–Double buffering (asking for more work while still working) can be effective

Decentralized
–Processes detect that their work load is low
–Processes sense an overload condition
   o When new tasks are spawned during execution
   o When a sudden increase in task load occurs
–Questions
   o Which neighbors should participate in the rebalancing?
   o How should the adaptive thresholds be set?
   o What communication is needed to balance?
   o How often should balancing occur?

8 Centralized Load Balancing (Work Pool, Processor Farm, or Replicated Worker Algorithm)

Master processor: maintains the work pool (queue, heap, etc.)
   WHILE ((task = Remove()) != null)
      Receive(p_i, request_msg)
      Send(p_i, task)
   WHILE (more processes)
      Receive(p_i, request_msg)
      Send(p_i, termination_msg)

Slave processor: performs a task, then asks for another
   task = Receive(p_master, message)
   WHILE (task != terminate)
      Process task
      Send(p_master, request_msg)
      task = Receive(p_master, message)

In this case, the slaves do not spawn new tasks.
How would the pseudo code change if they did?
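The work-pool pattern above can be sketched with shared-memory threads standing in for message passing (our own Java; the `BlockingQueue`, the poison-pill terminator, and the class name `WorkPool` are assumptions, since the slide's model is message-based):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class WorkPool {
    static final Integer TERMINATE = -1;   // poison pill, one per slave

    // Master fills the pool; each slave loops taking the next task until
    // it sees the terminator, mirroring the request/send loop on the slide.
    static int run(int numTasks, int numSlaves) {
        BlockingQueue<Integer> pool = new LinkedBlockingQueue<>();
        for (int t = 0; t < numTasks; t++) pool.add(t);
        for (int s = 0; s < numSlaves; s++) pool.add(TERMINATE);

        AtomicInteger processed = new AtomicInteger();
        Thread[] slaves = new Thread[numSlaves];
        for (int s = 0; s < numSlaves; s++) {
            slaves[s] = new Thread(() -> {
                try {
                    for (Integer task = pool.take(); !task.equals(TERMINATE);
                         task = pool.take())
                        processed.incrementAndGet();       // "process" the task
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            slaves[s].start();
        }
        try {
            for (Thread t : slaves) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }
}
```

Because the pill count equals the slave count and pills sit behind all real tasks in the FIFO queue, every task is processed before any slave terminates.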

9 Centralized Termination

Necessary requirements
–The task queue is empty
–Every process is waiting for a new task

Master processor
WHILE (true)
   Receive(p_i, msg)
   IF msg contains a new task
      Add the new task to the task queue
   ELSE
      Add p_i to the wait queue and waitCount++
   IF waitCount > 0 and the task queue is not empty
      Remove p_i and a task from the wait and task queues respectively
      Send(task, p_i) and waitCount--
   IF waitCount == P THEN send termination messages and exit

How do we terminate when slave processes spawn new tasks?

10 Decentralized Load Balancing There is no Master Processor Each Processor maintains a work queue Processors interact with neighbors to request and distribute tasks (Worker processes interact among themselves)

11 Decentralized Mechanisms

Balancing is among a subset of the total running processes.

Receiver Initiated
–A process requests tasks when it is about to go idle
–Effective when the load is heavy
–Unstable when the load is light (a request-frequency threshold is necessary)

Sender Initiated
–A process with a heavy load distributes the excess
–Effective when the load is light
–Can cause thrashing when loads are heavy (synchronizing system load with neighbors is necessary)

12 Process Selection

Global or local?
–Global involves all of the processors in the network
   o May require expensive global synchronization
   o May be difficult if the load dynamic changes rapidly
–Local involves only neighbor processes
   o The overall load may not be balanced
   o Easier to manage and less overhead than the global approach

Neighbor selection algorithms
–Random: randomly choose another process
   o Easy to implement, and studies show reasonable results
–Round Robin: select among neighbors using modular arithmetic
   o Easy to implement; results are similar to random selection
–Adaptive Contracting: issue bids to neighbors; the best bid wins
   o A handshake between neighbors is needed
   o It is possible to synchronize loads
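The random and round-robin selection rules can each be sketched in one small method (our own Java, not from the slides; `self` is the requesting process, and `last` is the previously chosen neighbor):

```java
import java.util.Random;

public class NeighborSelect {
    // Random selection: pick any process rank other than self.
    static int random(int self, int numProcs, Random rnd) {
        int n = rnd.nextInt(numProcs - 1);
        return n >= self ? n + 1 : n;           // shift to skip self
    }

    // Round-robin selection via modular arithmetic: advance past the
    // previously chosen neighbor, wrapping around and skipping self.
    static int roundRobin(int self, int last, int numProcs) {
        int n = (last + 1) % numProcs;
        if (n == self) n = (n + 1) % numProcs;
        return n;
    }
}
```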

13 Choosing Thresholds

How do we estimate system load?
–Synchronize to average task-queue lengths across processes
–Use the average number of tasks or the projected execution time

When is the load low?
–When a process is about to go idle
–Goal: prevent idleness, not achieve perfect balance
–A low threshold constant is sufficient

When is the load high?
–When some processes have many tasks and others are idle
–Goal: prevent thrashing
–Synchronization among processors is necessary
–An exponentially growing threshold works well

What is the job request frequency?
–Goal: minimize load-balancing overhead

14 Gradient Algorithm

Maintains a global "pressure" grid.

Node data structures
–For each neighbor: the distance, in hops, to the nearest lightly loaded process
–A load-status flag indicating whether the current processor is lightly loaded or normal

Routing
–Spawned jobs go to the nearest lightly loaded process

Local synchronization
–Node status changes are multicast to the node's neighbors

15 Symmetric Broadcast Networks (SBN)

Characteristics
–A unique SBN starts at each node
–Each SBN is lg P deep
–Simple algebraic operations compute successors
–Easily adapts to the hypercube

Algorithm (global synchronization)
–Starts with a lightly loaded process
–Phase 1: SBN broadcast
–Phase 2: gather task-queue lengths
–Load is balanced during the broadcast and gather phases

Successor formulas (processor p at stage s):
–successor1 = (p + 2^(s-1)) % P, for 1 ≤ s ≤ lg P
–successor2 = p - 2^(s-1), for 1 ≤ s < lg P (if successor2 < 0, then successor2 += P)
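The successor formulas translate directly into code (our own Java wrapper, not from the slides; the class name `SbnSuccessors` is an assumption):

```java
public class SbnSuccessors {
    // successor1 = (p + 2^(s-1)) % P, for stages 1 <= s <= lg P
    static int successor1(int p, int s, int P) {
        return (p + (1 << (s - 1))) % P;
    }

    // successor2 = p - 2^(s-1), wrapped into [0, P), for stages 1 <= s < lg P
    static int successor2(int p, int s, int P) {
        int succ = p - (1 << (s - 1));
        return succ < 0 ? succ + P : succ;
    }
}
```

For example, with P = 8 the stage-1 successors of processor 0 are 1 and 7, matching a broadcast tree rooted at a single lightly loaded node.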

16 Hypercube-Based SBN

–Four-dimension hypercube spanning tree
–Stage = the number of bits set in the rank
–Successors: exclusive-or the rank with each zero bit to the left of its most significant set bit, or, if there are none, with the first unset bit from the left
–A single hop is required between adjacent nodes
–Load balancing requires 2 lg P communications

17 Find Successor Ranks

final int dimension = 4, P = 1 << dimension;        // 16 processors
for (int rank = 0; rank < P - 1; rank++) {
    int stage = 0, n = rank;
    while (n != 0) { n = n & (n - 1); stage++; }    // count the bits set
    System.out.printf("Successors rank %d stage %d = ", rank, stage);
    if (rank >= P / 2) {                            // no unset bits to the left
        int mask = (1 << (dimension - stage)) - 1;
        int successor = ((rank & ~mask) >> 1) | rank;
        System.out.println(successor);
    } else {                                        // set each unset bit to the left
        for (int bit = P / 2; bit > rank; bit /= 2) {
            int successor = rank | bit;
            System.out.print(successor + " ");
        }
        System.out.println();
    }
}

18 SBN Pattern, Rank 0

Successors rank 0, stage 0: 8 4 2 1
Successors rank 1, stage 1: 9 5 3
Successors rank 2, stage 1: 10 6
Successors rank 3, stage 2: 11 7
Successors rank 4, stage 1: 12
Successors rank 5, stage 2: 13
Successors rank 6, stage 2: 14
Successors rank 7, stage 3: 15
Successors rank 8, stage 1: 12
Successors rank 9, stage 2: 13
Successors rank 10, stage 2: 14
Successors rank 11, stage 3: 15
Successors rank 12, stage 2: 14
Successors rank 13, stage 3: 15
Successors rank 14, stage 3: 15

For other source ranks, simply exclusive-or the desired source rank with the above pattern.
Example: consider source rank 5
–The stage-2 processors are 3^5 (6), 5^5 (0), 6^5 (3), 9^5 (12), 10^5 (15), and 12^5 (9)
–The successors of 3^5 (6) are 11^5 (14) and 7^5 (2)

19 Line Balancing Algorithm

Uses a pipeline approach; master or slave processors adjust the pipeline.

Slave processors
–Request and receive tasks if their queue is not full
–Pass tasks on if a task request is posted

For each processor p_i:
–Request a task if the queue is not full
–Receive the requested task
–Dequeue and process a task
–Deliver a task to p_i+1 when p_i+1 requests one

Non-blocking receives are necessary to implement this algorithm.
Note: This algorithm easily extends to a tree topology.

20 Semi-dynamic

Pseudo code
Run the algorithm
WHEN it is time to check the balance
   Suspend the application
   IF the load is balanced, resume the application
   ELSE
      Re-partition the load
      Distribute data structures among processors
      Resume execution

Partitioning
–Model application execution with a partitioning graph
–Partitioning is an NP-complete problem
–Goals: balance processing; minimize communication and relocation costs
–Partitioning heuristics: Recursive Bisection, Simulated Annealing, Multi-level, MinEx

21 Partitioning Graph

(Figure: a partitioning graph whose nodes carry processing costs P and relocation costs R, with edge communication costs c1–c8, divided between processors P1 and P2.)

P1 load = (9+4+7+2) + (4+3+1+7) = 37
P2 load = (6+2+4+8+5) + (4+3+1+7) = 40

Question: When can we move a task to improve the load balance?

22 Distributed Termination

An insufficient condition for distributed termination
–Empty task queues at every process

Sufficient conditions for distributed termination
–All local termination conditions are satisfied
–No messages are in transit that could restart an inactive process

Termination algorithms
–Acknowledgment
–Ring
–Tree
–Fixed energy distribution

23 Acknowledgement Termination

When a process receives a task
–Immediately acknowledge it if the source is not the process's parent
–Acknowledge the parent as the process goes idle

A process goes idle after it
–completes processing its local tasks
–sends all acknowledgments
–receives all pending acknowledgments

Notes
–The process that sends the initial task activating another process becomes that process's parent
–A process always goes inactive before its parent
–When the master goes inactive, termination occurs

24 Single-Pass Ring Termination

Pseudo code
P_0 sends a token to P_1 when it goes idle
P_i receives the token
   IF P_i is idle, it passes the token to P_i+1
   ELSE P_i sends the token to P_i+1 when it goes idle
P_0 receives the token
   Broadcast the final termination message

Assumptions
–Processes cannot reactivate after going idle
–Processes cannot pass new tasks to an idle process
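Under the slide's assumptions (idle processes stay idle), the time at which the token returns to P_0 can be modeled directly (our own Java sketch; `idleAt[i]` is an assumed per-process idle time, and message hops are taken as instantaneous):

```java
public class RingTermination {
    // P0 launches the token when it goes idle; each Pi holds the token
    // until it is idle, then forwards it. The token therefore returns to
    // P0 once every process has gone idle.
    static int tokenReturnTime(int[] idleAt) {
        int t = idleAt[0];                       // P0 sends when idle
        for (int i = 1; i < idleAt.length; i++)
            t = Math.max(t, idleAt[i]);          // Pi holds until idle
        return t;
    }
}
```

The model makes the correctness argument visible: with no reactivation, detection happens exactly when the last process goes idle.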

25 Dual-Pass Ring Termination

Handles tasks sent to a process that has already passed the token on.
Key point: processors pass tokens (black or white) on only when they are idle.
Process color: white = ready for termination; black = has sent a task to an earlier process in the ring.
Token color: white = ready for termination; black = communication still possible.

Pseudo code (only idle processors send tokens)
WHEN P_0 goes idle and has the token, it sends a white token to P_1
IF P_i sends a task to P_j where j < i
   P_i becomes a black process
WHEN P_i (i > 0) receives the token and goes idle
   IF P_i is a black process
      P_i colors the token black; P_i becomes white
   ELSE
      P_i leaves the token's color unchanged
   P_i sends the token to P_(i+1) % P
WHEN P_0 receives the token and is idle
   IF the token is white, the application terminates
   ELSE P_0 sends a new white token to P_1

26 Tree Termination

–When a leaf process terminates, it sends a token to its parent process
–Internal nodes send tokens to their parent when all of their child processes terminate
–When the root node receives the token, the application can terminate

27 Fixed Energy Termination

Energy is defined by an integer or long value.

–P_0 starts with the full energy allotment
–When P_i receives a task, it also receives an energy allocation
–When P_i spawns tasks, it assigns them additional energy allocations from within its own allotment
–When a process completes, it returns its energy allotment
–The application terminates when the master becomes idle

Implementation
–Problem: repeated integer division eventually reaches zero
–Solution:
   o Use a two-level energy allocation
   o Increase the generation each time the energy value reaches zero
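The integer-division problem is easy to demonstrate (our own Java, not from the slides): halving an integer allotment reaches zero after about log2(energy) splits, so a sufficiently deep task tree exhausts the supply, which is what the two-level (generation) scheme works around.

```java
public class FixedEnergy {
    // Counts how many times an integer energy allotment can be halved
    // before it hits zero, i.e. the maximum spawn depth it supports.
    static int splitsUntilZero(int energy) {
        int splits = 0;
        while (energy > 0) {
            energy /= 2;    // child receives half; integer division truncates
            splits++;
        }
        return splits;
    }
}
```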

28 Example: The Shortest Path Problem

Definitions
–Graph: a collection of nodes (vertices) and edges
–Directed graph: each edge can be traversed in only one direction
–Weighted graph: edges have weights that define the cost of traversal
–Shortest path problem: find the path from one node to another in a weighted graph that has the smallest accumulated weight

Applications
1. Shortest distance between points on a map
2. Quickest travel route
3. Least expensive flight path
4. Network routing
5. Efficient manufacturing design

29 Climbing a Mountain

–Weights represent expended effort
–Directed graph: effort in one direction ≠ effort in the other direction (e.g., downhill versus uphill)

Adjacency matrix (rows = source, columns = destination; blank = no edge):

      A    B    C    D    E    F
  A        10
  B             8    13   24   51
  C                  14
  D                       9
  E                            17
  F

Adjacency list:
  A → B(10)
  B → C(8), D(13), E(24), F(51)
  C → D(14)
  D → E(9)
  E → F(17)
  F → (none)
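Both representations can be built in a few lines (our own Java, not from the slides; the 0 = no-edge convention and the class name `MountainGraph` are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

public class MountainGraph {
    // The slide's graph as an adjacency matrix: A=0, B=1, C=2, D=3, E=4, F=5.
    static int[][] matrix() {
        int[][] w = new int[6][6];
        w[0][1] = 10;                                            // A -> B
        w[1][2] = 8; w[1][3] = 13; w[1][4] = 24; w[1][5] = 51;   // B -> C,D,E,F
        w[2][3] = 14;                                            // C -> D
        w[3][4] = 9;                                             // D -> E
        w[4][5] = 17;                                            // E -> F
        return w;
    }

    // Derive the adjacency list: for each vertex, its (neighbor, weight) pairs.
    static List<List<int[]>> toList(int[][] w) {
        List<List<int[]>> adj = new ArrayList<>();
        for (int i = 0; i < w.length; i++) {
            adj.add(new ArrayList<>());
            for (int j = 0; j < w.length; j++)
                if (w[i][j] != 0) adj.get(i).add(new int[]{j, w[i][j]});
        }
        return adj;
    }
}
```

The matrix costs O(V^2) space regardless of edge count; the list is proportional to the number of edges, which matters for sparse graphs like this one.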

30 Moore's Algorithm

Less efficient than Dijkstra's algorithm, but more easily parallelized.

Assume
–w[i][j] = weight of edge (i, j)
–dist[v] = current distance to vertex v
–pred[v] = predecessor of vertex v on the current best path

Pseudo code
Insert the source vertex into a queue
FOR each vertex v: dist[v] = ∞; then dist[source] = 0
WHILE ((i = dequeue()) exists)
   FOR (j = 0; j < n; j++)
      newdist = dist[i] + w[i][j]
      IF (newdist < dist[j])
         dist[j] = newdist
         pred[j] = i
         append(j)   // enqueue j for re-examination

Relaxation step: d_j = min(d_j, d_i + w_i,j)
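A sequential sketch of the pseudo code above in Java (ours, not from the slides; the class name `Moore`, the 0 = no-edge convention, and the use of `ArrayDeque` as the work queue are assumptions):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

public class Moore {
    // Moore's algorithm over an adjacency matrix w (0 = no edge),
    // starting from vertex src; returns the distance array.
    static int[] shortestPaths(int[][] w, int src) {
        int n = w.length;
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);   // dist[v] = "infinity"
        dist[src] = 0;
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(src);
        while (!queue.isEmpty()) {
            int i = queue.poll();
            for (int j = 0; j < n; j++) {
                if (w[i][j] == 0) continue;
                int newdist = dist[i] + w[i][j];
                if (newdist < dist[j]) {        // relax edge (i, j)
                    dist[j] = newdist;
                    queue.add(j);               // re-examine j's out-edges
                }
            }
        }
        return dist;
    }
}
```

On the mountain graph of slide 29, the shortest A-to-F distance works out to 49 (A→B→D→E→F = 10+13+9+17). Vertices can be re-enqueued and relaxed repeatedly, which is exactly what makes the algorithm a natural fit for the work-pool parallelizations on the next slides.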

31 Centralized Work Pool Solution

The master maintains
–The work-pool queue of unchecked vertices
–The distance array

Every slave holds the graph weights, which are static.

The slaves
–Request a vertex
–Compute new minimum distances
–Send updated distance values and vertices back to the master

The master
–Appends received vertices to its work queue
–Sends a new vertex and the updated distance array

32 Distributed Work Pool Solution

Data held in each processor
–The graph weights
–The distances to the vertices stored locally
–The processor assignments

When a process receives a distance, if its local value is reduced, it
–Updates its local value of dist[v]
–Sends distances for adjacent vertices to the appropriate processors

Notes
–The load can be very imbalanced
–One of the termination algorithms is necessary

