Charm++ Tutorial (Parallel Programming Laboratory, UIUC)


Slide 1: Charm++ Tutorial, Parallel Programming Laboratory, UIUC

Slide 2: Overview
- Introduction: virtualization; data-driven execution in Charm++; object-based parallelization
- Charm++ features with simple examples: chares and chare arrays; parameter marshalling; the Structured Dagger construct; load balancing
- Tools: Projections; LiveViz

Slide 3: Technical Approach
Seek an optimal division of labor between the "system" and the programmer: in Charm++, decomposition is done by the programmer, and everything else (mapping, scheduling) is automated. (Diagram: a spectrum from specialization to automation, with decomposition, mapping, and scheduling as the stages.)

Slide 4: Object-Based Parallelization
The user is concerned only with the interaction between objects; the system handles the implementation. (Diagram: user view of communicating objects versus the system's mapping of those objects onto processors.)

Slide 5: Virtualization: Object-Based Decomposition
- Divide the computation into a large number of pieces: independent of the number of processors, and typically larger than the number of processors
- Let the system map objects to processors

Slide 6: Chares: Concurrent Objects
- Can be created dynamically on any available processor
- Can be accessed from remote processors
- Send messages to each other asynchronously
- Contain "entry methods"

Slide 7: "Hello World!"

The .ci file (charmc generates hello.decl.h and hello.def.h from it):

```
mainmodule hello {
  mainchare mymain {
    entry mymain();
  };
};
```

The .C file:

```cpp
#include "hello.decl.h"

class mymain : public CBase_mymain {
public:
  mymain(int argc, char **argv) {
    ckout << "Hello World!" << endl;
    CkExit();
  }
};

#include "hello.def.h"
```

Slide 8: Compile and Run the Program

Compiling with charmc (useful flags: -o, -g, -language, -module, -tracemode):

```
pgm: pgm.ci pgm.h pgm.C
	charmc pgm.ci
	charmc pgm.C
	charmc -o pgm pgm.o -language charm++
```

To run a Charm++ program named "pgm" on four processors, type:

```
charmrun pgm +p4
```

On network architectures, a nodelist file lists the machines (hosts) on which to run the program.

Slide 9: Data-Driven Execution in Charm++
(Diagram: each processor runs a scheduler with its own message queue, delivering messages to objects. Object x wants to invoke y->f() on object y, which may live on another processor; how?)

Slide 10: Charm++ Solution: Proxy Classes
- A proxy class is generated for each chare class; for instance, CProxy_Y is the proxy class generated for chare class Y
- Proxy objects know where the real object is
- Methods invoked on a proxy simply put the data in an "envelope" and send it out to the destination
- Given a proxy p, you can invoke methods: p.method(msg);
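The proxy mechanism above can be sketched in plain C++. This is a toy stand-in, not real Charm++: all names here (Envelope, CProxy_Y_sketch, schedule) are illustrative. Calling a method on the proxy packs the arguments into an "envelope" and enqueues it for a scheduler loop instead of executing the method directly.

```cpp
#include <cassert>
#include <queue>
#include <vector>

struct Envelope { int dest; int arg; };   // destination object + payload
std::queue<Envelope> messageQueue;        // stand-in for the scheduler's queue

struct Y { int received = 0; void f(int x) { received = x; } };

struct CProxy_Y_sketch {
    int dest;                             // where the real object lives
    void f(int x) { messageQueue.push({dest, x}); }   // send, don't execute
};

// Toy scheduler: pop envelopes and invoke the real entry method.
void schedule(std::vector<Y>& objects) {
    while (!messageQueue.empty()) {
        Envelope e = messageQueue.front(); messageQueue.pop();
        objects[e.dest].f(e.arg);
    }
}
```

Note that the invocation is asynchronous: nothing happens to the target object until the scheduler delivers the envelope.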

Slide 11: Ring Program
- An array of objects of the same kind, each communicating with the next one
- Individual chares would be cumbersome and impractical
- Instead, use a collection of chares: a single global name for the collection, each member addressed by an index, and the mapping of element objects to processors handled by the system

Slide 12: Chare Arrays
(Diagram: the user's view is a logical array A[0], A[1], A[2], A[3], ...; in the system view, the elements are distributed across the processors.)

Slide 13: Array Hello

The .ci file:

```
mainmodule m {
  readonly CProxy_mymain mainProxy;
  mainchare mymain { ... };
  array [1D] Hello {
    entry Hello(void);
    entry void sayHi(int hiNo);
  };
};
```

Class declaration:

```cpp
class Hello : public CBase_Hello {
public:
  Hello();
  Hello(CkMigrateMessage *m) {}
  void sayHi(int hiNo);
};
```

In mymain::mymain():

```cpp
int nElements = 4;
mainProxy = thisProxy;
CProxy_Hello p = CProxy_Hello::ckNew(nElements);
// Have element 0 say "hi"
p[0].sayHi(12345);
```

Slide 14: Array Hello (continued)

```cpp
void Hello::sayHi(int hiNo) {
  ckout << hiNo << " from element " << thisIndex << endl;  // thisIndex: element index
  if (thisIndex < nElements - 1)                 // nElements: a read-only
    thisProxy[thisIndex + 1].sayHi(hiNo + 1);    // pass the hello on (array proxy)
  else
    mainProxy.done();       // we've been around once; we're done
}

void mymain::done(void) {
  CkExit();
}
```

Slide 15: Sorting Numbers
- Sort n integers in increasing order
- Create n chares, each keeping one number
- In every odd iteration, chare 2i swaps with chare 2i+1 if required; in every even iteration, chare 2i swaps with chare 2i-1 if required
- After each iteration, all chares report to the mainchare; after everybody reports, the mainchare signals the next iteration
- Sorting completes in n iterations
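The odd-even swapping scheme above can be sketched as a sequential plain C++ simulation (the parallel message-passing version follows on the next slides). The pairing parity here follows the code on slide 17: in even-numbered rounds each even-indexed element compares with its right neighbor, in odd-numbered rounds each odd-indexed element does. After n rounds the values are sorted.

```cpp
#include <algorithm>
#include <vector>

std::vector<int> oddEvenSort(std::vector<int> v) {
    int n = static_cast<int>(v.size());
    for (int round = 0; round < n; ++round) {
        int start = (round % 2 == 0) ? 0 : 1;   // pairs (start, start+1), (start+2, start+3), ...
        for (int i = start; i + 1 < n; i += 2)
            if (v[i] > v[i + 1]) std::swap(v[i], v[i + 1]);   // swap if required
    }
    return v;   // sorted after n rounds
}
```

Each round touches disjoint pairs, which is why the chares in the parallel version can all swap independently within one iteration.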

Slide 16: Array Sort

sort.ci:

```
mainmodule sort {
  readonly CProxy_myMain mainProxy;
  readonly int nElements;
  mainchare myMain {
    entry myMain(CkArgMsg *m);
    entry void swapdone(void);
  };
  array [1D] sort {
    entry sort(void);
    entry void setValue(int myvalue);
    entry void swap(int round_no);
    entry void swapReceive(int from_index, int value);
  };
};
```

sort.h:

```cpp
class sort : public CBase_sort {
private:
  int myValue;
public:
  sort();
  sort(CkMigrateMessage *m);
  void setValue(int number);
  void swap(int round_no);
  void swapReceive(int from_index, int value);
};
```

In myMain::myMain():

```cpp
swapcount = 0;
roundsDone = 0;
mainProxy = thishandle;
CProxy_sort arr = CProxy_sort::ckNew(nElements);
for (int i = 0; i < nElements; i++)
  arr[i].setValue(rand());
arr.swap(0);
```

Slide 17: Array Sort (continued). This version contains an error!

```cpp
void sort::swap(int roundno) {
  bool sendright = false;
  if ((roundno % 2 == 0 && thisIndex % 2 == 0) ||
      (roundno % 2 == 1 && thisIndex % 2 == 1))
    sendright = true;
  // sendright is true if I have to send to the right
  if ((sendright && thisIndex == nElements - 1) ||
      (!sendright && thisIndex == 0))
    mainProxy.swapdone();
  else {
    if (sendright)
      thisProxy[thisIndex + 1].swapReceive(thisIndex, myValue);
    else
      thisProxy[thisIndex - 1].swapReceive(thisIndex, myValue);
  }
}

void sort::swapReceive(int from_index, int value) {
  if (from_index == thisIndex - 1 && value > myValue) myValue = value;
  if (from_index == thisIndex + 1 && value < myValue) myValue = value;
  mainProxy.swapdone();
}

void myMain::swapdone(void) {
  if (++swapcount == nElements) {
    swapcount = 0;
    roundsDone++;
    if (roundsDone == nElements)
      CkExit();
    else
      arr.swap(roundsDone);
  }
}
```

Slide 18: Remember: message passing is asynchronous, so messages can be delivered out of order.
(Diagram: two neighboring elements holding 3 and 2 exchange swap/swapReceive messages, and both end up holding 3; the value 2 is lost!)

Slide 19: Array Sort (correct)

```cpp
void sort::swap(int roundno) {
  bool sendright = false;
  if ((roundno % 2 == 0 && thisIndex % 2 == 0) ||
      (roundno % 2 == 1 && thisIndex % 2 == 1))
    sendright = true;
  // sendright is true if I have to send to the right
  if ((sendright && thisIndex == nElements - 1) ||
      (!sendright && thisIndex == 0))
    mainProxy.swapdone();
  else {
    if (sendright)
      thisProxy[thisIndex + 1].swapReceive(thisIndex, myValue);
  }
}

void sort::swapReceive(int from_index, int value) {
  if (from_index == thisIndex - 1) {
    if (value > myValue) {
      thisProxy[thisIndex - 1].swapReceive(thisIndex, myValue);
      myValue = value;
    } else {
      thisProxy[thisIndex - 1].swapReceive(thisIndex, value);
    }
  }
  if (from_index == thisIndex + 1)
    myValue = value;
  mainProxy.swapdone();
}

void myMain::swapdone(void) {
  if (++swapcount == nElements) {
    swapcount = 0;
    roundsDone++;
    if (roundsDone == nElements)
      CkExit();
    else
      arr.swap(roundsDone);
  }
}
```

Slide 20: Basic Entities in Charm++ Programs
- Sequential objects: ordinary sequential C++ code and objects
- Chares: concurrent objects
- Chare arrays: indexed collections of chares

Slide 21: Illustrative Example: Jacobi 1D
Hot temperatures on two sides will slowly spread across the entire grid.

Slide 22: Illustrative Example: Jacobi 1D
- Input: a 2D array of values with boundary conditions
- In each iteration, each array element is recomputed as the average of itself and its neighbors
- Iterations are repeated until the change falls below some threshold error value
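One iteration of the relaxation above can be sketched sequentially in plain C++. The 5-point average (a cell and its four neighbors) and the fixed boundary cells are assumptions based on the slide text; the function returns the maximum change so the caller can compare it against the threshold.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

double jacobiStep(std::vector<std::vector<double>>& g) {
    std::vector<std::vector<double>> next = g;   // boundary rows/columns stay fixed
    double maxChange = 0.0;
    for (std::size_t i = 1; i + 1 < g.size(); ++i)
        for (std::size_t j = 1; j + 1 < g[i].size(); ++j) {
            // average of the cell and its four neighbors
            next[i][j] = (g[i][j] + g[i-1][j] + g[i+1][j]
                          + g[i][j-1] + g[i][j+1]) / 5.0;
            maxChange = std::max(maxChange, std::fabs(next[i][j] - g[i][j]));
        }
    g = next;
    return maxChange;   // caller iterates while this exceeds the threshold
}
```

Typical use: `while (jacobiStep(grid) > threshold) { }`. The parallel version on the following slides distributes the columns across chares and computes the global maximum change with a reduction.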

Slide 23: Jacobi 1D: Parallel Solution!

Slide 24: Jacobi 1D: Parallel Solution!
- Slice the 2D array into sets of columns; each chare performs the computation for one set
- At the end of each iteration, chares exchange boundaries and determine the maximum change in the computation
- Output the result when the threshold is reached

Slide 25: Arrays as Parameters
An array cannot be passed as a pointer; specify the length of the array in the interface file:

```
entry void bar(int n, double arr[n]);
```

Slide 26: Jacobi Code

```cpp
void Ar1::doWork(int sendersID, int n, double arr[n]) {
  maxChange = 0.0;
  if (sendersID == thisIndex - 1) {
    leftmsg = 1;    // set boolean to indicate we received the left message
  } else if (sendersID == thisIndex + 1) {
    rightmsg = 1;   // set boolean to indicate we received the right message
  }
  // Rest of the code on the next slide ...
}
```

Slide 27: Reduction
- Apply a single operation (add, max, min, ...) to data items scattered across many processors, and collect the result in one place
- To reduce x across all elements: contribute(sizeof(x), &x, CkReduction::sum_int, processResult);
- All contribute calls from one array must name the same function (here, processResult())
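The effect of a sum reduction can be sketched in plain C++: each array element contributes one value, the values are combined with a single operation, and the single result is delivered to one function, mirroring contribute(..., CkReduction::sum_int, processResult). The function name reduceSumInt is illustrative.

```cpp
#include <functional>
#include <numeric>
#include <vector>

int reduceSumInt(const std::vector<int>& contributions,
                 const std::function<void(int)>& processResult) {
    // Combine all contributed values with one operation (here, addition).
    int sum = std::accumulate(contributions.begin(), contributions.end(), 0);
    processResult(sum);   // collect the result in one place
    return sum;
}
```

In the real runtime the combination happens in a distributed tree rather than in one loop, but the contract is the same: many contributions in, one combined result out.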

Slide 28: Callbacks
- A generic way to transfer control back to a client after a library has finished
- After finishing a reduction, the results have to be passed to some chare's entry method
- To do this, create an object of type CkCallback with the chare's ID and the entry method index
- There are different types of callbacks; one commonly used form is CkCallback cb(<entry method index>, <chare ID>);
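The callback idea can be sketched in plain C++: a CkCallback-like object bundles "which chare" and "which entry method" so a library can hand its result back without knowing the client's type. The names Client, Callback, and makeCallback are illustrative, not the real CkCallback API.

```cpp
#include <functional>

struct Client {
    double result = 0.0;
    void doCheck(double v) { result = v; }   // the "entry method" to call back
};

struct Callback {
    std::function<void(double)> fn;          // captures target object + method
    void send(double v) const { fn(v); }     // library delivers its result here
};

Callback makeCallback(Client& c) {
    // Bind the client's method; the library only ever sees a Callback.
    return Callback{ [&c](double v) { c.doCheck(v); } };
}
```

A reduction library, for example, can accept a Callback and invoke send() with the combined result, without any compile-time knowledge of the client chare.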

Slide 29: Jacobi Code (continued)

```cpp
void Ar1::doWork(int sendersID, int n, double arr[n]) {
  // Code on previous slide ...
  if (((rightmsg == 1) && (leftmsg == 1)) ||
      ((thisIndex == 0) && (rightmsg == 1)) ||
      ((thisIndex == K - 1) && (leftmsg == 1))) {
    // Both messages have been received and we can now compute
    // the new values of the matrix ...
    // Use a reduction to determine whether the maximum change
    // on every processor is below our threshold value.
    CkCallback cb(CkIndex_Ar1::doCheck(NULL), a1);
    contribute(sizeof(double), &maxChange, CkReduction::max_double, cb);
  }
}
```

Slide 30: Types of Reductions
A number of reductions are predefined, including ones that:
- Sum values or arrays
- Calculate the product of values or arrays
- Calculate the maximum contributed value
- Calculate the minimum contributed value
- Calculate the logical AND of contributed integer values
- Calculate the logical OR of contributed integer values
- Form a set of all contributed values
- Concatenate the bytes of all contributed values
Plus, you can create your own.

Slide 31: Structured Dagger
- What is it? A coordination language built on top of Charm++
- Motivation: to reduce the complexity of program development without adding any overhead

Slide 32: Structured Dagger Constructs
- atomic {code}: specifies that no Structured Dagger constructs appear inside the code, so it executes atomically
- overlap {code}: enables all of its component constructs concurrently and can execute them in any order
- when {code}: specifies dependencies between computation and message arrival

Slide 33: Structured Dagger Constructs (continued)
- if / else / while / for: the same as their C++ counterparts, except that they can contain when blocks in their respective code segments; hence execution can be suspended while they wait for messages
- forall: functions like a for statement, but enables its component constructs for its entire iteration space at once, so it does not need to execute its iteration space in strict sequence
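What a `when` clause with two dependencies roughly compiles down to can be sketched in plain C++: each arriving message is buffered, and the dependent body runs only once all named messages have arrived, in whatever order they came. The struct and member names are illustrative.

```cpp
#include <optional>

struct WhenBothArrived {
    std::optional<int> left, right;   // buffered messages
    bool fired = false;               // has the `when` body run yet?
    int sum = 0;
    void tryFire() {
        // Run the body only once both dependencies are satisfied.
        if (left && right && !fired) { sum = *left + *right; fired = true; }
    }
    void recvLeft(int v)  { left = v;  tryFire(); }
    void recvRight(int v) { right = v; tryFire(); }
};
```

This is the bookkeeping the Structured Dagger translator generates for you, which is exactly what the hand-written Jacobi code on slides 26 and 29 does with its leftmsg/rightmsg flags.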

Slide 34: Jacobi Example Using Structured Dagger

jacobi.ci:

```
array [1D] Ar1 {
  ...
  entry void GetMessages(MyMsg *msg) {
    when rightmsgEntry(MyMsg *right), leftmsgEntry(MyMsg *left) {
      atomic {
        CkPrintf("Got both left and right messages\n");
        doWork(right, left);
      }
    }
  };
  entry void rightmsgEntry(MyMsg *m);
  entry void leftmsgEntry(MyMsg *m);
  ...
};
```

In a 1D Jacobi that does not use Structured Dagger, the doWork code in the .C file is much more complex: it must manually check, with if/else statements, whether both messages have been received. With Structured Dagger, doWork is not called until both messages have arrived; the compiler translates the Structured Dagger code into code that does the appropriate checks, making the programmer's job simpler.

Slide 35: Another Example of Structured Dagger: LeanMD
LeanMD is a molecular dynamics simulation application written in Charm++ and Structured Dagger. At every timestep, each cell sends its atom positions to its neighbors, then receives forces and integrates the information to calculate new positions.

Slide 36: Another Example of Structured Dagger: LeanMD

```cpp
for (timeStep_ = 1; timeStep_ <= NumSteps; timeStep_++) {
  atomic { sendAtomPos(); }   // send atom positions to all neighbors
  for (forceMsgCount_ = 0; forceMsgCount_ < numForceMsg_; forceMsgCount_++) {
    when recvForces(ForcesMsg* msg) { atomic { procForces(msg); } }
  }
  atomic { doIntegration(); }
  if (SimParam.DoMigrate && timeStep_ % SimParam.MigrateFreq == 0) {
    atomic { migrate(); }
    for (migrateMsgCount_ = 0; migrateMsgCount_ < numMigrateMsg_; migrateMsgCount_++) {
      when recvMigrateMsg(MigrateMsg* msg) { atomic { procMigrateMsg(msg); } }
    }
    atomic { collectAtoms(); }
  }
}
```

This code sample from the LeanMD program underlines the importance of Structured Dagger: the code is highly simplified by its constructs.

Slide 37: Load Balancing
- Projections
- Object migration
- Load balancing strategies
- Using load balancing

Slide 38: Projections: Quick Introduction
- Projections is a tool used to analyze the performance of your application
- Build your application with the -tracemode option to enable tracing
- You get one log file per processor, plus a separate file with global information
- Projections reads these files, so you can use its views to analyze performance

Slide 39: Screenshots: Load Imbalance
(Jacobi: 2048 x 2048 grid, threshold 0.1, 32 chares, 4 processors.)

Slide 40: Timelines: Load Imbalance

Slide 41: Migration
- Array objects can migrate from one PE to another
- To migrate, an object must implement a pack/unpack ("pup") method; this is needed because migration creates a new object on the destination processor while destroying the original
- pup combines three functions into one: data-structure traversal (compute the message size in bytes), pack (write the object into a message), and unpack (read the object out of a message)
- The basic contract: "here are my fields" (types, sizes, and a pointer)

Slide 42: pup: How to Write It

```cpp
class ShowPup {
  double a;
  int x;
  char y;
  unsigned long z;
  float q[3];
  int *r;   // heap-allocated memory
public:
  // ... other methods ...
  void pup(PUP::er &p) {
    p | a;     // notice that you can use either the | operator
    p | x;
    p | y;
    p(z);      // or ()
    p(q, 3);   // but you need () for arrays
    if (p.isUnpacking())
      r = new int[ARRAY_SIZE];
    p(r, ARRAY_SIZE);
  }
};
```
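The core trick of pup, one traversal function that both packs and unpacks depending on the direction of the "er" object, can be sketched with a toy stand-in in plain C++ (ToyPup is illustrative, not the real PUP::er):

```cpp
#include <cstring>
#include <vector>

struct ToyPup {
    bool unpacking = false;
    std::vector<char> buf;
    std::size_t pos = 0;
    template <typename T>
    void operator|(T& v) {
        if (unpacking) {
            std::memcpy(&v, buf.data() + pos, sizeof(T));   // read the field back out
        } else {
            buf.resize(pos + sizeof(T));
            std::memcpy(buf.data() + pos, &v, sizeof(T));   // write the field in
        }
        pos += sizeof(T);
    }
};

struct Obj {
    double a = 0;
    int x = 0;
    void pup(ToyPup& p) { p | a; p | x; }   // same code packs and unpacks
};
```

Because the same traversal runs in both directions, the fields are guaranteed to be read back in exactly the order they were written, which is the point of the single-function contract.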

Slide 43: Load Balancing
- All you need is a working pup method
- Link a load balancing module with -module: RefineLB, NeighborLB, GreedyCommLB, and others; EveryLB includes all load balancing strategies
- Compile-time option (specify the default balancer): -balancer RefineLB
- Runtime option: +balancer RefineLB

Slide 44: Centralized Load Balancing
- Uses information about activity on all processors to make load balancing decisions
- Advantage: since it has the entire object communication graph, it can make the best global decision
- Disadvantage: higher communication cost and latency, since this requires information from all running chares

Slide 45: Neighborhood Load Balancing
- Balances load among a small set of processors (the neighborhood) to decrease communication costs
- Advantage: lower communication costs, since communication is between a smaller subset of processors
- Disadvantage: could leave the system globally poorly balanced

Slide 46: Main Centralized Load Balancing Strategies
- GreedyCommLB: a "greedy" strategy that uses the process loads and the communication graph to map the processes with the highest load onto the processors with the lowest load, while trying to keep communicating processes on the same processor
- RefineLB: moves objects off overloaded processors onto under-utilized processors to reach the average load
- Others: the manual discusses several other load balancers that are not used as often but may be useful in some cases; more are being developed
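The greedy idea behind strategies like GreedyCommLB (ignoring the communication-graph refinement) can be sketched in plain C++: repeatedly assign the heaviest remaining object to the currently least loaded processor. The function name greedyMap is illustrative.

```cpp
#include <algorithm>
#include <functional>
#include <queue>
#include <vector>

std::vector<double> greedyMap(std::vector<double> objectLoads, int nProcs) {
    std::sort(objectLoads.rbegin(), objectLoads.rend());   // heaviest object first
    // Min-heap of current per-processor loads.
    std::priority_queue<double, std::vector<double>,
                        std::greater<double>> procs;
    for (int i = 0; i < nProcs; ++i) procs.push(0.0);
    for (double w : objectLoads) {
        double lightest = procs.top(); procs.pop();
        procs.push(lightest + w);          // least loaded processor gets the object
    }
    std::vector<double> loads;             // resulting per-processor loads,
    while (!procs.empty()) {               // in ascending order
        loads.push_back(procs.top()); procs.pop();
    }
    return loads;
}
```

The min-heap makes each placement decision O(log P), which is why this style of centralized strategy scales reasonably even with many objects.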

Slide 47: Neighborhood Load Balancing Strategies
- NeighborLB: a neighborhood load balancer; it currently uses a neighborhood of 4 processors

Slide 48: When to Re-balance Load?
- Default: the load balancer migrates objects when needed
- Programmer control: AtSync load balancing, which enables load balancing at a specific point (the object is ready to migrate; re-balance if needed)
- AtSync() is called when your chare is ready to be load balanced; load balancing may not start right away
- ResumeFromSync() is called when load balancing for this chare has finished

Slide 49: Processor Utilization: After Load Balancing

Slide 50: Timelines: Before and After Load Balancing

Slide 51: Advanced Features
- Groups
- Node groups
- Priorities
- Entry method attributes
- Communication optimization
- Checkpoint/restart

Slide 52: Advanced Features: Groups
- With arrays, the standard method of communication between elements is message passing
- With a large number of chares, this can lead to a large number of messages in the system
- For global operations like reductions, this can make the receiving chare a bottleneck
- Can this be fixed?

Slide 53: Advanced Features: Groups
- Solution: groups
- Groups are more of a "system level" programming feature of Charm++, versus "user level" arrays
- Groups are similar to arrays, except that exactly one element resides on each processor; the index used to access the group is the processor ID
- Groups can be used to batch messages from chares running on a single processor, which cuts down on message traffic
- Disadvantage: groups do not allow effective load balancing, since they are stationary (not virtualized)

Slide 54: Advanced Features: Node Groups
- Similar to groups, but with one element per node instead of one per processor; the index is the node number
- Can be used to solve similar problems: with one node group per SMP node, the node group can act as a collection point for messages on the node, lowering message traffic on the interconnect between nodes

Slide 55: Advanced Features: Priorities
- In general, messages in Charm++ are unordered; what if an order is needed? Solution: priorities
- Messages can be assigned different priorities
- The simplest priorities just specify that the message should go at either the end of the queue (standard behavior) or the beginning of the queue
- Specific priorities can also be assigned to messages, using either numbers or bit vectors
- Note that message processing is non-preemptive: a lower-priority message will continue processing even if a higher-priority message shows up
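Prioritized delivery can be sketched in plain C++ with a priority queue: messages are enqueued with a priority and delivered in priority order rather than arrival order. Here a smaller number means higher priority, which is an assumption for this sketch; Charm++'s own priority encodings (numbers or bit vectors) differ.

```cpp
#include <queue>
#include <string>
#include <vector>

struct Msg { int prio; std::string tag; };
struct LowerNumberFirst {
    // Invert the comparison so the smallest prio sits at the top of the heap.
    bool operator()(const Msg& a, const Msg& b) const { return a.prio > b.prio; }
};

std::vector<std::string> deliveryOrder(const std::vector<Msg>& arrivals) {
    std::priority_queue<Msg, std::vector<Msg>, LowerNumberFirst> q;
    for (const Msg& m : arrivals) q.push(m);   // enqueue in arrival order
    std::vector<std::string> order;
    while (!q.empty()) { order.push_back(q.top().tag); q.pop(); }
    return order;                              // delivery follows priority
}
```

Note what this sketch does not model: non-preemption. A message already being processed runs to completion even if a higher-priority message is pushed onto the queue meanwhile.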

Slide 56: Advanced Features: Entry Method Attributes

```
entry [attribute1, ..., attributeN] void EntryMethod(parameters);
```

Attributes:
- threaded: entry methods that run in their own non-preemptible threads
- sync: methods that return a message as a result

Slide 57: Advanced Features: Communication Optimization
- Used to optimize communication patterns in your application
- You can use either bracketed strategies or streaming strategies
- Bracketed strategies are those where a specific start and end point for the communication are flagged
- Streaming strategies use a preset time interval for bracketing messages

Slide 58: Advanced Features: Communication Optimization
- For these strategies, you need to specify a communication topology, which describes the message pattern you will be using
- When compiling with the communication optimization library, you must include -module commlib

Slide 59: Advanced Features: Checkpoint/Restart
- If you have a long-running application, it is useful to be able to save its state, just in case something happens
- Checkpointing gives you this ability
- When you checkpoint an application, it uses the migration (pup) code already present for load balancing to store the state of array objects
- State information is saved in a directory of your choosing

Slide 60: Advanced Features: Checkpoint/Restart
- The application can be restarted, with a different number of processors, using the checkpoint information
- The charmrun option ++restart is used to restart
- You can also restart groups by marking them migratable and writing a pup routine; they still will not load balance, though

Slide 61: Other Advanced Features
- Custom array indexes
- Array creation/mapping options
- Additional load balancers
- Local versus proxied calls

Slide 62: Benefits of Virtualization
- Better software engineering: logical units are decoupled from the number of processors
- Message-driven execution: adaptive overlap between computation and communication; predictability of execution
- Flexible and dynamic mapping to processors: flexible mapping on clusters; the set of processors for a given job can change; automatic checkpointing
- Principle of persistence

Slide 63: More Information
- http://charm.cs.uiuc.edu (manuals, papers, downloads, FAQs)
- ppl@cs.uiuc.edu

Slide 64: Other Tools: LiveViz

Slide 65: LiveViz: What Is It?
- A Charm++ library and visualization tool
- Lets you inspect your program's current state
- The client runs on any machine (it is written in Java)
- You write the image-generation code
- Supports 2D and 3D modes

Slide 66: LiveViz: Monitoring Your Application
- LiveViz allows you to watch your application's progress
- You can use it from work or home
- It doesn't slow down computation when there is no client

Slide 67: LiveViz: Compilation
LiveViz is part of the standard Charm++ distribution: when you build Charm++, you also get LiveViz.

Slide 68: Running LiveViz
Build and run the server:

```
cd pgms/charm++/ccs/liveViz/serverpush
make
./run_server
```

Or in detail...

Slide 69: Running LiveViz
Run the client:

```
cd pgms/charm++/ccs/liveViz/client
./run_client [ [ ]]
```

You should get a result window.

Slide 70: LiveViz Request Model
(Diagram: the client sends a "get image" request to the LiveViz server code, which buffers the request; the parallel application polls for requests, the poll request returns, the application does the work and passes image chunks to the server; the server combines the image chunks and sends the image to the client.)

Slide 71: Jacobi 2D Example Structure
Main: set up the worker array and pass data to the workers.
Workers: start looping:
- Send messages with ghost rows to all neighbors
- Wait for all neighbors to send their ghost rows to me
- Once they arrive, do the regular Jacobi relaxation
- Calculate the maximum error, and do a reduction to compute the global maximum error
- If the timestep is a multiple of 64, load balance the computation; then restart the loop

Slide 72: LiveViz Setup

Without LiveViz:

```cpp
#include
void main::main(...) {
  // Do misc initialization stuff
  // Now create the (empty) jacobi 2D array
  work = CProxy_matrix::ckNew(0);
  // Distribute work to the array, filling it as you do
}
```

With LiveViz:

```cpp
#include
void main::main(...) {
  // Do misc initialization stuff
  // Create the workers and register with liveViz
  CkArrayOptions opts(0);           // by default allocate 0 array elements
  liveVizConfig cfg(true, true);    // color image = true, animate image = true
  liveVizPollInit(cfg, opts);       // initialize the library
  // Now create the jacobi 2D array
  work = CProxy_matrix::ckNew(opts);
  // Distribute work to the array, filling it as you do
}
```

Slide 73: Adding LiveViz to Your Code

```cpp
void matrix::serviceLiveViz() {
  liveVizPollRequestMsg *m;
  while ((m = liveVizPoll((ArrayElement *)this, timestep)) != NULL)
    requestNextFrame(m);
}
```

Before:

```cpp
void matrix::startTimeSlice() {
  // Send ghost row north, south, east, west, ...
  sendMsg(dims.x-2, NORTH, dims.x+1, 1, +0, -1);
}
```

After:

```cpp
void matrix::startTimeSlice() {
  // Send ghost row north, south, east, west, ...
  sendMsg(dims.x-2, NORTH, dims.x+1, 1, +0, -1);
  // Now, having sent all our ghosts, service liveViz
  // while waiting for the neighbors' ghosts to arrive.
  serviceLiveViz();
}
```

Slide 74: Generate an Image for a Request

```cpp
void matrix::requestNextFrame(liveVizPollRequestMsg *m) {
  // Compute the dimensions of the image piece we'll send.
  // Compute the image data of the chunk we'll send:
  // image data is just a linear array of bytes in row-major order,
  // 1 byte per pixel for greyscale or 3 bytes (rgb) for color.
  // The liveViz library routine colorScale(value, min, max, *array)
  // will rainbow-color your data automatically.
  // Finally, return the image data to the library.
  liveVizPollDeposit((ArrayElement *)this, timestep, m,
                     loc_x, loc_y, width, height, imageBits);
}
```

Slide 75: Link with the LiveViz Library

Before:

```
OPTS=-g
CHARMC=charmc $(OPTS)
LB=-module RefineLB
OBJS = jacobi2d.o

all: jacobi2d

jacobi2d: $(OBJS)
	$(CHARMC) -language charm++ \
	-o jacobi2d $(OBJS) $(LB) -lm

jacobi2d.o: jacobi2d.C jacobi2d.decl.h
	$(CHARMC) -c jacobi2d.C
```

After (adding -module liveViz):

```
OPTS=-g
CHARMC=charmc $(OPTS)
LB=-module RefineLB
OBJS = jacobi2d.o

all: jacobi2d

jacobi2d: $(OBJS)
	$(CHARMC) -language charm++ \
	-o jacobi2d $(OBJS) $(LB) -lm \
	-module liveViz

jacobi2d.o: jacobi2d.C jacobi2d.decl.h
	$(CHARMC) -c jacobi2d.C
```

Slide 76: LiveViz Summary
- An easy-to-use visualization library
- Simple code handles any number of clients
- Doesn't slow computation when there are no clients connected
- Works in parallel, with load balancing, etc.

