Slide 1: Tammy Dahlgren, Tom Epperly, Scott Kohn, & Gary Kumfert
Slide 2: Goals
- Describe our vision to the CCA
- Solicit contributions (code) for:
  - RMI (SOAP | SOAP w/ MIME types)
  - parallel network algorithms (general arrays)
- Encourage collaboration
Slide 3: Outline
- Background on components @llnl.gov
- General MxN solution: bottom-up
  - Initial assumptions
  - MUX component
  - MxNRedistributable interface
- Parallel handles to a parallel distributed component
- Tentative research strategy
Slide 4: Components @llnl.gov
- Quorum - web voting
- Alexandria - component repository
- Babel - language interoperability
  - maturing to platform interoperability, which implies some RMI mechanism
    - SOAP | SOAP w/ MIME types
    - open to suggestions & contributed source code
Slide 5: Babel & the MxN Problem
- Unique opportunities:
  - SIDL communication directives
  - Babel generates code anyway
  - users already link against the Babel runtime library
  - can hook directly into the Intermediate Object Representation (IOR)
Slide 6: Impls and Stubs and Skels
[Figure: layered call path - Application → Stubs → IORs → Skels → Impls]
- Application: uses components in the user's language of choice
- Client-side stubs: translate from the application language to C
- Intermediate Object Representation (IOR): always in C
- Server-side skeletons: translate the IOR (in C) to the component implementation language
- Implementation: the component developer's choice of language (can be wrappers to legacy code)
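To make the layering concrete, the following minimal sketch shows how a client-side stub might forward a call through a C-compatible function table to a skeleton that dispatches into the implementation. The names and the table layout are hypothetical, not Babel's actual generated code:

    #include <cstdio>

    // Hypothetical IOR: a C-compatible struct holding a function-pointer
    // table plus an opaque pointer to the implementation's data.
    extern "C" {
      struct vector_ior;
      struct vector_epv {
        double (*f_dot)(vector_ior* self, vector_ior* y);
      };
      struct vector_ior {
        vector_epv* epv;   // function table, filled in server-side
        void* impl_data;   // opaque pointer to the implementation object
      };
    }

    // Implementation: the component developer writes only this part.
    struct VectorImpl {
      double data[2];
      double dot_impl(VectorImpl& y) {
        return data[0] * y.data[0] + data[1] * y.data[1];
      }
    };

    // Server-side skeleton: translates the C calling convention to C++.
    extern "C" double vector_skel_dot(vector_ior* self, vector_ior* y) {
      auto* xi = static_cast<VectorImpl*>(self->impl_data);
      auto* yi = static_cast<VectorImpl*>(y->impl_data);
      return xi->dot_impl(*yi);
    }

    // Client-side stub: the only thing the application sees.
    class VectorStub {
    public:
      explicit VectorStub(vector_ior* ior) : ior_(ior) {}
      double dot(VectorStub& y) {                // application-language API
        return ior_->epv->f_dot(ior_, y.ior_);   // dispatch through the IOR
      }
    private:
      vector_ior* ior_;
    };

    int main() {
      static vector_epv epv = { &vector_skel_dot };
      VectorImpl xi{{1.0, 2.0}}, yi{{3.0, 4.0}};
      vector_ior xio{&epv, &xi}, yio{&epv, &yi};
      VectorStub x(&xio), y(&yio);
      std::printf("dot = %g\n", x.dot(y));       // 1*3 + 2*4 = 11
      return 0;
    }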
Slide 7: Out-of-Process Components
[Figure: the slide 6 stack split across a process boundary - Application and Stubs with their IORs in one process, connected by IPC to the IORs, Skels, and Impls in another]
Slide 8: Internet Remote Components
[Figure: the stack extended for remote calls - Application, Stubs, and a Marshaler over the IORs on the client; a line protocol across the network; an Unmarshaler, Skels, IORs, and Impls on the server]
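As a toy illustration of the marshaler / line protocol / unmarshaler stages (the deck deliberately leaves the real protocol open, SOAP being one named candidate), here is a round-trip of an array of doubles through a made-up wire format:

    #include <cassert>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Made-up line protocol: [uint32 count][count doubles], native byte
    // order. Purely illustrative; not the protocol the talk proposes.
    std::vector<uint8_t> marshal(const std::vector<double>& values) {
      const uint32_t count = static_cast<uint32_t>(values.size());
      std::vector<uint8_t> wire(sizeof(count) + count * sizeof(double));
      std::memcpy(wire.data(), &count, sizeof(count));
      std::memcpy(wire.data() + sizeof(count), values.data(),
                  count * sizeof(double));
      return wire;
    }

    std::vector<double> unmarshal(const std::vector<uint8_t>& wire) {
      uint32_t count = 0;
      std::memcpy(&count, wire.data(), sizeof(count));
      std::vector<double> values(count);
      std::memcpy(values.data(), wire.data() + sizeof(count),
                  count * sizeof(double));
      return values;
    }

    int main() {
      const std::vector<double> x = { 0.0, 1.1, 2.2 };
      assert(unmarshal(marshal(x)) == x);  // round-trip through the "wire"
      return 0;
    }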
Slide 9: Outline (repeated as a section divider)
Slide 10: Initial Assumptions
- Working point-to-point RMI
- Object persistence
Slide 11: Example #1: 1-D Vectors
[Figure: vector x = (0.0, 1.1 | 2.2, 3.3 | 4.4, 5.5), block-distributed over processes p0-p2; vector y = (.9, .8, .7 | .6, .5, .4), block-distributed over processes p3-p4]

    double d = x.dot( y );
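As a sanity check on the example, the distribution is irrelevant to the mathematical result, so the expected value of d can be computed serially (it comes to 8.8):

    #include <cstdio>

    int main() {
      // The values from the figure; the distribution across processes
      // does not change the mathematical result.
      const double x[] = { 0.0, 1.1, 2.2, 3.3, 4.4, 5.5 };
      const double y[] = { 0.9, 0.8, 0.7, 0.6, 0.5, 0.4 };
      double d = 0.0;
      for (int i = 0; i < 6; ++i) d += x[i] * y[i];
      std::printf("d = %g\n", d);  // prints d = 8.8
      return 0;
    }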
Slide 13: Rule #1: Owner Computes

    double vector::dot( vector& y ) {
      // initialize
      double* yData = new double[localSize];
      y.requestData( localSize, local2global, yData );

      // sum all x[i] * y[i]
      double localSum = 0.0;
      for( int i = 0; i < localSize; ++i ) {
        localSum += localData[i] * yData[i];
      }

      // cleanup
      delete[] yData;
      return localMPIComm.globalSum( localSum );
    }
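For concreteness, this is what the arguments to requestData() would look like on process p1 of Example #1 under the block distribution of slide 11; the names mirror the slide's pseudocode and the values are illustrative:

    #include <cstdio>

    int main() {
      // Hypothetical view from process p1 of Example #1: p1 owns x[2]
      // and x[3] under the block distribution shown on slide 11.
      const int localSize = 2;
      const int local2global[] = { 2, 3 };     // local index -> global index
      const double localData[] = { 2.2, 3.3 }; // p1's locally owned entries of x
      for (int i = 0; i < localSize; ++i)
        std::printf("local %d -> global %d : %g\n",
                    i, local2global[i], localData[i]);
      return 0;
    }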
Slide 14: Design Concerns

    y.requestData( localSize, local2global, yData );

- vector y is not guaranteed to have its data mapped appropriately for the dot product
- vector y is expected to handle the MxN data redistribution internally
  - Should each component have to implement MxN redistribution?
Slide 15: Outline (repeated as a section divider)
Slide 16: Vector Dot Product: Take #2

    double vector::dot( vector& y ) {
      // initialize
      MUX mux( *this, y );
      double* yData = mux.requestData( localSize, local2global );

      // sum all x[i] * y[i]
      double localSum = 0.0;
      for( int i = 0; i < localSize; ++i ) {
        localSum += localData[i] * yData[i];
      }

      // cleanup
      mux.releaseData( yData );
      return localMPIComm.globalSum( localSum );
    }
Slide 17: Generalized Vector Ops

    result vector::parallelOp( vector& y ) {
      // initialize
      MUX mux( *this, y );
      vector newY = mux.requestData( localSize, local2global );

      // problem reduced to a local operation
      result localResult = this->localOp( newY );

      // cleanup
      mux.releaseData( newY );
      return localMPIComm.reduce( localResult );
    }
Slide 18: Rule #2: The MUX Distributes the Data
- Users invoke parallel operations without concern for data distribution
- Developers implement the local operation, assuming the data is already distributed
- Babel generates the code that reduces a parallel operation to a local operation
- The MUX handles all communication
- How general is a MUX? (see the sketch below)
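Gathering the calls used on slides 16-17, the sketch below reconstructs the MUX interface they imply, with a toy single-process body standing in for real MxN communication. The class is a hypothetical stand-in, not the actual component:

    #include <utility>
    #include <vector>

    // Toy, single-process stand-in for the MUX. A real MUX would perform
    // MxN communication; here "redistribution" is a gather from y's
    // storage by global index.
    class Vector {
    public:
      explicit Vector(std::vector<double> data) : data_(std::move(data)) {}
      double atGlobal(int g) const { return data_[g]; }
    private:
      std::vector<double> data_;
    };

    class MUX {
    public:
      MUX(Vector& x, Vector& y) : y_(y) { (void)x; } // x fixes the target layout
      // Fetch y's entries for the caller's locally owned global indices.
      double* requestData(int localSize, const int* local2global) {
        double* buf = new double[localSize];
        for (int i = 0; i < localSize; ++i)
          buf[i] = y_.atGlobal(local2global[i]);
        return buf;
      }
      void releaseData(double* data) { delete[] data; }
    private:
      Vector& y_;
    };

    int main() {
      Vector x({2.2, 3.3}), y({0.9, 0.8, 0.7, 0.6, 0.5, 0.4});
      const int local2global[] = {2, 3};   // this rank "owns" globals 2 and 3
      MUX mux(x, y);
      double* yData = mux.requestData(2, local2global); // yields {0.7, 0.6}
      mux.releaseData(yData);
      return 0;
    }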
Slide 19: Example #2: Undirected Graph
[Figure: a 12-vertex undirected graph (vertices 0-11) partitioned across processes, with boundary vertices replicated as ghost nodes in neighboring partitions]
Slide 20: Key Observations
- Every parallel component is a container and is divisible into subsets
- There is a minimal (atomic) addressable unit in each parallel component
- This minimal unit is addressable by global indices
Slide 21: Atomicity
- Vector (Example #1):
  - atom: scalar
  - addressable by: integer offset
- Undirected graph (Example #2):
  - atom: vertex with ghost nodes
  - addressable by: integer vertex id
- Undirected graph (alternate):
  - atom: edge
  - addressable by: ordered pair of integers
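A hypothetical set of C++ type definitions makes the three atom/address pairings concrete (illustration only, not part of the proposed interfaces):

    #include <utility>
    #include <vector>

    // Vector (Example #1): atom = scalar, addressed by integer offset.
    using VectorAtom    = double;
    using VectorAddress = int;

    // Undirected graph (Example #2): atom = vertex with its ghost nodes,
    // addressed by integer vertex id.
    struct VertexAtom {
      int id;                      // global vertex id (the address)
      std::vector<int> neighbors;  // adjacency, including ghost copies
    };
    using VertexAddress = int;

    // Undirected graph (alternate): atom = edge, addressed by an ordered
    // pair of integers.
    using EdgeAtom    = std::pair<int, int>;
    using EdgeAddress = std::pair<int, int>;

    int main() {
      VertexAtom v{3, {2, 4, 5}};  // vertex 3 with neighbors 2, 4, 5
      EdgeAddress e{3, 4};         // the edge (3,4)
      return (v.id == e.first) ? 0 : 1;
    }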
Slide 22: Outline (repeated as a section divider)
Slide 23: MxNRedistributable Interface

    interface Serializable {
      store( in Stream s );
      load( in Stream s );
    };

    interface MxNRedistributable extends Serializable {
      int getGlobalSize();
      local int getLocalSize();
      local array<int> getLocal2Global();
      split( in array<int> maskVector,
             out array<MxNRedistributable> pieces );
      merge( in array<MxNRedistributable> pieces );
    };
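The following C++ sketch shows one plausible implementation of the split()/merge() contract for the 1-D vector of Example #1. It is an analogue for illustration, not Babel-generated code, and the mask convention (maskVector[i] names the destination piece of local atom i) is an assumption:

    #include <algorithm>
    #include <utility>
    #include <vector>

    class RedistVector {
    public:
      RedistVector(std::vector<int> l2g, std::vector<double> data)
        : local2global_(std::move(l2g)), data_(std::move(data)) {}

      int getLocalSize() const { return static_cast<int>(data_.size()); }
      const std::vector<int>& getLocal2Global() const { return local2global_; }

      // split: maskVector[i] names the destination piece of local atom i.
      std::vector<RedistVector> split(const std::vector<int>& maskVector) const {
        int nPieces = 0;
        for (int m : maskVector) nPieces = std::max(nPieces, m + 1);
        std::vector<RedistVector> pieces(nPieces, RedistVector({}, {}));
        for (int i = 0; i < getLocalSize(); ++i) {
          pieces[maskVector[i]].local2global_.push_back(local2global_[i]);
          pieces[maskVector[i]].data_.push_back(data_[i]);
        }
        return pieces;
      }

      // merge: absorb pieces, keeping each atom's global address.
      void merge(const std::vector<RedistVector>& pieces) {
        for (const RedistVector& p : pieces) {
          local2global_.insert(local2global_.end(),
                               p.local2global_.begin(), p.local2global_.end());
          data_.insert(data_.end(), p.data_.begin(), p.data_.end());
        }
      }

    private:
      std::vector<int> local2global_;  // global address of each local atom
      std::vector<double> data_;       // the atoms themselves (scalars)
    };

    int main() {
      RedistVector v({0, 1, 2, 3}, {0.0, 1.1, 2.2, 3.3});
      auto pieces = v.split({0, 0, 1, 1}); // atoms 0-1 -> piece 0, 2-3 -> piece 1
      RedistVector w({}, {});
      w.merge(pieces);                     // w now holds all four atoms
      return (w.getLocalSize() == 4) ? 0 : 1;
    }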
Slide 24: Rule #3: All Parallel Components Implement "MxNRedistributable"
- Provides a standard interface for the MUX to manipulate the component
- Minimal coding requirements on the developer
- Key to the abstraction:
  - split()
  - merge()
- Manipulates "atoms" by global address
Slide 25: Now for the hard part... 13 slides illustrating how it all fits together for an undirected graph.
Slide 26:

    %> mpirun -np 2 graphtest

[Figure: two client processes, pid=0 and pid=1, start up; the 12-vertex example graph is shown for reference]
Slide 27:

    BabelOrb * orb = BabelOrb.connect( "http://..." );

[Figure: both client processes obtain an orb handle connected to the remote server's processes]
Slide 28:

    Graph * graph = orb->create( "graph", 3 );

[Figure: the orb creates a remote graph component distributed across three server processes, fronted by a MUX]
Slide 29:

    graph->load( "file://..." );

[Figure: the load call fans out through "fancy MUX routing" to the three server-side graph pieces, each holding a subgraph of vertices 0-11 with ghost nodes]
Slide 30:

    graph->doExpensiveWork();

[Figure: the expensive work executes in parallel on the three server-side graph pieces]
Slide 31:

    PostProcessor * pp = new PostProcessor;

[Figure: a PostProcessor component is instantiated on the client side, alongside the existing orb and graph handles]
Slide 32:

    pp->render( graph );

- The MUX queries the graph for its global size (12)
- The graph determines a particular data layout (blocked: vertices 0-5 on one client process, 6-11 on the other - the "require" layout)
- The MUX is invoked to guarantee that layout before the render implementation is called
Slide 33: The MUX solves a general parallel network flow problem (client & server)
[Figure: matching the current server-side layout against the required client-side layout (0-5 | 6-11) yields the transfer pieces 0,1,2,3 and 4,5 and 6,7 and 8,9,10,11]
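A toy version of the schedule computation behind that flow problem: intersect each server piece's owned global indices with each client's required indices. The server layout below is hypothetical (chosen so the output reproduces the four pieces on the slide); the client layout is the blocked 0-5 / 6-11 requirement from slide 32:

    #include <algorithm>
    #include <cstdio>
    #include <iterator>
    #include <vector>

    using IndexSet = std::vector<int>;

    IndexSet intersect(IndexSet a, IndexSet b) {
      std::sort(a.begin(), a.end());
      std::sort(b.begin(), b.end());
      IndexSet out;
      std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                            std::back_inserter(out));
      return out;
    }

    int main() {
      // Hypothetical current server layout (3 pieces) and the required
      // client layout (2 pieces) from slide 32.
      std::vector<IndexSet> server = { {0,1,2,3}, {4,5,6,7}, {8,9,10,11} };
      std::vector<IndexSet> client = { {0,1,2,3,4,5}, {6,7,8,9,10,11} };
      for (size_t s = 0; s < server.size(); ++s)
        for (size_t c = 0; c < client.size(); ++c) {
          IndexSet piece = intersect(server[s], client[c]);
          if (piece.empty()) continue;   // no pipe needed for this pair
          std::printf("server %zu -> client %zu:", s, c);
          for (int g : piece) std::printf(" %d", g);
          std::printf("\n");
        }
      return 0;
    }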
Slide 34: The MUX opens communication pipes
[Figure: pipes are opened between the server-side graph pieces and the client processes, one per transfer piece]
Slide 35: The MUX splits graphs with multiple destinations (server side)
[Figure: each server-side piece whose atoms are bound for more than one client process is split() into per-destination subgraphs]
Slide 36: The MUX sends the pieces through the communication pipes (persistence)
[Figure: the split subgraphs are serialized via store() and streamed through the pipes]
Slide 37: The MUX receives the graphs through the pipes & assembles them (client side)
[Figure: each client process load()s its incoming subgraphs and merge()s them into the required blocked layout]
Slide 38:

    pp->render_impl( graph );

(The user's implementation runs.)
[Figure: render_impl executes on the two client processes, each now holding its required half of the graph]
Slide 39: Outline (repeated as a section divider)
Slide 40: Summary
- All distributed components are containers and are subdividable
- The smallest globally addressable unit is an atom
- The MxNRedistributable interface reduces the general component MxN problem to a 1-D array of ints
- The MxN problem is a special case of the more general problem of N handles to M instances
- Babel is uniquely positioned to contribute a solution to this problem
Slide 41: Outline (repeated as a section divider)
Slide 42: Tentative Research Strategy

Fast Track:
- Java only, no Babel (serialization & RMI built in)
- Build the MUX
- Experiment
- Write paper

Sure Track:
- Finish the 0.5.x line
- Add serialization
- Add RMI
- Fold in technology from the Fast Track
Slide 43: Open Questions
- Non-general, optimized solutions
- Client-side caching issues
- Fault tolerance
- Subcomponent migration
- Inter- vs. intra-component communication
- MxN, MxP, or MxPxQxN?
Slide 44: The MxPxQxN Problem
[Figure: two parallel components separated by a long-haul network; the MxN problem becomes MxPxQxN when the data must be staged through intermediate endpoints on either side of the wide-area link]
Slide 46: The End
Slide 47: Work performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory, under Contract W-7405-Eng-48. UCRL-VG-142096.