Denis Caromel, et al. http://ProActive.ObjectWeb.org
OASIS Team, INRIA, CNRS, I3S, Univ. of Nice Sophia-Antipolis, IUF

Open Source Middleware for Grid and Parallelism:
1. GRIDs and Parallelism
2. Active Objects + Groups
3. Components
4. Deployment
5. GUI
6. Applications (BLAST)
ProActive Parallel Suite (1): Open Source + Professional Support
1. The GRIDs. PCs: 1 billion in 2002 (after 25 years); forecast: 2 billion in 2008.
The Grid Concept: GRID = electric network, computer power (CPU cycles) = electricity. It can hardly be stored, and if not used, it is lost; hence global management and mutual sharing of the resource. Two important aspects: 1. diversity; 2. computational + data Grids.
Enterprise Grids: Internet, EJBs, Servlets, Apache, databases
Scientific Grids: Internet, clusters, parallel machines, large equipment
Internet Grids: job management for embarrassingly parallel applications (e.g. SETI)
Intranet Grids / Desktop Grids: using spare cycles of desktops. Potentially: non-embarrassingly parallel applications, across several sites. How to program? How to deploy?
The multiple GRIDs: Scientific Grids, Enterprise Grids, Internet Grids (miscalled "P2P grids"), Intranet Desktop Grids. Strong convergence needed, and somehow in progress! Two new disturbing factors: the end of Moore's Law, and multi-cores. For SBGrid, the multiple scientific GRIDs: 2D and 3D graphics SB workstations, EM laboratory clusters, global grids and other federally funded equipment.
Grid Computing with ProActive: hierarchical deployment across Boston, Nice, Beijing, Shanghai. Challenges: programming model, scale, latency, heterogeneity, versatility (FT, protocols, firewalls, etc.).
2. Distributed and Parallel Objects: ProActive Programming
ProActive Parallel Suite
ProActive: Active Objects

A ag = newActive("A", [...], VirtualNode);
V v1 = ag.foo(param);
V v2 = ag.bar(param);
...
v1.bar(); // Wait-By-Necessity

Wait-By-Necessity is a dataflow synchronization. At runtime, the caller's JVM holds a proxy (a plain Java object) on the active object; the active object has its own thread serving a request queue, and v1, v2 are future objects, filled in when the requests complete.
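The call pattern above can be sketched in plain Java. This is a minimal illustration of the wait-by-necessity idea using java.util.concurrent, not the ProActive runtime; the ActiveA class and its methods are hypothetical stand-ins:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal sketch of wait-by-necessity (NOT the ProActive runtime):
// a method call returns immediately with a future; the caller blocks
// only when it actually touches the result.
public class WaitByNecessityDemo {

    // Hypothetical "active object": one thread serves queued requests.
    static class ActiveA {
        private final ExecutorService body = Executors.newSingleThreadExecutor();

        Future<Integer> foo(int x) {      // asynchronous request
            return body.submit(() -> {    // queued, served by the body thread
                Thread.sleep(100);        // simulate work
                return x * 2;
            });
        }

        void terminate() { body.shutdown(); }
    }

    public static void main(String[] args) throws Exception {
        ActiveA ag = new ActiveA();
        Future<Integer> v1 = ag.foo(21);  // returns immediately
        // ... the caller keeps computing here ...
        int r = v1.get();                 // wait-by-necessity: block on first use
        System.out.println(r);            // prints 42
        ag.terminate();
    }
}
```

The single body thread serves requests one at a time, mirroring an active object's request queue; the caller only blocks at v1.get(), the first point where the value is actually needed.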
ProActive: Explicit Synchronizations

A ag = newActive("A", [...], VirtualNode);
V v = ag.foo(param);
...
v.bar(); // Wait-by-necessity

Explicit synchronization:
ProActive.isAwaited(v);    // test if available
ProActive.waitFor(v);      // wait until available
Vectors of futures:
ProActive.waitForAll(vector);  // wait for all
ProActive.waitForAny(vector);  // get the first
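These four primitives map onto idioms familiar from standard Java futures. A sketch of the same four synchronizations with CompletableFuture follows; it is an analogy only, since the ProActive calls operate on transparent futures, which CompletableFuture is not:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// The four ProActive synchronizations sketched with standard futures
// (an analogy, not the ProActive API).
public class SyncDemo {
    public static void main(String[] args) {
        CompletableFuture<Integer> v = CompletableFuture.supplyAsync(() -> 42);

        boolean ready = v.isDone();   // ~ ProActive.isAwaited(v): test, no blocking
        int r = v.join();             // ~ ProActive.waitFor(v): block until available

        List<CompletableFuture<Integer>> vec = List.of(
                CompletableFuture.supplyAsync(() -> 1),
                CompletableFuture.supplyAsync(() -> 2));

        // ~ ProActive.waitForAll(vector): wait for every future
        CompletableFuture.allOf(vec.toArray(new CompletableFuture[0])).join();

        // ~ ProActive.waitForAny(vector): value of whichever finished first
        Object first = CompletableFuture.anyOf(vec.toArray(new CompletableFuture[0])).join();

        System.out.println(r + " " + vec.get(0).join() + " " + vec.get(1).join());
        // prints 42 1 2
    }
}
```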
Wait-By-Necessity: First-Class Futures

V = b.bar();
c.gee(V);  // V passed to c before being available

Futures are global single-assignment variables: a future returned by b can be sent to a third object c as a plain parameter, and the value flows to c automatically once computed.
Proofs in GREEK
Standard system at runtime: no sharing. (NoC: Network on Chip)
Typed Asynchronous Groups
Creating AO and Groups

A ag = newActiveGroup("A", [...], VirtualNode);
V v = ag.foo(param);
...
v.bar(); // Wait-by-necessity

A typed group of Java or active objects. Group, type, and asynchrony are crucial for components and the GRID.
Broadcast and Scatter

ag.bar(cg);                      // broadcast cg
ProActive.setScatterGroup(cg);
ag.bar(cg);                      // scatter cg

Broadcast is the default behavior when a group is used as a parameter: every member of ag receives the whole group cg (c1, c2, c3). After setScatterGroup, the elements of cg are scattered across the members, depending on rankings.
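Broadcast vs. scatter semantics can be sketched with ordinary Java collections; the Worker and WorkerImpl classes below are hypothetical stand-ins for a typed group and its members, not ProActive's group proxy:

```java
import java.util.List;

// Broadcast vs. scatter on a typed group, sketched with plain lists
// (Worker/WorkerImpl are hypothetical stand-ins, not ProActive's group proxy).
public class GroupDemo {
    interface Worker { void bar(List<String> data); }

    static class WorkerImpl implements Worker {
        final String name;
        final StringBuilder log = new StringBuilder();
        WorkerImpl(String name) { this.name = name; }
        public void bar(List<String> data) { log.append(name).append("<-").append(data); }
    }

    // Broadcast (the default): every member receives the whole group parameter.
    static void broadcast(List<Worker> group, List<String> cg) {
        for (Worker w : group) w.bar(cg);
    }

    // Scatter: member i receives the i-th element of the parameter, by rank.
    static void scatter(List<Worker> group, List<String> cg) {
        for (int i = 0; i < group.size(); i++)
            group.get(i).bar(List.of(cg.get(i % cg.size())));
    }

    public static void main(String[] args) {
        WorkerImpl w1 = new WorkerImpl("w1");
        WorkerImpl w2 = new WorkerImpl("w2");
        List<Worker> ag = List.of(w1, w2);

        broadcast(ag, List.of("c1", "c2"));
        System.out.println(w1.log + " | " + w2.log); // prints w1<-[c1, c2] | w2<-[c1, c2]
    }
}
```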
Dynamic Dispatch Group

ag.bar(cg);

With a dynamic dispatch group, the elements c0..c9 of cg are handed out to the members of ag on demand, so the fastest workers process more elements than the slowest.
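One common way to realize this dispatch policy is a shared work queue that members pull from, so faster members naturally take more items. A sketch under that assumption, not ProActive's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Dynamic dispatch sketched as a shared work queue: each member pulls its
// next item on demand, so faster members process more (an assumption about
// the mechanism, not ProActive's actual code).
public class DynamicDispatchDemo {

    public static Map<String, Integer> dispatch(List<Integer> items, int workers)
            throws InterruptedException {
        Queue<Integer> queue = new ConcurrentLinkedQueue<>(items);
        Map<String, Integer> processed = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int w = 0; w < workers; w++) {
            final String name = "worker" + w;
            final long delay = (w + 1) * 5L;            // worker0 is the fastest
            pool.submit(() -> {
                Integer item;
                while ((item = queue.poll()) != null) { // pull next item on demand
                    try { Thread.sleep(delay); } catch (InterruptedException e) { return; }
                    processed.merge(name, 1, Integer::sum);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return processed;
    }

    public static void main(String[] args) throws Exception {
        List<Integer> cg = new ArrayList<>();
        for (int i = 0; i < 20; i++) cg.add(i);
        Map<String, Integer> counts = dispatch(cg, 3);
        System.out.println(counts); // faster workers typically show higher counts
    }
}
```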
ProActive Parallel Suite
3. Components for the Grid: parallel, distributed, hierarchical. Composing and wrapping code to make it fit for the Grid.
A Provide/Require Component: a business component exposes component interfaces, event sources and sinks, and attributes; its references are either OFFERED or REQUIRED. (Courtesy of Philippe Merle, Lille, OpenCCM platform.)
Provide + use, but flat assembly: building component applications = assembling components with an XML ADL.
Objects to Distributed Components (1)

ComponentIdentity Cpt = newActiveComponent(params);
A a = Cpt ... .getFcInterface("interfaceName");
V v = a.foo(param);

Example of a component instance: truly distributed components, built on typed groups of Java or active objects. IoC: Inversion of Control (set in XML).
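The named-interface lookup in the snippet above can be sketched with a tiny registry. The classes below are hypothetical stand-ins, loosely modeled on Fractal's getFcInterface, not the ProActive/Fractal runtime:

```java
import java.util.HashMap;
import java.util.Map;

// Named-interface lookup sketched as a tiny registry
// (hypothetical classes, loosely modeled on Fractal's getFcInterface).
public class ComponentDemo {
    interface A { int foo(int x); }

    // A minimal "component identity": a registry of named server interfaces.
    static class ComponentIdentity {
        private final Map<String, Object> itfs = new HashMap<>();
        void bind(String name, Object itf) { itfs.put(name, itf); }
        Object getFcInterface(String name) { return itfs.get(name); }
    }

    public static void main(String[] args) {
        ComponentIdentity cpt = new ComponentIdentity();
        cpt.bind("interfaceName", (A) x -> x + 1);        // offered (server) interface
        A a = (A) cpt.getFcInterface("interfaceName");    // client-side lookup
        System.out.println(a.foo(41));                    // prints 42
    }
}
```

The point of the indirection is that clients depend only on interface names and types, so the implementation behind an interface can be remote, replicated, or swapped without changing the client.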
Groups in Components (1): broadcast at binding, on the client interface; at composition, on the composite inner server interface. A parallel component!
GCM: Grid Component Model. A strong Grid standard with ETSI (European Telecommunications Standards Institute).
GCM Scopes and Objectives: Grid codes that compose and deploy. No programming, no scripting, ... no pain. Innovations: abstract deployment, composite components, multicast and gathercast.
GCM features provided in ProActive: GCM Fractal ADL (XML, Architecture Description Language); GCM management (Java, C, WSDL APIs); GCM application description (XML); GCM interoperability deployment (XML).
How to use GCM & Components for SBGrid?
Description of a piece of code to make it fit for the Grid (& scheduling):
- Required: hardware, OS, configuration
- Input needed
- Output produced
Optional, on a case-by-case basis:
- Wrapping the code in a portable manner, adding necessary control
- Description of services offered and required (CP + services)
- Making it possible to compose SBGrid software
ProActive Parallel Suite
4. Deployment
How to deploy on the various kinds of Grids? Scientific Grids (clusters, parallel machines, large equipment), Internet Grids (job management for embarrassingly parallel applications, e.g. SETI), and Enterprise Grids (Servlets, EJBs, databases).
Abstract Deployment Model
Problem: difficulties and lack of flexibility in deployment; avoid scripting for configuration, getting nodes, connecting, etc.
A key principle: Virtual Node (VN) + XML deployment file. Abstracted away from source code, and wrapped: machines, creation protocols, lookup and registry protocols.
Protocols and infrastructures: Globus, ssh, rsh, LSF, PBS, ..., Web Services, WSRF, ...
Abstract deployment model: separates design from the deployment infrastructure. The source code sees only a virtual architecture of Virtual Nodes (VN1, VN2); the GCM XML deployment descriptor maps them at runtime onto the physical infrastructure: host names, creation protocols, lookup and registration, multi-cores (e.g. JVM2 to JVM5 running on computer2 and computer3). Deployment is enacted dynamically, from the application.
Data management: file transfer, integrated with deployment.
- Resource-acquisition file transfer when available: Unicore, NorduGrid, Globus
- Other non-deployment protocols: scp, rcp, ...
- ProActive's own protocol when the others fail: ProActive failsafe file transfer
ProActive supports the following transfers:
- Push: to a remote node
- Pull: from a remote node
- Triangle: triggered from node A, occurring between nodes B and C
It can be used at:
- Deployment time (XML)
- Execution time, at any point (API)
- Retrieval time (XML)
File Transfer in Deployment Descriptors
API: Asynchronous File Transfer with Futures
- Immediately returns a File future to the caller (subject to wait-by-necessity)
- VN retrieve: pullFile is invoked, and an array of futures (File[]) is returned
- Futures increase performance when transferring files between peers: outperforms rcp when pushing while pulling a file!
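The pull-with-future pattern can be sketched with standard Java NIO; pullFile below is a hypothetical helper returning a CompletableFuture, not the ProActive API (which returns its own transparent futures):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.concurrent.CompletableFuture;

// Pull-style asynchronous file transfer returning a future (pullFile is a
// hypothetical helper; ProActive's API returns its own transparent futures).
public class AsyncFileTransfer {

    // Returns immediately; the copy runs in the background, and the caller
    // blocks only when it needs the resulting file (wait-by-necessity style).
    public static CompletableFuture<Path> pullFile(Path remote, Path local) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                return Files.copy(remote, local, StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
    }

    public static void main(String[] args) throws Exception {
        Path src = Files.writeString(Files.createTempFile("src", ".txt"), "data");
        Path dst = Files.createTempDirectory("pull").resolve("copy.txt");
        CompletableFuture<Path> f = pullFile(src, dst); // returns at once
        // ... other work can happen here while the transfer proceeds ...
        System.out.println(Files.readString(f.join())); // prints data
    }
}
```

Because the caller gets the future back immediately, it can start pushing one file while still pulling another, which is the overlap the slide credits for outperforming rcp.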
Performance Comparison
Setup: 100 Mbit LAN with a 0.25 ms ping, network configured at 10 Mbit/s duplex.
First, ProActive failsafe and native scp perform the same. Second, ProActive deploys the files to the nodes in parallel, faster than scp: several invocations of push, on a set of nodes, are executed in parallel by those nodes.
GUI in ProActive Parallel Suite
5. IC2D: Interactive Control & Debug for Distribution. An Eclipse GUI for the GRID.
Programmer Interface for Analyzing and Optimizing
IC2D
TimIt: Automatic Timers in IC2D. Profile your application in real time.
Time Line
Scaling Up Graphical Views: 120 Active Objects on a Single Host
Scaling Up Graphical Views: Master/Workers on 26 Hosts
Pies for Analysis and Optimization
With Summary Report
Scheduler and Resource Manager: User Interface
Scheduler: User Interface
Scheduler: Resource Manager Interface
Ongoing Work: 3D View in IC2D
Ongoing Work: Integration with JMX. JConsole: monitoring heap, threads, CPU over 60 hours.
6. Examples of ProActive Applications
Parallel BLAST with ProActive (1), together with Mario Leyton
- BLAST: Basic Local Alignment Search Tool for rapid sequence comparison, developed by NCBI (National Center for Biotechnology Information)
- Standard native code package, no source modification!
- With PPS skeletons, parallelization and distribution are added to the application
- Seamless deployment on all Grid platforms: input files are automatically copied to computational nodes at job submission; result files are copied back to the client host
- BLAST skeleton program using the Divide and Conquer skeleton: division of the database based on conditions (number of nodes, size, etc.)
Parallel BLAST with ProActive (2): BLAST skeleton program using the Divide and Conquer skeleton

Skeleton root;

/* Format the query and database files */
Pipe formatFork = new Pipe(new ExecuteFormatDB(),
                           new ExecuteFormatQuery());

/* Blast a database:
 * 2.1 Format the database
 * 2.2 Blast the database */
Pipe blastPipe = new Pipe(formatFork, new Seq(new ExecuteBlast()));

/* 1 Divide the database
 * 2 Blast the database with the query
 * 3 Conquer the query results */
root = new DaC(new DivideDB(), new DivideDBCondition(),
               blastPipe, new ConquerResults());
Distributed BLAST Overheads
Speedup of Distributed BLAST on Grid5000
Sylvain Cussat-Blanc, Yves Duthen (IRIT, Toulouse): Artificial Life Generation, development of artificial creatures.
Initial application: 1 PC, 56h52 => crash!
ProActive version: 300 CPUs, 19 minutes
JECS: 3D Electromagnetism. Radar Reflection on Planes.
Code Coupling: Vibro-Acoustics (courtesy of EADS)
N-Body Particles
Monte Carlo Simulations, Non-Linear Physics, INLN
Scilab Grid Toolkit
Mikros Image: Post-Production
Summary
Multi-Core to Distributed: Concurrency + Parallelism, Multi-Cores + Distribution
Conclusion: Why does it scale? Thanks to a few key features: connection-less, unified RMI+JMS; messages rather than long-living interactions.
Conclusion: Why does it compose? Thanks to a few key features: because it scales (asynchrony!); because it is typed (RMI with interfaces!); first-class futures: no unstructured callbacks and ports.
Summary & Perspective: a comprehensive toolkit; legacy wrapping + parallelism and distribution for the Grid.