2000-2001 NPACI Alpha Project Review: Cellular Microphysiology on the Data Grid
Fran Berman, UCSD
Tom Bartol, Salk Institute
"MCell" Alpha Project
Project leaders: Terry Sejnowski, Salk Institute; Fran Berman, UCSD
Senior participants:
– Tom Bartol, Salk Institute
– Joel Stiles, CMU (leveraged)
– Edwin Salpeter, Cornell (leveraged)
– Jack Dongarra and Rich Wolski, U. of Tennessee
– Mark Ellisman, UCSD NCMIR
– Henri Casanova, UCSD CSE (leveraged)
MCell Alpha Goals
General goal: implementation and deployment of MCell, a general Monte Carlo simulator of cellular microphysiology, using NPACI high-performance and distributed resources.
Specific goals:
1. Develop a Grid-enabled version of MCell available to all MCell and NPACI users
2. Develop an MPI/OpenMP version suitable for MPP platforms such as Blue Horizon
3. Perform the large-scale runs necessary for new disciplinary results
4. Extend the prototype and tech-transfer the APST user-level middleware for deploying MCell and similar parameter sweep applications to NPACI partners
MCell Alpha Project
Previous accomplishments:
– Prototype of Grid-enabled MCell code developed (via APST Grid middleware); the software integrates NetSolve, AppLeS, and NWS
– Initial MCell runs performed on Blue Horizon
Agenda for this presentation:
– Tom: What is MCell and what are its computational requirements?
– Fran: How do we develop software for performance-efficient distributed MCell runs?
– FY 00-01 plans and feedback
Tom’s Presentation
Grid-enabled MCell
Previous work:
– Developed a prototype of APST (AppLeS Parameter Sweep Template), which can be used to deploy MCell in wide-area Grid environments; includes a mechanism for targeting available services at remote resources (NetSolve, Globus, GASS, IBP, NWS)
– Developed a Grid MCell performance model
– Developed performance-efficient, Grid-oriented scheduling heuristics for MCell
NPACI Alpha Project goals:
1. Develop a Grid-enabled version of MCell with enhanced scheduling algorithms, I/O, and data storage model, targeted to MCell users and NPACI resources
2. Extend the prototype and tech-transfer the APST user-level middleware for deploying MCell and similar parameter sweep applications to NPACI partners
3. Develop an MPI/OpenMP version of MCell suitable for MPP platforms such as Blue Horizon
4. Perform the large-scale runs necessary for new disciplinary results
Grid-enabled MCell
Have performed initial wide-area MCell runs using the APST prototype.
APST:
– APST = AppLeS Parameter Sweep Template
– MCell used as the driving application
– Developed as user-level Grid middleware for scheduling and deploying MCell and other parameter sweep applications
– Joint work with Henri Casanova
– Research supported by NASA and NSF
Scheduling Issues for MCell
– Large shared files may complicate the scheduling process
– Post-processing must minimize file transfer time
– Adaptive scheduling is necessary to account for a dynamic environment
Scheduling Approach Used for MCell
Contingency scheduling: the allocation is developed by dynamically generating a Gantt chart that schedules unassigned tasks between scheduling events.
Basic skeleton (see the sketch below):
1. Compute the next scheduling event
2. Create a Gantt chart G
3. For each computation and file transfer currently underway, compute an estimate of its completion time and fill in the corresponding slots in G
4. Select a subset T of the tasks that have not started execution
5. Until each host has been assigned enough work, heuristically assign tasks to hosts, filling in slots in G
6. Implement the schedule
[Figure: Gantt chart G — resources (network links and hosts in Clusters 1 and 2) versus time, with computation and transfer slots filled in between scheduling events.]
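A minimal, self-contained Python sketch of this skeleton follows. The fixed scheduling interval, host speeds, task sizes, and the greedy earliest-completion-time rule used in step 5 are illustrative assumptions, not APST's actual implementation.

# Minimal sketch of the contingency-scheduling skeleton above (assumptions noted inline).
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    speed: float                 # relative compute speed (illustrative)
    busy_until: float = 0.0      # steps 2-3: completion time of work already underway

@dataclass
class Task:
    name: str
    work: float                  # abstract work units (illustrative)

def next_event(now, interval=60.0):
    # Step 1: assume scheduling events occur at a fixed interval.
    return now + interval

def schedule_between_events(hosts, pending, now):
    """Steps 2-6: fill per-host time slots with unassigned tasks until every
    host has enough work to keep it busy up to the next scheduling event."""
    horizon = next_event(now)
    candidates = list(pending)       # step 4: subset T (here simply all pending tasks)
    plan = []
    # Step 5: greedily give the next task to the host that would finish it first
    # (a stand-in for the Gantt-chart heuristics discussed later).
    while candidates and any(h.busy_until < horizon for h in hosts):
        task = candidates.pop(0)
        host = min(hosts, key=lambda h: max(h.busy_until, now) + task.work / h.speed)
        finish = max(host.busy_until, now) + task.work / host.speed
        host.busy_until = finish     # fill in the corresponding slot in G
        plan.append((task.name, host.name, finish))
    return plan                      # step 6: the plan is handed over for execution

hosts = [Host("cluster1-node", speed=1.0, busy_until=10.0), Host("cluster2-node", speed=2.0)]
tasks = [Task(f"mcell-task-{i}", work=30.0) for i in range(6)]
print(schedule_between_events(hosts, tasks, now=0.0))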
[Architecture diagram: the APST/MCell user-level middleware sits between the APST/MCell Client and the Grid resources and middleware (NetSolve, Globus, Legion, NWS, Ninf, IBP, Condor). The Client's Controller and command-line client interact with the APST/MCell Daemon, which contains a Scheduler (workqueue, workqueue++, and the Gantt-chart heuristic algorithms MinMin, MaxMin, Sufferage, XSufferage), an Actuator, and a Metadata Bookkeeper. The Scheduler triggers the Actuator and the Bookkeeper, which store, retrieve, and report through four internal APIs: a transport API (GASS, IBP, NFS), an execution API (GRAM, NetSolve, Condor, Ninf, Legion, ...), a metadata API (NWS), and a scheduler API.]
A sketch of this API layering follows below.
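The four internal APIs in the diagram suggest a natural layering between the Scheduler, the Actuator, and the Grid services underneath. The following Python sketch shows how such pluggable interfaces might look; the interface and method names are assumptions for exposition, not APST's actual code.

# Illustrative sketch of the four pluggable APIs named in the diagram.
# All interface and method names are assumptions, not APST's real API.
from typing import Protocol

class TransportAPI(Protocol):        # bound to GASS, IBP, or NFS in the diagram
    def stage_in(self, src: str, host: str) -> str: ...
    def stage_out(self, host: str, dst: str) -> None: ...

class ExecutionAPI(Protocol):        # bound to GRAM, NetSolve, Condor, Ninf, Legion, ...
    def launch(self, host: str, command: str) -> str: ...
    def is_done(self, job_id: str) -> bool: ...

class MetadataAPI(Protocol):         # bound to NWS forecasts in the diagram
    def predict_bandwidth(self, src: str, dst: str) -> float: ...
    def predict_cpu_availability(self, host: str) -> float: ...

class SchedulerAPI(Protocol):        # workqueue or one of the Gantt-chart heuristics
    def plan(self, tasks: list, hosts: list, meta: MetadataAPI) -> list: ...

def daemon_step(sched: SchedulerAPI, exe: ExecutionAPI, io: TransportAPI,
                meta: MetadataAPI, tasks: list, hosts: list) -> list:
    """One pass of a hypothetical daemon loop: the scheduler plans, then the
    actuator stages input files and launches each task on its chosen host."""
    job_ids = []
    for host, input_file, command in sched.plan(tasks, hosts, meta):
        local_path = io.stage_in(input_file, host)                   # transport API
        job_ids.append(exe.launch(host, f"{command} {local_path}"))  # execution API
    return job_ids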
MCell Computational Challenges
– Support for large-scale distributed MCell runs
– Support for large-scale parallel MCell runs
– Execution of large-scale runs
– Tech transfer of APST for NPACI parameter sweep application developers
User-level middleware facilitates use of the Grid for a wider class of users; MCell algorithm and software development allows for new disciplinary results.
FY 00-01 Plans
Develop Grid-enabled MCell:
– Optimize the scheduling strategy
– Increase the sensitivity of the model to environmental constraints (data storage, I/O, post-processing, resource location)
– Target the software to NPACI resources
Robustify and tech-transfer the more general APST user-level middleware to the NPACI metasystem.
Develop an MPP-enabled MPI/OpenMP version of MCell:
– Adapt the algorithm for performance in an MPP environment
– Develop an MPP-enabled APST to efficiently deploy MCell tasks to parallel environments
– Implement/deploy software for large-scale Blue Horizon runs
Perform the larger-scale runs necessary for new disciplinary results.
Feedback
It would be easier to perform this work for NPACI if:
– Allocation of NS thrust area computer time were more generous
– Blue Horizon had larger scratch space
– A rendering farm were available
– The Globus platform were more stable
– NWS, NetSolve, and other services were more consistently available at NPACI partner sites
Scheduling Algorithms for MCell: Previous Work on Scheduling Heuristics
Self-scheduling algorithms (workqueue, workqueue with work stealing, workqueue with work duplication, ...):
– Easy to implement and quick
– No need for performance predictions
– Insensitive to data placement
Gantt chart heuristics (MinMin, MaxMin, Sufferage, XSufferage, ...):
– More difficult to implement
– Need performance predictions
– Sensitive to data placement
Simulation results (HCW '00 and SC '00 papers) show that:
– Gantt chart heuristics are worth it
– XSufferage is a good heuristic even when predictions are bad (a sketch of the Sufferage idea follows below)
– Complex environments require better planning (Gantt chart)
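The Sufferage idea can be stated compactly: a task "suffers" by the difference between its best and second-best completion times, and the task that would suffer most is placed first on its best host. The Python sketch below is a simplified, self-contained illustration with made-up execution-time estimates; the real heuristics use performance predictions, and XSufferage (not shown) additionally computes sufferage values at the cluster level so that file placement is taken into account.

# Simplified sketch of the Sufferage heuristic; execution-time estimates are invented.
def sufferage_schedule(exec_time):
    """exec_time[task][host] = predicted execution time of task on host."""
    hosts = list(next(iter(exec_time.values())))
    ready = {h: 0.0 for h in hosts}          # when each host becomes free
    unassigned = set(exec_time)
    schedule = []
    while unassigned:
        best_pick = None
        for task in sorted(unassigned):
            # Completion time = host ready time + predicted execution time.
            times = sorted((ready[h] + exec_time[task][h], h) for h in hosts)
            best_time, best_host = times[0]
            second_time = times[1][0] if len(times) > 1 else best_time
            sufferage = second_time - best_time
            if best_pick is None or sufferage > best_pick[0]:
                best_pick = (sufferage, task, best_host, best_time)
        _, task, host, finish = best_pick    # place the task that would suffer most
        schedule.append((task, host))
        ready[host] = finish
        unassigned.remove(task)
    return schedule

exec_time = {"t1": {"A": 10, "B": 50}, "t2": {"A": 12, "B": 14}, "t3": {"A": 30, "B": 32}}
print(sufferage_schedule(exec_time))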
Research Scheduling Issues
1. How frequent should the scheduling events be?
2. Which set of tasks should we schedule between scheduling events?
3. How accurate do our estimates of computation and data transfer times need to be?
4. What scheduling heuristics should we use?
5. How do input and output location and visualization requirements impact scheduling?
[Figure: Gantt chart G — resources (network links and hosts in Clusters 1 and 2) versus time, with computation slots filled in between scheduling events.]
Features
– The scheduler can be used for a structurally similar set of parameter sweep applications in addition to MCell (see the sketch after this list):
   – INS2D, INS3D (NASA fluid dynamics applications)
   – Tphot (SDSC, proton transport application)
   – NeuralObjects (NSI, neural network simulations)
   – CS simulation applications for our own research (model validation)
– The Actuator's APIs are interchangeable and mixable: (NetSolve+IBP) + (GRAM+GASS) + (GRAM+NFS)
– The scheduler allows for dynamic adaptation and multithreading
– No Grid software is required, although the lack of it (NWS, GASS, IBP) may lead to poorer performance
– APST is being beta-tested on the NASA IPG and at other sites
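To illustrate what "structurally similar" means here, the sketch below describes a parameter sweep in the shape APST targets: one executable, a few large shared input files, and many independent tasks that differ only in their parameter values. The field names and the MCell-flavored file names are hypothetical, not APST's actual input format.

# Hypothetical parameter-sweep description; field names and files are illustrative only.
from dataclasses import dataclass
from itertools import product

@dataclass
class SweepTask:
    executable: str
    shared_inputs: tuple      # large files reused by many tasks (these drive data placement)
    params: dict              # what varies from task to task
    output: str

def mcell_style_sweep():
    seeds = range(4)
    ligand_counts = [1000, 5000]
    return [
        SweepTask(
            executable="mcell",
            shared_inputs=("geometry.mdl", "reactions.mdl"),
            params={"seed": s, "n_ligands": n},
            output=f"run_s{s}_n{n}.out",
        )
        for s, n in product(seeds, ligand_counts)
    ]

print(len(mcell_style_sweep()), "independent tasks")   # 8 tasks, all independently schedulable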
Preliminary Results: Scheduling MCell on the Grid
Experimental setting: an MCell simulation with 1,200 tasks, composed of 6 Monte Carlo simulations with input files of 1, 1, 20, 20, 100, and 100 MB.
Four scenarios for initial file placement:
(a) all input files are only in Japan
(b) the 100 MB files are replicated in California
(c) in addition, one 100 MB file is replicated in Tennessee
(d) all input files are replicated everywhere
[Figure: results comparing workqueue with the Gantt-chart algorithms across the four scenarios.]
Evaluation of APST MCell Scheduling Heuristics
We wanted to evaluate the MCell scheduling heuristics.
Experiment: we ran large instances of MCell across a distributed platform and compared execution times with both self-scheduling and Gantt chart heuristics.
[Figure: testbed of three sites — University of Tennessee, Knoxville (NetSolve + IBP), University of California, San Diego (GRAM + GASS), and the Tokyo Institute of Technology (NetSolve + NFS, NetSolve + IBP) — driven by the APST Daemon and APST Client.]