Using Grid Technologies to Support Large-Scale Astronomy Applications
Ewa Deelman
Center for Grid Technologies, USC Information Sciences Institute
Outline
Large-scale applications
Mapping large-scale applications onto Grid environments
–Pegasus (developed by ISI under the GriPhyN project)
Supporting Montage (an image mosaicking application) on the Grid
Recent results of running on the TeraGrid
Other applications and conclusions
Acknowledgements
Pegasus
–Ewa Deelman, Carl Kesselman, Gaurang Mehta, Gurmeet Singh, Mei-Hui Su, Karan Vahi (Center for Grid Technologies, ISI)
–James Blythe, Yolanda Gil (Intelligent Systems Division, ISI)
–Research funded as part of the NSF GriPhyN, NVO, and SCEC projects and the EU-funded GridLab
Montage
–Bruce Berriman, John Good, Anastasia Laity (IPAC)
–Joseph C. Jacob, Daniel S. Katz (JPL)
–Montage is funded by NASA's Earth Science Technology Office, Computational Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology.
Grid Applications
Increasing in level of complexity
Use of individual application components
Reuse of individual intermediate data products
Description of data products using metadata attributes
Execution environment is complex and very dynamic
–Resources are heterogeneous and distributed in the WAN
–Resources come and go because of failures or policy changes
–Data is replicated
–Components can be found at various locations or staged in on demand
Separation between
–the application description
–the actual execution description
Why Automate Workflow Generation?
Usability:
–Limit the Grid knowledge the user needs (Monitoring and Discovery Service, Replica Location Service)
Complexity:
–The user needs to make choices among alternative application components, alternative files, and alternative locations
–The user may reach a dead end
–Many different interdependencies may occur among components
Solution cost:
–Evaluate the cost of alternative solutions (performance, reliability, resource usage)
Global cost:
–Minimizing cost within a community or a virtual organization
–Requires reasoning about individual users' choices in light of other users' choices
Concrete Workflow Generation and Mapping
Specifying Abstract Workflows
Using GriPhyN tools (Chimera)
–Using the Chimera Virtual Data Language (VDL)
Writing the abstract workflow directly
–Using scripts (write the XML)
Using high-level workflow composition tools
–Component Analysis Tool (CAT), which uses ontologies to describe workflow components
Example VDL transformation declaration:
TR galMorph( in redshift, in pixScale, in zeroPoint, in Ho, in om, in flat, in image, out galMorph ) { … }
Generating a Concrete Workflow
Information needed:
–Location of files and component instances
–State of the Grid resources
Select specific:
–Resources
–Files
Add the jobs required to form a concrete workflow that can be executed in the Grid environment:
–Data movement
–Data registration
Each component in the abstract workflow is turned into an executable job (see the sketch after this slide).
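To make this mapping step concrete, here is a minimal illustrative Python sketch (not actual Pegasus code); the in-memory catalogs, the pick_site helper, the site names, and the executable path are all hypothetical stand-ins for the real Grid services:

# Illustrative only: abstract-to-concrete mapping with toy, in-memory catalogs.
def pick_site(job, transformation_catalog, sites):
    # naive resource selection: first site where the component is installed
    return next(s for s in sites if (job["name"], s) in transformation_catalog)

def map_to_concrete(abstract_jobs, transformation_catalog, replica_catalog, sites):
    concrete = []
    for job in abstract_jobs:
        site = pick_site(job, transformation_catalog, sites)
        # inputs not already present at the chosen site need stage-in (transfer) jobs
        stage_in = [f for f in job["inputs"] if site not in replica_catalog.get(f, [])]
        concrete.append({
            "name": job["name"],
            "site": site,
            "executable": transformation_catalog[(job["name"], site)],
            "stage_in": stage_in,
            "register": job["outputs"],   # data-registration jobs for new products
        })
    return concrete

jobs = [{"name": "mProject", "inputs": ["image1.fits"], "outputs": ["proj1.fits"]}]
tc = {("mProject", "siteA"): "/usr/local/bin/mProject"}   # made-up install location
rc = {"image1.fits": ["siteB"]}
print(map_to_concrete(jobs, tc, rc, ["siteA"]))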
Pegasus: Planning for Execution in Grids
Maps from an abstract to a concrete workflow
–Algorithmic and AI-based techniques
Automatically locates physical locations for both components (transformations) and data
Finds appropriate resources to execute the jobs
Reuses existing data products where applicable
Publishes newly derived data products
–Chimera virtual data catalog
–Provides provenance information
Information Components Used by Pegasus
Globus Monitoring and Discovery Service (MDS)
–Locates available resources
–Finds resource properties (dynamic: load, queue length; static: location of the GridFTP server, RLS, etc.)
Globus Replica Location Service (RLS)
–Locates data that may be replicated
–Registers new data products
Transformation Catalog
–Locates installed executables
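As a rough illustration of how such information could drive resource selection, the snippet below ranks candidate sites by a queue-length attribute; the dictionary is a hypothetical stand-in for an MDS query result, and the site names and URLs are made up:

# Hypothetical snapshot of resource properties (stand-in for an MDS query result).
site_info = {
    "siteA": {"queue_length": 12, "gridftp": "gsiftp://siteA.example.org/data"},
    "siteB": {"queue_length": 3,  "gridftp": "gsiftp://siteB.example.org/data"},
}

def least_loaded(sites):
    # choose the site with the shortest queue; a real planner weighs many more factors
    return min(sites, key=lambda s: sites[s]["queue_length"])

print(least_loaded(site_info))   # -> siteB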
Example Workflow Reduction
Original abstract workflow: if "b" already exists (as determined by a query to the RLS), the workflow can be reduced.
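A minimal sketch of this reduction idea, assuming the workflow is a dictionary of jobs with input and output file lists, and that existing_files stands in for an RLS query:

# Illustrative workflow reduction: drop jobs whose outputs already exist in the RLS.
def reduce_workflow(jobs, existing_files):
    # jobs: {name: {"inputs": [...], "outputs": [...]}}; existing_files: set of logical file names
    return {name: job for name, job in jobs.items()
            if not (job["outputs"] and all(out in existing_files for out in job["outputs"]))}

jobs = {
    "a": {"inputs": [],    "outputs": ["b"]},
    "c": {"inputs": ["b"], "outputs": ["d"]},
}
print(reduce_workflow(jobs, existing_files={"b"}))   # job "a" is dropped, "c" remains

A full reduction also walks up the graph and removes ancestor jobs whose outputs are no longer needed, as the backup slide on abstract DAG reduction illustrates.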
Mapping from Abstract to Concrete
Query the RLS, MDS, and TC; schedule computation and data movement.
Condor's DAGMan
Developed at UW-Madison (Livny)
Executes a concrete workflow
Makes sure the dependencies are followed
Executes the jobs specified in the workflow
–Execution
–Data movement
–Catalog updates
Provides a "rescue DAG" in case of failure
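For illustration, the concrete workflow handed to DAGMan is a text file listing jobs and their dependencies; the sketch below writes a two-job DAG (the job and submit-file names are made up for the example):

# Sketch: emit a minimal DAGMan input file with two jobs, where compute depends on stage_in.
dag_lines = [
    "JOB  stage_in  stage_in.submit",
    "JOB  compute   compute.submit",
    "PARENT stage_in CHILD compute",
]
with open("example.dag", "w") as f:
    f.write("\n".join(dag_lines) + "\n")
# The concrete workflow would then be run with: condor_submit_dag example.dag
# If a job fails, DAGMan writes a rescue DAG that can be resubmitted to resume the run.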
What is Montage?
Delivers custom, science-grade image mosaics
–User specifies projection, coordinates, spatial sampling, mosaic size, image rotation
–Preserves astrometric and photometric accuracy
Modular "toolbox" design
–Loosely coupled engines for image reprojection, background rectification, and co-addition
–Controls testing and maintenance costs
–Flexibility; e.g., custom background algorithms; use as a reprojection and co-registration engine
A public service will be deployed on the TeraGrid
–Order mosaics through a web portal
Montage Portal
Small Montage Workflow (~1200 nodes)
Mosaic of M42 created on TeraGrid resources using Pegasus
Node Clustering for Performance (Gurmeet Singh, ISI)
Montage components: mProject, mDiff, mFitplane, mConcatFit, mBgModel, mBackground, mAdd
Overheads are incurred when scheduling individual nodes of the workflow
One approach is to view the workflow level by level and cluster, within each level, the jobs destined for the same host
You can construct, for example, as many clusters as there are available processors (see the sketch after this slide)
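A minimal sketch of this level-based clustering, assuming each node already carries its level (depth in the workflow) and an assigned host; the grouping strategy, node names, and cluster count are illustrative only:

# Illustrative clustering: group nodes by (level, host), then split each group into
# at most `clusters_per_level` clusters (e.g., one per available processor).
import math
from collections import defaultdict

def cluster(nodes, clusters_per_level):
    groups = defaultdict(list)
    for n in nodes:                              # n: {"name": ..., "level": ..., "host": ...}
        groups[(n["level"], n["host"])].append(n)
    clusters = []
    for members in groups.values():
        size = math.ceil(len(members) / clusters_per_level)
        clusters += [members[i:i + size] for i in range(0, len(members), size)]
    return clusters

nodes = [{"name": f"mProject_{i}", "level": 1, "host": "tg-node1"} for i in range(6)]
print([[n["name"] for n in c] for c in cluster(nodes, clusters_per_level=2)])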
Total time (in minutes) for executing the concrete workflow for creating a mosaic covering a 6 × 6 degree² region centered at M16.
Total time (in minutes) for executing the concrete workflow as the size of the desired mosaic increases from 1 × 1 degree² to 10 × 10 degree², centered at M16; the number of nodes in the abstract workflow grows accordingly. 64 processors were used.
Benefits of the Workflow & Pegasus Approach
The workflow exposes
–the structure of the application
–the maximum parallelism of the application
Pegasus can take advantage of the structure to
–set a planning horizon (how far into the workflow to plan)
–cluster a set of workflow nodes to be executed as one
Pegasus shields the user from the Grid details
Pegasus can run the workflow on a variety of resources
Pegasus can run a single workflow across multiple resources
Pegasus can opportunistically take advantage of available resources (through dynamic workflow mapping)
Pegasus can take advantage of pre-existing intermediate data products
Pegasus can improve the performance of the application
Applications Using Pegasus and DAGMan
GriPhyN applications:
–High-energy physics: ATLAS, CMS (many sites)
–Astronomy: SDSS (Fermilab, ANL)
–Gravitational-wave physics: LIGO (Caltech, AEI)
Astronomy:
–Galaxy morphology (NCSA, JHU, Fermilab, many others; NVO-funded)
–Montage (IPAC, JPL; NASA-funded)
Biology:
–BLAST (ANL, PDQ-funded)
Neuroscience:
–Tomography for Telescience (SDSC, NIH-funded)
Earthquake science:
–Simulation of earthquake wave propagation in soil in the Southern California area (SCEC)
Future Directions
Improving scheduling strategies
Supporting the Pegasus framework through pluggable interfaces for resource and data selection
Support for staging in executables on demand
Better space and resource management (space and compute-node reservation)
Reliability
For More Information
NVO project
GriPhyN project
–Virtual Data Toolkit
Montage: montage.ipac.caltech.edu (IRSA booth)
Pegasus: pegasus.isi.edu
My website
The Grid
Computational and networking infrastructure
–Brings together compute resources, data storage systems, instruments, and human resources
Enables entirely new approaches to applications and problem solving
–Remote resources are the rule, not the exception
–Can solve ever bigger problems
Resources are distributed over a wide area
Composed of many heterogeneous computing and storage platforms
–Can consist of single hosts, Condor pools, clusters, and high-end parallel machines
Policies govern the use of the resources
Resources may come and go based on hardware/software failures and policy changes
Pegasus Mapping (figure; implemented by Karan Vahi)
Pegasus adds replica (registration) nodes for each job that materializes data (g, h, i).
Three transfer nodes move the output files of the leaf job (f) to the output pool, since job f has been deleted by the reduction algorithm.
Pegasus schedules jobs g and h on pool X and job i on pool Y, and therefore adds an inter-pool transfer node.
Pegasus adds transfer nodes for staging in the input files of the root nodes of the decomposed DAG (job g).
Figure key: original node; pull transfer node; push transfer node; inter-pool transfer node; registration node; node deleted by the reduction algorithm; jobs a–i.
Pegasus: the Next Generation
Allows the user to set a planning horizon through the use of partitioning
Maps individual partitions to concrete workflows
The dependencies between partitions dictate the refinement process
–A partition is refined only after all the partitions it depends on have been refined and successfully executed
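A rough sketch of that refinement loop, where partition_dag output is assumed as input, and plan and execute are hypothetical stand-ins for the Pegasus mapping step and DAGMan execution:

# Illustrative just-in-time refinement: a partition is planned and run only after
# every partition it depends on has completed successfully.
def run_partitioned(partitions, deps, plan, execute):
    # partitions: {pid: abstract sub-workflow}; deps: {pid: set of parent pids}
    done = set()
    while len(done) < len(partitions):
        ready = [p for p in partitions if p not in done and deps.get(p, set()) <= done]
        if not ready:
            raise RuntimeError("cyclic dependencies between partitions")
        for p in ready:
            concrete = plan(partitions[p])   # map against the current state of the Grid
            execute(concrete)                # e.g., hand the concrete DAG to DAGMan
            done.add(p)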
Planning & Scheduling Granularity
Partitioning
–Allows setting the granularity of planning ahead
Node aggregation
–Allows combining nodes in the workflow and scheduling them as one unit
–May reduce the overheads of making scheduling and planning decisions
Related but separate concepts
–Small jobs: high level of node aggregation, large partitions
–Very dynamic system: small partitions
Abstract DAG Reduction (figure)
Pegasus queries the RLS and finds that the data products of jobs d, e, and f are already materialized, and hence deletes those jobs.
On applying the reduction algorithm, the additional jobs a, b, and c are deleted.
Figure key: original node; pull transfer node; push transfer node; registration node; jobs a–i.