Pegasus: Mapping Scientific Workflows onto the Grid
Ewa Deelman, Center for Grid Technologies, USC Information Sciences Institute
Pegasus Acknowledgements
- Ewa Deelman, Carl Kesselman, Saurabh Khurana, Gaurang Mehta, Sonal Patil, Gurmeet Singh, Mei-Hui Su, Karan Vahi (Center for Grid Computing, ISI)
- James Blythe, Yolanda Gil (Intelligent Systems Division, ISI)
- Collaboration with Miron Livny (UW Madison)
- Research funded as part of the NSF GriPhyN, NVO, and SCEC projects and the EU-funded GridLab
Outline
- Workflow management in Grids
- Pegasus, Planning for Execution in Grids
- Applications using Pegasus
- In-time planning
- Future research directions
Grid Applications
- Increasing in level of complexity
- Use of individual application components
- Reuse of individual intermediate data products (files)
- Description of data products using metadata attributes
- Execution environment is complex and very dynamic
  - Resources come and go
  - Data is replicated
  - Components can be found at various locations or staged in on demand
- Separation between
  - the application description
  - the actual execution description
Abstract Workflow Generation
Concrete Workflow Generation
Why Automate Workflow Generation?
- Usability: limit the Grid knowledge the user needs
  - Monitoring and Discovery Service
  - Replica Location Service
- Complexity:
  - The user needs to make choices
    - Alternative application components
    - Alternative files
    - Alternative locations
  - The user may reach a dead end
  - Many different interdependencies may occur among components
- Solution cost:
  - Evaluate the costs of alternative solutions
    - Performance
    - Reliability
    - Resource usage
- Global cost:
  - Minimizing cost within a community or a virtual organization
  - Requires reasoning about individual users' choices in light of other users' choices
GriPhyN's Executable Workflow Construction
- Build an abstract workflow based on VDL descriptions (Chimera)
- Build an executable workflow based on the abstract workflow (Pegasus)
- Execute the workflow (Condor's DAGMan)
VDL and Abstract Workflow
VDL descriptions + user request for data file "c" → abstract workflow
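As a rough illustration only (not Chimera's actual VDL syntax, and with hypothetical transformation and file names), the abstract workflow behind a request such as "produce data file c" can be pictured as logical transformations over logical file names, with dependencies implied by the data flow:

```python
# Illustrative only: an abstract workflow as logical transformations over
# logical file names, with no sites, paths, or replicas chosen yet.
# The transformation and file names (t1, t2, a, b, c) are hypothetical.
abstract_workflow = {
    "gen_b": {"transformation": "t1", "inputs": ["a"], "outputs": ["b"]},
    "gen_c": {"transformation": "t2", "inputs": ["b"], "outputs": ["c"]},
}

def derive_edges(workflow):
    """Return (parent, child) pairs wherever a child consumes a parent's output."""
    edges = []
    for parent, pspec in workflow.items():
        for child, cspec in workflow.items():
            if parent != child and set(pspec["outputs"]) & set(cspec["inputs"]):
                edges.append((parent, child))
    return edges

print(derive_edges(abstract_workflow))  # [('gen_b', 'gen_c')]
```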
Condor's DAGMan
- Developed at UW Madison (Livny)
- Executes a concrete workflow
- Makes sure the dependencies are followed (illustrated below)
- Executes the jobs specified in the workflow
  - Execution
  - Data movement
  - Catalog updates
- Provides a "rescue DAG" in case of failure
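DAGMan itself works from DAG description files and Condor submit files; the toy Python loop below (with made-up job names) only illustrates the core guarantee of releasing a job once all of its parents have completed.

```python
from collections import deque

# Toy illustration of dependency-ordered execution, the guarantee DAGMan
# provides; real DAGMan runs jobs described in DAG and Condor submit files.
edges = [("stage_in", "compute"), ("compute", "stage_out"), ("stage_out", "register")]
jobs = {j for e in edges for j in e}

def run_in_dependency_order(jobs, edges, run):
    remaining_parents = {j: 0 for j in jobs}
    children = {j: [] for j in jobs}
    for parent, child in edges:
        remaining_parents[child] += 1
        children[parent].append(child)
    ready = deque(j for j, n in remaining_parents.items() if n == 0)
    while ready:
        job = ready.popleft()
        run(job)                      # submit/execute the job here
        for child in children[job]:   # a child becomes ready when its last parent finishes
            remaining_parents[child] -= 1
            if remaining_parents[child] == 0:
                ready.append(child)

run_in_dependency_order(jobs, edges, lambda j: print("running", j))
```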
Pegasus: Planning for Execution in Grids
- Maps from abstract to concrete workflow
  - Algorithmic and AI-based techniques
- Automatically locates physical locations for both components (transformations) and data
- Finds appropriate resources to execute the jobs
- Reuses existing data products where applicable
- Publishes newly derived data products
  - Chimera virtual data catalog
  - Provides provenance information
Information Components Used by Pegasus
- Globus Monitoring and Discovery Service (MDS)
  - Locates available resources
  - Finds resource properties
    - Dynamic: load, queue length
    - Static: location of GridFTP server, RLS, etc.
- Globus Replica Location Service (RLS)
  - Locates data that may be replicated
  - Registers new data products
- Transformation Catalog (TC)
  - Locates installed executables
(A sketch of these lookups follows.)
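A minimal sketch of the three kinds of lookups, assuming invented catalog contents and helper names; the dictionaries below only mimic the shape of answers from MDS, the RLS, and the Transformation Catalog, not their real APIs.

```python
# Hypothetical stand-ins for the three information services Pegasus consults.
MDS = {  # resource properties: static (GridFTP server) and dynamic (load)
    "isi_condor": {"gridftp": "gsiftp://skynet.isi.edu", "load": 0.3},
    "ncsa_tg":    {"gridftp": "gsiftp://tg-login.ncsa.edu", "load": 0.8},
}
RLS = {  # logical file name -> known physical replicas
    "a": ["gsiftp://skynet.isi.edu/data/a"],
    "b": ["gsiftp://tg-login.ncsa.edu/scratch/b"],
}
TC = {  # transformation -> sites where an executable is installed
    "t1": {"isi_condor": "/usr/local/bin/t1", "ncsa_tg": "/soft/bin/t1"},
    "t2": {"ncsa_tg": "/soft/bin/t2"},
}

def candidate_sites(transformation):
    """Sites that can run the transformation, least loaded first."""
    sites = TC.get(transformation, {})
    return sorted(sites, key=lambda s: MDS[s]["load"])

def replicas(lfn):
    """Known physical locations of a logical file, if any."""
    return RLS.get(lfn, [])

print(candidate_sites("t1"))  # ['isi_condor', 'ncsa_tg']
print(replicas("b"))
```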
Example Workflow Reduction
- Original abstract workflow
- If "b" already exists (as determined by a query to the RLS), the workflow can be reduced (see the sketch below)
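A minimal sketch of the reduction step, reusing the hypothetical file and job names from the earlier snippets: walk backwards from the requested outputs and stop at any file the RLS already knows about, so jobs whose results already exist are pruned.

```python
# Illustrative reduction: keep only the jobs still needed to materialize the
# requested outputs, given which logical files already have replicas in the RLS.
abstract_workflow = {
    "gen_b": {"inputs": ["a"], "outputs": ["b"]},
    "gen_c": {"inputs": ["b"], "outputs": ["c"]},
}
existing = {"a", "b"}          # e.g. the RLS reports replicas for "a" and "b"
requested = {"c"}

producers = {f: job for job, spec in abstract_workflow.items() for f in spec["outputs"]}

def jobs_needed(targets):
    """Walk backwards from the requested files, stopping at files that already exist."""
    needed, frontier = set(), [f for f in targets if f not in existing]
    while frontier:
        job = producers[frontier.pop()]
        if job not in needed:
            needed.add(job)
            frontier.extend(f for f in abstract_workflow[job]["inputs"] if f not in existing)
    return needed

print(jobs_needed(requested))  # {'gen_c'}: gen_b is pruned because "b" already exists
```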
Mapping from Abstract to Concrete
- Query the RLS, MDS, and TC; schedule computation and data movement (sketched below)
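Continuing the same toy example (and simplifying site selection to "any site where the executable is installed"), concretization surrounds each compute job with explicit transfer and registration steps:

```python
# Illustrative concretization of one reduced job. The catalogs below are the
# same toy stand-ins for MDS/RLS/TC used earlier, duplicated so this snippet
# runs on its own; none of this mirrors the real service interfaces.
MDS = {"ncsa_tg": {"gridftp": "gsiftp://tg-login.ncsa.edu", "load": 0.8}}
RLS = {"b": ["gsiftp://tg-login.ncsa.edu/scratch/b"]}
TC  = {"t2": {"ncsa_tg": "/soft/bin/t2"}}

def concretize(job, spec):
    """Turn one abstract job into transfer + execute + register steps."""
    # pick a site where the executable is installed (simplified: first match)
    site = next(iter(TC[spec["transformation"]]))
    executable = TC[spec["transformation"]][site]
    scratch = MDS[site]["gridftp"] + "/scratch/"
    steps = []
    for lfn in spec["inputs"]:                        # stage inputs to the chosen site
        steps.append(("transfer", RLS[lfn][0], scratch + lfn))
    steps.append(("execute", job, site, executable))  # run the component itself
    for lfn in spec["outputs"]:                       # register new data products
        steps.append(("register", lfn, scratch + lfn))
    return steps

for step in concretize("gen_c", {"transformation": "t2", "inputs": ["b"], "outputs": ["c"]}):
    print(step)
```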
Montage
- Montage (NASA and NVO)
  - Delivers science-grade custom mosaics on demand
  - Produces mosaics from a wide range of data sources (possibly in different spectra)
  - User-specified parameters of projection, coordinates, size, rotation, and spatial sampling
- Mosaic created by Pegasus-based Montage from a run of the M101 galaxy images on the TeraGrid
Small Montage Workflow (~1200 nodes)
Montage Acknowledgments
- Bruce Berriman, John Good, Anastasia Laity, Caltech/IPAC
- Joseph C. Jacob, Daniel S. Katz, JPL
- caltech.edu/
- Testbed for Montage: Condor pools at USC/ISI and UW Madison, and TeraGrid resources at NCSA, PSC, and SDSC
- Montage is funded by the National Aeronautics and Space Administration's Earth Science Technology Office, Computational Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology.
Applications Using Chimera, Pegasus, and DAGMan
- GriPhyN applications:
  - High-energy physics: ATLAS, CMS (many)
  - Astronomy: SDSS (Fermilab, ANL)
  - Gravitational-wave physics: LIGO (Caltech, AEI)
- Astronomy:
  - Galaxy morphology (NCSA, JHU, Fermilab, many others; NVO-funded)
- Biology:
  - BLAST (ANL, PDQ-funded)
- Neuroscience:
  - Tomography for Telescience (SDSC, NIH-funded)
Current System
Workflow Refinement and Execution
(Diagram: over time, the request moves down levels of abstraction, from application-level knowledge in the user's request, to relevant components, to logical tasks in the full abstract workflow, to tasks bound to resources and sent for execution; workflow refinement, the task matchmaker, workflow repair, and policy information drive the process, with partial execution proceeding while later portions are not yet executed.)
Incremental Refinement
- Partition the abstract workflow into partial workflows (sketched below)
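One simple way to picture the partitioning, though not necessarily the scheme Pegasus uses, is to cut the abstract workflow into level-based partial workflows, each of which can be mapped to resources just before it runs; the task names below are hypothetical.

```python
# Illustrative level-based partitioning of an abstract workflow so that each
# partial workflow can be mapped just before it executes. Hypothetical tasks.
edges = [("t1", "t3"), ("t2", "t3"), ("t3", "t4"), ("t3", "t5")]
tasks = {t for e in edges for t in e}

def partition_by_level(tasks, edges):
    parents = {t: set() for t in tasks}
    for parent, child in edges:
        parents[child].add(parent)
    done, levels = set(), []
    while len(done) < len(tasks):
        # a task is ready once every parent has been placed in an earlier level
        ready = sorted(t for t in tasks - done if parents[t] <= done)
        levels.append(ready)
        done |= set(ready)
    return levels

print(partition_by_level(tasks, edges))  # [['t1', 't2'], ['t3'], ['t4', 't5']]
```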
Meta-DAGMan
Conclusions
- Pegasus maps complex workflows onto the Grid
- Uses Grid information services to find resources, data, and executables
- Reduces the workflow based on existing intermediate products
- Used in many applications
- Part of GriPhyN's Virtual Data Toolkit
Future Directions
- Investigate various scheduling techniques
- Investigate fault-tolerance issues
- Enable flexible interactions between workflow refiners (GriPhyN-wide scope: Pegasus, DAGMan)
- GGF10 workshop on workflow management
- GGF workflow management research group
Summary

The Grid Now:
- Syntax-based matchmaking of resources to job requirements
  - Condor matchmaker
  - Attribute-based discovery and selection
- Scheduling of jobs driven by Grid-able users who specify job execution sequences and computing requirements
  - Scripting languages
  - Workflow languages
  - Task graphs
- Explicit mappings from tasks to jobs, simple job brokers
- Explicit service negotiation and recovery strategies

The Future Grid:
- Knowledge-based reasoning about resources enables
  - Semantic matchmaking
  - Aggregate resource reasoning
- Task-level reasoning to plan and schedule jobs and resources
  - More agility and coordination
- A wide range of users can specify high-level requirements in a mixed-initiative mode
  - Mapping of high-level requirements to the details required for execution
- End-to-end resource negotiation and adaptive strategies to accommodate failure