Introduction to Makeflow and Work Queue


Introduction to Makeflow and Work Queue Nate Kremer-Herman CCL Workshop on Scalable Computing

Makeflow: A Portable Workflow System

An Old Idea: Makefiles

part1 part2 part3: input.data split.py
	./split.py input.data

out1: part1 mysim.exe
	./mysim.exe part1 > out1

out2: part2 mysim.exe
	./mysim.exe part2 > out2

out3: part3 mysim.exe
	./mysim.exe part3 > out3

result: out1 out2 out3 join.py
	./join.py out1 out2 out3 > result

Makeflow = Make + Workflow

• Provides portability across batch systems.
• Enables parallelism (but not too much!).
• Trickles out work to the batch system.
• Fault tolerance at multiple scales.
• Data and resource management.

(Diagram: Makeflow dispatches jobs to Local, SLURM, TORQUE, or Work Queue.)

Makeflow Syntax

One rule:

[output files] : [input files]
	[command to run]

The command sim.exe -p 50 in.dat > out.txt reads sim.exe, in.dat, and calib.dat, and writes out.txt, so every one of those files must appear in the rule:

out.txt : in.dat calib.dat sim.exe
	sim.exe -p 50 in.dat > out.txt

Not quite right! (calib.dat and sim.exe are missing from the inputs):

out.txt : in.dat
	sim.exe -p 50 in.dat > out.txt

You must state all the files needed by the command.

example.makeflow

out.10 : in.dat calib.dat sim.exe
	sim.exe -p 10 in.dat > out.10

out.20 : in.dat calib.dat sim.exe
	sim.exe -p 20 in.dat > out.20

out.30 : in.dat calib.dat sim.exe
	sim.exe -p 30 in.dat > out.30
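The three rules above differ only in the -p value, so for a larger parameter sweep the makeflow file can be generated by a short script instead of written by hand. A minimal sketch in shell, reusing the file names from the slide (in.dat, calib.dat, sim.exe); the parameter list is illustrative:

```shell
#!/bin/sh
# Emit one makeflow rule per parameter value. Every rule declares all
# files the command needs: in.dat, calib.dat, and sim.exe itself.
generate_rules() {
    for p in 10 20 30; do
        printf 'out.%s : in.dat calib.dat sim.exe\n' "$p"
        printf '\tsim.exe -p %s in.dat > out.%s\n\n' "$p" "$p"
    done
}

generate_rules > example.makeflow
```

Running makeflow on the generated example.makeflow then executes the rules, in parallel where possible.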

Running Makeflow

• Run a workflow locally:
	% makeflow example.makeflow
• Clean up the workflow outputs:
	% makeflow -c example.makeflow
• Run the workflow on Torque:
	% makeflow -T torque example.makeflow
• Run the workflow on Condor:
	% makeflow -T condor example.makeflow

Makeflow + Work Queue

Makeflow + Batch System

(Diagram: Makeflow, with local files and programs, submits jobs with makeflow -T torque to an XSEDE Torque cluster, or with makeflow -T condor to a campus Condor pool. A private cluster or a public cloud provider has no direct submission path, marked "???".)

Makeflow + Work Queue

(Diagram: Makeflow, with local files and programs, submits tasks to thousands of workers (W) in a personal cloud. Workers are started with torque_submit_workers on an XSEDE Torque cluster, with condor_submit_workers on a campus Condor pool, and with ssh on a private cluster or a public cloud provider.)

Advantages of Work Queue

• Harness multiple resources simultaneously.
• Pilot jobs (Work Queue workers) hold on to cluster nodes to execute multiple tasks rapidly.
• Scale resources up and down as needed.
• Better management of data, with local caching for data-intensive tasks.
• Matching of tasks to nodes with data.

Running Makeflow with Work Queue

To start the Makeflow:
% makeflow -T wq example.makeflow
Could not create work queue on port 9123.
% makeflow -T wq -p 0 example.makeflow
Listening for workers on port 8374…

To start one worker:
% work_queue_worker master.hostname.org 8374
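With -p 0, Makeflow picks a free port, so the worker cannot be started with a hard-coded port number. One option is to parse the port out of the startup message. A sketch, assuming the message format shown above; the echo below stands in for a live Makeflow:

```shell
#!/bin/sh
# Extract the advertised port from Makeflow's startup message so it
# can be passed to work_queue_worker.
extract_port() {
    sed -n 's/.*port \([0-9][0-9]*\).*/\1/p'
}

# Stand-in for the message printed by: makeflow -T wq -p 0 example.makeflow
PORT=$(echo "Listening for workers on port 8374..." | extract_port)
echo "work_queue_worker master.hostname.org $PORT"
```

In practice, using a project name (-N) avoids tracking port numbers entirely.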

Project Names

(Diagram: makeflow … -N myproject, running at workflow.iu:9050, advertises its project name to the catalog server. A work_queue_worker -N myproject queries the catalog, learns that "myproject" is at workflow.iu:9050, and connects to the workflow. work_queue_status queries the same catalog.)

Using Project Names

To start Makeflow with a project name:
% makeflow -T wq -N myproject example.makeflow
Listening for workers on port XYZ…

Start one worker:
% work_queue_worker -N myproject

Start many workers:
% work_queue_factory -T condor -N myproject

work_queue_status

Work Queue Visualization Dashboard
ccl.cse.nd.edu/software/workqueue/status

Work Queue Factory

• No need to manually submit workers.
• Scales to your workflow automatically (whether up or down).
• One-stop worker submission tool.
• Supports manual configuration of the minimum and maximum workers to maintain, if you know your worker needs.
• Preferred method of worker submission going forward, due to its simplicity.

Resilience and Fault Tolerance

Makeflow + Work Queue is fault tolerant in many different ways:
• If Makeflow crashes (or is killed) at any point, it will recover by reading the transaction log and continue where it left off.
• Makeflow keeps statistics on both network and task performance, so that excessively bad workers are avoided.
• If a worker crashes, the master will detect the failure and restart the task elsewhere.
• Workers can be added and removed at any time during the execution of the workflow.
• Multiple masters with the same project name can be added and removed while the workers remain.
• If a worker sits idle for too long (default: 15 minutes), it will exit, so it does not hold resources while idle.

Visit our website: ccl.cse.nd.edu
Follow us on Twitter: @ProfThain
Check out our blog: cclnd.blogspot.com