Introduction to Makeflow and Work Queue CSE 40822 – Cloud Computing – Fall 2014 Prof. Douglas Thain.


Introduction to Makeflow and Work Queue CSE 40822 – Cloud Computing – Fall 2014 Prof. Douglas Thain

The Cooperative Computing Lab

We collaborate with people who have large scale computing problems in science, engineering, and other fields.
We operate computer systems on the scale of O(10,000) cores: clusters, clouds, grids.
We conduct computer science research in the context of real people and problems.
We develop open source software for large scale distributed computing.

Science Depends on Computing! AGTCCGTACGATGCTATTAGCGAGCGTGA…

The Good News: Computing is Plentiful!

Big clusters are cheap!

I have a standard, debugged, trusted application that runs on my laptop. A toy problem completes in one hour. A real problem will take a month (I think). Can I get a single result faster? Can I get more results in the same time? Last year, I heard about this grid thing. This year, I heard about this cloud thing. What do I do next?

What they want. What they get.

I can get as many machines on the cloud as I want! How do I organize my application to run on those machines?

Our Philosophy:

Harness all the resources that are available: desktops, clusters, clouds, and grids.
Make it easy to scale up from one desktop to national-scale infrastructure.
Provide familiar interfaces that make it easy to connect existing apps together.
Allow portability across operating systems, storage systems, middleware…
Make simple things easy, and complex things possible.
No special privileges required.

A Quick Tour of the CCTools

Open source, GNU General Public License.
Compiles in 1-2 minutes, installs in $HOME.
Runs on Linux, Solaris, MacOS, Cygwin, FreeBSD, …
Interoperates with many distributed computing systems:
– Condor, SGE, Torque, Globus, iRODS, Hadoop, …
Components:
– Makeflow – a portable workflow manager.
– Work Queue – a lightweight distributed execution system.
– All-Pairs / Wavefront / SAND – specialized execution engines.
– Parrot – a personal user-level virtual file system.
– Chirp – a user-level distributed filesystem.

Install in Your Home Directory

cd $HOME
wget <cctools source tarball URL>
tar xvzf cctools-source.tar.gz
cd cctools-source
./configure
make
make install

Lots of Documentation

Makeflow: A Portable Workflow System

An Old Idea: Makefiles

part1 part2 part3: input.data split.py
	./split.py input.data

out1: part1 mysim.exe
	./mysim.exe part1 > out1

out2: part2 mysim.exe
	./mysim.exe part2 > out2

out3: part3 mysim.exe
	./mysim.exe part3 > out3

result: out1 out2 out3 join.py
	./join.py out1 out2 out3 > result

Makeflow = Make + Workflow

(Diagram: Makeflow dispatches jobs to Local, Condor, Torque, or Work Queue.)

Provides portability across batch systems.
Enables parallelism (but not too much!).
Trickles out work to the batch system.
Fault tolerance at multiple scales.
Data and resource management.

Makeflow Syntax

[output files] : [input files]
	[command to run]

One Rule: the command "sim.exe in.dat -p 50 > out.txt" uses the files sim.exe, in.dat, and calib.dat, and produces out.txt.

Not Quite Right! (missing dependencies):

out.txt : in.dat
	sim.exe -p 50 in.dat > out.txt

Right:

out.txt : in.dat calib.dat sim.exe
	sim.exe -p 50 in.dat > out.txt

You must state all the files needed by the command.

sims.mf:

out.10 : in.dat calib.dat sim.exe
	sim.exe -p 10 in.dat > out.10

out.20 : in.dat calib.dat sim.exe
	sim.exe -p 20 in.dat > out.20

out.30 : in.dat calib.dat sim.exe
	sim.exe -p 30 in.dat > out.30
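For a larger parameter sweep, a file like sims.mf is usually generated by a small script rather than written by hand. Here is a minimal sketch in Python; the program and file names (sim.exe, in.dat, calib.dat, out.N) simply mirror the example above and are illustrative, not part of Makeflow itself.

# Minimal sketch: generate a Makeflow file for a parameter sweep.
# The program and file names mirror the sims.mf example above; adjust
# them for a real application.

params = range(10, 101, 10)   # -p values 10, 20, ..., 100

with open("sims.mf", "w") as mf:
    for p in params:
        mf.write("out.%d : in.dat calib.dat sim.exe\n" % p)
        mf.write("\tsim.exe -p %d in.dat > out.%d\n\n" % (p, p))

Running makeflow on the generated file then behaves exactly like the hand-written version.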

How to run a Makeflow

Run a workflow locally (multicore?):
	makeflow -T local sims.mf

Clean up the workflow outputs:
	makeflow -c sims.mf

Run the workflow on Torque:
	makeflow -T torque sims.mf

Run the workflow on Condor:
	makeflow -T condor sims.mf

Job States

0: WAITING
1: RUNNING (given to the batch system, which decides when to run it)
2: COMPLETE
3: FAILED
4: ABORTED

Transaction Log # TIME, TASKID, STATE, JOBID, STATE[0], STATE[1], …
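As a quick illustration, a short script can replay the transaction log and report how many tasks are in each state. This is a minimal sketch only: the field order (time, task id, state, job id, then per-state counters) is taken from the header above, the state numbering follows the Job States slide, and the default log file name (the makeflow file name plus .makeflowlog) is an assumption to check against your Makeflow version.

#!/usr/bin/env python
# Minimal sketch: summarize task states from a Makeflow transaction log.
# Assumes whitespace-separated records with fields in the order shown in
# the header above: TIME, TASKID, STATE, JOBID, then per-state counters.
import sys
from collections import Counter

STATE_NAMES = {0: "WAITING", 1: "RUNNING", 2: "COMPLETE", 3: "FAILED", 4: "ABORTED"}

def latest_states(logfile):
    # Return the most recent state recorded for each task id.
    state = {}
    with open(logfile) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue                      # skip comments and blank lines
            fields = line.split()
            if len(fields) >= 3:
                state[fields[1]] = int(fields[2])
    return state

if __name__ == "__main__":
    logfile = sys.argv[1] if len(sys.argv) > 1 else "sims.mf.makeflowlog"
    counts = Counter(latest_states(logfile).values())
    for code, name in sorted(STATE_NAMES.items()):
        print("%-8s %d" % (name, counts.get(code, 0)))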

Visualization with DOT

makeflow_viz -D example.mf > example.dot
dot -T gif example.dot > example.gif

DOT and related tools:

Example: Biocompute Portal

(Diagram: the portal generates a Makeflow for applications such as BLAST, SSAHA, SHRIMP, EST, and MAKER; Makeflow runs the workflow and submits tasks to the Condor pool, while the transaction log is used to update the portal's status and progress bar.)

Makeflow Applications

Makeflow + Work Queue

Makeflow + Batch System

(Diagram: Makeflow reads the Makefile and local files and programs, and submits jobs directly to one batch system at a time: "makeflow -T torque" to the FutureGrid Torque cluster, "makeflow -T condor" to the campus Condor pool. The private cluster and public cloud provider are marked "???" because there is no direct submission interface to them.)

Makeflow + Work Queue

(Diagram: Makeflow reads the Makefile and local files and programs, and submits tasks to thousands of workers in a personal cloud. Workers are started by ssh on the private cluster, by torque_submit_workers on the FutureGrid Torque cluster, by condor_submit_workers on the campus Condor pool, and on the public cloud provider.)

Advantages of Work Queue

Harness multiple resources simultaneously.
Hold on to cluster nodes to execute multiple tasks rapidly (ms/task instead of min/task).
Scale resources up and down as needed.
Better management of data, with local caching for data-intensive tasks.
Matching of tasks to nodes with data.

Makeflow and Work Queue

To start the Makeflow:
% makeflow -T wq sims.mf
Could not create work queue on port …
% makeflow -T wq -p 0 sims.mf
Listening for workers on port 8374…

To start one worker:
% work_queue_worker studentXX.cse.nd.edu 8374

Start Workers Everywhere

Submit workers to Condor:
	condor_submit_workers studentXX.cse.nd.edu

Submit workers to SGE:
	sge_submit_workers studentXX.cse.nd.edu

Submit workers to Torque:
	torque_submit_workers studentXX.cse.nd.edu

Keeping track of port numbers gets old fast…

Project Names

(Diagram: Makeflow is started with "makeflow … -N myproject" and listens on port 4057; it advertises "myproject is at studentXX:4057" to the catalog server. A worker started with "work_queue_worker -N myproject" queries the catalog and connects to studentXX:4057. work_queue_status also queries the catalog.)

Project Names

Start Makeflow with a project name:
% makeflow -T wq -N myproject sims.mf
Listening for workers on port XYZ…

Start one worker:
% work_queue_worker -N myproject

Start many workers:
% torque_submit_workers -N myproject 5

work_queue_status

Resilience and Fault Tolerance

Makeflow + Work Queue is fault tolerant in many different ways:
– If Makeflow crashes (or is killed) at any point, it will recover by reading the transaction log and continue where it left off.
– Makeflow keeps statistics on both network and task performance, so that excessively bad workers are avoided.
– If a worker crashes, the master will detect the failure and restart the task elsewhere.
– Workers can be added and removed at any time during the execution of the workflow.
– Multiple masters with the same project name can be added and removed while the workers remain.
– If a worker sits idle for too long (default 15 minutes), it will exit, so as not to hold resources idle.

Elastic Application Stack

(Diagram: All-Pairs, Wavefront, Makeflow, and custom apps are all built on the Work Queue library, which drives thousands of workers in a personal cloud spanning a private cluster, a campus Condor pool, a public cloud provider, and a shared SGE cluster.)

Makeflow is an example of the Directed Acyclic Graph (or Workflow) model of programming.

Work Queue is an example of the Submit-Wait model of programming. (more on that next time)
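As a preview of that submit-wait style, here is a minimal sketch using the Work Queue Python binding that ships with CCTools. The command, file names, and project name mirror the sims.mf example above; treat the exact binding details (module name, specify_file arguments) as assumptions to verify against the version of CCTools you installed.

# Minimal submit-wait sketch with the Work Queue Python binding (CCTools).
# File names and the project name mirror the earlier examples.
from work_queue import WorkQueue, Task, WORK_QUEUE_INPUT, WORK_QUEUE_OUTPUT

q = WorkQueue(port=0)                 # port=0: pick any available port
q.specify_name("myproject")           # advertise to the catalog, like makeflow -N
print("listening for workers on port %d" % q.port)

# Submit one task per parameter value, as in sims.mf.
for p in (10, 20, 30):
    t = Task("./sim.exe -p %d in.dat > out.%d" % (p, p))
    t.specify_file("sim.exe", "sim.exe", WORK_QUEUE_INPUT, cache=True)
    t.specify_file("in.dat", "in.dat", WORK_QUEUE_INPUT, cache=True)
    t.specify_file("calib.dat", "calib.dat", WORK_QUEUE_INPUT, cache=True)
    t.specify_file("out.%d" % p, "out.%d" % p, WORK_QUEUE_OUTPUT)
    q.submit(t)

# Wait for results as workers return them.
while not q.empty():
    t = q.wait(5)                     # block for up to 5 seconds
    if t:
        print("task %d finished with exit status %d" % (t.id, t.return_status))

Workers started with "work_queue_worker -N myproject" (or via the *_submit_workers scripts) pick up these tasks just as they do for Makeflow.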

The Directed Acyclic Graph model can be implemented using the Submit-Wait model.
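To make that connection concrete, here is a schematic sketch (plain Python, not CCTools code) of a tiny DAG engine built on a generic submit/wait interface: it repeatedly submits every rule whose inputs are complete, waits for something to finish, and records the finished rule's outputs. Makeflow does the same thing against Work Queue or a batch system; the rules below mirror the Makefile example shown earlier, and the thread pool merely stands in for the remote execution system.

# Schematic sketch: running a DAG with a submit-wait loop.
# A thread pool stands in for the batch system / Work Queue; the rules
# mirror the split/sim/join Makefile example shown earlier.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

# Each rule: (outputs, inputs, shell command), as in a Makeflow file.
rules = [
    (["part1", "part2", "part3"], ["input.data"], "./split.py input.data"),
    (["out1"], ["part1"], "./mysim.exe part1 > out1"),
    (["out2"], ["part2"], "./mysim.exe part2 > out2"),
    (["out3"], ["part3"], "./mysim.exe part3 > out3"),
    (["result"], ["out1", "out2", "out3"], "./join.py out1 out2 out3 > result"),
]

done = {f for _, inputs, _ in rules for f in inputs if os.path.exists(f)}
pending = list(rules)
running = {}   # future -> outputs produced by that rule

with ThreadPoolExecutor(max_workers=4) as pool:
    while pending or running:
        # Submit every rule whose inputs are all complete.
        for rule in [r for r in pending if all(f in done for f in r[1])]:
            pending.remove(rule)
            running[pool.submit(subprocess.check_call, rule[2], shell=True)] = rule[0]
        if not running:
            raise RuntimeError("no runnable rules: missing inputs for %s" % pending)
        # Wait for at least one submitted rule to finish, then mark its outputs done.
        finished, _ = wait(running, return_when=FIRST_COMPLETED)
        for fut in finished:
            fut.result()                      # raises if the command failed
            done.update(running.pop(fut))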