
Building Scalable Applications on the Cloud with Makeflow and Work Queue
Douglas Thain and Patrick Donnelly, University of Notre Dame
Science Cloud Summer School, Indiana University, August 1, 2012

Go to: Click on “Cloud Summer School Tutorial”

The Cooperative Computing Lab

We collaborate with people who have large scale computing problems in science, engineering, and other fields. We operate computer systems at the scale of O(10,000) cores: clusters, clouds, grids. We conduct computer science research in the context of real people and problems. We develop open source software for large scale distributed computing.

Science Depends on Computing! AGTCCGTACGATGCTATTAGCGAGCGTGA…

The Good News: Computing is Plentiful!

Big clusters are cheap!

I have a standard, debugged, trusted application that runs on my laptop. A toy problem completes in one hour. A real problem will take a month (I think). Can I get a single result faster? Can I get more results in the same time? Last year, I heard about this grid thing. This year, I heard about this cloud thing. What do I do next?

What they want. What they get.

I can get as many machines on the cloud as I want! How do I organize my application to run on those machines?

Our Philosophy:

Harness all the resources that are available: desktops, clusters, clouds, and grids.
Make it easy to scale up from one desktop to national scale infrastructure.
Provide familiar interfaces that make it easy to connect existing apps together.
Allow portability across operating systems, storage systems, middleware…
Make simple things easy, and complex things possible.
No special privileges required.

A Quick Tour of the CCTools

Open source, GNU General Public License.
Compiles in 1-2 minutes, installs in $HOME.
Runs on Linux, Solaris, MacOS, Cygwin, FreeBSD, …
Interoperates with many distributed computing systems.
– Condor, SGE, Torque, Globus, iRODS, Hadoop…

Components:
– Makeflow – A portable workflow manager.
– Work Queue – A lightweight distributed execution system.
– All-Pairs / Wavefront / SAND – Specialized execution engines.
– Parrot – A personal user-level virtual file system.
– Chirp – A user-level distributed filesystem.

Makeflow: A Portable Workflow System

An Old Idea: Makefiles

part1 part2 part3: input.data split.py
	./split.py input.data

out1: part1 mysim.exe
	./mysim.exe part1 > out1

out2: part2 mysim.exe
	./mysim.exe part2 > out2

out3: part3 mysim.exe
	./mysim.exe part3 > out3

result: out1 out2 out3 join.py
	./join.py out1 out2 out3 > result

Makeflow = Make + Workflow

Makeflow runs a workflow on Local, Condor, Torque, or Work Queue back ends.
Provides portability across batch systems.
Enables parallelism (but not too much!).
Fault tolerance at multiple scales.
Data and resource management.

Makeflow Syntax

One rule:
[output files] : [input files]
	[command to run]

Example: sim.exe reads in.dat and calib.dat and writes out.txt:
	sim.exe in.dat -p 50 > out.txt

Not quite right (inputs missing):
out.txt : in.dat
	sim.exe -p 50 in.dat > out.txt

One rule, with all files stated:
out.txt : in.dat calib.dat sim.exe
	sim.exe -p 50 in.dat > out.txt

You must state all the files needed by the command.

sims.mf:

out.10 : in.dat calib.dat sim.exe
	sim.exe -p 10 in.dat > out.10

out.20 : in.dat calib.dat sim.exe
	sim.exe -p 20 in.dat > out.20

out.30 : in.dat calib.dat sim.exe
	sim.exe -p 30 in.dat > out.30
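Writing many nearly identical rules by hand gets tedious as the sweep grows. A minimal sketch of a generator script in Python (a hypothetical helper, not part of CCTools; it reuses the file and program names from the example above):

# generate_sims.py: write one Makeflow rule per parameter value.
params = range(10, 101, 10)

with open("sims.mf", "w") as mf:
    for p in params:
        out = "out.%d" % p
        # Every file the command needs appears on the right-hand side.
        mf.write("%s : in.dat calib.dat sim.exe\n" % out)
        mf.write("\tsim.exe -p %d in.dat > %s\n\n" % (p, out))

Running Makeflow on the generated file then dispatches all of the rules, subject to their dependencies.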

How to Run a Makeflow

Run a workflow locally (multicore?):
	makeflow -T local sims.mf

Clean up the workflow outputs:
	makeflow -c sims.mf

Run the workflow on Torque:
	makeflow -T torque sims.mf

Run the workflow on Condor:
	makeflow -T condor sims.mf

Example: Biocompute Portal

(Diagram: the portal generates a Makeflow for applications such as BLAST, SSAHA, SHRIMP, EST, and MAKER; Makeflow runs the workflow and submits tasks to a Condor pool, while the transaction log drives status updates and a progress bar.)

Makeflow + Work Queue

Makeflow + Batch System

(Diagram: Makeflow reads the Makefile plus local files and programs, and submits to a single batch system at a time: makeflow -T torque for the FutureGrid Torque cluster, makeflow -T condor for the campus Condor pool. The private cluster and public cloud provider are marked "???" since there is no direct submission path.)

Makeflow + Work Queue

(Diagram: Makeflow submits tasks to thousands of workers in a personal cloud. Workers (W) are started by ssh on the private cluster, by condor_submit_workers on the campus Condor pool, by torque_submit_workers on the FutureGrid Torque cluster, and on the public cloud provider.)

Advantages of Work Queue

Harness multiple resources simultaneously.
Hold on to cluster nodes to execute multiple tasks rapidly (ms/task instead of min/task).
Scale resources up and down as needed.
Better management of data, with local caching for data intensive tasks.
Matching of tasks to nodes with data.

Makeflow and Work Queue

To start the Makeflow:
	% makeflow -T wq sims.mf
	Could not create work_queue on port
	% makeflow -T wq -p 0 sims.mf
	Listening for workers on port 8374…

To start one worker:
	% work_queue_worker alamo.futuregrid.org 8374

Start Workers Everywhere!

Submit workers to Torque:
	torque_submit_workers alamo.futuregrid.org
Submit workers to Condor:
	condor_submit_workers alamo.futuregrid.org
Submit workers to SGE:
	sge_submit_workers alamo.futuregrid.org

Keeping track of port numbers gets old fast…

Project Names

(Diagram: Makeflow, started with "makeflow … -a -N myproject", listens on port 4057 and advertises to the catalog that "myproject" is at alamo:4057. A worker started with "work_queue_worker -a -N myproject" queries the catalog and connects to alamo:4057; work_queue_status also queries the catalog.)

Project Names

Start Makeflow with a project name:
	makeflow -T wq -p 0 -a -N fg-tutorial sims.mf
	Listening for workers on port XYZ…

Start one worker:
	work_queue_worker -a -N fg-tutorial

Start many workers:
	torque_submit_workers -a -N fg-tutorial 5

work_queue_status

Resilience and Fault Tolerance

Makeflow + Work Queue is fault tolerant in many different ways:
– If Makeflow crashes (or is killed) at any point, it will recover by reading the transaction log and continue where it left off.
– Makeflow keeps statistics on both network and task performance, so that excessively bad workers are avoided.
– If a worker crashes, the master will detect the failure and restart the task elsewhere.
– Workers can be added and removed at any time during the execution of the workflow.
– Multiple masters with the same project name can be added and removed while the workers remain.
– If a worker sits idle for too long (default 15 minutes), it will exit, so as not to hold resources idle.

Managing Your Workforce

(Diagram: work_queue_pool -T torque and work_queue_pool -T condor maintain pools of workers on the Torque cluster and the Condor pool, serving Masters A, B, and C. Each pool submits new workers, restarts failed workers, and removes unneeded workers.)

What if my program is dynamic? Write a program to use the Work Queue API.

Work Queue API

#include "work_queue.h"

while( not done ) {
    while( more work ready ) {
        task = work_queue_task_create();
        // add some details to the task
        work_queue_submit(queue, task);
    }
    task = work_queue_wait(queue);
    // process the completed task
}

Work Queue System

(Diagram: a Work Queue program in C / Python / Perl uses the Work Queue library to drive 1000s of workers dispatched to clusters, clouds, and grids. For each task the master sends "put P.exe", "put in.txt", and "exec P.exe", then retrieves results with "get out.txt"; each worker keeps a cache of recently used files.)

Run One Task in C

#include "work_queue.h"

struct work_queue *queue;
struct work_queue_task *task;

queue = work_queue_create( 0 );
work_queue_specify_name( queue, "myproject" );

task = work_queue_task_create( "sim.exe -p 50 in.dat >out.txt" );
/// Missing: Specify files needed by the task.
work_queue_submit( queue, task );

while( !work_queue_empty(queue) ) {
    task = work_queue_wait( queue, 60 );
    if(task) work_queue_task_delete( task );
}

Run One Task in Perl

use work_queue;

$queue = work_queue_create( 0 );
work_queue_specify_name( $queue, "myproject" );

$task = work_queue_task_create( "sim.exe -p 50 in.dat >out.txt" );
### Missing: Specify files needed by the task.
work_queue_submit( $queue, $task );

while( !work_queue_empty($queue) ) {
    $task = work_queue_wait( $queue, 60 );
    work_queue_task_delete( $task ) if($task);
}

Run One Task in Python

from work_queue import *

queue = WorkQueue( name="myproject", port=0 )

task = Task( "sim.exe -p 50 in.dat >out.txt" )
### Missing: Specify files needed by the task.
queue.submit( task )

while not queue.empty():
    task = queue.wait(60)

Specify Files for a Task

sim.exe reads in.dat and calib.dat and writes out.txt:
	sim.exe in.dat -p 50 > out.txt

work_queue_specify_file( task, "in.dat", "in.dat", WORK_QUEUE_INPUT, WORK_QUEUE_NOCACHE );
work_queue_specify_file( task, "out.txt", "out.txt", WORK_QUEUE_OUTPUT, WORK_QUEUE_NOCACHE );
work_queue_specify_file( task, "calib.dat", "calib.dat", WORK_QUEUE_INPUT, WORK_QUEUE_CACHE );
work_queue_specify_file( task, "sim.exe", "sim.exe", WORK_QUEUE_INPUT, WORK_QUEUE_CACHE );

You must state all the files needed by the command.
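Putting the pieces together, here is a minimal sketch of one complete task in Python, with every file stated. It assumes the Python bindings mirror the C calls above; the task.id and task.return_status fields used at the end are assumptions for illustration, not text from the slides.

from work_queue import *

queue = WorkQueue( name="myproject", port=0 )

task = Task( "sim.exe -p 50 in.dat > out.txt" )

# Inputs: cache the files that are reused across tasks.
task.specify_file( "sim.exe",   "sim.exe",   WORK_QUEUE_INPUT,  WORK_QUEUE_CACHE )
task.specify_file( "calib.dat", "calib.dat", WORK_QUEUE_INPUT,  WORK_QUEUE_CACHE )
task.specify_file( "in.dat",    "in.dat",    WORK_QUEUE_INPUT,  WORK_QUEUE_NOCACHE )
# Output: retrieved back to the master when the task completes.
task.specify_file( "out.txt",   "out.txt",   WORK_QUEUE_OUTPUT, WORK_QUEUE_NOCACHE )

queue.submit( task )

while not queue.empty():
    task = queue.wait(60)
    if task:
        print( "task %d finished with return status %d" % (task.id, task.return_status) )

The cached files (the executable and calibration data) are transferred to each worker once and reused across tasks; the per-task input and output are not cached.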

Sample Applications of Work Queue

Genome Assembly

Christopher Moretti, Andrew Thrasher, Li Yu, Michael Olson, Scott Emrich, and Douglas Thain, "A Framework for Scalable Genome Assembly on Clusters, Clouds, and Grids," IEEE Transactions on Parallel and Distributed Systems, 2012.

Using Work Queue, we could assemble a human genome in 2.5 hours on a collection of clusters, clouds, and grids with a speedup of 952X.

(Diagram: sequence data flows through a SAND filter master and a SAND align master into Celera Consensus, forming a modified Celera Assembler, with tasks dispatched to workers.)

Replica Exchange

(Diagram: replicas at T=10K, T=20K, T=30K, and T=40K exchanged through Work Queue.)

Simplified Algorithm:
– Submit N short simulations at different temps.
– Wait for all to complete.
– Select two simulations to swap.
– Continue all of the simulations.

Dinesh Rajan, Anthony Canino, Jesus A. Izaguirre, and Douglas Thain, "Converting a High Performance Application to an Elastic Cloud Application," CloudCom 2011.
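A minimal sketch of that loop in Python with Work Queue; the simulation command, its file names, and the swap rule are placeholders for illustration, not the actual application from the paper:

import random
from work_queue import *

temps = [10, 20, 30, 40]          # placeholder temperatures, one per replica
queue = WorkQueue(port=0)

for step in range(100):           # number of exchange rounds (placeholder)
    # Submit one short simulation per replica at its current temperature.
    for i, t in enumerate(temps):
        task = Task("sim.exe -t %d -i replica.%d.in > replica.%d.out" % (t, i, i))
        # (File specifications omitted for brevity; see "Specify Files for a Task".)
        queue.submit(task)

    # Wait for every simulation in this round to complete.
    while not queue.empty():
        queue.wait(10)

    # Select two neighboring replicas and swap their temperatures.
    i = random.randint(0, len(temps) - 2)
    temps[i], temps[i + 1] = temps[i + 1], temps[i]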

Adaptive Weighted Ensemble

Proteins fold into a number of distinctive states, each of which affects its function in the organism. How common is each state? How does the protein transition between states? How common are those transitions?

AWE Using Work Queue

Simplified Algorithm:
– Submit N short simulations in various states.
– Wait for them to finish.
– When done, record all state transitions.
– If too many are in one state, redistribute them.
– Has enough data been collected?
  – Yes: stop.
  – No: go back to the beginning.

AWE on Clusters, Clouds, and Grids

New Pathway Found!

Credit: Joint work in progress with Badi Abdul-Wahid, Dinesh Rajan, Haoyun Feng, Jesus Izaguirre, and Eric Darve.

Other parts of the CCTools…

Parrot Virtual File System

(Diagram: an ordinary application uses the normal filesystem interface: open/read/write/close. Parrot translates those calls to Local, HTTP, CVMFS, Chirp, and iRODS back ends, reaching web servers, CVMFS, a Chirp server, and an iRODS server over the network.)

All-Pairs and Wavefront

Specialized tools for very specific workflow patterns:
	allpairs A.list B.list F.exe
	wavefront M.data F.exe

(Diagram: All-Pairs applies F to every pair (Ai, Bj); Wavefront fills a matrix M in which each cell depends on its neighbors below, to the left, and on the diagonal.)
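For clarity, the patterns these tools parallelize can be written down sequentially. A sketch of the intended semantics in Python; the function names are illustrative, and the Wavefront recurrence shown is the usual left/below/diagonal dependence assumed here rather than quoted from the slide:

def all_pairs(A, B, F):
    # All-Pairs: apply F to every combination of one element of A and one of B.
    return {(a, b): F(a, b) for a in A for b in B}

def wavefront(M, n, F):
    # Wavefront: the first row and column of the (n+1) x (n+1) grid M are given;
    # each remaining cell depends on the cells just before it in each dimension
    # and on the diagonal, so cells along each anti-diagonal are independent
    # and can be computed in parallel.
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            M[i][j] = F(M[i - 1][j], M[i][j - 1], M[i - 1][j - 1])
    return M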

Elastic Application Stack

(Diagram: All-Pairs, Wavefront, Makeflow, and custom apps all sit on the Work Queue library, which drives thousands of workers in a personal cloud spanning a private cluster, a campus Condor pool, a public cloud provider, and a shared SGE cluster.)

Computer Science Challenges in Scalable Programs

Capacity Management
– Just because I can run on 1000 cores doesn't mean it makes sense! What size gets the best perf, cost, etc.?
Software Engineering Tools
– A linker to gather up all the dependencies, a loader to move an entire workflow to the cloud, profiler, debugger…
Composability
– Program A (500 cores) calls program B (800 cores) and program C (1000 cores). But I only have 600 cores. What happens next?
Portability and Reproducibility
– This program ran correctly last year on Blue Sock Linux 35.2, but this year I have Green Scarf Linux 85. Will it work?
Effortless Scalability
– Some component always has an arbitrary limit that breaks as you scale up (kernel, network, filesystem). How do we discover, avoid, and repair all of these issues?

Research is a Team Sport

Faculty Collaborators: Patrick Flynn (ND), Scott Emrich (ND), Jesus Izaguirre (ND), Eric Darve (Stanford), Vijay Pande (Stanford), Sekou Remy (Clemson)

Current Graduate Students: Michael Albrecht, Patrick Donnelly, Dinesh Rajan, Peter Sempolinski, Li Yu

Recent CCL PhDs: Peter Bui (UWEC), Hoang Bui (Rutgers), Chris Moretti (Princeton)

Summer REU Students: Chris Bauschka, Iheanyi Ekechuku, Joe Fetsch

Papers, Software, Manuals, …

This work was supported by NSF Grants CCF, CNS, and CNS.

Next Steps:

Tutorial: Basic examples of how to use Makeflow and Work Queue on FutureGrid.
Homework: Problems to work on this week when you have some free time.
Later: Get CCTools running on systems at your home institutions.

Form groups of three! (or FutureGrid will melt!)

Go to: Click on “Cloud Summer School Tutorial”