Introduction to Scalable Programming using Work Queue Dinesh Rajan and Mike Albrecht University of Notre Dame October 24 and November 7, 2012.



Makeflow
Great for static workflows!
- Tasks/dependencies are known beforehand
- Directed Acyclic Graphs
Great for file-based workflows!
- Input: files, Output: files
What if my workflow isn't any of the above?
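For illustration, a minimal sketch of one static Makeflow rule (Makeflow uses Make-style syntax; word-compare.py and in.txt are borrowed from the examples later in these slides):

    out.txt: word-compare.py in.txt
        python word-compare.py in.txt > out.txt

Each rule names its output files, its input files, and the command that produces the outputs, so the entire DAG is known before execution begins.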

Use Work Queue! APIs available for:
- Python
- C
- Perl

Protein Folding
Proteins fold into a number of distinctive states, each of which affects the protein's function in the organism. How common is each state? How does the protein transition between states? How common are those transitions?

Accelerated Weighted Ensemble (AWE)
1. Create N simulations
2. Run simulations
3. Aggregate output of simulations
4. Compute distribution of state transitions
5. Redistribute states if too many are in one state
6. Enough data collected? If yes, stop; if no, repeat from step 2.
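A minimal sketch of this outer loop as a Work Queue master in Python, using only the API calls shown later in these slides. The script run_simulation.py and the helpers init_states(), enough_data(), aggregate(), and redistribute() are hypothetical placeholders for the real AWE logic:

    from work_queue import WorkQueue, Task, WORK_QUEUE_INPUT, WORK_QUEUE_OUTPUT

    queue = WorkQueue(port=9123)      # workers connect to this port

    states = init_states(100)         # hypothetical: N initial walker states

    while not enough_data(states):    # hypothetical convergence test
        # Create and submit one simulation task per state.
        for i in range(len(states)):
            task = Task("python run_simulation.py state.%d > out.%d" % (i, i))
            task.specify_file("run_simulation.py", "run_simulation.py",
                              WORK_QUEUE_INPUT, cache=True)
            task.specify_file("state.%d" % i, "state.%d" % i,
                              WORK_QUEUE_INPUT, cache=False)
            task.specify_file("out.%d" % i, "out.%d" % i,
                              WORK_QUEUE_OUTPUT, cache=False)
            queue.submit(task)

        # Drain the queue; resubmit tasks that failed, since failures
        # are common at this scale.
        while not queue.empty():
            task = queue.wait(60)
            if task and task.return_status != 0:
                queue.submit(task)

        # Aggregate outputs, compute the transition distribution, and
        # redistribute over-represented states.
        states = redistribute(aggregate(states))   # hypothetical helpers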

Challenges
- ~1 million simulations
- Large scale of resources needed
- Continuously run simulations
- Lots of failures happen at large scales
- Manage resources: add and remove resources as needed; avoid wasting resources

How did we do it?
- Used Work Queue
- Aggregated resources from ND CRC, Condor, Amazon EC2, Microsoft Azure
- Used resources as they became available
- Handled failures and errors
- Efficiently used allocated resources
- Used ~2500 CPUs/GPUs over several days!

AWE on Clusters, Clouds, and Grids

New Pathway Found!
Joint work with computational biologists: Folding Proteins at 500 ns/hour with Work Queue, eScience 2012

Genome Assembly
Using Work Queue, we assembled a human genome in 2.5 hours on a collection of clusters, clouds, and grids with a speedup of 952x.
A Framework for Scalable Genome Assembly on Clusters, Clouds, and Grids, IEEE Transactions on Parallel and Distributed Systems, 2012
[Figure: sequence data flows through a modified Celera Assembler pipeline; a SAND filter master, a SAND align master, and a Celera Consensus stage each dispatch work to pools of workers (W).]

Work Queue System
[Figure: a Work Queue Master Program (C / Python / Perl), linked against the Work Queue Library, drives 1000s of workers dispatched to clusters, clouds, & grids. The master sends each worker commands such as: put word-compare.py, put in.txt, exec word-compare in.txt > out.txt, get out.txt. Workers cache recently used files.]

Work Queue API Outline

use work_queue;

queue = work_queue_create();

while( work to be done ) {
    task = work_queue_task_create();
    // specify details for the task
    work_queue_submit(queue, task);
}

while ( tasks in queue ) {
    task = work_queue_wait(queue);
    // process the completed task
}

The same pattern, in pseudocode:

for ( all tasks ) {
    T = create_task();
    specify_task(T);
    submit_task(T);
}

while ( not finished ) {
    T = wait_for_task();
    // process T's output
}

Run One Task in Python

from work_queue import *

queue = WorkQueue( port = 0 )

task = Task("python word-compare.py in.txt > out.txt")

### Missing: Specify files needed by the task.

queue.submit( task )

while not queue.empty():
    task = queue.wait(60)

Run One Task in Perl

use work_queue;

$queue = work_queue_create( 0 );

$task = work_queue_task_create("python word-compare.py in.txt > out.txt");

### Missing: Specify files needed by the task.

work_queue_submit( $queue, $task );

while(!work_queue_empty($queue)) {
    $task = work_queue_wait( $queue, 60 );
    if($task) { work_queue_task_delete( $task ); }
}

Run One Task in C

#include "work_queue.h"

struct work_queue *queue;
struct work_queue_task *task;

queue = work_queue_create( 0 );

task = work_queue_task_create("python word-compare.py in.txt > out.txt");

/// Missing: Specify files needed by the task.

work_queue_submit( queue, task );

while(!work_queue_empty(queue)) {
    task = work_queue_wait( queue, 60 );
    if(task) work_queue_task_delete( task );
}

Python: Specify Files for a Task

task.specify_file( "in.txt", "in.txt", WORK_QUEUE_INPUT, cache = False )
task.specify_file( "out.txt", "out.txt", WORK_QUEUE_OUTPUT, cache = False )
task.specify_file( "word-compare.py", "word-compare.py", WORK_QUEUE_INPUT, cache = True )

[Figure: word-compare.py and in.txt flow into the command "word-compare.py in.txt > out.txt", which produces out.txt.]
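Putting the last two slides together, a complete sketch of the one-task Python program. The port number 9123 is an arbitrary illustrative choice (port = 0 lets Work Queue pick one), and the queue.port and task.return_status properties are assumptions about the Python binding rather than anything shown on these slides:

    from work_queue import WorkQueue, Task, WORK_QUEUE_INPUT, WORK_QUEUE_OUTPUT

    queue = WorkQueue(port=9123)
    print("listening on port %d" % queue.port)

    task = Task("python word-compare.py in.txt > out.txt")

    # State all the files needed by the command.
    task.specify_file("in.txt", "in.txt", WORK_QUEUE_INPUT, cache=False)
    task.specify_file("out.txt", "out.txt", WORK_QUEUE_OUTPUT, cache=False)
    task.specify_file("word-compare.py", "word-compare.py", WORK_QUEUE_INPUT, cache=True)

    queue.submit(task)

    while not queue.empty():
        task = queue.wait(60)   # returns a finished task, or None on timeout
        if task:
            print("task finished with exit status %d" % task.return_status)

Workers are started separately and pointed at the master's host and port, as shown on the "Start workers" slide below.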

Perl: Specify Files for a Task

work_queue_task_specify_file( $task, "in.txt", "in.txt", $WORK_QUEUE_INPUT, $WORK_QUEUE_NOCACHE );
work_queue_task_specify_file( $task, "out.txt", "out.txt", $WORK_QUEUE_OUTPUT, $WORK_QUEUE_NOCACHE );
work_queue_task_specify_file( $task, "word-compare.py", "word-compare.py", $WORK_QUEUE_INPUT, $WORK_QUEUE_CACHE );

C: Specify Files for a Task

work_queue_task_specify_file( task, "in.txt", "in.txt", WORK_QUEUE_INPUT, WORK_QUEUE_NOCACHE );
work_queue_task_specify_file( task, "out.txt", "out.txt", WORK_QUEUE_OUTPUT, WORK_QUEUE_NOCACHE );
work_queue_task_specify_file( task, "word-compare.py", "word-compare.py", WORK_QUEUE_INPUT, WORK_QUEUE_CACHE );

You must state all the files needed by the command.

Start workers for your Work Queue program

Start one local worker:
work_queue_worker $MASTERHOST $MASTERPORT

Submit 10 workers to SGE:
sge_submit_workers $MASTERHOST $MASTERPORT 10

Submit 10 workers to Condor:
condor_submit_workers $MASTERHOST $MASTERPORT 10

$MASTERHOST = name of the machine where the Work Queue program runs (e.g., opteron.crc.nd.edu)
$MASTERPORT = port on which the Work Queue program is listening (e.g., 1024)

Run Makeflow with Work Queue

Start Makeflow with the Work Queue driver:
makeflow -T wq
This runs Makeflow as a Work Queue master.

Start Work Queue workers for your Makeflow:
work_queue_worker $MASTERHOST $MASTERPORT
sge_submit_workers $MASTERHOST $MASTERPORT 10

$MASTERHOST = name of the machine where the Work Queue program runs (e.g., opteron.crc.nd.edu)
$MASTERPORT = port on which the Work Queue program is listening (e.g., 1024)

Advantages of Using Work Queue
- Harness multiple resources simultaneously.
- Hold on to cluster nodes to execute multiple tasks rapidly (ms/task instead of min/task).
- Scale resources up and down as needed.
- Better management of data, with local caching for data-intensive tasks.
- Matching of tasks to nodes with data.
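To make the caching advantage concrete, a sketch of a master that submits many tasks sharing one executable: because word-compare.py is specified with cache = True, each worker transfers it once and reuses it for every later task, while the per-task in.*.txt files (hypothetical names) are transferred fresh each time:

    from work_queue import WorkQueue, Task, WORK_QUEUE_INPUT, WORK_QUEUE_OUTPUT

    queue = WorkQueue(port=9123)

    for i in range(1000):
        task = Task("python word-compare.py in.%d.txt > out.%d.txt" % (i, i))
        # Cached: sent to each worker once, then reused across tasks.
        task.specify_file("word-compare.py", "word-compare.py", WORK_QUEUE_INPUT, cache=True)
        # Not cached: unique to this task.
        task.specify_file("in.%d.txt" % i, "in.%d.txt" % i, WORK_QUEUE_INPUT, cache=False)
        task.specify_file("out.%d.txt" % i, "out.%d.txt" % i, WORK_QUEUE_OUTPUT, cache=False)
        queue.submit(task)

    while not queue.empty():
        task = queue.wait(60)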

Work Queue Documentation

Next Steps
1. Tutorial: basic examples of how to write and use Work Queue.
2. Practice problems to try on completion of the tutorial.

Scalable Programming using Makeflow and Work Queue
- We learn how to express parallelism in workflows as tasks. (tutorial)
- We learn how to specify tasks and their dependencies (inputs, outputs, etc.) for execution. (tutorial)
- We learn how to run these tasks using multiple resources. (tutorial/practice problems)
- We apply these techniques to achieve scalability in running large workflows using hundreds to thousands of CPUs. (practice problems/your research)