July 11-15, 2005 Lecture3: Grid Job Management 1 Grid Compute Resources and Job Management

July 11-15, 2005 Lecture3: Grid Job Management 2 Job and compute resource management This module is about running jobs on remote compute resources

July 11-15, 2005 Lecture3: Grid Job Management 3 Job and resource management
Compute resources have a local resource manager, which controls who is allowed to run jobs on a resource and how they run.
GRAM helps us run a job on a remote resource.
Condor manages jobs.

July 11-15, 2005 Lecture3: Grid Job Management 4 Local Resource Managers
Local Resource Managers (LRMs) are software on a compute resource, such as a multi-node cluster, that control which jobs run, when they run and on which processor they run.
Example policies:
 - Each cluster node can run one job; if there are more jobs, the other jobs must wait in a queue.
 - Reservations: some nodes in the cluster may be reserved for a specific person.
Examples: PBS, LSF, Condor.
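As an illustration (not from the original slides), submitting directly to one particular LRM, here PBS, might look like this minimal sketch; the script contents, resource requests and file name are illustrative:
#!/bin/sh
#PBS -l nodes=1
#PBS -l walltime=00:10:00
/bin/hostname
submitted with: qsub job.pbs
Note that this only works on that PBS cluster; the portability problem this creates is exactly what GRAM addresses below.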

Job Management on a Grid (diagram): the user submits jobs through GRAM to the Grid, which spans several sites (Site A, Site B, Site C, Site D), each running its own LRM (Condor, PBS, LSF, or plain fork).

July 11-15, 2005 Lecture3: Grid Job Management 6 GRAM
GRAM: Globus Resource Allocation Manager.
Provides a standardised interface to submit jobs to different types of LRM.
Clients submit a job request to GRAM; GRAM translates it into something the LRM can understand.
The same job request can be used for many different kinds of LRM.

GRAM Given a job specification, GRAM can:
 - create an environment for a job
 - stage files to and from the environment
 - submit a job to a local resource manager
 - monitor a job
 - send notifications of job state changes
 - stream a job’s stdout/err during execution

Two versions of GRAM There are two versions of GRAM:
 - GRAM2: uses its own protocols; older; more widely used; no longer actively developed.
 - GRAM4: based on web services; newer; new features go into GRAM4.
In this module, we will be using GRAM2.

GRAM components
Clients – e.g. globus-job-submit, globusrun.
Gatekeeper – a server that accepts job submissions and handles security.
Jobmanager – knows how to send a job into the local resource manager; there are different job managers for different LRMs.

GRAM components (diagram): the submitting machine (e.g. the user's workstation) runs globus-job-run and contacts the Gatekeeper over the Internet; the Gatekeeper starts a Jobmanager, which submits the job to the LRM (e.g. Condor, PBS, LSF), which runs it on the worker nodes / CPUs.

July 11-15, 2005 Lecture3: Grid Job Management 11 Submitting a job with GRAM
The globus-job-run command:
globus-job-run rookery.uchicago.edu /bin/hostname
rook11
This runs '/bin/hostname' on the resource rookery.uchicago.edu and prints its output (here, the hostname rook11). We don't care what LRM is used on 'rookery'; this command works with any LRM.
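If we do want to pick the LRM, GRAM2 lets the resource contact name a specific jobmanager; a small sketch, assuming a PBS-backed jobmanager is configured on 'rookery':
globus-job-run rookery.uchicago.edu/jobmanager-pbs /bin/hostname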

The client can describe the job with GRAM’s Resource Specification Language (RSL). Example:
&(executable = a.out)
 (directory = /home/nobody)
 (arguments = arg1 "arg 2")
Submit with: globusrun -f spec.rsl -r rookery.uchicago.edu
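RSL has more attributes than this example shows; a hedged sketch that also captures the job's output and sets a run-time limit (the file names and the 60-minute limit are illustrative):
&(executable = a.out)
 (directory = /home/nobody)
 (arguments = arg1 "arg 2")
 (stdout = a.stdout)
 (stderr = a.stderr)
 (count = 1)
 (maxWallTime = 60)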

Use other programs to generate RSL RSL job descriptions can become very complicated. We can use other programs to generate the RSL for us. Example: Condor-G – next section.

July 11-15, 2005 Lecture3: Grid Job Management 14 Condor
globus-job-run submits jobs, but provides no job tracking: what happens when something goes wrong?
Condor has many features; in this module we focus on Condor-G for reliable job management.

July 11-15, 2005 Lecture3: Grid Job Management 15 Condor can manage a large number of jobs
Managing a large number of jobs:
 - You specify the jobs in a file and submit them to Condor, which runs them all and keeps you notified of their progress.
 - Mechanisms help you manage huge numbers of jobs (1000’s), all the data, etc.
 - Condor can handle inter-job dependencies (DAGMan) – see the sketch after this slide.
 - Condor users can set job priorities.
 - Condor administrators can set user priorities.
Condor can do this as:
 - a local resource manager on a compute resource
 - a grid client submitting to GRAM (Condor-G)
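A minimal sketch of a DAGMan input file expressing an inter-job dependency (the submit-file and DAG-file names are illustrative):
JOB A generate.sub
JOB B analyze.sub
PARENT A CHILD B
This says job B may only start once job A has completed; the DAG is submitted with: condor_submit_dag mydag.dag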

July 11-15, 2005 Lecture3: Grid Job Management 16 Condor can manage compute resources
 - Dedicated resources: compute clusters.
 - Non-dedicated resources: desktop workstations in offices and labs, often idle 70% of the time.
In this role Condor acts as a Local Resource Manager.

July 11-15, 2005 Lecture3: Grid Job Management 17 … and Condor Can Manage Grid jobs Condor-G is a specialization of Condor. It is also known as the “Grid universe”. Condor-G can submit jobs to Globus resources, just like globus-job-run. Condor-G benefits from Condor features, like a job queue.

July 11-15, 2005 Lecture3: Grid Job Management 18 Some Grid Challenges
Condor-G does whatever it takes to run your jobs, even if:
 - the gatekeeper is temporarily unavailable
 - the job manager crashes
 - your local machine crashes
 - the network goes down

July 11-15, 2005 Lecture3: Grid Job Management 19 Remote Resource Access: Globus (diagram): from Organization A the user runs "globusrun myjob …", which speaks the Globus GRAM protocol to a Globus JobManager at Organization B, where the job is started with fork().

July 11-15, 2005 Lecture3: Grid Job Management 20 Remote Resource Access: Condor-G + Globus + Condor (diagram): at Organization A, Condor-G holds a queue of jobs (myjob1, myjob2, myjob3, myjob4, myjob5, …) and uses the Globus GRAM protocol to contact Globus GRAM at Organization B, which submits the jobs to the local LRM.

July 11-15, 2005 Lecture3: Grid Job Management 21 Example Application
Simulate the behavior of F(x,y,z) for 20 values of x, 10 values of y and 3 values of z (20*10*3 = 600 combinations).
 - F takes on average 3 hours to compute on a "typical" workstation (total = 1800 hours).
 - F requires a "moderate" (128 MB) amount of memory.
 - F performs "moderate" I/O: (x,y,z) is 5 MB and F(x,y,z) is 50 MB.
That makes 600 jobs.

July 11-15, 2005 Lecture3: Grid Job Management 22 Creating a Submit Description File
A plain ASCII text file that tells Condor about your job:
 - which executable, universe, input, output and error files to use
 - command-line arguments and environment variables
 - any special requirements or preferences (more on this later)
It can describe many jobs at once (a "cluster"), each with different input, arguments, output, etc.

July 11-15, 2005 Lecture3: Grid Job Management 23 Simple Submit Description File
# Simple condor_submit input file
# (Lines beginning with # are comments)
# NOTE: the words on the left side are not
# case sensitive, but filenames are!
Universe   = vanilla
Executable = my_job
Queue
Submit it with:
$ condor_submit myjob.sub
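A sketch of a submit file for a whole cluster of jobs, in the spirit of the 600-job example from slide 21 (the executable and file names are made up); $(Process) expands to 0, 1, 2, … for each queued job:
Universe   = vanilla
Executable = simulate_F
Arguments  = $(Process)
Input      = input.$(Process)
Output     = output.$(Process)
Error      = error.$(Process)
Log        = simulate_F.log
Queue 600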

July 11-15, 2005 Lecture3: Grid Job Management 24 Other Condor commands
 - condor_q – show status of the job queue
 - condor_status – show status of compute nodes
 - condor_rm – remove a job
 - condor_hold – hold a job temporarily
 - condor_release – release a job from hold
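A short, hedged usage sketch (the job id 42.0 is made up):
$ condor_q                # show my jobs in the queue
$ condor_hold 42.0        # put job 42.0 on hold
$ condor_release 42.0     # let it run again
$ condor_rm 42.0          # remove it from the queue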

July 11-15, 2005 Lecture3: Grid Job Management 25 Condor-G: Access non-Condor Grid resources
Globus: middleware deployed across the entire Grid; remote access to computational resources; dependable, robust data transfer.
Condor: job scheduling across multiple resources; strong fault tolerance with checkpointing and migration; layered over Globus as a "personal batch system" for the Grid.

July 11-15, 2005 Lecture3: Grid Job Management 26 Condor-G (diagram): a job description (job ClassAd) is handed to Condor-G, which can route it to several back ends: GT2 (versions 2.1/2.2/2.4) over HTTPS, GT4 (WSRF), Condor, PBS/LSF, NorduGrid, and Unicore.

July 11-15, 2005 Lecture3: Grid Job Management 27 Submitting a GRAM Job
In the submit description file, specify:
 - Universe = grid
 - Grid_Resource = gt2 … ('gt2' means GRAM2)
 - Optional: the location of the file containing your X509 proxy
Example:
universe = grid
grid_resource = gt2 beak.cs.wisc.edu/jobmanager-pbs
executable = progname
queue
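A slightly fuller sketch of the same grid-universe submit file; the proxy path and the output/error/log file names below are illustrative, not from the slides:
universe       = grid
grid_resource  = gt2 beak.cs.wisc.edu/jobmanager-pbs
executable     = progname
x509userproxy  = /tmp/x509up_u1234
output         = progname.out
error          = progname.err
log            = progname.log
queue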

July 11-15, 2005 Lecture3: Grid Job Management 28 How It Works (diagram): on the submit side, a Personal Condor with a Schedd; on the remote side, a Globus Resource running GRAM in front of LSF.

July 11-15, 2005 Lecture3: Grid Job Management 29 How It Works (diagram): the user hands 600 Globus jobs to the Schedd in the Personal Condor.

July 11-15, 2005 Lecture3: Grid Job Management 30 How It Works (diagram): the Schedd starts a GridManager to handle the 600 Globus jobs.

July 11-15, 2005 Lecture3: Grid Job Management 31 How It Works (diagram): the GridManager forwards the jobs to GRAM on the Globus Resource, which hands them to LSF.

July 11-15, 2005 Lecture3: Grid Job Management 32 How It Works (diagram): LSF runs the User Job on the Globus Resource.

July 11-15, 2005 Lecture3: Grid Job Management 33 Grid Universe Concerns
What about fault tolerance?
 - Local crashes: what if the submit machine goes down?
 - Network outages: what if the connection to the remote Globus jobmanager is lost?
 - Remote crashes: what if the remote Globus jobmanager crashes? What if the remote machine goes down?
Condor-G’s persistent job queue lets it recover from all of these failures. If a JobManager fails to respond…

July 11-15, 2005 Lecture3: Grid Job Management 34 This presentation is based on: Grid Resources and Job Management, Jaime Frey, Condor Project, University of Wisconsin-Madison, Grid Summer Workshop, June 26-30, 2006.