17/09/2004 John Kewley Grid Technology Group Introduction to Condor.


John Kewley, Grid Technology, 17th September 2004

Outline
- What is Condor?
- What can it be used for?
- Status of DL Condor Pool(s)

What is Condor?
- A job-submission framework that utilises spare computing power within a heterogeneous computer network (a Condor pool)
- Supports High-Throughput Computing (HTC), maximising the amount of processing capacity utilised over long periods of time
- Developed over many years at the University of Wisconsin-Madison

Basic Features
- A Condor pool is a set of resources (clusters, servers and networked workstations) managed by a Central Manager
- The Central Manager matches requests for resources with the resources available within the pool
- Users do not need an account on the machine where a job runs, but may submit jobs to the pool from their own workstation
- A highly extensible resource-description and job-requirements language (ClassAds) is used to classify/advertise the resources in the pool
- Available on multiple platforms
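To make the submission model above concrete, a minimal submit description file might look as follows (a sketch only; the program and file names are hypothetical):

```
# hello.sub -- hypothetical minimal submit description file
universe     = vanilla
executable   = hello              # any ordinary program or script
output       = hello.out          # stdout is returned to the submit machine
error        = hello.err
log          = hello.log          # Condor's own event log for this job
requirements = (OpSys == "LINUX") # matched against machine ClassAds
queue
```

The job is handed to the pool with `condor_submit hello.sub`, after which the Central Manager matches its requirements against the ClassAds advertised by machines in the pool; `condor_q` shows its progress.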

Supported platforms

Architecture                                  Operating System
Hewlett Packard PA-RISC (PA PA8000)           HPUX
Sun SPARC (Sun4m, Sun4c, Sun UltraSPARC)      Solaris 2.6, 2.7, 2.8, 2.9
Silicon Graphics MIPS (R5000, R8000, R10000)  IRIX 6.5 (clipped)
Intel x86                                     Red Hat Linux 7.1, 7.2, 7.3, 8.0; Red Hat Linux 9;
                                              Windows 2000 Prof + Server, 2003 Server (clipped);
                                              Windows XP Professional (clipped)
ALPHA                                         Digital Unix 4.0; Red Hat Linux 7.1, 7.2, 7.3 (clipped);
                                              Tru (clipped)
PowerPC                                       Macintosh OS X (clipped); AIX 5.2L (clipped)
Itanium                                       Red Hat Linux 7.1, 7.2, 7.3 (clipped);
                                              SuSE Linux Enterprise 8.1 (clipped)

Job Startup (diagram): the Submit Machine runs the Schedd and a Shadow for each job; the Execute Machine runs the Startd, which spawns a Starter to run the Job (linked with the Condor Syscall Library); the Central Manager runs the Collector and Negotiator. Slide courtesy of University of Wisconsin-Madison.

Additional Features
- Checkpointing and migration of jobs
- Shared filestore is not required, but can be utilised
- Interworking with Globus
- Security: GSI, Kerberos
- Use of MPI and PVM
- Workflow using DAGMan (Directed Acyclic Graph Manager)
- Windows + Unix + Linux + …
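As an illustration of the DAGMan workflow support mentioned above, a small DAG description file could look like this (a sketch only; the node and submit-file names are hypothetical):

```
# diamond.dag -- hypothetical four-node workflow
JOB  A  a.sub
JOB  B  b.sub
JOB  C  c.sub
JOB  D  d.sub
PARENT A   CHILD B C   # B and C run only after A completes
PARENT B C CHILD D     # D runs only after both B and C complete
```

The whole workflow is submitted with `condor_submit_dag diamond.dag`; DAGMan then submits each node's job in dependency order.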

Execution Environments

standard
- Must be relinked for Condor (using condor_compile)
- System calls occur on the submitting resource
- Jobs may checkpoint, and hence be stopped and later restarted from their last checkpoint; they may also migrate to another resource
- Not available on some platforms (e.g. Windows)
- Some restrictions on what can be run

vanilla
- Any executable or script; no need for relinking or access to object files
- System calls happen on the executing resource
- No checkpointing, so less suitable for long-running jobs: if a job is stopped it will be rescheduled (i.e. compute time is lost)
- Works on all supported platforms (incl. Windows)
- Some opening of file permissions may be required
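A sketch of the standard-universe workflow described above (the program and file names are hypothetical): the job is first relinked with condor_compile, then submitted with universe = standard.

```
# Relink the program against the Condor libraries (shell command):
#   condor_compile gcc -o myprog myprog.c

# myprog.sub -- hypothetical standard-universe submit description
universe   = standard
executable = myprog
output     = myprog.out
error      = myprog.err
log        = myprog.log
queue
```

Because the relinked job routes its system calls back to the submit machine and can checkpoint, it can be stopped, migrated and resumed without losing completed work.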

Possible Uses
- Use the vanilla universe for jobs comprising many (comparatively) small, independent tasks
- Use the standard universe for jobs which will run for long periods
- Utilise the "odds and ends" of the pool for compilation and build tests
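For the first use case, the queue command can submit many independent tasks from a single file. A sketch (the program, file names and task count are hypothetical):

```
# sweep.sub -- hypothetical vanilla-universe parameter sweep
universe   = vanilla
executable = analyse
arguments  = input.$(Process)   # $(Process) takes the values 0..99
output     = out.$(Process)
error      = err.$(Process)
log        = sweep.log
queue 100                       # submit 100 independent tasks
```

One `condor_submit sweep.sub` then queues 100 tasks, each with its own input and output files, which the pool schedules independently.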

Condor Pools at DL

Internal Pool
- 5 Windows: 3x Windows XP Professional, 2x Windows 2000 Professional
- 18 Linux: 6x SuSE Linux 9.0, 2x SuSE Linux 8.0, 5x White Box Enterprise Linux 3.0, 3x Red Hat Linux 9, 1x Mandrake Linux, 1x Gentoo Linux 1.4

External Pool
- 6 Linux: 2x Red Hat Linux 7.3, 4x White Box Enterprise Linux 3.0

Build and Test
- Our External Pool is being used by the OMII (Open Middleware Infrastructure Institute) for building and testing their latest Grid middleware
- We intend to extend this pool for use as a build-and-test pool for other institutions on the UK Grid
- Our internal users are also keen to utilise this build technology to build release packages of their software for many different platforms

User Status
We are currently at an early stage with our user community and are helping them set up their code so that it can be run conveniently under Condor. These users come from the following computational science communities:
- CCP1 - the electronic structure of molecules
- CCP4 - protein crystallography

Summary
- Condor can utilise otherwise unused resources (e.g. Windows workstations overnight)
- Use the vanilla universe for jobs comprising many (comparatively) small, independent tasks
- Use the standard universe for jobs which will run for long periods (although not on Windows)
- Can be used for compilation and build tests