Welcome to HTCondor Week #18 (year 33 of our project)

Presentation transcript:


חי (chai) has the numerical value of 18

חי (chai) means "alive"

The words of Koheleth, son of David, king in Jerusalem (~200 B.C.E.):

Only that shall happen
Which has happened,
Only that occur
Which has occurred;
There is nothing new
Beneath the sun!

Ecclesiastes, chapter 1, verse 9. Ecclesiastes (קֹהֶלֶת, Kohelet), "son of David, and king in Jerusalem", traditionally identified with Solomon. Wood engraving by Gustave Doré (1832–1883).


A crowded and growing space of distributed execution environments that can benefit from the capabilities offered by HTCondor

But, we are also facing change! (complexity & scale)

1.42B core hours in 12 months! Almost all jobs executed by the OSG leverage (HT)Condor technologies:
- Condor-G
- HTCondor-CE
- Bosco
- Condor Collectors
- HTCondor overlays
- HTCondor pools
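For readers unfamiliar with Condor-G: it lets a local HTCondor scheduler forward jobs to a remote compute entry point such as an HTCondor-CE through the grid universe. The submit description below is a hypothetical sketch; the hostname, port, and file names are illustrative placeholders, not taken from the talk.

```
# Hypothetical Condor-G submit description (all names are placeholders).
universe      = grid
grid_resource = condor ce.example.org ce.example.org:9619

executable = simulate
output     = simulate.out
error      = simulate.err
log        = simulate.log

queue
```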

Additional shared resources that require protection (allocation)

Request → Schedule → Allocate → Protect → Monitor → Reclaim → Account
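The seven verbs above read as a pipeline that any managed resource passes through. A minimal Python sketch (invented for illustration, not HTCondor code) makes the ordering explicit:

```python
# Sketch of the resource life cycle named on the slide:
# Request -> Schedule -> Allocate -> Protect -> Monitor -> Reclaim -> Account.
from enum import Enum, auto

class Stage(Enum):
    REQUEST = auto()
    SCHEDULE = auto()
    ALLOCATE = auto()
    PROTECT = auto()
    MONITOR = auto()
    RECLAIM = auto()
    ACCOUNT = auto()

# Enum members iterate in definition order, which is the pipeline order.
ORDER = list(Stage)

def next_stage(stage):
    """Return the stage that follows `stage`, or None after ACCOUNT."""
    i = ORDER.index(stage)
    return ORDER[i + 1] if i + 1 < len(ORDER) else None
```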

Miron Livny
John P. Morgridge Professor of Computer Science
Director, Center for High Throughput Computing
University of Wisconsin–Madison

It is all about Storage Management, stupid! (Allocate B bytes for T time units.)

- We do not have tools to manage storage allocations
- We do not have tools to schedule storage allocations
- We do not have protocols to request allocation of storage
- We do not have means to deal with "sorry, no storage available now"
- We do not know how to manage, use, and reclaim opportunistic storage
- We do not know how to budget for storage space
- We do not know how to schedule access to storage devices
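What would "allocate B bytes for T time units" even look like? The Python sketch below is a hypothetical lease manager (the names `StorageLeaseManager` and `NoStorageAvailable` are invented here) that gives explicit answers to two of the gaps the slide lists: an honest "sorry, no storage available now" response, and reclamation of expired allocations.

```python
# Hypothetical sketch of a storage-lease abstraction: allocate B bytes
# for T time units, refuse when the pool is full, reclaim on expiry.
import time

class NoStorageAvailable(Exception):
    """The 'sorry, no storage available now' case."""

class StorageLeaseManager:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.leases = {}        # lease_id -> (nbytes, expires_at)
        self._next_id = 0

    def _reclaim_expired(self, now):
        # Reclaim: space from expired leases returns to the pool.
        for lease_id in [i for i, (_, t) in self.leases.items() if t <= now]:
            del self.leases[lease_id]

    def allocate(self, nbytes, seconds, now=None):
        """Lease nbytes for `seconds` time units; raise if impossible."""
        now = time.time() if now is None else now
        self._reclaim_expired(now)
        in_use = sum(b for b, _ in self.leases.values())
        if in_use + nbytes > self.capacity:
            raise NoStorageAvailable(
                f"{nbytes} B requested, only {self.capacity - in_use} B free")
        self._next_id += 1
        self.leases[self._next_id] = (nbytes, now + seconds)
        return self._next_id
```

A budgeting or accounting layer would sit on top of this, but even this toy version shows the shape of the missing protocol: a request carries both a size and a duration, and "no" is a first-class answer.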

Thank you for building such a wonderful and very much alive (חי) HTC community