Welcome to HTCondor Week #17 (year 32 of our project)

Presentation transcript:


In 1996 I introduced the distinction between High Performance Computing (HPC) and High Throughput Computing (HTC) in a seminar at the NASA Goddard Space Flight Center and, a month later, at the European Laboratory for Particle Physics (CERN). In June 1997, HPCWire published an interview on High Throughput Computing.
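The hallmark of HTC is sustained throughput over many independent jobs rather than peak performance of one tightly coupled job. As a minimal sketch (the executable and file names here are hypothetical), an HTCondor submit description for such a workload might look like:

```
# Hypothetical HTC workload: 1000 independent jobs,
# each processing its own input file.
universe     = vanilla
executable   = analyze
arguments    = input_$(Process).dat
output       = out_$(Process).txt
error        = err_$(Process).txt
log          = jobs.log
request_cpus = 1
queue 1000
```

Submitting this file with `condor_submit` places 1000 jobs in the queue; HTCondor then matches them to available machines as capacity appears, which is precisely the throughput-oriented usage pattern the HPC/HTC distinction describes.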

“Increased advanced computing capability has historically enabled new science, and many fields today rely on high-throughput computing for discovery.”

“Many fields increasingly rely on high-throughput computing.”

“Recommendation 2.2. NSF should (a) provide one or more systems for applications that require a single, large, tightly coupled parallel computer and (b) broaden the accessibility and utility of these large-scale platforms by allocating high-throughput as well as high-performance workflows to them.”

Thank you for building such a wonderful HTC community