ISTeC Research Computing Open Forum: Using NSF or National Laboratory Resources for High Performance Computing
Bhavesh Khemka

Presentation transcript:

ISTeC Research Computing Open Forum: Using NSF or National Laboratory Resources for High Performance Computing
Bhavesh Khemka

Off-campus Federal HPC Resources
Available federal ("free") HPC resource providers:
- NSF
  - XSEDE program
  - Blue Waters program
- government laboratories
- domain-specific resources
  - Pathogen Portal
  - DIAG (Data Intensive Academic Grid)
Gaining access to NSF machines:
- by applying to grants that award HPC time
Gaining access to HPC resources of government labs:
- by having collaborative projects with them
- by applying to grants that award HPC time

NSF Resources – XSEDE Program
Startup allocations – usually for experimenting with XSEDE platforms, application development, etc.
- quick turnaround time for the application
- awards are for a year
Research allocations – require formal request documents and CVs of the PIs/Co-PIs
- justify the requested allocation with results (usually obtained from a startup allocation); see the sizing sketch after this list
- submission periods open four times a year
- approved allocations begin in about three months
- awards are for a year
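As one illustration of how such a request might be justified, the sketch below sizes a research-allocation request from timings that would have been gathered under a startup allocation. Every number in it is a hypothetical placeholder, not a figure from the talk.

# Hypothetical sizing of an XSEDE-style research-allocation request,
# using per-run timings measured during a startup allocation.
# All values below are illustrative placeholders.
runs_planned = 200        # production runs needed for the study
nodes_per_run = 32        # nodes used by each run
hours_per_run = 6.0       # measured wall-clock hours per run
cores_per_node = 16       # cores per node on the target system

node_hours = runs_planned * nodes_per_run * hours_per_run
core_hours = node_hours * cores_per_node
print(f"request ~{node_hours:,.0f} node-hours (~{core_hours:,.0f} core-hours)")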

NSF Resources – Blue Waters Program
- at least 80% of the Blue Waters system is available to researchers through an NSF application
- NSF applications open annually and award time for a year
- "Proposers must show a compelling science or engineering challenge that will require petascale computing resources."
  - to put that in perspective: roughly 54x the ISTeC Cray (the ISTeC Cray has a peak performance of 19 teraflops); a back-of-the-envelope check follows below
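A quick check of that ~54x figure, under the assumption (not stated on the slide) that "petascale" here means roughly one sustained petaflop:

# Back-of-the-envelope check of the ~54x comparison above.
# Assumption: "petascale" is taken as roughly one sustained petaflop.
petascale_tflops = 1000.0       # assumed: 1 petaflop expressed in teraflops
istec_cray_peak_tflops = 19.0   # ISTeC Cray peak performance, from the slide

ratio = petascale_tflops / istec_cray_peak_tflops
print(f"~{ratio:.0f}x the ISTeC Cray")  # ~53x, consistent with the ~54x figure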

Access to Titan at Oak Ridge National Laboratory
Three different programs through which one can apply:
- INCITE (once a year)
  - "focus on projects that use a large fraction of the system or require unique architectural infrastructure that cannot be performed anywhere else"
- ASCR Leadership Computing Challenge (once a year)
  - "high-risk, high-payoff simulations in areas directly related to the DOE mission"
- Director's Discretion (anytime)
  - "short-duration projects" (usually INCITE and ALCC scaling experiments and testing)
Allocations are for a year and require quarterly reports and a close-out report.

Concerns When Using Federal Resources
- an application usually needs to be submitted a few months in advance
- the problem type and/or size must match the compute center's interests
  - usually hard for small- to medium-sized applications
- getting compute time awarded is competitive
- very little technical support or help is provided
- different centers have different compute systems, so learning them can be an issue
  - especially for researchers who run applications and store results on machines at different centers
- allocations are for a year at most, after which one must reapply
- moving data back and forth can be time-consuming (a rough transfer-time estimate follows below)
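To make the last point concrete, here is a rough transfer-time estimate; the dataset size and effective wide-area bandwidth are hypothetical values, not measurements from any particular center.

# Rough estimate of how long moving results between a federal HPC center and
# campus storage can take. Both values below are hypothetical placeholders.
dataset_tb = 5.0        # size of the result set, in terabytes
effective_gbps = 2.0    # sustained wide-area throughput, in gigabits per second

bits_to_move = dataset_tb * 1e12 * 8               # terabytes -> bits
hours = bits_to_move / (effective_gbps * 1e9) / 3600
print(f"~{hours:.1f} hours per one-way transfer")   # ~5.6 hours at these values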

Thank You
Questions? Feedback? Your experience with federal HPC resources?