An Introduction to the TeraGrid
Jeffrey P. Gardner
Pittsburgh Supercomputing Center
gardnerj@psc.edu

National Science Foundation TeraGrid
The world’s largest collection of supercomputers
CIG MCW, Boulder, CO

Pittsburgh Supercomputing Center
- Founded in 1986
- A joint venture between Carnegie Mellon University, the University of Pittsburgh, and Westinghouse Electric Co.
- Funded by several federal agencies as well as private industry; the main source of support is the National Science Foundation

Pittsburgh Supercomputing Center
- PSC is the third-largest NSF-sponsored supercomputing center
- BUT last year we provided over 60% of the computer time used by NSF researchers
- AND PSC most recently operated the most powerful supercomputer in the world for unclassified research

The Terascale Computing System (TCS) at the Pittsburgh Supercomputing Center
- SCALE: 3,000 processors
- SIZE: 1 basketball court
- COMPUTING POWER: 6 TeraFlops (6 trillion floating-point operations per second); it will do in 3 hours what a PC will do in a year
- Upon entering production in October 2001, the TCS was the most powerful computer in the world for unclassified research
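
A quick back-of-the-envelope check of the “3 hours versus a year” claim, assuming a desktop PC of that era sustains roughly 2 GigaFlops (the PC figure is an assumption, not stated on the slide):

```python
# Rough sanity check of the "3 hours on the TCS vs. a year on a PC" claim.
# The ~2 GFlop/s sustained rate for a circa-2001 desktop PC is an assumption.
tcs_flops = 6e12                          # TCS: 6 TeraFlops
pc_flops = 2e9                            # assumed desktop PC: ~2 GigaFlops

tcs_3_hours = tcs_flops * 3 * 3600        # operations done by the TCS in 3 hours
pc_one_year = pc_flops * 365 * 24 * 3600  # operations done by the PC in a year

print(f"TCS, 3 hours: {tcs_3_hours:.2e} operations")   # ~6.5e16
print(f"PC, 1 year:   {pc_one_year:.2e} operations")   # ~6.3e16
```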

The Terascale Computing System (TCS) at the Pittsburgh Supercomputing Center (continued)
- HEAT GENERATED: 2.5 million BTUs per hour (equivalent to burning 169 lbs of coal per hour)
- AIR CONDITIONING: 900 gallons of water per minute (equivalent to 375 room air conditioners)
- BOOT TIME: ~3 hours

Pittsburgh Supercomputing Center

NCSA: National Center for Supercomputing Applications
The TeraGrid cluster at NCSA:
- SCALE: 1,774 processors
- ARCHITECTURE: Intel Itanium 2
- COMPUTING POWER: 10 TeraFlops

TACC: Texas Advanced Computing Center
The TeraGrid cluster “LoneStar” at TACC:
- SCALE: 1,024 processors
- ARCHITECTURE: Intel Xeon
- COMPUTING POWER: 6 TeraFlops

Before the TeraGrid: Supercomputing “The Old-Fashioned Way”
- Each supercomputing center was its own independent entity.
- Users applied for time at a specific supercomputing center.
- Each center supplied its own:
  - compute resources
  - archival resources
  - accounting
  - user support

The TeraGrid Strategy
Creating a unified user environment…
- single user support resources
- single authentication point
- common software functionality
- common job management infrastructure
- globally accessible data storage
…across heterogeneous resources:
- 7+ computing architectures
- 5+ visualization resources
- diverse storage technologies
Goal: create a unified national HPC infrastructure that is both heterogeneous and extensible

The TeraGrid Strategy
- Strength through uniformity! Strength through diversity!
- A major paradigm shift for HPC resource providers
- Make NSF resources useful to a wider community
- TeraGrid Resource Partners

TeraGrid Components
Compute hardware:
- Intel/Linux clusters
- Alpha SMP clusters
- IBM POWER3 and POWER4 clusters
- SGI Altix SMPs
- Sun visualization systems
- Cray XT3 (PSC, July 20)
- IBM Blue Gene/L (SDSC, Oct 1)

TeraGrid Components (continued)
- Large-scale storage systems: hundreds of terabytes of secondary storage
- Very high-speed network backbone (40 Gb/s): bandwidth for rich interaction and tight coupling
- Grid middleware: Globus, data management, … (see the data-transfer sketch below)
- Next-generation applications
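
As an illustration of what the Globus middleware layer provides, the sketch below stages a file between two sites with GridFTP’s globus-url-copy client. The endpoint host names and paths are hypothetical, and the sketch assumes a valid Grid proxy certificate has already been created (e.g. with grid-proxy-init).

```python
# Minimal sketch: move a data file between two TeraGrid sites with GridFTP.
# The gsiftp endpoints and paths are hypothetical examples; a Grid proxy
# certificate is assumed to already exist for the user running this.
import subprocess

src = "gsiftp://gridftp.site-a.example.teragrid.org/scratch/user/input.dat"
dst = "gsiftp://gridftp.site-b.example.teragrid.org/scratch/user/input.dat"

# Basic GridFTP transfer: globus-url-copy <source URL> <destination URL>.
# Tuning options (parallel streams, buffer sizes) are omitted for clarity.
subprocess.run(["globus-url-copy", src, dst], check=True)
```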

Building a System of Unprecedented Scale
- 40+ teraflops of compute
- 1+ petabyte of online storage
- 10-40 Gb/s networking

TeraGrid Resources
[Per-site summary table covering ANL/UC, Caltech CACR, IU, NCSA, ORNL, PSC, Purdue, SDSC, and TACC: compute resources (Itanium2, IA-32, SGI SMP, Cray XT3, TCS, Marvel, POWER4, and Sun visualization systems, ranging from 0.2 TF to 10 TF per site), online storage (1 TB to 600 TB per site), mass storage (1.2 PB to 6 PB), data collections, visualization, instruments, and network connectivity (10-30 Gb/s via the CHI, LA, and ATL hubs).]

“Grid-Like” Usage Scenarios Currently Enabled by the TeraGrid
- “Traditional” massively parallel jobs
  - tightly coupled interprocessor communication
  - storing vast amounts of data remotely
  - remote visualization
- Thousands of independent jobs
  - automatically scheduled amongst many TeraGrid machines
  - use data from a distributed data collection
- Multi-site parallel jobs
  - compute upon many TeraGrid sites simultaneously
- TeraGrid is working to enable more! (A sketch of a tightly coupled parallel job follows.)
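
To make the “traditional massively parallel job” scenario concrete, here is a minimal sketch of a tightly coupled parallel program written against MPI through the mpi4py bindings. It is an illustration only, not a TeraGrid-specific API; on a TeraGrid machine it would be launched through the site’s batch system rather than run interactively.

```python
# Minimal MPI sketch: each process computes a partial sum, then rank 0
# combines the results. Illustrative only; launch with mpirun/the batch system.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of processes in the job

# Each rank sums a disjoint slice of the range 0..999999.
n = 1_000_000
local_sum = sum(range(rank, n, size))

# Tightly coupled step: combine the partial sums across all processes.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum over {size} processes: {total}")
```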

Allocation Policies
- Any US researcher can request an allocation
- Policies/procedures posted at: http://www.paci.org/Allocations.html
- Online proposal submission: https://pops-submit.paci.org/

Allocation Policies (continued)
Different levels of review for different-sized allocations (a worked SU example follows):
- DAC: “Development Allocation Committee”
  - up to 30,000 Service Units (“SUs”; 1 SU ≈ 1 CPU-hour)
  - only a one-paragraph abstract required
  - must focus on developing an MRAC or NRAC application
  - accepted continuously!
- MRAC: “Medium Resource Allocation Committee”
  - < 200,000 SUs/year
  - reviewed every 3 months; next deadline July 15, 2005 (then October 21)
- NRAC: “National Resource Allocation Committee”
  - > 200,000 SUs/year
  - reviewed every 6 months; next deadline July 15, 2005 (then January 2006)
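
As a worked example of the Service Unit arithmetic, using only the “1 SU ≈ 1 CPU-hour” rule from the slide (the job shape below is hypothetical):

```python
# Worked example of SU accounting: 1 SU is roughly 1 CPU-hour.
# The job shape (128 processors for 12 wall-clock hours) is hypothetical.
processors = 128
wallclock_hours = 12

sus_per_run = processors * wallclock_hours   # 1,536 SUs charged per run
dac_award = 30_000                           # maximum DAC development award

print(f"One run costs {sus_per_run} SUs")
print(f"A {dac_award}-SU DAC award covers about {dac_award // sus_per_run} such runs")
```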

Accounts and Account Management
- Once a project is approved, the PI can add any number of users by filling out a simple online form
- User account creation usually takes 2-3 weeks
- TG accounts are created on ALL TG systems for every user
  - a single US-mail packet arrives for each user
  - accounts and usage are synched through a centralized database

Roaming and Specific Allocations
- R-Type (“roaming”) allocations:
  - can be used on any TG resource
  - usage is debited to a single (global) allocation maintained in a central database
- S-Type (“specific”) allocations:
  - can only be used on the specified resource
  - (all S-only awards come with 30,000 roaming SUs to encourage roaming usage of the TG)

Useful Links
- TeraGrid website: http://www.teragrid.org
- Policies/procedures posted at: http://www.paci.org/Allocations.html
- TeraGrid user information overview: http://www.teragrid.org/userinfo/index.html
- Summary of TG resources: http://www.teragrid.org/userinfo/guide_hardware_table.html
- Summary of machines with links to site-specific user guides (just click on the name of each site): http://www.teragrid.org/userinfo/guide_hardware_specs.html
- Email: help@teragrid.org