Jeffrey P. Gardner, Pittsburgh Supercomputing Center

Presentation transcript:

An Introduction to the TeraGrid
Jeffrey P. Gardner, Pittsburgh Supercomputing Center, gardnerj@psc.edu

National Science Foundation TeraGrid: the world's largest collection of supercomputers.

Pittsburgh Supercomputing Center
Founded in 1986 as a joint venture between Carnegie Mellon University, the University of Pittsburgh, and Westinghouse Electric Co.
Funded by several federal agencies as well as private industry; the main source of support is the National Science Foundation.

Pittsburgh Supercomputing Center
PSC is the third-largest NSF-sponsored supercomputing center, BUT we provide over 60% of the computer time used by NSF research, AND PSC most recently operated the most powerful supercomputer in the world for unclassified research.

Pittsburgh Supercomputing Center
The Terascale Computing System (TCS) at the Pittsburgh Supercomputing Center. Upon entering production in October 2001, the TCS was the most powerful computer in the world for unclassified research.
SCALE: 3,000 processors
SIZE: 1 basketball court
COMPUTING POWER: 6 TeraFlops (6 trillion floating-point operations per second); it will do in 3 hours what a PC will do in a year

Pittsburgh Supercomputing Center
The Terascale Computing System (TCS) at the Pittsburgh Supercomputing Center. Upon entering production in October 2001, the TCS was the most powerful computer in the world for unclassified research.
HEAT GENERATED: 2.5 million BTUs per hour (169 lbs of coal per hour)
AIR CONDITIONING: 900 gallons of water per minute (equivalent to 375 room air conditioners)
BOOT TIME: ~3 hours
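A quick back-of-envelope check of the headline TCS numbers above, as a minimal Python sketch. The desktop-PC speed (~2 GFLOPS) and the heating value of coal (~14,800 BTU/lb) are assumed reference points, not figures from the slides.

```python
# Back-of-envelope check of the TCS figures quoted above.
# Assumed reference points (not from the slides): a ~2 GFLOPS desktop PC
# of the era and coal at roughly 14,800 BTU per pound.

TCS_FLOPS = 6e12                 # 6 TeraFlops
PC_FLOPS = 2e9                   # assumed desktop PC
speedup = TCS_FLOPS / PC_FLOPS   # ~3000x

tcs_hours = 3
pc_days = tcs_hours * speedup / 24
print(f"3 TCS hours is about {pc_days:.0f} PC-days (roughly a year)")   # ~375 days

HEAT_BTU_PER_HOUR = 2.5e6
COAL_BTU_PER_LB = 14_800         # assumed heating value of coal
print(f"{HEAT_BTU_PER_HOUR / COAL_BTU_PER_LB:.0f} lbs of coal per hour")  # ~169
```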

Pittsburgh Supercomputing Center

NCSA: National Center for Supercomputing Applications
The TeraGrid cluster "Mercury" at NCSA
SCALE: 1,774 processors
ARCHITECTURE: Intel Itanium2
COMPUTING POWER: 10 TeraFlops

TACC: Texas Advanced Computing Center
The TeraGrid cluster "LoneStar" at TACC
SCALE: 1,024 processors
ARCHITECTURE: Intel Xeon
COMPUTING POWER: 6 TeraFlops

Before the TeraGrid: Supercomputing "The Old-Fashioned Way"
Each supercomputer center was its own independent entity, and users applied for time at a specific supercomputer center.
Each center supplied its own compute resources, archival resources, accounting, and user support.

The TeraGrid Strategy
Creating a unified user environment (single user support resources, a single authentication point, common software functionality, a common job management infrastructure, and globally accessible data storage) across heterogeneous resources (7+ computing architectures, 5+ visualization resources, and diverse storage technologies).
Goal: create a unified national HPC infrastructure that is both heterogeneous and extensible. A sketch of the single-authentication idea follows below.
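As an illustration of the "single authentication point" idea, here is a minimal sketch using two Globus Toolkit command-line clients that the TeraGrid deployed (grid-proxy-init and globus-job-run), driven from Python. The site hostname is a hypothetical placeholder; real contact strings come from the TeraGrid user documentation.

```python
# Minimal sketch of TeraGrid-style single sign-on with the Globus Toolkit
# command-line clients.  The hostname below is hypothetical.
import subprocess

# Create one short-lived proxy certificate from your grid credential;
# the same proxy is honored by every TeraGrid site.
subprocess.run(["grid-proxy-init"], check=True)

# The proxy then authenticates a job submission at any site, e.g. running
# /bin/hostname on a (hypothetical) login node via GRAM.
subprocess.run(
    ["globus-job-run", "tg-login.example.teragrid.org", "/bin/hostname"],
    check=True,
)
```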

The TeraGrid Strategy
A major paradigm shift for HPC resource providers: make NSF resources useful to a wider community.
TeraGrid Resource Partners: strength through uniformity! Strength through diversity!

TeraGrid Components
Compute hardware: Intel/Linux clusters, Alpha SMP clusters, IBM POWER3 and POWER4 clusters, SGI Altix SMPs, Sun visualization systems, Cray XT3 (PSC, July 20), IBM Blue Gene/L (SDSC, October 1)

TeraGrid Components
Large-scale storage systems: hundreds of terabytes of secondary storage
Very high-speed network backbone (40 Gb/s): bandwidth for rich interaction and tight coupling
Grid middleware: Globus, data management, and more (a data-transfer sketch follows below)
Next-generation applications
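For the data-management piece, a hedged sketch of moving a file between two sites over the TeraGrid backbone with GridFTP (globus-url-copy). The endpoints and paths are hypothetical placeholders, and the parallelism setting is only an example.

```python
# Third-party GridFTP transfer between two (hypothetical) TeraGrid sites,
# driven from Python.  Requires a valid proxy (see grid-proxy-init above).
import subprocess

src = "gsiftp://gridftp.site-a.example.org/scratch/run42/output.dat"
dst = "gsiftp://gridftp.site-b.example.org/archive/run42/output.dat"

# -p 4 requests four parallel data streams to help fill the fast backbone.
subprocess.run(["globus-url-copy", "-p", "4", src, dst], check=True)
```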

Building a System of Unprecedented Scale
40+ teraflops of compute, 1+ petabyte of online storage, 10-40 Gb/s networking

TeraGrid Resources (summary table across the partner sites: ANL/UC, Caltech CACR, IU, NCSA, ORNL, PSC, Purdue, SDSC, TACC)
Compute resources: Itanium2, IA-32, SGI SMP, Cray XT3, TCS, Marvel, heterogeneous, Power4, and Sun visualization systems, ranging from about 0.2 TF to 10 TF per site
Online storage: roughly 1 TB to 600 TB per site
Mass storage: roughly 1.2 PB to 6 PB per site
Data collections, visualization, and instruments at selected sites
Network: 10-30 Gb/s connections to hubs in Chicago, Los Angeles, and Atlanta

“Grid-Like” Usage Scenarios Currently Enabled by the TeraGrid
“Traditional” massively parallel jobs: tightly coupled interprocessor communication, storing vast amounts of data remotely, remote visualization
Thousands of independent jobs: automatically scheduled amongst many TeraGrid machines, using data from a distributed data collection (see the sketch below)
Multi-site parallel jobs: computing at many TeraGrid sites simultaneously
TeraGrid is working to enable more!
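The "thousands of independent jobs" scenario is usually driven by a small script. Below is a hedged sketch that generates one PBS batch script per parameter value and submits it with qsub; the executable name, PBS options, and job count are made-up examples, and each site documents its own scheduler and queue limits.

```python
# Illustrative parameter sweep: write one PBS script per parameter value
# and submit it with qsub.  Executable, resources, and limits are examples.
import subprocess
from pathlib import Path

for param in range(100):                      # 100 independent jobs
    script = Path(f"job_{param:03d}.pbs")
    script.write_text(
        "#!/bin/sh\n"
        "#PBS -l nodes=1:ppn=1\n"
        "#PBS -l walltime=01:00:00\n"
        "cd $PBS_O_WORKDIR\n"
        f"./my_simulation --param {param}\n"  # hypothetical executable
    )
    subprocess.run(["qsub", str(script)], check=True)
```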

Allocations Policies
Any US researcher can request an allocation.
Policies/procedures posted at: http://www.paci.org/Allocations.html
Online proposal submission: https://pops-submit.paci.org/

Allocations Policies
Different levels of review for different allocation sizes (an SU-sizing sketch follows below):
DAC ("Development Allocation Committee"): up to 30,000 Service Units ("SUs"; 1 SU is roughly 1 CPU-hour); only a one-paragraph abstract required; must focus on developing an MRAC or NRAC application; accepted continuously!
MRAC ("Medium Resource Allocation Committee"): under 200,000 SUs/year; reviewed every 3 months; next deadline July 15, 2005 (then October 21)
NRAC ("National Resource Allocation Committee"): over 200,000 SUs/year; reviewed every 6 months; next deadline July 15, 2005 (then January 2006)
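When sizing a request, the SU arithmetic is straightforward (1 SU is roughly 1 CPU-hour, per the slide). The job shape below is a made-up example used to decide which committee to apply to.

```python
# Rough SU sizing for an allocation request (1 SU ~= 1 CPU-hour).
cpus = 128             # example job shape (hypothetical)
hours_per_run = 12
runs = 20

su_needed = cpus * hours_per_run * runs       # 30,720 SUs
print(f"Estimated need: {su_needed} SUs")

DAC_LIMIT = 30_000      # development allocations, accepted continuously
MRAC_LIMIT = 200_000    # medium allocations, reviewed quarterly
if su_needed <= DAC_LIMIT:
    print("Fits within a DAC development allocation")
elif su_needed < MRAC_LIMIT:
    print("Apply through MRAC")
else:
    print("Apply through NRAC")
```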

Accounts and Account Management
Once a project is approved, the PI can add any number of users by filling out a simple online form.
User account creation usually takes 2-3 weeks.
TG accounts are created on ALL TG systems for every user: a single US mail packet arrives for the user, and accounts and usage are synched through a centralized database.

Roaming and Specific Allocations
R-type ("roaming") allocations can be used on any TG resource; usage is debited to a single global allocation maintained in a central database.
S-type ("specific") allocations can only be used on the specified resource. (All S-only awards come with 30,000 roaming SUs to encourage roaming usage of the TG.)
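To picture the difference, here is a toy model (not the real TeraGrid accounting system) of how usage might be debited: specific SUs are tied to one named resource, while roaming SUs form a single global pool usable anywhere. The preference order shown, and the resource names, are assumptions for illustration only.

```python
# Toy model of roaming vs. specific allocations (illustrative only).
class Allocation:
    def __init__(self, roaming_sus=0, specific_sus=None):
        self.roaming = roaming_sus                 # usable on any TG resource
        self.specific = dict(specific_sus or {})   # resource name -> SUs

    def debit(self, resource, sus):
        # Assumed policy for this sketch: spend the resource-specific
        # award first, then fall back to the global roaming pool.
        if self.specific.get(resource, 0) >= sus:
            self.specific[resource] -= sus
        elif self.roaming >= sus:
            self.roaming -= sus
        else:
            raise RuntimeError("allocation exhausted")

# Hypothetical award: 100,000 SUs specific to one machine plus 30,000 roaming SUs.
award = Allocation(roaming_sus=30_000, specific_sus={"PSC-TCS": 100_000})
award.debit("PSC-TCS", 5_000)        # drawn from the specific award
award.debit("NCSA-Mercury", 2_000)   # drawn from the roaming pool
```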

Useful links
TeraGrid website: http://www.teragrid.org
Allocation policies/procedures: http://www.paci.org/Allocations.html
TeraGrid user information overview: http://www.teragrid.org/userinfo/index.html
Summary of TG resources: http://www.teragrid.org/userinfo/guide_hardware_table.html
Summary of machines with links to site-specific user guides (click the name of each site): http://www.teragrid.org/userinfo/guide_hardware_specs.html
Email: help@teragrid.org