NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
Capability Computing – High-End Resources
Wayne Pfeiffer, Deputy Director, NPACI & SDSC

Presentation transcript:

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
Capability Computing – High-End Resources
Wayne Pfeiffer, Deputy Director, NPACI & SDSC
NPACI Review, July 21, 1999

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
Compute resources are at 6 sites
- U Michigan
- U Texas
- Caltech
- SDSC
- UC Berkeley
- U Virginia

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
Complementary roles of 6 compute resource sites
- Leading-edge site (UCSD/SDSC)
  - Very high-performance resources: first teraflops system for U.S. academic community
- Mid-range sites (U Texas & U Michigan)
  - Smaller systems compatible with LES
  - Support for apps with limited scalability, large-memory jobs, apps development, OS testing, & education
- Alternate architecture & cluster sites (Caltech, UC Berkeley, UCSD, & U Virginia)
  - Support for leading-edge apps, thrust software development, & evaluation

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
NPACI has high-end computers of exceptional capability
- Strategic partnerships with IBM, HP, Sun, & Tera
  - Early delivery of very large systems
  - First teraflops IBM SP with 8-way nodes to be at SDSC
  - Largest systems from HP, Sun, & Tera at Caltech & SDSC
  - Deep discounts or outright donations plus other leverage
- Allocable systems
  - 7 -> 10 systems at 4 sites in FY99 -> FY00 to provide increased diversity
  - Redeployment of SP from SDSC to U Michigan & U Texas
- Additional systems through extensive leverage
  - 10 systems at 5 sites in FY00

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
Evolution of allocable computers

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
Teraflops IBM SP coming to SDSC in CY99
- Cluster of next-generation SMP nodes
  - 1,184 Power3 processors at 222 MHz: 1.05 teraflops
  - 8-way and 2-way nodes
  - 640 GB of memory
  - Current-generation switch initially
- Staged installation
  - 28 2-way nodes in June for software development
  - 4 8-way nodes in July
  - Additional 8-way nodes in August
  - Full teraflops in fall of CY99
  - Switch upgrade in early CY00
- Attractive base for upgrade to 5 teraflops
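
As a quick sanity check on the peak-performance figure above, theoretical peak is just processors times clock rate times floating-point operations per cycle. A minimal sketch in Python, assuming the Power3 retires 4 flops per cycle (two fused multiply-adds), a figure not stated on the slide:

```python
def peak_teraflops(processors, clock_hz, flops_per_cycle):
    """Theoretical peak = processors * clock rate * flops issued per cycle."""
    return processors * clock_hz * flops_per_cycle / 1e12

# SP configuration from the slide; 4 flops/cycle per Power3 is an assumption.
print(f"{peak_teraflops(1184, 222e6, 4):.2f} teraflops")  # -> 1.05
```

The same arithmetic applies to the other systems in this deck once each processor's flops-per-cycle figure is known.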

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
NPACI’s road to 5 teraflops

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
Caltech & HP offer alternate path to teraflops based on IA-64 & large shared-memory domains
- CY99: transition to multi-node HP-UX
  - V2250: 32 PA-8200 processors (240 MHz) installed early CY99
  - V2500: 128 PA-8500 processors (440 MHz) & 128 GB of memory coming soon in two 64-way ccNUMA domains
- CY00-CY02: evaluation of SuperDome scalability
  - Next-generation architecture with PA-RISC or IA-64 processors
  - 64-way SMP or 256-way ccNUMA domains
  - CY00: 64 PA-8600 processors
  - CY01: 128 PA-8700 processors
  - CY02: teraflops system with IA-64 McKinley processors
- Earliest large systems from HP through strategic partnership & leverage from NASA

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
Sun & SDSC are also exploring path to teraflops based on large shared-memory domains
- Previous & current systems (deeply discounted)
  - Wildfire: 28-processor ccNUMA system for scalability testing
  - HPC 10000: 32-way SMP (333 MHz) for data serving
- Coming systems (donated)
  - HPC 10000: 64-way SMP (400 MHz) with 64 GB of memory for SAC projects and allocated use; coming in August
  - Starcat: 72-way SMP with 72 GB of memory for evaluating potential scalability to teraflops; only alpha system outside of Sun; coming early in CY00
- Exceptional opportunity to work with Sun through strategic partnership

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
First Tera MTA is at SDSC
- Characteristics
  - Multithreaded architecture
  - Shared memory
  - 8 processors now, going to 16 later this calendar year
- Benefits
  - Reduced programming effort: single parallel model for one or many processors
  - Good scalability
  - Leveraged funding

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
Tera MTA is competitive with CRAY T90 & has better scalability for PULSE3D heart model

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
NPACI cluster project was initiated
- To exploit the benefits of clusters
  - High performance
  - Very attractive price/performance
  - Widespread interest within scientific computing community
- To build upon NPACI expertise and capability

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
Near-term activities in cluster project
- Participate in workshop in early August
  - To develop strategy for creating, enhancing, & maintaining software for high-end production clusters
  - To obtain agreement on priorities & responsibilities with leaders from NPACI, the Alliance, & the broader high-end computing community
- Develop additional functionality & interoperability
  - To help users move between clusters & other systems
  - To help system administrators manage clusters
- Use clusters & evaluate their capabilities
  - To advise users on suitability of clusters vs. other systems
  - To guide future resource acquisitions, e.g., very large clusters

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
NPACI is a pioneer in high-performance mass storage
- Large HPSS installations at SDSC & Caltech
  - FY99: 100 TB stored at SDSC, the most at any HPSS site
  - FY00: 200 TB expected at SDSC, 100 TB at Caltech
- Performance & capacity upgrades via hardware
  - FY99: larger SP servers, additional STK silos, & new HPGNs at SDSC & Caltech
  - FY00: faster, higher-density tape drives at SDSC
- Stability upgrade via software
  - FY99: HPSS 3.2 at SDSC, resulting in reduced down time
- HPSS metadata backups between SDSC & Caltech
  - FY99: by tape
  - FY00: by CalREN-2 (OC-12: 622 Mbps)
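
To see why moving the metadata backups from tape to CalREN-2 is practical, here is a rough transfer-time sketch. The 20 GB dump size below is purely hypothetical (the slides do not give the actual HPSS metadata volume), and 80% usable link bandwidth is likewise an assumption:

```python
dump_gigabytes = 20        # hypothetical metadata dump size, not from the slides
oc12_line_rate = 622e6     # OC-12 rate quoted on the slide, in bits per second
efficiency = 0.8           # assumed usable fraction of the link

seconds = dump_gigabytes * 8e9 / (oc12_line_rate * efficiency)
print(f"~{seconds / 60:.0f} minutes")  # about 5 minutes for a 20 GB dump
```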

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
HPSS down time at SDSC is much lower since software upgrade

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
Applications that require high-speed networks for collaborative research & building the Grid
- Telescience: remote control of scientific instruments between the U.S. and Japan (vBNS at OC-12)
- Ingestion of molecular structure data from SLAC to the PDB at SDSC (NTON at 4xOC-48)
- Federation of clusters at UCSD and Caltech (NTON at 4xOC-48)
- Backup of HPSS metadata between SDSC and Caltech (CalREN-2 at OC-12)
- Backup of NCSA’s large disk array by HPSS at SDSC (vBNS at OC-12 -> vBNS+ at OC-48)
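
The OC-n labels on this slide and the next are SONET line rates, each a multiple of the 51.84 Mbps OC-1 rate. A small conversion sketch (line rates only, ignoring protocol overhead):

```python
OC1_MBPS = 51.84  # SONET OC-1 line rate

def oc_rate_mbps(level, links=1):
    """Line rate of an OC-<level> circuit, optionally aggregated over parallel links."""
    return level * OC1_MBPS * links

for label, level, links in [("OC-3", 3, 1), ("OC-12", 12, 1),
                            ("OC-48", 48, 1), ("4xOC-48", 48, 4)]:
    print(f"{label}: {oc_rate_mbps(level, links):.0f} Mbps")
# OC-3: 156, OC-12: 622, OC-48: 2488, 4xOC-48: 9953
```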

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE
Networking improvements are needed to realize benefits of capability computing
- Better connectivity for
  - LES to NTON at 4xOC-48
  - LES to vBNS+ at OC-48
  - LES to Abilene at OC-12
  - Caltech to vBNS at OC-12
  - U Texas to vBNS at OC-3
  - Other partners in out years
- More networking support for
  - Applications and network tuning, together with CAIDA & NLANR
  - Engineering to design, implement, & integrate networking upgrades
  - Security to maintain secure access & foster best practices throughout partnership