11 January 2005 High Performance Computing at NCAR Tom Bettge Deputy Director Scientific Computing Division National Center for Atmospheric Research Boulder, CO USA

Presentation transcript:

11 January 2005 High Performance Computing at NCAR Tom Bettge Deputy Director Scientific Computing Division National Center for Atmospheric Research Boulder, CO USA

11 January 2005 Outline
–Current Events / News
–Current Computing Capacity at NCAR
–Future Computing Capacity at NCAR

11 January 2005 Current Events / News
–IBM Power3 blackforest decommissioned Jan 10 (yesterday!)
–IBM e325 Linux Cluster lightning begins production Feb 1
–Machine Room Shutdowns:
  –Feb 24-27: Chiller Upgrade Phase II
  –May (1 day): Chiller Upgrade Phase III
–Introduction of LSF to manage batch submissions, scheduling, and accounting (not bluesky); a minimal submission sketch follows this slide.
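For readers unfamiliar with LSF, here is a minimal sketch of what a batch submission might look like, driven from Python via subprocess. The bsub options shown (-J, -n, -W, -q, -o) are standard LSF flags, but the job name, queue name, wall-clock limit, and script path are hypothetical examples; site-specific queues and policies on the NCAR machines are assumptions, not taken from the slides.

```python
# Minimal sketch: submitting a batch job through LSF's bsub from Python.
# Queue name, resource limits, and script path are hypothetical examples;
# actual queue names and policies on the NCAR systems are not specified here.
import subprocess

def submit_lsf_job(script_path, ntasks=32, wall="02:00", queue="regular"):
    """Build and run a bsub command; returns LSF's submission message."""
    cmd = [
        "bsub",
        "-J", "cam_run",         # job name (hypothetical)
        "-n", str(ntasks),       # number of tasks requested
        "-W", wall,              # wall-clock limit, HH:MM
        "-q", queue,             # queue name (assumed; site-specific)
        "-o", "cam_run.%J.out",  # output file, %J expands to the LSF job ID
        script_path,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()  # e.g. "Job <12345> is submitted to queue <regular>."

if __name__ == "__main__":
    print(submit_lsf_job("./run_cam.sh"))
```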

11 January 2005 Current HPC Environment….

11 January 2005

New Linux Cluster: lightning
Linux Cluster
–256 processors (128 dual-node configuration)
–2.2 GHz AMD Opteron processors
–4 GB/node
–Myricom Myrinet interconnect
–6 TByte FastT500 RAID with GPFS
Performance Characteristics
–40% faster than bluesky (1.3 GHz POWER4) cluster on parallel POP and CAM simulations
–75 Gflops on WRF benchmark (full system); see the peak-vs-sustained sketch after this slide
Accounts
–provide short description of tasks, codes, job sizes
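As a rough sanity check on the 75 Gflops WRF figure, the sketch below estimates the cluster's theoretical peak from the processor count and clock rate given on the slide, and the implied sustained fraction. The figure of 2 floating-point operations per cycle per Opteron processor is an assumption about that processor generation, not something stated on the slide.

```python
# Back-of-the-envelope: theoretical peak vs. sustained WRF performance
# for the lightning cluster, using the numbers on the slide above.
# The 2 flops/cycle per Opteron is an assumption (one add + one multiply per clock).
processors      = 256     # from the slide: 128 dual-processor nodes
clock_hz        = 2.2e9   # 2.2 GHz AMD Opteron
flops_per_cycle = 2       # assumed for this processor generation

peak_gflops      = processors * clock_hz * flops_per_cycle / 1e9
sustained_gflops = 75     # WRF benchmark, full system (from the slide)

print(f"theoretical peak : {peak_gflops:.0f} Gflops")               # ~1126 Gflops
print(f"WRF sustained    : {sustained_gflops} Gflops")
print(f"sustained/peak   : {sustained_gflops / peak_gflops:.1%}")   # ~6.7%
```

Under these assumptions, the full-system WRF benchmark sustains roughly 7% of theoretical peak.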

11 January 2005 Computing Demand
Science Driving Demand for Scientific Computing
–Summer 2004: CSL Requests 1.5x Availability
–Sept 2004: NCAR Requests 2x Availability
–Sept 2004: University Requests 3x Availability

11 January 2005 Servicing the Demand
Supercomputers are well utilized, yet average job queue-wait times* are measured in minutes, not hours or days.

Utilization                          Sep '04    FY04
Bluesky 8-way LPARs                  91%        88%
Bluesky 32-way LPARs                 98%        93%

Average queue-wait (Regular Queue)   CSL        Community
Bluesky 8-way                        86m        31m
Bluesky 32-way                       40m        34m

* September 2004 average

11 January 2005 Future HPC at NCAR……

11 January 2005 NCAR/SCD Position [chart; recoverable labels: Year (1996 onward), Procurement (IBM Power3, ...)]

11 January 2005 SCD Strategic Plan: High-End Computing
Within the current funding envelope, achieve a 25-fold increase over current sustained computing capacity in five years.
SCD intends as well to pursue opportunities for substantial additional funding for computational equipment and infrastructure to support the realization of demanding institutional science objectives.
SCD will continue to investigate and acquire experimental hardware and software systems:
–IBM Linux Cluster (1Q2005)
–IBM BlueGene/L (~4+ fold in 1Q2006)
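To make the target concrete: a 25-fold increase in five years corresponds to a compound growth factor of about 25^(1/5) ≈ 1.9 per year, i.e. close to doubling capacity annually. The short sketch below is only that arithmetic; the evenly spaced annual growth is an illustrative assumption, not part of the plan.

```python
# Implied annual growth for the strategic-plan target of 25x sustained
# capacity in five years, assuming (for illustration) even compounding.
target_factor = 25
years = 5

annual_factor = target_factor ** (1 / years)
print(f"required annual growth factor: {annual_factor:.2f}x")   # ~1.90x per year

# Cumulative capacity relative to today, year by year, under even compounding.
for year in range(1, years + 1):
    print(f"year {year}: {annual_factor ** year:5.1f}x current capacity")
```

The ~4+ fold BlueGene/L step noted above for 1Q2006 would put the center roughly on or ahead of that curve in the early years.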

11 January 2005 SCD Target Capacity

11 January 2005 Mass Storage Archival…..

11 January 2005

Scientific Computing Division Strategic Plan: to serve the computing, research, and data management needs of the atmospheric and related sciences.

11 January 2005 Questions