NICS RP Update
TeraGrid Round Table, March 10, 2011
Ryan Braby, NICS HPC Operations Group Lead

Management changes
 Patricia Kovatch is the Interim NICS Project Director.
 Ryan Braby is the new HPC Operations and Technology Integration Group Lead.
 –Started Jan 1st.
 –Over 12 years of experience in HPC Systems Administration and Integration.
 –Experienced with large IBM Power-based systems, BlueGene systems, Linux clusters, and Lustre.

Kraken XT5 Specifications
Compute processor type: AMD 2.6 GHz Istanbul (6-core)
Compute cores: 112,896
Compute sockets: 18,816
Compute nodes: 9,408
Memory per node: 16 GB (1.33 GB/core)
Total memory: 147 TB
Peak system performance: 1.17 PF
Interconnect topology: 25 x 16 x 24 torus (SeaStar2+)
Parallel file system space: 3.3 PB raw, 2.4 PB usable
Parallel file system peak performance: 30 GB/s
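As a quick sanity check (mine, not from the slides), the Kraken figures are internally consistent once you assume dual-socket XT5 nodes with 6-core Istanbul parts:

```python
# Sanity check of the Kraken XT5 table above. Numbers come from the table;
# the dual-socket layout is my assumption (Cray XT5 nodes were dual-socket).
nodes = 9408
sockets_per_node = 2
cores_per_socket = 6
mem_per_node_gb = 16

sockets = nodes * sockets_per_node                    # 18,816
cores = sockets * cores_per_socket                    # 112,896
total_mem_tb = nodes * mem_per_node_gb / 1024         # 147 TB
gb_per_core = mem_per_node_gb / (sockets_per_node * cores_per_socket)  # 1.33

print(f"sockets: {sockets:,}, cores: {cores:,}")
print(f"total memory: {total_mem_tb:.0f} TB, per core: {gb_per_core:.2f} GB")
```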

Athena XT4 Specifications
Compute processor type: AMD 2.3 GHz Barcelona (quad-core)
Compute cores: 18,048
Compute sockets: 4,512 (quad-core)
Compute nodes: 4,512
Memory per node: 4 GB (1 GB/core)
Total memory: 17.6 TB
Peak system performance: 0.166 PF
Interconnect topology: 12 x 16 x 24 torus (SeaStar)
Parallel file system space: 100 TB raw, 85 TB usable
Parallel file system peak performance: 10 GB/s

Nautilus SGI UltraViolet Specifications
Compute processor type: Intel ~2.0 GHz Nehalem
Compute cores: 1,024
Compute sockets (nodes): 128 (oct-core)
Memory per core: 4 GB
Total memory: 4 TB (NUMA)
Accelerators: 16 NVIDIA Fermi GPUs (8 active)
Peak system performance: 8.2 TF
Interconnect topology: NUMAlink 5
Parallel file system space: 1 PB (960 TB usable, GPFS)
Parallel file system peak performance: 24 GB/s
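All three peak-performance figures follow the same back-of-the-envelope formula, peak = cores × clock × flops/cycle, using the 4 double-precision flops per cycle that both the AMD Opterons and Intel Nehalem sustain through 128-bit SSE. A small check (mine, not from the slides):

```python
# Peak-FLOPS check for the three systems above, assuming 4 double-precision
# flops/cycle/core (2 adds + 2 multiplies via SSE), which holds for both the
# AMD Barcelona/Istanbul and Intel Nehalem cores of this era.
systems = {
    "Kraken XT5":  (112896, 2.6e9),   # cores, clock (Hz)
    "Athena XT4":  (18048, 2.3e9),
    "Nautilus UV": (1024, 2.0e9),
}
for name, (cores, hz) in systems.items():
    print(f"{name}: {cores * hz * 4 / 1e12:,.1f} TF peak")
# Kraken XT5: 1,174.1 TF (~1.17 PF)
# Athena XT4: 166.0 TF (0.166 PF)
# Nautilus UV: 8.2 TF
```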

Kraken Job Mix – Annual

Kraken Job Mix – Jan 2011 (source: HPC Ops Report, Jan 2011)
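Job-mix charts like the two above bin completed jobs by requested core count. A minimal sketch of that binning, with hypothetical bin edges and job data (the actual NICS accounting categories did not survive the transcript):

```python
from bisect import bisect_right
from collections import Counter

# Hypothetical job-size bins (cores); job-mix charts of this era grouped
# Kraken jobs roughly by order of magnitude.
bin_edges = [512, 8192, 49152]
bin_labels = ["<512", "512-8K", "8K-48K", ">48K"]

def job_mix(job_core_counts):
    """Count jobs per size bin."""
    mix = Counter(bin_labels[bisect_right(bin_edges, c)] for c in job_core_counts)
    return {label: mix.get(label, 0) for label in bin_labels}

print(job_mix([256, 1024, 12000, 60000, 300]))
# {'<512': 2, '512-8K': 1, '8K-48K': 1, '>48K': 1}
```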

Kraken and Athena Utilization (chart: utilization % by month; source: HPC Ops Report, Jan 2011)
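Monthly utilization here is delivered core-hours over available core-hours. A minimal sketch with hypothetical numbers (the chart's actual figures are not in the transcript):

```python
def utilization_pct(delivered_core_hours, total_cores, hours_in_month,
                    downtime_hours=0.0):
    """Monthly utilization as a percentage of available core-hours."""
    available = total_cores * (hours_in_month - downtime_hours)
    return 100.0 * delivered_core_hours / available

# Hypothetical example: 70M core-hours delivered on Kraken's 112,896 cores
# in a 31-day month (744 hours) with 16 hours of scheduled downtime.
print(f"{utilization_pct(70e6, 112896, 744, 16):.1f}%")  # 85.2%
```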

Cycles Provided to TeraGrid

Upcoming Events / Work at NICS
 Kraken upgrade to CLE 2.2 Update 3
 –March 23rd, 8am to 5pm.
 –No changes required by users.
 –Should improve system stability.
 Annual power outage / electrical maintenance
 –Target is April 2nd, currently planned for 16 hours.
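Ahead of a fixed outage window like the March 23rd upgrade, batch schedulers stop launching jobs whose requested walltime would cross into the window. A minimal sketch of that drain check, with hypothetical dates and fields (Kraken's actual Moab/TORQUE reservation setup is not shown here):

```python
from datetime import datetime, timedelta

# Hypothetical drain check: a job may start only if its requested
# walltime completes before the maintenance window opens.
outage_start = datetime(2011, 3, 23, 8, 0)  # March 23rd, 8am

def can_start(now, requested_walltime):
    """True if a job started now would finish before the outage begins."""
    return now + requested_walltime <= outage_start

now = datetime(2011, 3, 22, 20, 0)
print(can_start(now, timedelta(hours=8)))   # True  (ends 04:00, before 08:00)
print(can_start(now, timedelta(hours=24)))  # False (would run into the outage)
```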