Oak Ridge National Laboratory, U.S. Department of Energy
Center for Computational Sciences

Cray X1 and Black Widow at ORNL
Buddy Bland, CCS Director of Operations
SOS7 Workshop, March 5, 2003

Cray X1 Overview
- Cray X1 is the commercial name for the "SV2" project that Cray has been building for the NSA for more than 4 years.
- It combines multi-streaming vector processors with a memory interconnect system similar to that of the T3E.
- ORNL is evaluating the X1 for the Office of Advanced Scientific Computing Research.
- Specific details of the configuration are on Rolf's web page.

Q1: What is unique in the structure and function of your machine?

[Comparison table: systems are Cray X1, IBM Power4, Red Storm, and HP Long's Peak with Quadrics; rows are processor performance (GF), memory bandwidth (B/F), and interconnect bandwidth (B/F). Most cell values did not survive extraction; the surviving starred entries are 1.00 B/F (memory bandwidth) and 0.19 B/F (interconnect bandwidth). * Corrections by Jim Tomkins.]

Balance!
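To make the balance metric concrete, here is a minimal sketch (not from the slides) of how a bytes-per-flop ratio is computed. The bandwidth and peak numbers below are made-up placeholders, not the values missing from the table.

# Illustrative only: bytes-per-flop (B/F) balance as used in the comparison above.
def bytes_per_flop(bandwidth_gb_per_s, peak_gflops):
    """Balance ratio: bytes moved per floating-point operation at peak."""
    return bandwidth_gb_per_s / peak_gflops

# A hypothetical node with 20 GB/s of memory bandwidth and 10 GF of peak compute:
print(bytes_per_flop(20.0, 10.0))   # memory balance -> 2.0 B/F
# The same node with a 1.5 GB/s network injection bandwidth:
print(bytes_per_flop(1.5, 10.0))    # interconnect balance -> 0.15 B/F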

Cray X1 System
- Picture of the Cray X1 at the factory awaiting shipment to ORNL
- Delivery scheduled for March 18th
- 32-processor, liquid-cooled cabinet
- 128 GB memory
- 8 TB disk

Phase 1b – Summer 2003
- 256 vector processors
- 1 TB shared memory
- 32 TB of disk space
- 3.2 TeraFLOP/s

Cray X1/Black Widow: four-phase evaluation and deployment

[Timeline chart spanning 1Q2003 through 1Q2006; a simulator precedes Phase 1, and the chart marks which phases are currently funded.]
- Phase 1: 3.2 TF, 1 TB memory, 20 TB disk (256 CPUs)
- Phase 2: 8.192 TF, 2.621 TB memory, 24 TB disk (640 CPUs)
- Phase 3: 40.96 TF, 102 TB disk (3,200 CPUs)
- Phase 4: Black Widow, 120 TF, 40 TB memory, 400 TB disk; BW architecture tailored per DOE applications
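As a quick arithmetic check (an illustration added here, not part of the slides), the per-processor peak implied by the phase targets can be computed directly:

# Per-processor peak implied by the quoted phase totals (GF, CPU count).
phases = {
    "Phase 1": (3.2e3, 256),
    "Phase 2": (8.192e3, 640),
    "Phase 3": (40.96e3, 3200),
}
for name, (peak_gf, cpus) in phases.items():
    print(f"{name}: {peak_gf / cpus:.1f} GF per processor")
# Phases 2 and 3 both work out to 12.8 GF per processor; the Phase 1 total
# is quoted rounded to 3.2 TF, so it prints 12.5 here.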

Preliminary climate and materials performance on the Cray X1 is promising

[Charts: Climate: POP performance (higher is better), comparing Co-Array Fortran and SHMEM results with 32 processors of one p690. Materials: QMC performance (smaller is better), with the note that 55% of the time was spent generating formatted output!]
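The 55% figure is about the cost of formatted output. The following sketch is illustrative only (it is not the actual QMC code, and the file names are arbitrary); it shows why writing results as formatted text can dominate runtime compared with raw binary output.

import array
import time

# One million doubles of synthetic "results" data.
data = array.array("d", (float(i) for i in range(1_000_000)))

start = time.perf_counter()
with open("results_formatted.txt", "w") as f:
    for x in data:
        f.write(f"{x:24.16e}\n")      # one text conversion per value
formatted_seconds = time.perf_counter() - start

start = time.perf_counter()
with open("results_raw.bin", "wb") as f:
    data.tofile(f)                    # single bulk binary write, no conversion
binary_seconds = time.perf_counter() - start

print(f"formatted: {formatted_seconds:.2f} s, binary: {binary_seconds:.2f} s")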

Q2: What characterizes your applications?
"Every day the same thing, variety." - Yosemite Sam

CCS is focused on capability computing for a few applications:
- SciDAC Astrophysics
- Genomes to Life
- Nanophase Materials
- SciDAC Climate
- SciDAC Chemistry
- SciDAC Fusion

Climate (CCSM) simulation resource projections
- At the current level of scientific complexity, one century-long simulation requires 12.5 days.
- A single researcher transfers 80 Gb/day and generates 30 TB of storage each year.
- Science drivers: regional detail / comprehensive model.
- [Chart: the blue line shows the total national resource dedicated to CCSM simulations and its expected growth to meet the demands of increased model complexity; the red line shows the data volume generated for each century simulated.]

CCSM coupled-model resolution configurations, 2002/2003 vs. 2008/2009:
- Atmosphere: 230 km, L26 -> 30 km, L96
- Land: 50 km -> 5 km
- Ocean: 100 km, L40 -> 10 km, L80
- Sea ice: 100 km -> 10 km
- Model years/day: 8 -> 8
- National resource (dedicated TF): [values not recovered]
- Storage (TB/century): 1 -> 250
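A worked check of the throughput and data-volume figures above (a sketch added here; it assumes the slide's "Gb/day" means gigabytes per day):

model_years_per_day = 8                      # from the resolution table
century_wallclock_days = 100 / model_years_per_day
print(century_wallclock_days)                # -> 12.5 days, as stated

daily_transfer_gb = 80                       # a single researcher's transfers
yearly_storage_tb = daily_transfer_gb * 365 / 1000
print(round(yearly_storage_tb, 1))           # -> 29.2 TB/year, roughly the 30 TB quoted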

GTL resource projections

[Chart: biological complexity plotted against computing requirements, from 1 TF (current U.S. computing) up to 1,000 TF (* teraflops). Application milestones include constrained rigid docking; genome-scale protein threading; comparative genomics; constraint-based flexible docking; molecular machine classical simulation; protein machine interactions; cell, pathway, and network simulation; community metabolic, regulatory, and signaling simulations; and molecule-based cell simulation.]

Q3: What prior experience guided you to this choice?
- Workshops with vendors and users: IBM, Cray, HP, SGI
- Long experience in the evaluation of new systems

Sustained performance of 0.02 TF for climate calculations on the IBM Power4

The Community Climate System Model (CCSM) is a fully coupled, global climate model that provides state-of-the-art computer simulations of Earth's past, present, and future climate states. Scaling to 352 processors is expected to be possible with the Federation interconnect and recent improvements in software engineering.
- T42 with a 1-degree ocean
- Memory requirements: 11.6 GB
- 96 IBM Power4 processors
- Processor speed: 5.2 GF
- Total peak: 500 GF
- Sustained performance: 16.2 GF (3.2% of peak)
- 7.9 years simulated per day
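A worked check of the Power4 numbers on this slide (an added illustration, not part of the original deck):

processors = 96
peak_per_processor_gf = 5.2
sustained_gf = 16.2

total_peak_gf = processors * peak_per_processor_gf        # 499.2 GF, ~500 GF
percent_of_peak = 100 * sustained_gf / total_peak_gf
print(f"total peak: {total_peak_gf:.0f} GF")
print(f"sustained: {sustained_gf} GF = {sustained_gf / 1000} TF "
      f"({percent_of_peak:.1f}% of peak)")
# -> ~500 GF peak and ~3.2% of peak; 16.2 GF sustained is the ~0.02 TF
#    in the slide title.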

Questions?