Vision for OSC Computing and Computational Sciences
Center for Computational Sciences, Oak Ridge National Laboratory, U.S. Department of Energy


Vision for OSC Computing and Computational Sciences
Thomas Zacharia, Associate Laboratory Director, Computing and Computational Sciences, Oak Ridge National Laboratory
Earth Simulator Rapid Response Meeting, May 15-16, 2002

Charge from Dr. Orbach
– Review the "... current state of the national computer vendor community relative to high performance computing"
– Provide a "... vision for what realistically should be accomplished in the next five years within the Office of Science in high performance computing"

Dr. Orbach's Vision for OSC Computing
Statement to the ASCAC Committee, May 8, 2002:
– "… there is a centrality of computation in everything that we do"
– "… large scale computation is the future of every program in the Office of Science"
– "… we want to have our own computing program in non-defense computational science"

FY03 Budget Request for OSC Computing Considerably Lower than Required to Meet Goals
[Chart: fiscal-year computing budget (dollars) for DOE-SC, NNSA, and NSF]

As a Fraction of Total Budget, OSC Computing Is Half That of NNSA and NSF and Needs a Significant Increase to Meet Goals
[Chart: computing budget as a percentage of total budget for DOE-SC, NNSA, and NSF]

The Earth Simulator Has Heightened the Urgency of an Infrastructure Strategy for Scientific Computing
Critical steps:
– Invest in critical software with integrated science and computer science development teams
– Deploy scientific computing hardware infrastructure in support of "large scale computation" (Cray, HP, IBM, SGI; IBM is the largest US installation)
– Develop a new initiative to support advanced architecture research
Top 500 Supercomputers: the US has been #1 on 12 of 19 lists. A concerted effort will be required to regain US leadership in high performance computing.
Note: the LINPACK benchmark generally overestimates the effectiveness of an architecture for applications such as climate by a substantial factor; stability and reliability are also important system properties.
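The LINPACK caveat above is the peak-versus-sustained distinction: what matters for applications such as climate is the fraction of peak a real code sustains on a given architecture, not its LINPACK or peak rate. A minimal sketch of that comparison follows; the efficiency fractions are hypothetical placeholders, not measurements, and the 40 TFlops peak figure simply echoes the Earth Simulator comparison later in this talk.

```python
# Minimal sketch: ranking systems by sustained application throughput
# rather than peak or LINPACK rate. Efficiency fractions are HYPOTHETICAL.

def sustained_tflops(peak_tflops: float, app_efficiency: float) -> float:
    """Sustained throughput = peak rate x fraction of peak the application achieves."""
    return peak_tflops * app_efficiency

systems = {
    # name: (peak TFlops, assumed efficiency of a climate code)
    "vector system (illustrative)":     (40.0, 0.30),
    "commodity cluster (illustrative)": (40.0, 0.05),
}

for name, (peak, eff) in systems.items():
    print(f"{name}: {peak:.0f} TFlops peak, ~{sustained_tflops(peak, eff):.1f} TFlops sustained")
```

With equal peak rates, the assumed efficiencies give a sixfold difference in sustained throughput, which is the sense in which LINPACK-style rankings can overstate a machine's effectiveness for climate workloads.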

Invest in Critical Software with Integrated Science and Computer Science Development Teams
SciDAC: a good start toward scientific computing software
Scientific Applications
– Climate Simulation
– Computational Chemistry
– Fusion (5 topics)
– High Energy and Nuclear Physics (5 topics)
Collaboratories
– Four projects
Middleware & Network Research
– Six projects
Computer Science
– Scalable Systems Software
– Common Component Architecture
– Performance Science and Engineering
– Scientific Data Management
Applied Mathematics
– PDE Linear/Nonlinear Solvers and Libraries
– Structured Grids/AMR
– Unstructured Grids
Source: Dave Bader, SciDAC PI Meeting, Jan. 15, 2002, Washington, DC

Deploy Scientific Computing Hardware Infrastructure to Support "Large-Scale Computation"
– Provide the most effective and efficient computing resources for a set of scientific applications
– Serve as a focal point for the scientific research community as it adapts to new computing technologies
– Provide the organizational framework needed for multidisciplinary activities: addressing software challenges requires strong, long-term collaborations among disciplinary computational scientists, computer scientists, and applied mathematicians
– Provide the organizational framework needed for development of community codes: implementing many scientific codes requires a wide range of disciplinary expertise
– Organizational needs will continue to grow as computers advance to the petaflops scale
Source: Dave Bader, SciDAC PI Meeting, Jan. 15, 2002, Washington, DC

The Earth Simulator Has Widened the Gap with the DOE Scientific Computing Hardware Infrastructure
[Three charts, each plotting simulated years per day for a climate application]
– Top left: Earth Simulator vs. current SC resources (SEABORG, CHEETAH); highlights the widening gap between SC capabilities and others
– Top right: Earth Simulator vs. US resources of comparable peak performance (POWER4 H+ at 40 TFlops, Power5 at 50 TFlops); highlights the architectural difference and the need for a new initiative to close the gap
– Right: Earth Simulator vs. US resources of comparable cost (3 x 40 TFlops POWER4 H+, 3 x 50 TFlops Power5)
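For reference, the metric plotted in these charts, simulated years per wall-clock day, follows directly from a model's time step and sustained stepping rate. A small sketch with hypothetical inputs (not values from the charts):

```python
# Sketch of the throughput metric "simulated years per wall-clock day".
# The example inputs are HYPOTHETICAL, not values from the slide's charts.

SECONDS_PER_DAY = 86_400.0
SECONDS_PER_YEAR = 365.0 * SECONDS_PER_DAY

def simulated_years_per_day(timestep_seconds: float,
                            steps_per_wallclock_second: float) -> float:
    """Model time advanced in one wall-clock day, expressed in simulated years."""
    simulated_seconds = timestep_seconds * steps_per_wallclock_second * SECONDS_PER_DAY
    return simulated_seconds / SECONDS_PER_YEAR

# Example: a 20-minute model time step advanced 10 times per wall-clock
# second sustains roughly 33 simulated years per day.
print(simulated_years_per_day(timestep_seconds=1200.0,
                              steps_per_wallclock_second=10.0))
```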

Possible U.S. Response in the Near Term for Increased Computing Capacity
Earth Simulator:
– 40 TFlops peak
– 5,120 vector processors, 8 GFlops per processor, 8 processors per node
– $500M procurement, $50M/yr maintenance
– Limited software investment to date
– Significant ancillary impact on biology, nanoscience, astrophysics, HENP, fusion
US alternative:
– 40 TFlops peak
– 5,120 Power5 processors, 8 GFlops per processor, 64 processors per node
– $100M procurement, $10M/yr maintenance
– SciDAC investment in computational science and related ISICs
– Significant ancillary impact on biology, nanoscience, astrophysics, HENP, fusion
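Using only the figures quoted on this slide, the procurement and maintenance numbers imply roughly a factor-of-five difference in total cost; the five-year ownership horizon below is an assumption for illustration, not part of the slide.

```python
# Cost comparison built only from the figures quoted on this slide.
# The 5-year ownership horizon is an illustrative assumption.

def five_year_cost_musd(procurement_musd: float,
                        maintenance_musd_per_year: float,
                        years: int = 5) -> float:
    return procurement_musd + maintenance_musd_per_year * years

PEAK_TFLOPS = 40.0  # both systems, per the slide

earth_simulator = five_year_cost_musd(500.0, 50.0)  # -> $750M
us_alternative = five_year_cost_musd(100.0, 10.0)   # -> $150M

for name, cost in (("Earth Simulator", earth_simulator),
                   ("US alternative", us_alternative)):
    print(f"{name}: ${cost:.0f}M over 5 years, "
          f"${cost / PEAK_TFLOPS:.1f}M per peak TFlops")
```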

Best Performance of a High-Resolution Atmospheric Model
[Chart: sustained GFlops of a high-resolution atmospheric model versus inter-node bandwidth (Mb/s) for the Earth Simulator (2,560 processors), AlphaServer ES45 (2,048), AlphaServer ES40 (256), IBM SP3 Winterhawk II (512), and Cray T3E (512)]

Develop a New Initiative to Support Advanced Architecture: BlueGene Offers a Possible Option
[Chart: dollars per GFlops versus year for ASCI platforms (Blue, White, Compaq), the Cray T3E, Beowulf/COTS clusters, JPL, QCDSP and QCDOC (Columbia/IBM), and Blue Gene C/L/D]

The BlueGene Architecture Is a (More) General-Purpose Machine that Builds on QCDOC
QCDSP (600 GFlops, based on the Texas Instruments C31 DSP)
– Gordon Bell Prize for most cost-effective supercomputer in 1998
– Designed and built by Columbia University
– Optimized for Quantum Chromodynamics (QCD)
– 12,000 50 MFlops processors
– Commodity 2 MB DRAM
QCDOC (20 TFlops, based on IBM system-on-a-chip)
– Collaboration between Columbia University and IBM Research
– Optimized for QCD
– IBM 7SF technology (ASIC foundry technology)
– 20,000 1 GFlops processors (nominal)
– 4 MB embedded DRAM plus external commodity DDR/SDR SDRAM
BlueGene L/D (180 TFlops, based on IBM system-on-a-chip)
– Designed by IBM Research in IBM CMOS 8SF technology
– 64,000 processors (nominal)
– 4 MB embedded DRAM plus external commodity DDR SDRAM
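As a cross-check of the aggregate peak figures quoted above, each machine's peak is just processor count times per-processor rate; the 2.8 GFlops per-processor value for BlueGene L/D is inferred from the 180 TFlops and 64,000-processor figures rather than stated explicitly in the transcript.

```python
# Cross-check: aggregate peak = processor count x per-processor rate.
# The 2.8 GFlops/processor figure for BlueGene L/D is INFERRED from the
# slide's 180 TFlops and 64,000-processor numbers.

machines = {
    # name: (processors, GFlops per processor)
    "QCDSP":        (12_000, 0.05),  # 50 MFlops DSP processors
    "QCDOC":        (20_000, 1.0),   # nominal 1 GFlops processors
    "BlueGene L/D": (64_000, 2.8),
}

for name, (n_procs, gflops_each) in machines.items():
    peak_tflops = n_procs * gflops_each / 1_000.0
    print(f"{name}: {n_procs:,} processors -> ~{peak_tflops:.1f} TFlops peak")
```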

System Organization (Conceptual)
Host system:
– Diagnostics, booting, archive
– Application-dependent requirements
File server array:
– ~500 RAID PC servers
– Gigabit Ethernet and/or InfiniBand
– Application-dependent requirements
BlueGene/L processing nodes:
– Two major partitions: a production platform (256 TFlops peak) and nodes partitioned into code development platforms
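A rough sense of the file-server array's aggregate bandwidth can be had from the server count alone; the per-server link rates below (one Gigabit Ethernet or 4x InfiniBand link per server) are assumptions for illustration, not figures from the slide.

```python
# Rough aggregate-bandwidth estimate for the ~500-server file array.
# Link rates and one-link-per-server are ASSUMPTIONS, not slide figures.

N_SERVERS = 500  # "~500 RAID PC servers"

LINKS_GBIT_PER_S = {
    "Gigabit Ethernet": 1.0,
    "InfiniBand 4x (SDR data rate)": 8.0,
}

for label, gbit_per_s in LINKS_GBIT_PER_S.items():
    aggregate_gbytes_per_s = N_SERVERS * gbit_per_s / 8.0
    print(f"{label}: ~{aggregate_gbytes_per_s:.0f} GB/s aggregate")
```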

Summary
– Continue investment in critical software with integrated science and computer science development teams
– Deploy scientific computing hardware infrastructure in support of "large scale computation"
– Develop a new initiative to support advanced architecture research
– Develop a bold new facilities strategy for OSC computing
– Increase the OSC computing budget to support the outlined strategy
Without a sustained commitment to scientific computing, key computing and computational sciences capabilities, including personnel, will erode beyond recovery.