Exascale – when will it happen? William Kramer, National Center for Supercomputing Applications.

Sustained Petascale computing will enable advances in a broad range of science and engineering disciplines: molecular science, weather and climate forecasting, earth science, astronomy, health, astrophysics, life science, and materials.

Blue Waters Computing System

System Attribute                    Ranger           Blue Waters
Vendor                              Sun              IBM
Processor                           AMD Barcelona    IBM Power7
Peak Performance (PF)               0.579            >10
Sustained Performance (PF)          ?                1
Number of Cores/Chip                4                8
Number of Processor Cores           62,976           >300,000
Amount of Memory                    123 TB           >1.2 PB
Interconnect Bisection BW (TB/s)    ~4               ?
Amount of Disk Storage (PB)         ?                ?
I/O Aggregate BW (TB/s)             ?                1.5
Amount of Archival Storage (PB)     2.5 (20)         >500
External Bandwidth (Gbps)           ?                ?
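To make the table concrete, the short sketch below computes one crude balance measure, memory bytes per peak flop/s, from the figures above. It is only an illustration: the ">" lower bounds are treated as point values, and the numbers used (0.579 PF, >10 PF, 123 TB, >1.2 PB) are simply those quoted in the table.

```python
# A rough reading of the table above, treating the ">" lower bounds as point
# values purely for illustration (an assumption, not data from the slide).
ranger = {"peak_pflops": 0.579, "memory_tb": 123}
blue_waters = {"peak_pflops": 10.0, "memory_tb": 1_200}

def bytes_per_peak_flop(system: dict) -> float:
    """Memory capacity per unit of peak compute, one crude 'balance' measure."""
    return (system["memory_tb"] * 1e12) / (system["peak_pflops"] * 1e15)

for name, system in [("Ranger", ranger), ("Blue Waters", blue_waters)]:
    print(f"{name}: {bytes_per_peak_flop(system):.2f} bytes of memory per peak flop/s")

# Peak flops grow roughly 17x from Ranger to Blue Waters while memory grows
# roughly 10x, so bytes per flop falls from ~0.21 to ~0.12 - one reason
# "balance" appears among the observations on the next slide.
```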

Observations

- Real sustained PF arrives ~2 years after peak PF.
- Sustained Petascale took O($1B) for the first system and O($1.7B) for the first two systems.
- In the best case, peak EF is probably O($2-3B) for the first system and O($4B) for the first two systems (see the sketch below).
- There are very real HW and SW challenges for computer design that have to be solved regardless of Exascale:
  - Scale (HW, system SW, application SW)
  - Reliability
  - Balance
  - Communications
  - Middle-level SW
  - ...
- With a very few notable exceptions, such as Charm++-based codes, as system scale increases, the application base and the set of science teams able to use the largest systems shrink proportionally.
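The cost observations support a simple back-of-the-envelope comparison; the sketch below only subtracts the "first system" cost from the "first two systems" cost to show the implied marginal cost of a second machine. The midpoint of the O($2-3B) range is an assumption made just to get a single number.

```python
# Marginal cost of a second system, read directly off the observations above.
first_sustained_pf = 1.0e9       # O($1B) for the first sustained-petascale system
first_two_sustained_pf = 1.7e9   # O($1.7B) for the first two systems
marginal_second_pf = first_two_sustained_pf - first_sustained_pf   # ~$0.7B

first_peak_ef = 2.5e9            # midpoint of the O($2-3B) estimate (an assumption)
first_two_peak_ef = 4.0e9        # O($4B) for the first two systems
marginal_second_ef = first_two_peak_ef - first_peak_ef             # ~$1.5B

print(f"implied marginal 2nd sustained-PF system: ~${marginal_second_pf / 1e9:.1f}B")
print(f"implied marginal 2nd peak-EF system:      ~${marginal_second_ef / 1e9:.1f}B")
```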

Key Issues for Exascale – beyond the technical

- If Exascale is not started soon, development expertise will have to be recreated, adding cost and delay.
- Is the best value for Exascale in one system?
  - Is the cost of one system worth it?
  - Is Exascale in aggregate worth it (and how big a building block? what about data movement? etc.)?
  - Should we instead provide (~5-10) PF systems linked in a data infrastructure to do Exascale science problems? That approach is more familiar and evolutionary.
- The energy debate needs to be put into context with everything else – use true TCO rather than one component.
- It is not clear that people will use an EF system for real science if we build it the way some expect.
- We need clear, really meaningful metrics for sustained (i.e., time-to-solution) performance based on needed use rather than easiest use:
  - Not peak flops, nor Linpack flops.
  - Holistic: ∑{Performance, Effectiveness, Resiliency, Consistency, Usability} / TCO (sketched below).
- Most discussions of Exascale, and even Petascale, talk about the crisis of the data deluge, yet there is little discussion of it in the HPC community. So why should the community be pushing "Exaflops" rather than "Yottabytes" in order to improve science productivity and quality?
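The holistic metric in the last bullet reads as a sum of quality scores divided by total cost of ownership. The sketch below is one minimal interpretation: the [0, 1] normalization, the example scores, and the TCO figures are assumptions chosen only to show the shape of the calculation; the slide itself defines no weights or normalization.

```python
# Minimal sketch of the holistic figure of merit from the slide:
# sum{Performance, Effectiveness, Resiliency, Consistency, Usability} / TCO.
# The [0, 1] score normalization and the example numbers are assumptions.

def holistic_value(scores: dict, tco_dollars: float) -> float:
    """Sum of the five quality scores per dollar of total cost of ownership."""
    dimensions = {"performance", "effectiveness", "resiliency", "consistency", "usability"}
    assert set(scores) == dimensions, "score every dimension, not just peak flops"
    return sum(scores.values()) / tco_dollars

# A notional comparison: system_a wins on raw performance, system_b on what
# users actually experience; with equal TCO, system_b scores higher.
system_a = {"performance": 0.9, "effectiveness": 0.4, "resiliency": 0.5,
            "consistency": 0.4, "usability": 0.3}
system_b = {"performance": 0.7, "effectiveness": 0.8, "resiliency": 0.8,
            "consistency": 0.7, "usability": 0.8}
print(holistic_value(system_a, 2.0e9))   # ~1.25e-9
print(holistic_value(system_b, 2.0e9))   # ~1.9e-9
```

Whatever the exact weighting, the bullet's point stands: Linpack measures only the performance term and says nothing about the other four terms or the denominator.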

Conclusion

- Essentially, whether Exascale can be done in the time frame people are talking about depends on how one defines success and on how much money someone spends.
- If there is a public will - and it appears there is within parts of the administration, though Congress is less certain - EF can be done.
  - It is not clear that the fundamental motivation needed from the public exists.
  - It is not clear that industrial policy is sufficient without critical S&E driver(s).
  - The schedule (2018), cost (~$3B), and scope (an EF for 20 MW that is usable by many) are compromised.
- There will be Exascale systems in some time period, but not in that time frame unless unprecedented public funding occurs immediately.