© 2009 IBM Corporation Motivation for HPC Innovation in the Coming Decade Dave Turek VP Deep Computing, IBM.

© 2005 IBM Corporation

High Performance Computing Trends

 Three distinct phases:
   Past: exponential growth in processor performance, mostly through CMOS technology advances
   Near term: exponential (or faster) growth in the level of parallelism
   Long term: power cost = system cost; invention required
 The curve is indicative not only of peak performance but also of performance/$.

[Chart: peak performance over time, with the Past / Near Term / Long Term phases marked; milestones 1 PF, … PF, and 1 EF (201X?)]
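The growth curve above lends itself to a quick back-of-envelope projection. A minimal sketch in Python, assuming 1 PF was reached around 2008 (the chart's year labels are not legible here, so the start year is an assumption) and trying two doubling rates, since the slide notes near-term growth may be exponential or faster:

```python
import math

def years_to_reach(target_flops, start_flops, start_year, doubling_years):
    """Year at which a performance target is reached under steady exponential growth."""
    doublings = math.log2(target_flops / start_flops)
    return start_year + doublings * doubling_years

# Moore's-law-like 2-year doubling vs. the faster ~1.1-year doubling the
# TOP500 list has historically shown (both doubling times are assumptions).
print(f"2.0-yr doubling: 1 EF around {years_to_reach(1e18, 1e15, 2008, 2.0):.0f}")
print(f"1.1-yr doubling: 1 EF around {years_to_reach(1e18, 1e15, 2008, 1.1):.0f}")
```

At the faster historical doubling rate the projection lands in the "201X" window the chart suggests; at a plain 2-year doubling it slips well into the 2020s.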

[Chart: application requirements mapped against system peak performance on a scale from 10 GF to 1 EF, with IBM systems (Power6, P575 rack, P505Q rack, QS22 blade and rack, BG/P rack, 10 BG/P racks, Roadrunner, BlueGene/P) placed along the scale.]

 Weather / Climate: 2-day and 10-day forecasts, hurricane models, ocean models, global warming, multi-scale multi-physics climate models, astrophysics
 Engineering: airfoil design, full aircraft design, full automobile, vibroacoustic analysis, computational photo-lithography, plasma (fusion), nuclear fission, energy (nuclear)
 Geosciences: solid earth (petroleum, water, voids), earthquake
 Life Sciences: in vivo bone analysis, full bone analysis, peptide analysis, protein folding, mouse / rat / human brain, G-receptors, rigid docking, massive rigid docking, first-principles docking, free-energy-based docking
 Materials Modeling: electronic structure calculations, electron transfer, first-principles device simulations, phase transitions, computational spectroscopy, nano-scale modeling, high-k materials, multi-scale material simulations

Computer Design Challenges

Exascale computing means O(100 M) compute engines working together. The capability delivered has the potential to be truly revolutionary. However, systems, software, applications, data centers, and maintenance / management will all be complex.

 Core frequencies: ~2-4 GHz, and this will not change significantly as we go forward, so ~100,000,000 cores are needed to deliver an exaflop.
 Power: at today's Mflop/s per watt, ~2 GW would be needed (~$2B/yr). Power reduction will force simpler chips, longer latencies, more caches, and nearest-neighbor networks.
 Memory and memory bandwidth: much less memory per core (price); much less bandwidth per core (power / technology).
 Network bandwidth: much less network bandwidth per core (price per core; a full fat tree runs ~$1B to $4B), hence local network connectivity.
 Reliability: expect that algorithms and applications will have to tolerate and survive hardware failures.
 I/O bandwidth: at 1 byte/flop, an exaflop system will have 1 exabyte of memory; no disk system can read or write this amount of data in reasonable time. (BG/P: writing its 4 TB of memory takes ~1 min, but disk-array ingest takes ~15 min.)
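The bullets above are amenable to simple arithmetic. A hedged sketch, where the flops-per-cycle, Mflop/s-per-watt, electricity rate, and disk-bandwidth figures are assumptions chosen to be consistent with the slide's stated numbers (100 M cores, 2 GW, ~$2B/yr):

```python
# Back-of-envelope check of the exascale design numbers.
EXAFLOP = 1e18  # flop/s

# Core count: at ~2.5 GHz and ~4 flops/cycle/core (assumed SIMD width),
# an exaflop needs on the order of 100 million cores.
clock_hz = 2.5e9
flops_per_cycle = 4
cores = EXAFLOP / (clock_hz * flops_per_cycle)
print(f"cores needed: {cores:.1e}")

# Power: 500 Mflop/s per watt is the efficiency implied by "2 GW needed".
mflops_per_watt = 500
power_w = EXAFLOP / (mflops_per_watt * 1e6)
print(f"power: {power_w / 1e9:.1f} GW")

# Annual energy bill at an assumed ~$0.11/kWh utility rate.
dollars_per_kwh = 0.11
cost = (power_w / 1e3) * 24 * 365 * dollars_per_kwh
print(f"energy cost: ${cost / 1e9:.1f}B/yr")

# I/O: reading 1 EB of memory through a disk system with an assumed
# 1 TB/s aggregate bandwidth takes on the order of days, not minutes.
exabyte = 1e18   # bytes
disk_bw = 1e12   # bytes/s
print(f"time to read all memory: {exabyte / disk_bw / 86400:.1f} days")
```

The exercise shows why each bullet is a hard constraint: the core count, the power bill, and the memory-to-disk time all sit orders of magnitude beyond what petascale practice assumed.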

Summary

 Why exascale?
   Applications not possible on smaller machines
   Applications with multiple integrated components for complex systems
   Applications needing many iterations for sensitivity analysis, etc.
 Exascale has enormous challenges: power, cost, memory requirements, usability.
 Users will need time on a successive series of larger platforms to get to exascale. Code development will be a large undertaking, and tools to assist in this effort are critical.

Thank You

[Closing graphic: Capability, Understanding, Complexity]