Applications Scaling Panel
Panelists: Jonathan Carter (LBNL), Mike Heroux (SNL), Phil Jones (LANL), Kalyan Kumaran (ANL), Piyush Mehrotra (NASA Ames), John Michalakes (NCAR), Nir Paikowsky (ScaleMP), Bronis de Supinski (LLNL), Trey White (ORNL)

Panel Question #1
1. Are there more appropriate domain-specific performance metrics for science and engineering HPC applications than the canonical "percent of peak" or "parallel efficiency and scalability"? If so, what are they? Do these metrics drive toward weak scaling, strong scaling, or both?
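As background for this question, a minimal sketch (in C) of how the canonical metrics are usually computed; all timings, peak rates, and process counts below are placeholder values for illustration, not measurements from any system discussed by the panel.

```c
#include <stdio.h>

int main(void)
{
    /* Percent of peak: sustained flop rate divided by theoretical peak.
     * All numbers here are placeholders, not real measurements. */
    double sustained_gflops = 120.0;      /* e.g., from hardware counters   */
    double peak_gflops_per_node = 500.0;  /* vendor-quoted theoretical peak */
    int nodes = 4;
    double percent_of_peak =
        100.0 * sustained_gflops / (peak_gflops_per_node * nodes);

    /* Strong scaling: fixed total problem size run on 1 and on p processes.
     * Parallel efficiency = T(1) / (p * T(p)). */
    int p = 64;
    double t_serial = 640.0, t_parallel = 12.0;
    double strong_efficiency = t_serial / (p * t_parallel);

    /* Weak scaling: work per process held fixed, so ideal time is flat.
     * Efficiency = T(1 process) / T(p processes). */
    double t_one = 10.0, t_p = 11.5;
    double weak_efficiency = t_one / t_p;

    printf("percent of peak     = %5.1f %%\n", percent_of_peak);
    printf("strong-scaling eff. = %5.2f\n", strong_efficiency);
    printf("weak-scaling eff.   = %5.2f\n", weak_efficiency);
    return 0;
}
```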

Panel Question #2
2. Similar to the HPC "disruptive technologies" (memory, power, etc.) thought to be needed on many hardware roadmaps over the next decade to reach exascale, are there looming computational challenges (models, algorithms, software) whose resolution will be game-changing or required over the next decade?

Panel Question #3
3. What is the role of local (node-based) floating-point accelerators (e.g., Cell, GPUs) for key science and engineering applications in the next 3-5 years? Is there unexploited or unrealized concurrency in the applications you are familiar with? If so, what and where is it?
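One hedged illustration of handing loop-level concurrency to a node-local accelerator: the sketch below uses directive-based offload (OpenMP target constructs) in C. The kernel, array names, and the assumption of an offload-capable compiler and attached device are illustrative only, not drawn from the panel; without device support the loop simply runs on the host.

```c
#include <stdlib.h>
#include <stdio.h>

/* Illustrative daxpy kernel offloaded to an attached accelerator via
 * OpenMP target directives (assumes compiler/runtime offload support;
 * otherwise the loop falls back to the host). */
void daxpy_offload(int n, double a, const double *x, double *y)
{
#pragma omp target teams distribute parallel for \
        map(to: x[0:n]) map(tofrom: y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    int n = 1 << 20;
    double *x = malloc(n * sizeof *x);
    double *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; i++) { x[i] = 1.0; y[i] = 2.0; }

    daxpy_offload(n, 3.0, x, y);

    printf("y[0] = %f\n", y[0]);  /* expect 5.0 */
    free(x);
    free(y);
    return 0;
}
```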

Panel Question #4
4. Should applications continue with current programming models (Fortran, C, C++, PGAS, etc.) and paradigms (e.g., flat MPI, hybrid MPI/OpenMP) over the next decade? If not, what needs to change?
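To make the "flat MPI versus hybrid MPI/OpenMP" contrast concrete, a minimal hybrid skeleton in C is sketched below: one MPI rank per node or NUMA domain with OpenMP threads inside it, rather than one rank per core. The decomposition, reduction kernel, and thread-support level are illustrative assumptions, not a recommendation from the panel.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request thread support so OpenMP regions can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local_sum = 0.0;

    /* Node-level parallelism: threads share the rank's memory, reducing
     * the per-core MPI state and halo traffic of a flat-MPI decomposition. */
#pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < 1000000; i++)
        local_sum += 1.0 / (1.0 + i);

    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d threads/rank=%d sum=%f\n",
               nranks, omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}
```

A skeleton like this would typically be built with an MPI compiler wrapper plus OpenMP enabled (e.g., mpicc -fopenmp); whether the hybrid form actually beats flat MPI depends on the application and the node architecture, which is exactly what the question asks the panel to weigh.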

Panel Question #5
5. How might HPC system-attribute priorities change over the next decade for the science and engineering applications you are familiar with? Attributes to consider: node peak flops, mean time to interrupt, wide-area network bandwidth, node memory capacity, local storage capacity, archival storage capacity, memory latency, interconnect latency, disk latency, interconnect bandwidth, memory bandwidth, and disk bandwidth.
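One common way to weigh two of these attributes against each other (node peak flops versus memory bandwidth) is a roofline-style balance check. The sketch below uses placeholder attribute values, not figures for any particular machine, to show how a kernel's arithmetic intensity determines which attribute limits its performance.

```c
#include <stdio.h>

int main(void)
{
    /* Placeholder system attributes, purely illustrative. */
    double peak_gflops   = 100.0;  /* node peak floating-point rate */
    double mem_bw_gbytes = 25.0;   /* node memory bandwidth         */

    double machine_balance = peak_gflops / mem_bw_gbytes;  /* flops per byte */

    /* Example kernel: daxpy does 2 flops per 24 bytes moved
     * (2 loads + 1 store of 8-byte doubles), i.e. ~0.083 flops/byte. */
    double kernel_intensity = 2.0 / 24.0;

    double attainable_gflops = (kernel_intensity < machine_balance)
        ? kernel_intensity * mem_bw_gbytes   /* memory-bandwidth-bound */
        : peak_gflops;                       /* compute-bound          */

    printf("balance = %.2f flops/byte, attainable = %.2f GF/s (%.1f%% of peak)\n",
           machine_balance, attainable_gflops,
           100.0 * attainable_gflops / peak_gflops);
    return 0;
}
```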