Question 1 How are you going to provide language and/or library (or other?) support in Fortran, C/C++, or another language for massively parallel programming on loosely and tightly coupled cluster machines without requiring MPI, MP, or similar low-level memory and thread synchronization? Are any standards committees looking at this? Any ad hoc committees?

Question 2 How are you going to provide good (not perfect!) performance optimization across the full gamut of HPC machines, from cheap clustered pizza boxes through powerful high-end systems? Is there going to be any dynamic optimization (different optimization paths generated at compile time and chosen at run time depending on the scope of the problem)?

Question 3 How are you going to support heterogeneous cores/nodes from a single compiled input stream?

Question 4 What productivity tools or languages, besides the Fortran and C/C++ compilers, are you going to provide to enable non-cluster experts or non-cluster-ready codes to utilize a cluster with minimal expertise and minimally intrusive code (in addition to compiler directives that can be added to the code)? Will these tools provide information to the compiler to help with its task? How will profilers operate to help the non-expert user figure out where their codes are utilizing resources?

Question 5 How are you going to support third-party, pre-compiled libraries (e.g., VNI, NAG) on multi-core, many-core, and hybrid systems?