Languages for programming parallel machines
The range of programming models for parallel machines is wider than that of today's uniprocessors. This range will widen even further.
- Domain-Specific Languages
  - Very high-level (Simulink?) – implicit parallelism
  - High-level (plain MATLAB, Mathematica)
- General-Purpose High-Level Languages
  - Explicit parallelism (two styles are sketched below):
    - Message-passing
    - Shared-memory
    - Vector
    - … ?
  - Implicit parallelism – if we ever figure out how to do it.
- Assembly Language
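To make the explicit-parallelism entries concrete, here are two minimal sketches, my own illustrations rather than material from the slides, that compute the same global sum first in the shared-memory style (OpenMP) and then in the message-passing style (MPI). The array size and contents are placeholders.

```c
/* Shared-memory style (OpenMP): a minimal sketch, not from the slides.
 * Compile with, e.g., gcc -fopenmp sum_omp.c */
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 1.0;                 /* placeholder data */

    /* The parallelism is explicit: the programmer asserts that the
     * iterations are independent and names sum as a reduction. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f\n", sum);      /* expect 1000000.0 */
    return 0;
}
```

```c
/* Message-passing style (MPI): each rank sums its own slice and the
 * partial sums are combined with MPI_Reduce.
 * Compile with, e.g., mpicc sum_mpi.c; run with mpirun -np 4 ./a.out */
#include <stdio.h>
#include <mpi.h>

#define N 1000000

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank owns a contiguous slice; the last rank takes the rest. */
    int chunk = N / size;
    int lo = rank * chunk;
    int hi = (rank == size - 1) ? N : lo + chunk;

    double local = 0.0;
    for (int i = lo; i < hi; i++)
        local += 1.0;               /* placeholder data */

    double sum = 0.0;
    MPI_Reduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %f\n", sum);  /* expect 1000000.0 */

    MPI_Finalize();
    return 0;
}
```

The contrast illustrates the taxonomy above: both versions state the parallelism explicitly, but the data distribution that OpenMP leaves to the runtime becomes the programmer's job under message passing.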

Two of the things we don't understand well
- Parallel programming constructs for symbolic computing.
  - Symbolic computing will continue to dominate.
  - Would the same constructs we use in OpenMP and MPI for numerical computations suffice?
- Performance portability.
  - Also difficult to achieve in the sequential domain:
    - ATLAS/FFTW
    - Manual unrolling of loops in some popular high-performance applications (see the sketch below)
    - Assembly language programming
  - But it is even more difficult in the parallel domain.
  - Language features may help, but the impact of languages on effective compiling is even less well understood.
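As a concrete instance of the hand tuning this slide mentions, here is a hedged sketch of manual loop unrolling applied to a dot product. The unroll factor of 4 is an illustrative assumption; the profitable factor varies from machine to machine, which is precisely the performance-portability problem that systems like ATLAS and FFTW attack by searching for the best variant automatically.

```c
/* Manual unrolling: an illustrative sketch, not from any named application. */
#include <stdio.h>

double dot(const double *x, const double *y, int n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    int i;
    /* Main loop unrolled by 4: independent accumulators expose
     * instruction-level parallelism and reduce loop overhead. */
    for (i = 0; i + 3 < n; i += 4) {
        s0 += x[i]     * y[i];
        s1 += x[i + 1] * y[i + 1];
        s2 += x[i + 2] * y[i + 2];
        s3 += x[i + 3] * y[i + 3];
    }
    double s = s0 + s1 + s2 + s3;
    for (; i < n; i++)              /* cleanup for leftover iterations */
        s += x[i] * y[i];
    return s;
}

int main(void) {
    double x[10], y[10];
    for (int i = 0; i < 10; i++) { x[i] = i; y[i] = 2.0; }
    printf("dot = %f\n", dot(x, y, 10));   /* expect 90.0 */
    return 0;
}
```

The hard-coded factor is the portability hazard: the same source that helps on one machine can hurt on another, which echoes the slide's point that language features alone have not solved this.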

A conjecture
- Parallel programming is, and will forever be, harder than sequential programming.
- Things are much worse now and may become better in the future, but parallel programming will never be as simple as sequential programming.
- Developers are going to avoid parallel programming as long as they can.