Shared-Memory Programming


Chapter 17: Shared-Memory Programming

Introduction

OpenMP is an application programming interface (API) for parallel programming on multiprocessors. It consists of a set of compiler directives and a library of support functions.

OpenMP uses fork/join parallelism: a master thread forks a team of threads at the start of each parallel region, and the threads join back into one at its end.

Incremental parallelization: the process of transforming a sequential program into a parallel program one block of code at a time.
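A minimal sketch of the fork/join model (the program is an illustration, not from the chapter): the master thread forks a team at the parallel pragma, each thread prints its ID, and the team joins at the closing brace. With gcc, OpenMP programs are typically compiled with the -fopenmp flag.

#include <stdio.h>
#include <omp.h>

int main (void)
{
   /* fork: the whole team executes this block in parallel */
   #pragma omp parallel
   {
      printf ("Hello from thread %d of %d\n",
              omp_get_thread_num(), omp_get_num_threads());
   }   /* join: implicit barrier; only the master continues */
   return 0;
}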

OpenMP compiler directives:

parallel
for
parallel for
sections
parallel sections
critical
single

OpenMP functions:

int omp_get_num_procs (void)
int omp_get_num_threads (void)
int omp_get_thread_num (void)
void omp_set_num_threads (int t)
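A short sketch (my own illustration) combining these functions: query the processor count, set the team size to match, and report from the master thread:

#include <stdio.h>
#include <omp.h>

int main (void)
{
   int p = omp_get_num_procs();   /* processors available */
   omp_set_num_threads (p);       /* request one thread per processor */

   #pragma omp parallel
   {
      if (omp_get_thread_num() == 0)   /* master thread only */
         printf ("Team of %d threads on %d processors\n",
                 omp_get_num_threads(), p);
   }
   return 0;
}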

Parallel for Loops

e.g.

for (i = first; i < size; i += prime)
   marked[i] = 1;

The iterations of this loop are independent of one another, which makes it a candidate for parallel execution.

parallel for Pragma

Pragma: a compiler directive in C or C++ is called a pragma, short for "pragmatic information".

Syntax: #pragma omp <rest of pragma>

e.g.

#pragma omp parallel for
for (i = first; i < size; i += prime)
   marked[i] = 1;
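A self-contained version of the marking loop (the array size and the values of first and prime are illustrative assumptions):

#include <stdio.h>

#define SIZE 1000

int main (void)
{
   char marked[SIZE] = {0};
   int first = 4, prime = 2;   /* illustrative values */
   int i;

   /* the iterations are independent, so the compiler may
      divide them among the threads in any order */
   #pragma omp parallel for
   for (i = first; i < SIZE; i += prime)
      marked[i] = 1;

   printf ("marked[10] = %d\n", marked[10]);
   return 0;
}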

Execution Context

Every thread has its own execution context: an address space containing all the variables the thread may access. The execution context includes static variables, dynamically allocated data structures in the heap, and variables on the run-time stack.

Shared variable: has the same address in the execution context of every thread.
Private variable: has a different address in the execution context of each thread; a thread cannot access the private variables of other threads.
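A small sketch (my own example) contrasting the two kinds of variables: total is shared by every thread, while id, declared inside the parallel block, is private to each:

#include <stdio.h>
#include <omp.h>

int main (void)
{
   int total = 0;   /* shared: one copy visible to all threads */

   #pragma omp parallel
   {
      int id = omp_get_thread_num();   /* private: one copy per thread */
      #pragma omp critical
      total += id;   /* updating a shared variable needs protection */
   }
   printf ("total = %d\n", total);
   return 0;
}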

Declaring Private Variables

Syntax: private (<variable list>)

e.g.

#pragma omp parallel for private(j)
for (i = 0; i <= BLOCK_SIZE(id,p,n); i++)
   for (j = 0; j < n; j++)
      a[i][j] = MIN (a[i][j], a[i][k] + tmp[j]);

Without private(j), all threads would share a single copy of j and race on it.
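A compact runnable sketch (array dimensions are illustrative) of the same idea: the outer index i is private automatically because it is the parallel loop index, but the inner index j must be declared private explicitly:

#include <stdio.h>

#define M 4
#define N 5

int main (void)
{
   double a[M][N];
   int i, j;

   /* each thread needs its own inner-loop counter */
   #pragma omp parallel for private(j)
   for (i = 0; i < M; i++)
      for (j = 0; j < N; j++)
         a[i][j] = i + 0.1 * j;

   printf ("a[%d][%d] = %.1f\n", M-1, N-1, a[M-1][N-1]);
   return 0;
}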

firstprivate Clause

The firstprivate clause makes a variable private and initializes each thread's copy with the value the variable had just before the loop.

x[0] = complex_function();
#pragma omp parallel for private(j) firstprivate(x)
for (i = 0; i < n; i++) {
   for (j = 1; j < 4; j++)
      x[j] = g(i, x[j-1]);
   answer[i] = x[1] - x[3];
}

lastprivate Clause

The lastprivate clause makes a variable private and copies the value it holds in the sequentially last iteration back to the master thread's copy when the loop finishes.

#pragma omp parallel for private(j) lastprivate(x)
for (i = 0; i < n; i++) {
   x[0] = 1.0;
   for (j = 1; j < 4; j++)
      x[j] = x[j-1] * (i + 1);
   sum_of_powers[i] = x[0] + x[1] + x[2] + x[3];
}
n_cubed = x[3];

Critical Sections

e.g. computing pi by numerical integration. The unsynchronized updates of the shared variable area form a race condition:

#pragma omp parallel for private(x)
for (i = 0; i < n; i++) {
   x = (i + 0.5)/n;
   area += 4.0/(1.0 + x*x);   /* race condition! */
}
pi = area/n;

The critical pragma forces the threads to execute the marked statement one at a time, eliminating the race at the cost of serializing the updates:

#pragma omp parallel for private(x)
for (i = 0; i < n; i++) {
   x = (i + 0.5)/n;
   #pragma omp critical
   area += 4.0/(1.0 + x*x);
}
pi = area/n;

Reductions

Syntax: reduction (<op> : <variable>)

A reduction gives each thread a private copy of the variable, then combines the copies with the given operator when the loop completes, avoiding both the race and the serialization of a critical section:

#pragma omp parallel for private(x) reduction(+:area)
for (i = 0; i < n; i++) {
   x = (i + 0.5)/n;
   area += 4.0/(1.0 + x*x);
}
pi = area/n;
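Putting the pieces together, a complete runnable version of the pi program (the value of n is an illustrative choice):

#include <stdio.h>

int main (void)
{
   const int n = 1000000;   /* number of rectangles (illustrative) */
   double area = 0.0, x, pi;
   int i;

   /* each thread accumulates into a private copy of area;
      the copies are summed with + when the loop completes */
   #pragma omp parallel for private(x) reduction(+:area)
   for (i = 0; i < n; i++) {
      x = (i + 0.5)/n;
      area += 4.0/(1.0 + x*x);
   }
   pi = area/n;

   printf ("pi is approximately %.8f\n", pi);
   return 0;
}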

Performance Improvement: Inverting Loops

e.g. the i loop below carries a dependence (row i is computed from row i-1), so its iterations cannot run in parallel:

for (i = 1; i < m; i++)
   for (j = 0; j < n; j++)
      a[i][j] = 2 * a[i-1][j];

The j iterations are independent, so inverting the two loops allows a single parallel for on the new outer loop:

#pragma omp parallel for private(i)
for (j = 0; j < n; j++)
   for (i = 1; i < m; i++)
      a[i][j] = 2 * a[i-1][j];

Conditionally Executing Loops

The if clause parallelizes the loop only when the condition holds, so that small problem sizes are not burdened with thread-creation overhead:

#pragma omp parallel for private(x) reduction(+:area) if (n > 5000)
for (i = 0; i < n; i++) {
   x = (i + 0.5)/n;
   area += 4.0/(1.0 + x*x);
}
pi = area/n;

Scheduling Loops

Syntax: schedule (<type> [, <chunk>])

schedule(static): a static allocation of about n/t contiguous iterations to each thread.
schedule(static, C): an interleaved allocation of chunks to threads; each chunk contains C contiguous iterations.

schedule(dynamic): iterations are dynamically allocated, one at a time, to threads.
schedule(dynamic, C): a dynamic allocation of C iterations at a time to the threads.
schedule(guided, C): a dynamic allocation of iterations to threads using the guided self-scheduling heuristic. Guided self-scheduling begins by allocating a large chunk to each thread and responds to further requests with chunks of exponentially decreasing size, down to a minimum chunk size of C.

schedule(guided): guided self-scheduling with a minimum chunk size of 1.
schedule(runtime): the schedule type is chosen at run time from the value of the environment variable OMP_SCHEDULE.

e.g. setenv OMP_SCHEDULE "static,1" would set the run-time schedule to an interleaved allocation.
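As a sketch of when a non-static schedule pays off (the loop body and chunk size are illustrative assumptions), a dynamic schedule balances load when iteration costs vary widely:

#include <stdio.h>
#include <math.h>

#define N 10000

int main (void)
{
   double result[N];
   int i;

   /* iterations do unequal amounts of work, so hand them out
      in chunks of 10 as threads become idle */
   #pragma omp parallel for schedule(dynamic, 10)
   for (i = 0; i < N; i++) {
      double s = 0.0;
      int k;
      for (k = 0; k <= i % 100; k++)   /* uneven work per iteration */
         s += sin ((double) k);
      result[i] = s;
   }

   printf ("result[N-1] = %f\n", result[N-1]);
   return 0;
}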

More General Data Parallelism

parallel Pragma

The parallel pragma by itself creates a team of threads that all execute the following block; unlike parallel for, it does not divide any work among them.

for Pragma

Inside an existing parallel region, the for pragma divides the iterations of the next loop among the team of threads without forking a new team.

single Pragma

The single pragma marks a block inside a parallel region that should be executed by exactly one thread, such as a printf.

nowait Clause

By default, threads wait at an implicit barrier at the end of a work-sharing construct; the nowait clause removes that barrier so threads may continue as soon as their share of the work is done. The four pragmas are combined in the sketch below.
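A sketch combining the four constructs (the arrays and their contents are illustrative): the team is forked once, the two independent loops are each divided among the team, nowait removes the barrier after the first loop, and single has exactly one thread print a message once both loops are done:

#include <stdio.h>

#define N 1000

int main (void)
{
   double a[N], b[N];
   int i;

   #pragma omp parallel private(i)   /* fork the team once */
   {
      #pragma omp for nowait          /* no barrier after this loop */
      for (i = 0; i < N; i++)
         a[i] = i * 0.5;

      #pragma omp for                 /* independent of the first loop */
      for (i = 0; i < N; i++)
         b[i] = i * 2.0;

      #pragma omp single              /* exactly one thread executes this */
      printf ("both loops done\n");
   }                                   /* join */

   printf ("a[9] = %.1f  b[9] = %.1f\n", a[9], b[9]);
   return 0;
}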

Functional Parallelism

e.g. alpha, beta, and delta may execute concurrently; gamma must wait for alpha and beta, and epsilon must wait for gamma and delta:

v = alpha();
w = beta();
x = gamma (v, w);
y = delta ();
printf ("%6.2f\n", epsilon(x,y));

One approach puts the three independent calls in a parallel sections construct:

#pragma omp parallel sections
{
   #pragma omp section   /* this pragma optional */
   v = alpha();
   #pragma omp section
   w = beta();
   #pragma omp section
   y = delta();
}
x = gamma (v, w);
printf ("%6.2f\n", epsilon(x,y));

Another approach uses a single parallel region containing two sections constructs, so alpha and beta run concurrently, and then gamma and delta run concurrently:

#pragma omp parallel
{
   #pragma omp sections
   {
      v = alpha();
      #pragma omp section
      w = beta();
   }
   #pragma omp sections
   {
      x = gamma (v, w);
      #pragma omp section
      y = delta();
   }
}
printf ("%6.2f\n", epsilon(x,y));

This version forks the team of threads only once and also exploits the parallelism between gamma and delta.