Lecture 5: Shared-memory Computing with OpenMP

Presentation transcript:

Lecture 5: Shared-memory Computing with OpenMP

Shared Memory Computing

Getting Started: Example 1 (Hello) #include "stdafx.h"
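A minimal sketch of what Example 1 might look like, assuming the usual OpenMP "hello" pattern (the Hello helper and the command-line thread count are illustrative, not taken from the slide):

   #include "stdafx.h"                 /* Visual Studio precompiled-header stub */
   #include <stdio.h>
   #include <stdlib.h>
   #include <omp.h>

   void Hello(void)                    /* executed by every thread in the team */
   {
      int my_rank = omp_get_thread_num();
      int thread_count = omp_get_num_threads();
      printf("Hello from thread %d of %d\n", my_rank, thread_count);
   }

   int main(int argc, char* argv[])
   {
      int thread_count = strtol(argv[1], NULL, 10);   /* thread count from the command line */

   #pragma omp parallel num_threads(thread_count)
      Hello();                         /* fork: each thread calls Hello; implicit join afterwards */

      return 0;
   }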

Getting Started: Example 1 (Hello)

Set the OpenMP compiler option in the Visual Studio development environment:
1. Open the project's Property Pages dialog box.
2. Expand the Configuration Properties node.
3. Expand the C/C++ node.
4. Select the Language property page.
5. Modify the OpenMP Support property.

Set the arguments passed to main in the Visual Studio development environment (e.g., int main(int argc, char* argv[])):
1. In Project Properties (click Project, or the project name in Solution Explorer), expand Configuration Properties, and then click Debugging.
2. In the pane on the right, in the textbox to the right of Command Arguments, type the arguments to main you want to use. For example: one two
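Outside the IDE the same setting is a compiler switch; for example (assuming the Microsoft and GNU compilers):

   cl /openmp hello.cpp              (Visual C++ command line)
   gcc -fopenmp hello.c -o hello     (GCC)
   hello 4                           (run with "4" passed as argv[1])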

In case the compiler doesn't support OpenMP

   #include <stdio.h>
   #ifdef _OPENMP
   #include <omp.h>
   #endif

In case the compiler doesn't support OpenMP

   #ifdef _OPENMP
      int my_rank = omp_get_thread_num();
      int thread_count = omp_get_num_threads();
   #else
      int my_rank = 0;
      int thread_count = 1;
   #endif

OpenMP: Prevailing Shared Memory Programming Approach

- Model for shared-memory parallel programming
- Portable across shared-memory architectures
- Scalable (on shared-memory platforms)
- Incremental parallelization: parallelize individual computations in a program while leaving the rest of the program sequential
- Compiler based: the compiler generates the thread program and synchronization
- Extensions to existing programming languages (Fortran, C and C++), mainly by directives, plus a few library routines

See openmp.org

OpenMP Execution Model

[Figure: fork-join model: the master thread forks a team of threads at the start of a parallel region and joins them at its end]

OpenMP uses Pragmas

- Pragmas are special preprocessor instructions.
- They are typically added to a system to allow behaviors that aren't part of the basic C specification.
- Compilers that don't support the pragmas ignore them.
- An OpenMP pragma modifies the statement immediately following it; this can be a compound statement such as a loop.

   #pragma omp ...

OpenMP parallel region construct

- Block of code to be executed by multiple threads in parallel
- Each thread executes the same code redundantly (SPMD)
- Work within work-sharing constructs is distributed among the threads in a team

Example with C/C++ syntax:

   #pragma omp parallel [ clause [ clause ] ... ] new-line
      structured-block

clause can include the following:
   private (list)
   shared (list)
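A minimal sketch of a parallel region with explicit data-sharing clauses (the variable names are illustrative, not from the slides):

   #include <stdio.h>
   #include <omp.h>

   int main(void)
   {
      int n = 8;                       /* shared by all threads */
      int id;                          /* made private below    */

   #pragma omp parallel shared(n) private(id)
      {
         id = omp_get_thread_num();    /* each thread gets its own copy of id */
         printf("thread %d of a team working on n = %d\n", id, n);
      }                                /* implicit join/barrier here */

      return 0;
   }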

Programming Model – Data Sharing

- Parallel programs often employ two types of data:
  - Shared data, visible to all threads, similarly named
  - Private data, visible to a single thread (often stack-allocated)
- In OpenMP: shared variables are shared, private variables are private. The default is shared; the loop index is private.

Without OpenMP, sharing is implicit in where a variable lives:

   // shared, globals
   int bigdata[1024];

   void* foo(void* bar) {
      // private, stack
      int tid;

      /* Calculation goes here */
   }

With OpenMP, sharing is declared on the parallel region:

   int bigdata[1024];

   void* foo(void* bar) {
      int tid;

      #pragma omp parallel \
         shared ( bigdata ) \
         private ( tid )
      {
         /* Calc. here */
      }
   }

OpenMP critical directive

- Enclosed code is executed by all threads, but restricted to only one thread at a time.

   #pragma omp critical [ ( name ) ] new-line
      structured-block

- A thread waits at the beginning of a critical region until no other thread in the team is executing a critical region with the same name.
- All unnamed critical directives map to the same unspecified name.
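As an illustration (not from the slides), a shared counter protected by a critical section:

   #include <stdio.h>
   #include <omp.h>

   int main(void)
   {
      int count = 0;                   /* shared across the team */

   #pragma omp parallel num_threads(4)
      {
   #pragma omp critical                /* only one thread at a time executes the update */
         count++;
      }

      printf("count = %d\n", count);   /* prints 4: every increment was serialized safely */
      return 0;
   }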

Example 2: calculate the area under a curve

The trapezoidal rule: serial program and parallel program.
[Figure: the interval is split among the threads; my_rank = 0 and my_rank = 1 each apply the trapezoidal rule to their own sub-interval]
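A sketch of the serial trapezoidal rule, using the usual textbook names (f, a, b, n are the integrand, interval endpoints and trapezoid count; none of these names are taken from the slide):

   double f(double x) { return x * x; }      /* example integrand */

   /* Approximate the integral of f over [a, b] with n trapezoids */
   double Trap(double a, double b, int n)
   {
      double h = (b - a) / n;                /* width of each trapezoid      */
      double approx = (f(a) + f(b)) / 2.0;   /* the endpoints count for half */
      int i;
      for (i = 1; i <= n - 1; i++)
         approx += f(a + i * h);
      return h * approx;
   }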

Example 2: calculate the area under a curve using the critical directive
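The listing itself was shown as an image; a sketch of the parallel version (following the common Trap pattern, with f as in the serial sketch above and <omp.h> assumed): each thread computes a partial sum over its own sub-interval and adds it to the shared total inside a critical section.

   void Trap(double a, double b, int n, double* global_result_p)
   {
      double h, x, my_result, local_a, local_b;
      int i, local_n;
      int my_rank = omp_get_thread_num();
      int thread_count = omp_get_num_threads();

      h = (b - a) / n;                        /* all threads use the same trapezoid width  */
      local_n = n / thread_count;             /* number of trapezoids for this thread      */
      local_a = a + my_rank * local_n * h;    /* left endpoint of this thread's sub-range  */
      local_b = local_a + local_n * h;

      my_result = (f(local_a) + f(local_b)) / 2.0;
      for (i = 1; i <= local_n - 1; i++) {
         x = local_a + i * h;
         my_result += f(x);
      }
      my_result = my_result * h;

   #pragma omp critical                       /* serialize the update of the shared total  */
      *global_result_p += my_result;
   }

   /* called as:
      #pragma omp parallel num_threads(thread_count)
      Trap(a, b, n, &global_result);
   */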

OpenMP Reductions

OpenMP has a reduction operation:

   sum = 0;
   #pragma omp parallel for reduction(+:sum)
   for (i = 0; i < 100; i++) {
      sum += array[i];
   }

Reduce ops and init() values (C and C++):

   +            0
   *            1
   bitwise &    ~0
   bitwise |    0
   bitwise ^    0
   logical &&   1
   logical ||   0

FORTRAN also supports min and max reductions.

Example 2: calculate the area under a curve using the reduction clause (Local_trap)
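A sketch of how Local_trap is typically combined with a reduction clause (the surrounding function is an assumption): Local_trap does the same per-thread work as the critical-section version but returns its partial sum instead of updating the shared total, and reduction(+: global_result) makes OpenMP combine the per-thread copies.

   double Local_trap(double a, double b, int n);   /* per-thread partial sum: body as in the
                                                      critical version, without the final update */

   double area(double a, double b, int n, int thread_count)
   {
      double global_result = 0.0;
   #pragma omp parallel num_threads(thread_count) reduction(+: global_result)
      global_result += Local_trap(a, b, n);
      return global_result;
   }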

Example 2: calculate the area under a curve using a parallel for loop
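A sketch of the same computation written with a parallel for loop, letting OpenMP split the iterations and combine the partial sums (f as in the serial sketch; the function name is illustrative):

   double trap_for(double a, double b, int n, int thread_count)
   {
      double h = (b - a) / n;
      double approx = (f(a) + f(b)) / 2.0;
      int i;

   #pragma omp parallel for num_threads(thread_count) reduction(+: approx)
      for (i = 1; i <= n - 1; i++)            /* iterations are divided among the threads */
         approx += f(a + i * h);              /* each thread accumulates a private copy   */

      return h * approx;
   }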

Example 3: calculate π

   #include "stdafx.h"
   #ifdef _OPENMP
   #include <omp.h>
   #endif
   #include <stdio.h>
   #include <stdlib.h>
   #include <windows.h>     /* DWORD, GetTickCount */

   void PI(double a, double b, int numIntervals, double* global_result_p);   /* defined below */

   int main(int argc, char* argv[])
   {
      double global_result = 0.0;
      volatile DWORD dwStart;
      int n = 100000000;    /* number of intervals (illustrative value) */
      printf("numberInterval %d \n", n);
      int numThreads = strtol(argv[1], NULL, 10);
      dwStart = GetTickCount();
   # pragma omp parallel num_threads(numThreads)
      PI(0, 1, n, &global_result);
      printf("number of threads %d \n", numThreads);
      printf("Pi = %f \n", global_result);
      printf_s("milliseconds %d \n", GetTickCount() - dwStart);
   }

Example 3: calculate π

   void PI(double a, double b, int numIntervals, double* global_result_p)
   {
      int i;
      double x, my_result, sum = 0.0, interval, local_a, local_b, local_numIntervals;
      int myThread = omp_get_thread_num();
      int numThreads = omp_get_num_threads();

      interval = (b - a) / (double) numIntervals;
      local_numIntervals = numIntervals / numThreads;          /* intervals per thread      */
      local_a = a + myThread * local_numIntervals * interval;  /* this thread's sub-range   */
      local_b = local_a + local_numIntervals * interval;

      sum = 0.0;
      for (i = 0; i < local_numIntervals; i++) {
         x = local_a + i * interval;
         sum += 4.0 / (1.0 + x*x);     /* pi = integral of 4/(1+x^2) over [0,1] */
      }
      my_result = interval * sum;

   # pragma omp critical
      *global_result_p += my_result;
   }

A Programmer's View of OpenMP

- OpenMP is a portable, threaded, shared-memory programming specification with "light" syntax
  - Exact behavior depends on the OpenMP implementation!
  - Requires compiler support (C/C++ or Fortran)
- OpenMP will:
  - Allow a programmer to separate a program into serial regions and parallel regions, rather than concurrently-executing threads
  - Hide stack management
  - Provide synchronization constructs
- OpenMP will not:
  - Parallelize automatically
  - Guarantee speedup
  - Provide freedom from data races

OpenMP runtime library: query functions

- omp_get_num_threads: returns the number of threads currently in the team executing the parallel region from which it is called.
     int omp_get_num_threads(void);
- omp_get_thread_num: returns the thread number within the team, which lies between 0 and omp_get_num_threads() - 1, inclusive. The master thread of the team is thread 0.
     int omp_get_thread_num(void);
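A small illustration (not from the slides) of both query functions inside and outside a parallel region:

   #include <stdio.h>
   #include <omp.h>

   int main(void)
   {
      /* Outside a parallel region the team has a single thread */
      printf("serial part: %d thread(s)\n", omp_get_num_threads());   /* prints 1 */

   #pragma omp parallel num_threads(3)
      printf("thread %d of %d\n", omp_get_thread_num(), omp_get_num_threads());

      return 0;
   }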

Impact of Scheduling Decisions

- Load balance
  - Same work in each iteration?
  - Processors working at the same speed?
- Scheduling overhead
  - Static decisions are cheap because they require no run-time coordination
  - Dynamic decisions have overhead that depends on the complexity and frequency of the decisions
- Data locality
  - Particularly within cache lines for small chunk sizes
  - Also impacts data reuse on the same processor
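OpenMP exposes this trade-off through the schedule clause on loop constructs; a sketch under assumed names (do_work, a, n and the chunk size of 16 are illustrative):

   #include <omp.h>

   double do_work(int i) { return i * 0.5; }   /* stand-in for a variable-cost iteration */

   void run(double* a, int n)
   {
      /* Static schedule: iterations divided up front in chunks of 16.
         No run-time coordination and good locality, but can load-imbalance
         if iterations vary in cost. */
   #pragma omp parallel for schedule(static, 16)
      for (int i = 0; i < n; i++)
         a[i] = do_work(i);

      /* Dynamic schedule: threads grab chunks of 16 at run time.
         Balances uneven work at the price of scheduling overhead
         and poorer locality. */
   #pragma omp parallel for schedule(dynamic, 16)
      for (int i = 0; i < n; i++)
         a[i] = do_work(i);
   }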

Summary of Lecture

- OpenMP, data-parallel constructs only; task-parallel constructs come later
- What's good?
  - Small changes are required to produce a parallel program from a sequential one (parallel formulation)
  - Avoids having to express low-level mapping details
  - Portable and scalable, correct on 1 processor
- What is missing?
  - Not completely natural if you want to write a parallel code from scratch
  - Not always possible to express certain common parallel constructs
  - Locality management
  - Control of performance

Exercise

(1) Read the example programs above, then compile and run them.
(2) Revise the programs in (1) using the reduction clause and using a parallel for loop, respectively.
(3) For each value of N obtained in (2), measure the running time when the number of threads is 1, 2, 4, 8, 16, respectively; compute the speed-up rate (notice that the ideal rate is the number of threads), investigate how the speed-up rate changes, and discuss the reasons.
(4) Suppose matrix A and matrix B are stored in two-dimensional arrays. Write OpenMP programs for A+B and A*B, respectively. Run them using different numbers of threads, and find the speed-up rate.