Presentation transcript:


3 [Figure: the fork-join model — a parent thread forks a team of child threads at the start of a parallel region and joins them at the end; total time is compute time plus fork/join overhead]
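A minimal fork-join sketch (added for illustration, not from the original slides); only standard OpenMP calls are used:

#include <stdio.h>
#include <omp.h>

int main(void) {
    printf("before: one parent thread\n");     // serial part runs on the parent thread

    #pragma omp parallel                        // fork: a team of child threads is created
    {
        int id = omp_get_thread_num();          // each thread does its share of the compute
        printf("hello from thread %d\n", id);
    }                                           // join: implicit barrier, team disbands

    printf("after: back to the parent thread\n");
    return 0;
}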


5 [Figure: two consecutive fork-join regions; threads that finish early sit idle at the join until the slowest thread arrives]


8 The current OpenMP specification is a single combined C/C++ and Fortran document.

9 [Figure: parallel regions — the master thread runs the serial parts and forks a team of threads at each parallel region, joining them at the end of the region]
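An illustrative sketch (added here, using only standard OpenMP API calls): the master thread executes the serial parts between regions, and thread 0 of each team is the master.

#include <stdio.h>
#include <omp.h>

int main(void) {
    // First parallel region: every thread in the team runs this block.
    #pragma omp parallel
    {
        if (omp_get_thread_num() == 0)          // thread 0 is the master thread
            printf("region 1: team of %d threads\n", omp_get_num_threads());
    }

    // Serial code between regions runs on the master thread only.

    // Second parallel region: a team is forked again.
    #pragma omp parallel
    printf("region 2: thread %d\n", omp_get_thread_num());

    return 0;
}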


14 Automatically divides work among threads

15 // assume N = 12
#pragma omp parallel
#pragma omp for
for (i = 1; i < N+1; i++)
    c[i] = a[i] + b[i];
[Figure: the 12 iterations i = 1 … 12 are divided among the threads of the team; an implicit barrier follows the work-shared loop]

16 These two forms are equivalent:

#pragma omp parallel
{
    #pragma omp for
    for (i = 0; i < MAX; i++) {
        res[i] = huge();
    }
}

#pragma omp parallel for
for (i = 0; i < MAX; i++) {
    res[i] = huge();
}

17 void* work(float* c, int N)
{
    float x, y;
    int i;
    #pragma omp parallel for private(x,y)
    for (i = 0; i < N; i++) {
        x = a[i];          // a and b are assumed to be globals
        y = b[i];
        c[i] = x + y;
    }
}


19 #pragma omp parallel for schedule(static, 8)
for (int i = start; i <= end; i += 2) {
    if (TestForPrime(i))
        gPrimesFound++;
}
Iterations are divided into chunks of 8.
If start = 3, then the first chunk is i = {3, 5, 7, 9, 11, 13, 15, 17}.


22 Which variables are shared and which are private?

float A[10];
main() {
    int index[10];
    #pragma omp parallel
    {
        Work(index);
    }
    printf("%d\n", index[1]);
}

extern float A[10];
void Work(int *index) {
    float temp[10];
    static int count;
    ...
}

A, index, and count are shared by all threads, but temp is local to each thread.
[Figure: each thread gets its own copy of temp; A, index, and count live in shared memory]

23 What is wrong here?

float dot_prod(float* a, float* b, int N)
{
    float sum = 0.0;
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        sum += a[i] * b[i];
    }
    return sum;
}


25 [Figure: two interleavings of Thread A and Thread B updating the shared variable area, producing different final values]
The order of thread execution causes non-deterministic behavior in a data race.

26 float dot_prod(float* a, float* b, int N)
{
    float sum = 0.0;
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        #pragma omp critical
        sum += a[i] * b[i];
    }
    return sum;
}

27 float RES;
#pragma omp parallel
{
    float B;
    #pragma omp for
    for (int i = 0; i < niters; i++) {
        B = big_job(i);
        #pragma omp critical (RES_lock)
        consum(B, RES);
    }
}
Threads wait their turn: only one at a time calls consum(), thereby protecting RES from race conditions.
Naming the critical construct (RES_lock) is optional.
Good practice: name all critical sections.


29 #pragma omp parallel for reduction(+:sum)
for (i = 0; i < N; i++) {
    sum += a[i] * b[i];
}

30 Reduction operands and their initial values:

Operand   Initial value
+         0
*         1
-         0
^         0
&         ~0
|         0
&&        1
||        0
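As an illustration (a sketch added here, not from the slides; the array a and the predicate are made up), a logical-AND reduction starts each thread's private copy at 1, the identity value from the table above:

int all_positive = 1;   // each thread's private copy is initialized to 1 (the && identity)
#pragma omp parallel for reduction(&&:all_positive)
for (int i = 0; i < N; i++) {
    all_positive = all_positive && (a[i] > 0.0f);
}
// after the join, the per-thread results are combined with &&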

31 Computing pi by numerical integration: f(x) = 4.0 / (1 + x²), and ∫₀¹ 4.0 / (1 + x²) dx = π.

static long num_steps = 100000;
double step, pi;

void main()
{
    int i;
    double x, sum = 0.0;
    step = 1.0 / (double) num_steps;
    for (i = 0; i < num_steps; i++) {
        x = (i + 0.5) * step;
        sum = sum + 4.0 / (1.0 + x*x);
    }
    pi = step * sum;
    printf("Pi = %f\n", pi);
}
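One common way to parallelize this loop (a sketch added here, not necessarily what the original slides showed) is a work-shared loop with a reduction on sum, keeping x private to each thread:

#include <stdio.h>
#include <omp.h>

static long num_steps = 100000;

int main(void)
{
    double x, sum = 0.0, step = 1.0 / (double) num_steps;

    #pragma omp parallel for private(x) reduction(+:sum)
    for (int i = 0; i < num_steps; i++) {
        x = (i + 0.5) * step;           // midpoint of the i-th strip
        sum += 4.0 / (1.0 + x * x);     // each thread accumulates into its own copy of sum
    }

    printf("Pi = %f\n", step * sum);    // combine: pi ≈ step * sum
    return 0;
}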


33 #pragma omp parallel
{
    DoManyThings();
    #pragma omp single
    {
        ExchangeBoundaries();
    }                        // threads wait here for single
    DoManyMoreThings();
}

34 #pragma omp parallel
{
    DoManyThings();
    #pragma omp master       // if not master, skip to next stmt
    {
        ExchangeBoundaries();
    }
    DoManyMoreThings();
}


36 #pragma omp single nowait
{ [...] }

#pragma omp for nowait
for (...) { ... };

#pragma omp for schedule(dynamic,1) nowait
for (int i = 0; i < n; i++)
    a[i] = bigFunc1(i);

#pragma omp for schedule(dynamic,1)
for (int j = 0; j < m; j++)
    b[j] = bigFunc2(j);

37 #pragma omp parallel shared(A, B, C)
{
    DoSomeWork(A, B);
    printf("Processed A into B\n");
    #pragma omp barrier
    DoSomeWork(B, C);
    printf("Processed B into C\n");
}

38 #pragma omp parallel for shared(x, y, index, n)
for (i = 0; i < n; i++) {
    #pragma omp atomic
    x[index[i]] += work1(i);
    y[i] += work2(i);
}

39 a = alice();
b = bob();
s = boss(a, b);
c = cy();
printf("%6.2f\n", bigboss(s, c));

alice, bob, and cy can be computed in parallel.
[Figure: dependence graph — alice and bob feed boss; boss and cy feed bigboss]


41 #pragma omp parallel sections
{
    #pragma omp section   /* Optional */
    a = alice();
    #pragma omp section
    b = bob();
    #pragma omp section
    c = cy();
}
s = boss(a, b);
printf("%6.2f\n", bigboss(s, c));

42 [Figure: serial vs. parallel execution timeline]
#pragma omp parallel sections
{
    #pragma omp section
    phase1();
    #pragma omp section
    phase2();
    #pragma omp section
    phase3();
}


44 [Figure: serial vs. parallel execution timeline]

45 #pragma omp parallel          // assume 8 threads; a pool of 8 threads is created here
{
    #pragma omp single private(p)
    {
        …
        while (p) {
            #pragma omp task
            {
                processwork(p);
            }
            p = p->next;
        }
    }
}
One thread gets to execute the while loop; that single "while loop" thread creates a task for each instance of processwork().

46 #pragma omp parallel
{
    #pragma omp single
    {
        // block 1
        node * p = head;
        while (p) {
            // block 2
            #pragma omp task
            process(p);
            p = p->next;    // block 3
        }
    }
}

47 Tasks have the potential to parallelize irregular patterns and recursive function calls (as in the list-traversal code on the previous slide).
[Figure: single-threaded execution runs block 1, the three block-2 tasks, and block 3 in sequence; with four threads, the block-2 tasks run concurrently on other threads while the single thread continues, saving time]

48 Tasks are guaranteed to be complete:
- at thread or task barriers
- at the directive: #pragma omp barrier
- at the directive: #pragma omp taskwait

49 #pragma omp parallel
{
    #pragma omp task
    foo();                  // multiple foo tasks created here – one for each thread
    #pragma omp barrier     // all foo tasks guaranteed to be completed here
    #pragma omp single
    {
        #pragma omp task
        bar();              // one bar task created here
    }                       // bar task guaranteed to be completed here
}


52 while (p != NULL) {
    do_work(p->data);
    p = p->next;
}
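This serial traversal can be parallelized with tasks in the same style as the earlier task slides; a minimal sketch (added here, assuming a node type with data and next fields):

#pragma omp parallel
{
    #pragma omp single                          // one thread walks the list
    {
        for (node *p = head; p != NULL; p = p->next) {
            #pragma omp task firstprivate(p)    // each node becomes a task with its own copy of p
            do_work(p->data);
        }
    }                                           // implicit barrier: all tasks complete before the team joins
}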

