CS4961 Parallel Programming
Lecture 9: Task Parallelism in OpenMP
Mary Hall, September 22, 2011

Administrative
- Homework 3 will be posted later today and is due Thursday before class
  - We'll go over the assignment on Tuesday
- Preview of Homework (4 problems)
  - Determine dependences in a set of computations
  - Describe safety requirements of reordering transformations
  - Use loop transformations to improve locality of simple examples
  - Use a collection of loop transformations to improve locality of a complex example
- Experiment: break at the halfway point

Today's Lecture
- Go over programming assignment
- A few OpenMP constructs we haven't discussed
- Discussion of task parallelism in OpenMP 2.x and 3.0
Sources for this lecture:
- Textbook, Chapter 5.8
- OpenMP Tutorial by Ruud van der Pas
- Recent OpenMP tutorial from SC08 (SC08.pdf)
- OpenMP 3.0 specification (May 2008): documents/spec30.pdf

OpenMP Data Parallel Summary
Work sharing
- parallel, parallel for, TBD
- scheduling clauses: schedule(static, chunk), schedule(dynamic), schedule(guided)
Data sharing
- shared, private, reduction
Environment variables
- OMP_NUM_THREADS, OMP_DYNAMIC, OMP_NESTED, OMP_SCHEDULE
Library routines
- e.g., omp_get_num_threads(), omp_get_thread_num()
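To make the summary concrete, here is a small example (mine, not from the slides) combining a schedule clause, a reduction, and the library calls listed above:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        int i, n = 1000;
        double sum = 0.0;

        /* Threads grab chunks of 100 iterations dynamically; the
           reduction clause combines the per-thread partial sums. */
        #pragma omp parallel for schedule(dynamic, 100) reduction(+:sum)
        for (i = 0; i < n; i++)
            sum += 0.5 * i;

        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }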

Conditional Parallelization
if (scalar expression)
- Only execute in parallel if the expression evaluates to true
- Otherwise, execute serially

    #pragma omp parallel if (n > threshold) \
            shared(n,x,y) private(i)
    {
    #pragma omp for
        for (i=0; i<n; i++)
            x[i] += y[i];
    }  /*-- End of parallel region --*/

Review: parallel, for, private, shared

SINGLE and MASTER constructs
single: only one thread in the team executes the enclosed code
- Useful for things like I/O or initialization
- Implicit barrier on exit (unless nowait is specified), but not on entry
master: similarly, only the master thread executes the enclosed code
- No implicit barrier on entry or exit

    #pragma omp single
    { ... }

    #pragma omp master
    { ... }
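A minimal illustration (assumed, not from the slides) of single used for one-time initialization inside a parallel region:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        int data = 0;

        #pragma omp parallel shared(data)
        {
            /* Exactly one thread runs this block; the implicit barrier
               at the end of single means every thread sees data == 42
               in the statement that follows. */
            #pragma omp single
            {
                data = 42;
                printf("initialized by thread %d\n", omp_get_thread_num());
            }
            printf("thread %d sees data = %d\n", omp_get_thread_num(), data);
        }
        return 0;
    }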

Also, more control on synchronization

    #pragma omp parallel shared(A,B,C) private(tid)  /* private(tid) added: a shared tid would race */
    {
        tid = omp_get_thread_num();
        A[tid] = big_calc1(tid);
    #pragma omp barrier        /* wait until every A[tid] is written */
    #pragma omp for
        for (i=0; i<N; i++)
            C[i] = big_calc2(tid);
    #pragma omp for nowait     /* no barrier at the end of this loop */
        for (i=0; i<N; i++)
            B[i] = big_calc3(tid);
        A[tid] = big_calc4(tid);
    }

General: Task Parallelism
Recall definition:
- A task parallel computation is one in which parallelism is applied by performing distinct computations (tasks) at the same time. Since the number of tasks is fixed, the parallelism is not scalable.
OpenMP support for task parallelism
- Parallel sections: different threads execute different code
- Tasks (NEW): tasks are created and executed at separate times
Common use of task parallelism = Producer/consumer
- A producer creates work that is then processed by the consumer
- You can think of this as a form of pipelining, as in an assembly line
- The "producer" writes data to a FIFO queue (or similar) to be accessed by the "consumer"

Simple: OpenMP sections directive

    #pragma omp parallel
    {
    #pragma omp sections
        {
    #pragma omp section
            { a=...; b=...; }
    #pragma omp section
            { c=...; d=...; }
    #pragma omp section
            { e=...; f=...; }
    #pragma omp section
            { g=...; h=...; }
        } /*omp end sections*/
    } /*omp end parallel*/

Parallel Sections, Example

    #pragma omp parallel shared(n,a,b,c,d) private(i)
    {
    #pragma omp sections nowait
        {
    #pragma omp section
            for (i=0; i<n; i++)
                d[i] = 1.0/c[i];
    #pragma omp section
            for (i=0; i<n-1; i++)
                b[i] = (a[i] + a[i+1])/2;
        } /*-- End of sections --*/
    } /*-- End of parallel region --*/

Tasks in OpenMP 3.0
A task has
- Code to execute
- A data environment (shared, private, reduction)
- An assigned thread that executes the code and uses the data
Two activities: packaging and execution
- Each encountering thread packages a new instance of a task
- Some thread in the team executes the task at a later time
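A minimal sketch (mine, not from the slides) of the packaging/execution split: the thread that encounters the task directive packages the work, and any thread in the team may execute it later:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        #pragma omp parallel
        {
    #pragma omp single
            {
                int i;
                for (i = 0; i < 4; i++) {
                    /* firstprivate captures i's value at creation time */
                    #pragma omp task firstprivate(i)
                    printf("task %d executed by thread %d\n",
                           i, omp_get_thread_num());
                }
            } /* tasks are guaranteed complete at the implicit barrier */
        }
        return 0;
    }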

Simple Producer-Consumer Example

    #include <stdlib.h>   /* rand(), RAND_MAX */

    // PRODUCER: initialize A with random data
    void fill_rand(int nval, double *A) {
        int i;
        for (i=0; i<nval; i++)
            A[i] = (double) rand() / RAND_MAX;  /* divisor truncated in transcript; RAND_MAX assumed */
    }

    // CONSUMER: Sum the data in A
    double Sum_array(int nval, double *A) {
        int i;
        double sum = 0.0;
        for (i=0; i<nval; i++)
            sum = sum + A[i];
        return sum;
    }

Key Issues in Producer-Consumer Parallelism
- Producer needs to tell consumer that the data is ready
- Consumer needs to wait until data is ready
- Producer and consumer need a way to communicate data
  - output of producer is input to consumer
- Producer and consumer often communicate through a first-in-first-out (FIFO) queue
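For concreteness, a minimal single-producer/single-consumer FIFO might look like the following (a hypothetical sketch, not code from the slides; synchronization is deliberately left to the constructs discussed next):

    #include <stdlib.h>

    /* Fixed-capacity FIFO: head is the next slot to dequeue,
       tail the next slot to enqueue; both only ever increase. */
    typedef struct {
        double *buf;
        int capacity;
        int head;
        int tail;
    } fifo_t;

    void fifo_init(fifo_t *q, int capacity) {
        q->buf = malloc(capacity * sizeof(double));
        q->capacity = capacity;
        q->head = q->tail = 0;
    }

    int fifo_enqueue(fifo_t *q, double v) {   /* returns 0 if full */
        if (q->tail - q->head == q->capacity) return 0;
        q->buf[q->tail % q->capacity] = v;
        q->tail++;
        return 1;
    }

    int fifo_dequeue(fifo_t *q, double *v) {  /* returns 0 if empty */
        if (q->tail == q->head) return 0;
        *v = q->buf[q->head % q->capacity];
        q->head++;
        return 1;
    }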

One Solution to Read/Write a FIFO
- The FIFO is in global memory and is shared between the parallel threads
- How do you make sure the data is updated?
- Need a construct to guarantee a consistent view of memory
  - Flush: make sure data is written all the way back to global memory

Example:

    double A;
    A = compute();
    #pragma omp flush(A)   /* make the new value of A visible to other threads */

Solution to Producer/Consumer

    flag = 0;
    #pragma omp parallel sections
    {
    #pragma omp section
        {   /* producer */
            fill_rand(N, A);
    #pragma omp flush
            flag = 1;
    #pragma omp flush(flag)   /* signal the consumer that A is ready */
        }
    #pragma omp section
        {   /* consumer */
            while (!flag) {
    #pragma omp flush(flag)   /* keep re-reading flag from memory */
            }
    #pragma omp flush         /* get a consistent view of A */
            sum = Sum_array(N, A);
        }
    }

Another Example from Textbook
Implement Message-Passing on a Shared-Memory System
- A FIFO queue holds messages
- A thread has explicit functions to Send and Receive
  - Send a message by enqueuing on a queue in shared memory
  - Receive a message by grabbing from the queue
  - Ensure safe access

Message-Passing
(Code figure from the textbook; not captured in the transcript.)

Sending Messages
(Textbook's Send_msg code figure; not captured in the transcript.)
- Use synchronization mechanisms to update the FIFO
- "Flush" happens implicitly
- What is the implementation of Enqueue?
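A sketch of what the send side might look like, following the textbook's critical-section approach and reusing the hypothetical fifo_t above (names and signatures are my assumptions, not the textbook's exact code):

    /* Each thread owns one queue; anyone may enqueue to it.
       The critical section serializes concurrent enqueues and
       implies the flush mentioned above. */
    void Send_msg(fifo_t *msg_queues[], int dest, double mesg) {
        #pragma omp critical
        fifo_enqueue(msg_queues[dest], mesg);
    }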

Receiving Messages
(Textbook's Try_receive code figure; not captured in the transcript.)
- This thread is the only one to dequeue its messages; other threads may only add more messages
- Messages are added to the end and removed from the front
- Therefore, synchronization is needed only if we are on the last entry
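A sketch of the receive logic under the same assumptions: only when a single entry remains can an enqueue and the dequeue collide, so only that case pays for synchronization:

    /* Hypothetical sketch, continuing the fifo_t example above. */
    void Try_receive(fifo_t *my_queue, double *mesg) {
        int queue_size = my_queue->tail - my_queue->head;
        if (queue_size == 0) {
            return;                        /* nothing to receive */
        } else if (queue_size == 1) {
            /* the producer may be touching the same entry */
            #pragma omp critical
            fifo_dequeue(my_queue, mesg);
        } else {
            fifo_dequeue(my_queue, mesg);  /* safe without locking */
        }
    }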

Termination Detection
(Textbook's Done code figure; not captured in the transcript.)
- done_sending: each thread increments this after completing its for loop
- More synchronization needed on "done_sending"
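A sketch of the termination test under the same assumptions: a receiver is done only when its queue is empty and every thread has finished sending (done_sending is a shared counter, updated under synchronization as noted above):

    int Done(fifo_t *my_queue, int done_sending, int thread_count) {
        int queue_size = my_queue->tail - my_queue->head;
        return queue_size == 0 && done_sending == thread_count;
    }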

Task Example: Linked List Traversal

    while (my_pointer) {
        (void) do_independent_work(my_pointer);
        my_pointer = my_pointer->next;
    } // End of while loop

How to express this with parallel for?
- Must have a fixed number of iterations
- Loop-invariant loop condition and no early exits

OpenMP 3.0: Tasks!

    my_pointer = listhead;
    #pragma omp parallel
    {
    #pragma omp single nowait
        {
            while (my_pointer) {
    #pragma omp task firstprivate(my_pointer)
                {
                    (void) do_independent_work(my_pointer);
                }
                my_pointer = my_pointer->next;
            }
        } // End of single - no implied barrier (nowait)
    } // End of parallel region - implied barrier here

firstprivate = private and copy initial value from global variable
lastprivate = private and copy back final value to global variable

Summary
Completed coverage of OpenMP
- Locks
- Conditional execution
- Single/Master
- Task parallelism
  - Pre-3.0: parallel sections
  - OpenMP 3.0: tasks
Coming soon:
- OpenMP programming assignment, mixed task and data parallel computation