Slide 1: Reducing Parallel Overhead
Introduction to Parallel Programming – Part 12
Slide 2: Review & Objectives
Previously:
- Use loop fusion, loop fission, and loop inversion to create or improve opportunities for parallel execution
- Explain why it can be difficult both to optimize load balancing and to maximize locality
At the end of this part you should be able to:
- Explain the pros and cons of static versus dynamic loop scheduling
- Explain the different OpenMP schedule clauses and the situations each one is best suited for
Slide 3: Reducing Parallel Overhead
- Loop scheduling
- Replicating work
Slide 4: Loop Scheduling Example

for (i = 0; i < 12; i++)
    for (j = 0; j <= i; j++)
        a[i][j] = ...;
Slide 5: Loop Scheduling Example

#pragma omp parallel for
for (i = 0; i < 12; i++)
    for (j = 0; j <= i; j++)
        a[i][j] = ...;

How are the iterations divided among threads?
Slide 6: Loop Scheduling Example

#pragma omp parallel for
for (i = 0; i < 12; i++)
    for (j = 0; j <= i; j++)
        a[i][j] = ...;

Typically, the iteration space is divided evenly by the number of threads, and each thread is assigned one contiguous chunk of iterations.
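A minimal sketch, not from the original deck, that makes this division visible: each iteration reports the thread that ran it. The 12-iteration bound matches the slides; the rest is illustrative. Compile with an OpenMP-enabled compiler, e.g. gcc -fopenmp.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* Under the typical default (static) schedule, each thread receives
       one contiguous block: with 4 threads, thread 0 usually reports
       i = 0..2, thread 1 reports i = 3..5, and so on. */
    #pragma omp parallel for
    for (int i = 0; i < 12; i++)
        printf("iteration %2d -> thread %d\n", i, omp_get_thread_num());
    return 0;
}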
Slide 7: Loop Scheduling
- Loop schedule: how loop iterations are assigned to threads
- Static schedule: iterations are assigned to threads before the loop executes
- Dynamic schedule: iterations are assigned to threads while the loop executes
- The OpenMP schedule clause controls how loop iterations are mapped onto threads
Slide 8: The schedule Clause
schedule(static [,chunk])
- Blocks of iterations of size "chunk" are assigned to threads in round-robin fashion
- Low overhead, but may cause load imbalance
- Best used when the work per iteration is predictable and similar
Slide 9: Loop Scheduling Example

#pragma omp parallel for schedule(static, 2)
for (i = 0; i < 12; i++)
    for (j = 0; j <= i; j++)
        a[i][j] = ...;
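To make the round-robin distribution concrete (the thread count is illustrative, not from the slides): chunk size 2 splits the 12 iterations into six chunks {0,1}, {2,3}, {4,5}, {6,7}, {8,9}, {10,11}. With 3 threads, thread 0 gets {0,1} and {6,7}, thread 1 gets {2,3} and {8,9}, and thread 2 gets {4,5} and {10,11}. Since the inner loop runs i+1 times, this interleaving mixes cheap early iterations with expensive late ones: the threads perform 18, 26, and 34 inner iterations respectively, versus 10, 26, and 42 under the default one-block-per-thread split.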
Slide 10: The schedule Clause
schedule(dynamic [,chunk])
- Each thread grabs "chunk" iterations at a time
- When a thread finishes its chunk, it requests the next one
- Higher scheduling overhead, but can reduce load imbalance
- Best used when the work per iteration is unpredictable or highly variable
Slide 11: Loop Scheduling Example

#pragma omp parallel for schedule(dynamic, 2)
for (i = 0; i < 12; i++)
    for (j = 0; j <= i; j++)
        a[i][j] = ...;
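A hedged sketch, not from the slides, of how the two schedules can be compared on unbalanced work; work() and the iteration count are illustrative placeholders, and the actual timings depend on the machine.

#include <stdio.h>
#include <omp.h>

/* Illustrative stand-in for the triangular loop: cost grows with i. */
static double work(int i)
{
    double s = 0.0;
    for (int j = 0; j <= i * 10000; j++)
        s += j * 0.5;
    return s;
}

int main(void)
{
    double sink = 0.0;

    double t0 = omp_get_wtime();
    #pragma omp parallel for schedule(static) reduction(+:sink)
    for (int i = 0; i < 1000; i++)
        sink += work(i);
    printf("static:  %.3f s\n", omp_get_wtime() - t0);

    t0 = omp_get_wtime();
    #pragma omp parallel for schedule(dynamic, 2) reduction(+:sink)
    for (int i = 0; i < 1000; i++)
        sink += work(i);
    printf("dynamic: %.3f s\n", omp_get_wtime() - t0);

    printf("checksum: %g\n", sink); /* keep sink from being optimized away */
    return 0;
}

On most machines the dynamic version finishes sooner here, because threads that draw cheap early chunks simply go back for more.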
Slide 12: The schedule Clause
schedule(guided [,chunk])
- A dynamic schedule that starts with large blocks
- Block sizes shrink as the loop progresses, but never below "chunk"
- Best used as a special case of dynamic to reduce scheduling overhead when the computation gets progressively more time-consuming
Slide 13: Loop Scheduling Example

#pragma omp parallel for schedule(guided)
for (i = 0; i < 12; i++)
    for (j = 0; j <= i; j++)
        a[i][j] = ...;
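To make the shrinking blocks concrete (exact sizes are implementation-dependent; these numbers are only indicative): many implementations hand out roughly remaining_iterations / number_of_threads at a time. For 100 iterations and 4 threads, the successive chunks might be about 25, 19, 14, 11, 8, ..., shrinking toward the minimum size, which is 1 when no chunk argument is given.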
Slide 14: Replicate Work
- Every thread interaction has a cost; barrier synchronization is one example
- Sometimes it is faster for threads to replicate work than to go through a barrier synchronization
Slide 15: Before Work Replication

for (i = 0; i < N; i++)
    a[i] = foo(i);
x = a[0] / a[N-1];
for (i = 0; i < N; i++)
    b[i] = x * a[i];

Both for loops are amenable to parallelization.
Slide 16: First OpenMP Attempt

#pragma omp parallel
{
    #pragma omp for
    for (i = 0; i < N; i++)
        a[i] = foo(i);
    /* implicit barrier at the end of the for construct */
    #pragma omp single
    x = a[0] / a[N-1];
    /* implicit barrier at the end of the single construct */
    #pragma omp for
    for (i = 0; i < N; i++)
        b[i] = x * a[i];
}

Synchronization among threads is required if x is shared and only one thread performs the assignment.
Slide 17: After Work Replication

#pragma omp parallel private (x)
{
    x = foo(0) / foo(N-1);
    #pragma omp for
    for (i = 0; i < N; i++) {
        a[i] = foo(i);
        b[i] = x * a[i];
    }
}
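A rough way to see the trade-off (the thread count is illustrative): with 4 threads, every thread now evaluates foo(0) and foo(N-1) itself, a handful of redundant calls, but the region drops the single construct and the barriers separating the two loops, and fusing the loops means a[i] is still warm in cache when b[i] is computed. Whenever foo is cheap relative to synchronizing all threads at a barrier, replication wins.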
Slide 18: References
- Rohit Chandra, Leonardo Dagum, Dave Kohr, Dror Maydan, Jeff McDonald, and Ramesh Menon, Parallel Programming in OpenMP, Morgan Kaufmann (2001).
- Peter Denning, "The Locality Principle," Naval Postgraduate School (2005).
- Michael J. Quinn, Parallel Programming in C with MPI and OpenMP, McGraw-Hill (2004).