
1 C66x KeyStone Training OpenMP: An Overview

2 Agenda
- Motivation: The Need
- The OpenMP Solution
- OpenMP Features
- OpenMP Implementation
- Getting Started with OpenMP on 6678

3 Agenda
- Motivation: The Need
- The OpenMP Solution
- OpenMP Features
- OpenMP Implementation
- Getting Started with OpenMP on 6678

4 Motivation: TI Multicore Perspective
(Slide graphic: target market segments — Test and Automation, Mission Critical, Medical Imaging, Emerging Broadband, Multichannel & Next Generation Video (H.265, SVC, etc.), High Performance Compute.)

5 Motivation: Migrate SW from Single Core to Multicore
Earlier, with single core:
- A new, faster processor would give the desired performance boost
- Faster execution speed was a result of better hardware
- Minimal effort was needed from software developers; porting sequential code was straightforward
Now, with multicore:
- A boost in performance is not only a function of hardware
- Developers need to master software techniques that leverage the inherent parallelism of a multicore device
- Every semiconductor vendor has its own software solution
- Many developers are new to multicore software development and have existing sequential code to port

6 Motivation: The Need
An efficient way to program multicore that is:
- Easy to use and quick to implement
- Scalable
- Sequential-coder friendly
- Portable and widely adopted

7 Agenda
- Motivation: The Need
- The OpenMP Solution
- OpenMP Features
- OpenMP Implementation
- Getting Started with OpenMP on 6678

8 The OpenMP Solution
What is OpenMP?
- An API for writing multi-threaded applications
- The API includes compiler directives and library routines
- C, C++, and Fortran support
- Standardizes the last 20 years of shared-memory programming (SMP) practice

9 The OpenMP Solution
How does OpenMP address the needs?
- Easy to use and quick to implement:
  - Minimal modification to source code
  - The compiler figures out the details
- Scalable:
  - Minimal or no code changes to add cores to the implementation
- Sequential-coder friendly:
  - Allows incremental parallelization vs. an all-or-nothing approach
  - Allows a unified code base for sequential and parallel versions
- Portable and widely adopted:
  - Ideal for shared-memory parallel (SMP) architectures
  - Open-source and community-driven effort
  - The Architecture Review Board includes TI, Cray, Intel, NVIDIA, AMD, IBM, HP, Microsoft, and others

10 Agenda
- Motivation: The Need
- The OpenMP Solution
- OpenMP Features
- OpenMP Implementation
- Getting Started with OpenMP on 6678

11 Features: OpenMP Solution Stack
(Slide diagram: the layered solution stack.)
- User layer: End User, Application
- Programming layer (OpenMP API): compiler directives, OpenMP library, environment variables
- System layer: runtime library, OS/system

12 Features: The OpenMP API Consists of...
- Compiler directives and clauses:
  - Specify instructions to execute in parallel and their distribution across cores
  - Example: #pragma omp construct [clause [clause] ...]
- Library routines:
  - Execution environment routines: configure and monitor threads, processors, and the parallel environment. Example: void omp_set_num_threads(int)
  - Lock routines: synchronization with OpenMP locks. Example: void omp_set_lock(omp_lock_t *)
  - Timing routines: support a portable wall-clock timer. Example: double omp_get_wtime(void)
- Environment variables:
  - Alter execution features of applications, like the default number of threads, loop iteration scheduling, etc. Example: OMP_NUM_THREADS
(A combined sketch of all three ingredient types follows below.)
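As an illustration, a minimal sketch (not from the deck) combining a directive, two library routines, and an environment variable; work() is a hypothetical placeholder for per-thread computation:

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        double t0 = omp_get_wtime();          /* timing routine: portable wall-clock timer */
        #pragma omp parallel                  /* compiler directive: fork a team of threads */
        {
            /* work(omp_get_thread_num()); */ /* hypothetical per-thread work */
        }
        printf("elapsed: %f s\n", omp_get_wtime() - t0);
        return 0;
    }

On a hosted system the team size could then be chosen without recompiling, e.g. OMP_NUM_THREADS=4 ./app.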

13 Agenda
- Motivation: The Need
- The OpenMP Solution
- OpenMP Features
- OpenMP Implementation
  - Create Teams of Threads
  - Share Work among Threads
  - Manage Data-Scoping
  - Synchronize Threads and Variables
- Getting Started with OpenMP on 6678

14 Agenda
- Motivation: The Need
- The OpenMP Solution
- OpenMP Features
- OpenMP Implementation
  - Create Teams of Threads
  - Share Work among Threads
  - Manage Data-Scoping
  - Synchronize Threads and Variables
- Getting Started with OpenMP on 6678

15 Implementation: Use OpenMP to... Create Teams of Threads
- Fork-join model: execute code in a parallel region
- Implemented using the compiler directive #pragma omp parallel
- Nesting 'parallel' directives is possible, allowing multilevel parallelism
(Slide diagram: fork-join model. The master thread runs the sequential region; when the parallel region starts, a team of threads (IDs 0-3) is created automatically and the threads execute simultaneously; when the parallel region ends, the master waits until all threads terminate, then continues the sequential region.)

16 Implementation: Parallel Construct

    #include <omp.h>      /* include header: API definitions */
    #include <stdio.h>

    int main(void)
    {
        omp_set_num_threads(4);    /* library function: set # of threads (typically # of cores) */

        #pragma omp parallel       /* compiler directive: fork a team of threads */
        {
            int tid = omp_get_thread_num();   /* library function: get thread id */
            printf("Hello World from thread = %d\n", tid);
        }                          /* implicit barrier */

        return 0;
    }

17 Agenda
- Motivation: The Need
- The OpenMP Solution
- OpenMP Features
- OpenMP Implementation
  - Create Teams of Threads
  - Share Work among Threads
  - Manage Data-Scoping
  - Synchronize Threads and Variables
- Getting Started with OpenMP on 6678

18 Implementation: Use OpenMP to... Share Work among Threads
- By default, each thread redundantly executes all code in the parallel region
- The programmer can insert work-sharing constructs to express how computation should be distributed
- Example: distribute a for loop with #pragma omp for
  - Applicable only to loops whose iterations are independent, i.e., where changing the order of execution won't matter
- Example: distribute multiple tasks with #pragma omp section

19 Implementation: Work-sharing Constructs

Sequential code:

    for (i = 0; i < N; i++) { a[i] = a[i] + b[i]; }

Only with the parallel construct (manual work division):

    #pragma omp parallel
    {
        int id, i, Nthrds, istart, iend;
        id = omp_get_thread_num();
        Nthrds = omp_get_num_threads();
        istart = id * N / Nthrds;
        iend = (id + 1) * N / Nthrds;
        for (i = istart; i < iend; i++) { a[i] = a[i] + b[i]; }
    }

With parallel and work-sharing constructs:

    #pragma omp parallel
    #pragma omp for
    for (i = 0; i < N; i++) { a[i] = a[i] + b[i]; }

Source: Reference #3

20 Implementation: Work-sharing Constructs

    #pragma omp parallel
    #pragma omp sections
    {
        #pragma omp section
        x_calculation();
        #pragma omp section
        y_calculation();
        #pragma omp section
        z_calculation();
    }

By default, there is a barrier at the end of the omp sections construct. Use the nowait clause to turn off the barrier, as in the sketch below.
Source: Reference #5
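For instance, a minimal sketch (not in the deck) of dropping that barrier:

    #pragma omp parallel
    {
        #pragma omp sections nowait    /* no barrier at the end of the sections construct */
        {
            #pragma omp section
            x_calculation();
            #pragma omp section
            y_calculation();
        }
        /* a thread that finishes its section proceeds here without waiting */
    }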

21 Agenda
- Motivation: The Need
- The OpenMP Solution
- OpenMP Features
- OpenMP Implementation
  - Create Teams of Threads
  - Share Work among Threads
  - Manage Data-Scoping
  - Synchronize Threads and Variables
- Getting Started with OpenMP on 6678

22 Implementation: Use OpenMP to... Manage Data-Scoping with Clauses
- Clauses control how variables are treated in a parallel region
- private clause:
  - Each thread has a private copy of the variable, with a unique value throughout the parallel construct
  - A variable declared inside the parallel region is automatically private
  - Stored on the thread stack; the default stack size is set by the compiler but can be overridden
- shared clause:
  - The same copy of the variable is seen by all threads
  - A variable declared outside the parallel region is automatically shared (placed in MSMC or DDR3)
- default clause:
  - Overrides the default scope assigned to any variable
  - Set to none to require that the scope of every variable used inside the parallel region be specified explicitly
- It is the programmer's responsibility to declare which variables are shared / private; for some variables, like loop iteration counters, the compiler enforces the correct scope automatically

23 Implementation: Data-Scoping Clauses

    int i, j, sum;
    #pragma omp parallel for default(none) private(i, j, sum) shared(A, B, C) if(flag)
    for (i = 0; i < 10; i++) {
        sum = 0;
        for (j = 0; j < 20; j++)
            sum += B[i][j] * C[j];
        A[i] = sum;
    }

24 Agenda
- Motivation: The Need
- The OpenMP Solution
- OpenMP Features
- OpenMP Implementation
  - Create Teams of Threads
  - Share Work among Threads
  - Manage Data-Scoping
  - Synchronize Threads and Variables
- Getting Started with OpenMP on 6678

25 Implementation: Use OpenMP to... Synchronize Threads
- Synchronization at the end of a work-sharing or parallel construct is automatic
- Synchronizing a subset of threads has to be handled manually
- Some synchronization directives (see the sketch after this list):
  - #pragma omp critical
    - Only one thread may enter at a time; applies to a block of code
    - All unnamed critical sections are treated as one, so a thread inside any unnamed critical section blocks other threads from entering any of them
  - #pragma omp atomic
    - Hardware provides an atomic operation for the expression
    - Applies to a single line of code (an expression like x += 5)
    - Less overhead, but less portable and limited to specific operations
  - #pragma omp barrier
    - Each thread waits until all threads arrive
  - #pragma omp flush [(list)]
    - Creates a sequence point for a consistent view of memory
    - Implicit barriers automatically ensure cache coherency
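A minimal sketch (not in the deck) of critical and barrier working together; compute_local_max() and use_max() are hypothetical helpers:

    int max_val = 0;                           /* shared */
    #pragma omp parallel
    {
        int local = compute_local_max();       /* hypothetical per-thread work */
        #pragma omp critical                   /* one thread at a time updates the shared max */
        {
            if (local > max_val) max_val = local;
        }
        #pragma omp barrier                    /* wait until every thread has contributed */
        use_max(max_val);                      /* hypothetical: all threads now see the final value */
    }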

26 Implementation: Synchronization Constructs

    int sum = 0, i;
    int A[100];
    populate(A);    /* fill the array (helper assumed by the slide) */

    #pragma omp parallel for shared(sum, A)
    for (i = 0; i < 100; i++) {
        #pragma omp atomic
        sum += A[i];
    }

27 Implementation: Reduction Construct

    int sum = 0, i;
    int A[100];
    populate(A);    /* fill the array (helper assumed by the slide) */

    #pragma omp parallel for shared(A) reduction(+:sum)
    for (i = 0; i < 100; i++) {
        sum += A[i];
    }

- Reduction creates a private copy of the shared variable for each thread
- At the end of the parallel loop, the private copies are 'reduced' back into the original shared variable by applying the operator ('+')

28 Agenda
- Motivation: The Need
- The OpenMP Solution
- OpenMP Features
- OpenMP Implementation
- Getting Started with OpenMP on 6678

29 OpenMP on 6678: Solution Stack
- Each core runs the SYS/BIOS RTOS
- OpenMP master and worker threads execute inside dedicated SYS/BIOS tasks
- IPC is used for communication and synchronization
- OpenMP run-time state and user data are allocated in shared memory
Source: Reference #3

30 OpenMP on 6678: Availability
- OpenMP Specification 3.0 support is available as part of the upcoming MCSDK 2.1; compiler support from version 7.4 or higher
- Currently available: MCSDK v2.1 with OMP 1.1
- MCSDK 2.1 includes the "OMP" package with the OpenMP programming layer and runtime, and the CodeGen 7.4.x compiler

31 OpenMP on 6678: CCS Demo
We will see how to:
- Access example OpenMP projects from CCS v5.1.1
- Include the OpenMP header file: #include <omp.h>
- Specify the number of cores in the project configuration (.cfg): OpenMP.setNumProcessors(4);
- Provide the --omp compiler option, available as a check box in the project settings in CCS v5.1.1: Build → C6000 Compiler → Advanced Options → Advanced Optimizations → Enable Support for OpenMP 3.0

32 OpenMP on 6678: Spawning Threads
- An event queue per core is used for task assignments: the scheduler pushes Create_Thread events, and each core pops Create_Task events from its own queue (Event Queue 0 for Core 0, Event Queue 1 for Core 1, ..., Event Queue N for Core N)
- The scheduler keeps track of the number of threads per core to distribute threads evenly across the cores (see the sketch below)
Source: Reference #3
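The dispatch scheme can be pictured with a hedged sketch (illustrative only: the real runtime uses QMSS hardware queues, and every name below is hypothetical):

    typedef struct { void (*fn)(void *); void *arg; } task_t;

    void    queue_push(int core_id, task_t *t);   /* hypothetical per-core event queue */
    task_t *queue_pop(int core_id);               /* blocks until an event arrives */
    int     least_loaded_core(void);              /* scheduler's per-core thread counts */

    /* Scheduler side: spread new threads evenly across cores. */
    void create_thread(task_t *t)
    {
        queue_push(least_loaded_core(), t);
    }

    /* Worker side: each core pops tasks from its own queue and runs them. */
    void core_dispatch_loop(int core_id)
    {
        for (;;) {
            task_t *t = queue_pop(core_id);
            t->fn(t->arg);
        }
    }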

33 OpenMP on 6678: Creating a Parallel Region
A compiler extension translates the directives into calls to runtime library functions.

Original code:

    #pragma omp parallel
    {
        /* structured code */
    }

Compiler translation:

    void subfunction(void *data)
    {
        /* use data; structured code */
    }

    /* in the enclosing function: */
    /* set up data */
    GOMP_parallel_start(&subfunction, &data, num_threads);
    subfunction(&data);
    GOMP_parallel_end();

Source: Reference #3

34 References
1. Using OpenMP, B. Chapman, G. Jost, R. Van Der Pas. http://www.amazon.com/Using-OpenMP-Programming-Engineering-Computation/dp/0262533022/
2. Introduction to OpenMP. http://community.topcoder.com/tc?module=Static&d1=features&d2=091106
3. Multiple presentations: Eric Stotzer (TI), Barbara Chapman (UH), Yogesh Siraswar (TI)
4. OpenMP Specification. http://openmp.org/wp/openmp-specifications/
5. OpenMP Runtime for SYS/BIOS User's Guide. Included in the OMP/docs folder when you install MCSDK 2.1.
6. MCSDK 2.1 Addendum. Included in the MCSDK/docs folder when you install MCSDK 2.1.
7. TI internal MCSDK alpha download link: http://tigt_qa.gt.design.ti.com/qacm/test_area/BIOS-MCSDK/02_01_00_00/index_FDS.html

35 Backup Slides

36 OpenMP on 6678: Starting Points
- Code Composer Studio
- Latest TI MCSDK: Alpha-3 releases today, April 27; externally available with the MCSDK v2.1 release, May 14
- MCSDK will include an OMP package with the OpenMP programming layer and runtime, and the CodeGen 7.4.x compiler
- We will now switch to the Code Composer Studio environment to explore what the OMP package contains and how to start building and running an OpenMP example on 6678

37 Heterogeneous OpenMP
- Two orthogonal problems:
  - OpenMP assumes a single shared address space (support for distributed address spaces is not currently there)
  - OpenMP has no mechanism for allocating work to specific processors in an architecture, or for working across two different operating systems
- Key points:
  - Memory subsystems of GPUs
  - More than one accelerator, each using its own separate memory
  - OpenMP needs to be extended to support this
- Why not use OpenCL? Because the abstraction is low-level: a better solution compared to CUDA, but not as high-level as OpenMP
- Current approaches under investigation: StarPU, Barcelona Supercomputing Center, Multicore Association (including MCAPI, MRAPI, etc.)

38 TI & OpenMP: History Former TI-er Shreyas (CA) worked on initial version of package Took LibGOMP (under gcc compiler) and implemented it on top of SYS/BIOS TI-er Yogesh (MD) added QMSS-based approach and continues to work on speeding up runtime

39 Backup
- SMP: processing engines are identical and communicate through shared memory (the two criteria that OpenMP needs)
- Well known: OpenMP for symmetric multiprocessing
- Distributed-memory systems: MPI
- OpenCL: a host talking to a general-purpose computing GPU
- At TI we don't exclusively tie you to any of these, but the focus here is on symmetric multiprocessing

40 Features: The OpenMP API consists of...
- Compiler directives and clauses:
  - Specify instructions to execute in parallel and their distribution across cores
  - Example:

        #pragma omp parallel for reduction(+:sum)
        for (i = 1; i <= n; i++) {
            sum = sum + a[i] * b[i];
        }

- Library routines:
  - Execution environment routines: affect and monitor threads, processors, and the parallel environment. Example: void omp_set_num_threads(int)
  - Lock routines: synchronization with OpenMP locks. Example: void omp_set_lock(omp_lock_t *)
  - Timing routines: support a portable wall-clock timer. Example: double omp_get_wtime(void)
- Environment variables:
  - Alter execution features of applications, like the default number of threads, loop iteration scheduling, etc. Example: OMP_NUM_THREADS

41 Motivation: Parallelism
- Instruction-level: multiple functional units
- Architectural: multiple processors, shared memory
- Multithreading: interleaving instructions from multiple application threads
- Multicore: replicating substantial parts of a processor's logic on a single chip
- Software techniques are needed to leverage hardware parallelism

42 Matrix Multiply Example

    int i, j, sum;
    #pragma omp parallel for private(j, sum)
    for (i = 0; i < 10; i++) {
        sum = 0;
        for (j = 0; j < 20; j++)
            sum += B[i][j] * C[j];
        A[i] = sum;
    }

43 Implementation: Memory Model
- Cache coherency:
  - Typically, consistency of shared variables in cacheable memory is handled automatically at synchronization points
  - However, the current TI implementation makes it the programmer's responsibility
  - Can use #pragma omp flush or Cache_wbInvAll() within the parallel region (see the sketch below)
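A minimal sketch (not in the deck) of making a shared flag visible with an explicit flush, under the assumption that consistency is the programmer's job:

    int ready = 0;                       /* shared flag */
    #pragma omp parallel num_threads(2) shared(ready)
    {
        if (omp_get_thread_num() == 0) {
            /* ... produce data ... */
            ready = 1;
            #pragma omp flush(ready)     /* publish the write */
        } else {
            int seen = 0;
            while (!seen) {
                #pragma omp flush(ready) /* re-read memory for a consistent view */
                seen = ready;
            }
            /* ... consume data ... */
        }
    }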

44 OpenMP on TI Multicore
- The SYS/BIOS RTOS runs on each core and enables real-time multithreading
- The Navigator module allows synchronization between threads using QMSS
- The Inter-Processor Communication (IPC) module allows dynamic management of shared memory and communication

