OpenMP-2
Computer Engg, IIT(BHU), 3/12/2013

Environment Variables
● OMP_NUM_THREADS
● OMP_SCHEDULE
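A minimal sketch of how these two variables take effect (assuming a gcc-style build with -fopenmp; the exact numbers depend on the machine):

/* Compile: gcc -fopenmp env_demo.c -o env_demo                    */
/* Run:     OMP_NUM_THREADS=4 OMP_SCHEDULE="dynamic,2" ./env_demo  */
#include <omp.h>
#include <stdio.h>

int main(void) {
    /* OMP_NUM_THREADS sets the default team size;
       omp_get_max_threads() reports the value in effect. */
    printf("max threads: %d\n", omp_get_max_threads());

    /* OMP_SCHEDULE is consulted only by loops declared schedule(runtime). */
    #pragma omp parallel for schedule(runtime)
    for (int i = 0; i < 8; i++)
        printf("iteration %d on thread %d\n", i, omp_get_thread_num());
    return 0;
}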

Run-time environment routines
● omp_get_num_threads()
● omp_get_thread_num()
● omp_in_parallel()
● Routines related to locks

Lock routines
● We will only discuss the simple lock: it may not be set again while it is already in the locked state (it is not nestable).
● Simple lock interface:
➢ Type: omp_lock_t
➢ Operations:
omp_init_lock(omp_lock_t *a)
omp_destroy_lock(omp_lock_t *a)
omp_set_lock(omp_lock_t *a)
omp_unset_lock(omp_lock_t *a)
omp_test_lock(omp_lock_t *a)

Lock routines
● omp_init_lock initializes the lock. After the call, the lock is unset.
● omp_destroy_lock destroys the lock. The lock must be unset before this call.
● omp_set_lock attempts to set the lock. If the lock is already set by another thread, it waits until the lock is no longer set, then sets it.
● omp_unset_lock unsets the lock. It should only be called by the thread that set the lock; the consequences of doing otherwise are undefined.
● omp_test_lock attempts to set the lock without blocking. If the lock is already set by another thread, it returns 0; if it managed to set the lock, it returns 1.
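A minimal sketch of the full lock life cycle (the output order varies from run to run):

#include <omp.h>
#include <stdio.h>

int main(void) {
    omp_lock_t lock;
    int counter = 0;

    omp_init_lock(&lock);            /* lock starts in the unset state */

    #pragma omp parallel num_threads(4)
    {
        omp_set_lock(&lock);         /* blocks until the lock is acquired */
        counter++;                   /* critical section */
        omp_unset_lock(&lock);       /* released by the thread that set it */

        /* omp_test_lock returns nonzero only if it acquired the lock */
        if (omp_test_lock(&lock)) {
            printf("thread %d got the lock without waiting\n",
                   omp_get_thread_num());
            omp_unset_lock(&lock);
        }
    }

    omp_destroy_lock(&lock);         /* the lock must be unset here */
    printf("counter = %d\n", counter);   /* always 4 */
    return 0;
}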

OpenMP: User-level library routines
 omp_get_num_threads(): returns the number of threads in the team of the current parallel region.
 omp_get_thread_num(): returns the number of the calling thread as an integer value.
 omp_get_num_procs(): returns, as an integer, the total number of processors available to the program at the instant in which it is called.
 omp_in_parallel(): returns true if it is called from within an active parallel region; otherwise, it returns false.
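A minimal sketch exercising all four routines (the counts printed depend on the machine):

#include <omp.h>
#include <stdio.h>

int main(void) {
    /* Outside a parallel region: one thread, not in parallel. */
    printf("procs=%d in_parallel=%d\n",
           omp_get_num_procs(), omp_in_parallel());

    #pragma omp parallel
    {
        /* Inside an active parallel region each thread can query
           the team size and its own thread number. */
        printf("thread %d of %d (in_parallel=%d)\n",
               omp_get_thread_num(), omp_get_num_threads(),
               omp_in_parallel());
    }
    return 0;
}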

OMP_SET_NUM_THREADS
Purpose:
● Sets the number of threads that will be used in the next parallel region. The argument must be a positive integer.

OMP_GET_MAX_THREADS
Purpose:
● Returns the maximum value that can be returned by a call to the OMP_GET_NUM_THREADS function.

OMP_GET_THREAD_LIMIT
Purpose:
● New with OpenMP 3.0. Returns the maximum number of OpenMP threads available to a program.
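A minimal sketch combining the three routines above (requested thread counts are requests, not guarantees):

#include <omp.h>
#include <stdio.h>

int main(void) {
    omp_set_num_threads(3);   /* request 3 threads for the next region */

    /* omp_get_max_threads() reflects the request even outside a region;
       omp_get_thread_limit() (OpenMP 3.0) is the hard upper bound, set
       e.g. via the OMP_THREAD_LIMIT environment variable. */
    printf("max=%d limit=%d\n",
           omp_get_max_threads(), omp_get_thread_limit());

    #pragma omp parallel
    {
        #pragma omp single
        printf("team size: %d\n", omp_get_num_threads());
    }
    return 0;
}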

OMP_GET_NUM_PROCS
Purpose:
● Returns the number of processors that are available to the program.

OMP_SET_DYNAMIC
Purpose:
● Enables or disables dynamic adjustment (by the run-time system) of the number of threads available for execution of parallel regions.

OMP_GET_DYNAMIC
Purpose:
● Used to determine if dynamic thread adjustment is enabled or not.
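A minimal sketch; whether the team actually shrinks when dynamic adjustment is on is implementation-defined:

#include <omp.h>
#include <stdio.h>

int main(void) {
    omp_set_dynamic(1);       /* allow the runtime to adjust team sizes */
    printf("dynamic enabled: %d\n", omp_get_dynamic());

    omp_set_num_threads(8);   /* a request, possibly reduced at runtime */
    #pragma omp parallel
    {
        #pragma omp single
        /* with dynamic adjustment on, the team may have fewer than 8 threads */
        printf("actual team size: %d\n", omp_get_num_threads());
    }
    return 0;
}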

OMP_SET_NESTED
Purpose:
● Used to enable or disable nested parallelism.

OMP_GET_NESTED
Purpose:
● Used to determine if nested parallelism is enabled or not.
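A minimal sketch; without omp_set_nested(1) the inner region would run on a team of one thread:

#include <omp.h>
#include <stdio.h>

int main(void) {
    omp_set_nested(1);   /* nested parallelism is off by default */
    printf("nested enabled: %d\n", omp_get_nested());

    #pragma omp parallel num_threads(2)
    {
        int outer = omp_get_thread_num();
        /* with nesting enabled, each outer thread forks its own team */
        #pragma omp parallel num_threads(2)
        printf("outer %d, inner %d\n", outer, omp_get_thread_num());
    }
    return 0;
}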

OMP_SET_SCHEDULE
Purpose:
● This routine is new with OpenMP version 3.0.
● This routine sets the schedule type that is applied when the loop directive specifies a runtime schedule.

OMP_GET_SCHEDULE
Purpose:
● This routine is new with OpenMP version 3.0.
● This routine returns the schedule that is applied when the loop directive specifies a runtime schedule.
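A minimal sketch using the omp_sched_t kinds from the OpenMP 3.0 C API:

#include <omp.h>
#include <stdio.h>

int main(void) {
    omp_sched_t kind;
    int chunk;

    /* equivalent to OMP_SCHEDULE="dynamic,4" for schedule(runtime) loops */
    omp_set_schedule(omp_sched_dynamic, 4);

    omp_get_schedule(&kind, &chunk);
    printf("kind=%d chunk=%d\n", (int)kind, chunk);

    #pragma omp parallel for schedule(runtime)
    for (int i = 0; i < 8; i++)
        printf("i=%d on thread %d\n", i, omp_get_thread_num());
    return 0;
}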

OMP_SET_MAX_ACTIVE_LEVELS
Purpose:
● This routine is new with OpenMP version 3.0.
● This routine limits the number of nested active parallel regions.

OMP_GET_MAX_ACTIVE_LEVELS
Purpose:
● This routine is new with OpenMP version 3.0.
● This routine returns the maximum number of nested active parallel regions.
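A minimal sketch; omp_get_active_level (described below) shows where the nesting stops:

#include <omp.h>
#include <stdio.h>

int main(void) {
    omp_set_nested(1);              /* nesting must be enabled as well */
    omp_set_max_active_levels(2);   /* at most 2 nested regions stay active */
    printf("max active levels: %d\n", omp_get_max_active_levels());

    #pragma omp parallel num_threads(2)
    {
        #pragma omp parallel num_threads(2)
        {
            /* a third nested region here would be serialized */
            #pragma omp single
            printf("active level: %d\n", omp_get_active_level()); /* 2 */
        }
    }
    return 0;
}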

OMP_GET_LEVEL
Purpose:
● This routine is new with OpenMP version 3.0.
● This routine returns the number of nested parallel regions enclosing the task that contains the call.

OMP_GET_ANCESTOR_THREAD_NUM
Purpose:
● This routine is new with OpenMP version 3.0.
● This routine returns, for a given nested level of the current thread, the thread number of the ancestor or the current thread.

OMP_GET_TEAM_SIZE
Purpose:
● This routine is new with OpenMP version 3.0.
● This routine returns, for a given nested level of the current thread, the size of the thread team to which the ancestor or the current thread belongs.
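A minimal sketch combining omp_get_level, omp_get_ancestor_thread_num, and omp_get_team_size in a two-level region:

#include <omp.h>
#include <stdio.h>

int main(void) {
    omp_set_nested(1);

    #pragma omp parallel num_threads(2)
    {
        #pragma omp parallel num_threads(3)
        {
            /* level is 2 here; team size is 2 at level 1 and 3 at level 2 */
            #pragma omp single
            printf("level=%d ancestor(1)=%d team_size(1)=%d team_size(2)=%d\n",
                   omp_get_level(),
                   omp_get_ancestor_thread_num(1),
                   omp_get_team_size(1),
                   omp_get_team_size(2));
        }
    }
    return 0;
}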

OMP_GET_ACTIVE_LEVEL
Purpose:
● This routine is new with OpenMP version 3.0.
● The omp_get_active_level routine returns the number of nested, active parallel regions enclosing the task that contains the call.

OMP_IN_FINAL
Purpose:
● This routine is new with OpenMP version 3.1.
● This routine returns true if it is called from within a final task region; otherwise, it returns false.
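A minimal sketch; the final clause (added alongside this routine in OpenMP 3.1) makes a task and all of its descendants final:

#include <omp.h>
#include <stdio.h>

int main(void) {
    #pragma omp parallel num_threads(2)
    #pragma omp single
    {
        /* a task created with final(1) executes as a final task */
        #pragma omp task final(1)
        printf("in final task? %d\n", omp_in_final());   /* prints 1 */

        #pragma omp task
        printf("in final task? %d\n", omp_in_final());   /* prints 0 */
    }
    return 0;
}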