Shared-Memory Paradigm & OpenMP


Shared-Memory Paradigm & OpenMP FDI 2007 Track Q Day 3 – Morning Session

Characteristics of Shared-Memory Machines

- Modest number of processors, e.g., 8, 16, 32.
- One bus, one memory, one address space.
- Major issues: bus contention, cache coherency, synchronization.
- Incremental parallelization is easy.
- NUMA vs. UMA: the former scales better, but with increased latency for some data.
- Important trends in computer architecture (i.e., multicore) imply that shared-memory parallelism will be increasingly important.

Notes: With OpenMP you don't have to worry about the first two issues (bus contention and cache coherency), and even synchronization isn't usually a problem, at least as far as correctness goes. But performance? That can definitely be important. Example of bus contention and cache-coherency overhead: for i = 1, n do in parallel: a[i] = b[i] + c[i] (note the benefit of chunking). NUMA is not that common right now; dual-core is. Hybrid codes? Not a first priority, but they may become increasingly important.

OpenMP

- Compiler directives, library routines, and environment variables for specifying shared-memory, thread-based parallelism.
- Specified for Fortran and C/C++; supported by many compilers.
- Directives allow work sharing, synchronization, and sharing and privatizing of data.
- Directives are ignored by the compiler unless a command-line option is specified.

What's a thread?

OpenMP

Other aspects of the OpenMP model:
- Explicit, user-defined parallelism
- SPMD
- Fork/join model: only a master thread is executing when outside a parallel region. The parallel directive causes multiple threads to be started (or continued), each executing all or part of the specified block.

For further information: www.openmp.org, OpenMP Quick Guide (pdf)

Loop-based Parallelism in OpenMP

#pragma omp parallel for [clause [clause ...]]
for ( ... ) {
}

where clause is one of the following:
  private (list)
  shared (list)
  copyin (list)
  firstprivate (list)
  lastprivate (list)
  reduction (operator: list)
  ordered
  schedule (kind [, chunk_size])
  nowait

Loop-based Parallelism in OpenMP: Example

sum = 0;
#pragma omp parallel for \
        private( ... )   \
        shared ( ... )   \
        reduction ( ... )
for (i = 0; i < n; i++) {
    k = 2*i - 1;
    j = k + 1;
    x = a[k] * sin(pi*b[k]);
    c[i] = func(a[j], b[j], x) * beta;
    sum = sum + c[i]*c[i];
}

Loop-based Parallelism in OpenMP: A 2nd Example

Monte Carlo estimate of pi: generate random coordinates (x, y) in the unit square and count how many land inside the unit circle.