Multi-core Programming Threading Methodology

2 Topics
A Generic Development Cycle

3 What is Parallelism?
Two or more processes or threads execute at the same time
Parallelism for threading architectures
– Multiple processes: communication through Inter-Process Communication (IPC)
– Single process, multiple threads: communication through shared memory

4 Amdahl’s Law
Describes the upper bound of parallel execution speedup; serial code limits speedup
T_parallel = [(1 − P) + P/n] × T_serial, where P is the parallel fraction and n is the number of processors
Speedup = T_serial / T_parallel = 1 / [(1 − P) + P/n]
Example (P = 0.5): n = 2 gives 1/0.75 = 1.33X; n = ∞ gives 1/0.5 = 2.0X
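
To make the arithmetic concrete, here is a minimal sketch (ours, not from the deck; the function name amdahl_speedup is an assumption) that evaluates Amdahl's Law for a given parallel fraction and processor count:

    #include <cstdio>

    // Amdahl's Law: speedup = 1 / ((1 - P) + P / n)
    // P = fraction of the runtime that can be parallelized, n = processor count.
    double amdahl_speedup(double P, int n) {
        return 1.0 / ((1.0 - P) + P / n);
    }

    int main() {
        // Reproduces the slide's example: P = 0.5 gives 1.33X on 2 processors
        // and approaches 2.0X as n grows.
        std::printf("n=2:    %.2fX\n", amdahl_speedup(0.5, 2));
        std::printf("n=1024: %.2fX\n", amdahl_speedup(0.5, 1024));
        return 0;
    }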

5 Processes and Threads
Modern operating systems load programs as processes
– Resource holder
– Execution
A process starts executing at its entry point as a thread
Threads can create other threads within the process
– Each thread gets its own stack
All threads within a process share code & data segments
[Diagram: main() and other threads share the code and data segments; each thread has its own stack]
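
A small portable sketch of ours illustrates the point (the deck targets Win32/Pthreads, but C++11 std::thread shows the same sharing; the names worker and shared_counter are assumptions):

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> shared_counter{0};  // data segment: visible to every thread in the process

    void worker(int id) {
        int local = id * 10;             // lives on this thread's private stack
        shared_counter.fetch_add(1);     // safe concurrent update of shared state
        std::printf("thread %d: local=%d\n", id, local);
    }

    int main() {
        // main() itself runs as the process's initial thread;
        // these two threads share code and data but not stacks.
        std::thread t1(worker, 1), t2(worker, 2);
        t1.join();
        t2.join();
        std::printf("shared_counter=%d\n", shared_counter.load());
        return 0;
    }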

6 Threads – Benefits & Risks
Benefits
– Increased performance and better resource utilization, even on single-processor systems (for hiding latency and increasing throughput)
– IPC through shared memory is more efficient
Risks
– Increases complexity of the application
– Difficult to debug (data races, deadlocks, etc.)

7 Commonly Encountered Questions with Threading Applications
Where to thread?
How long would it take to thread?
How much re-design/effort is required?
Is it worth threading a selected region?
What should the expected speedup be?
Will the performance meet expectations?
Will it scale as more threads/data are added?
Which threading model to use?

8 Development Methodology
Analysis
– Find computationally intense code
Design (Introduce Threads)
– Determine how to implement the threading solution
Debug for correctness
– Detect any problems resulting from using threads
Tune for performance
– Achieve best parallel performance

9 Development Cycle
Analysis
– VTune™ Performance Analyzer
Design (Introduce Threads)
– Intel® Performance Libraries: IPP and MKL
– OpenMP* (Intel® Compiler)
– Explicit threading (Win32*, Pthreads*, TBB)
Debug for correctness
– Intel® Thread Checker
– Intel Debugger
Tune for performance
– Intel® Thread Profiler
– VTune™ Performance Analyzer

10 Threaded Programming Methodology Analysis – Sampling
Use VTune sampling to find hotspots in the application; it identifies the time-consuming regions.

    // Excerpt from the prime-number sample (sqrtf needs <math.h>;
    // globalPrimes, gPrimesFound, and ShowProgress are defined elsewhere).
    bool TestForPrime(int val)
    {
        // let's start checking from 3
        int limit, factor = 3;
        limit = (long)(sqrtf((float)val) + 0.5f);
        while ((factor <= limit) && (val % factor))
            factor++;
        return (factor > limit);
    }

    void FindPrimes(int start, int end)
    {
        // start is always odd
        int range = end - start + 1;
        for (int i = start; i <= end; i += 2) {
            if (TestForPrime(i))
                globalPrimes[gPrimesFound++] = i;
            ShowProgress(i, range);
        }
    }

11 Threaded Programming Methodology Analysis – Call Graph
Used to find the proper level in the call tree to thread
[Call-graph screenshot, annotated: this is the level in the call tree where we need to thread]

12 Threaded Programming Methodology Analysis
Where to thread?
– FindPrimes()
Is it worth threading a selected region?
– Appears to have minimal dependencies
– Appears to be data-parallel
– Consumes over 95% of the run time
Baseline measurement

13 Foster’s Design Methodology
From Designing and Building Parallel Programs by Ian Foster
Four steps:
– Partitioning: dividing computation and data
– Communication: sharing data between computations
– Agglomeration: grouping tasks to improve performance
– Mapping: assigning tasks to processors/threads

14 Designing Threaded Programs
Partition
– Divide problem into tasks
Communicate
– Determine amount and pattern of communication
Agglomerate
– Combine tasks
Map
– Assign agglomerated tasks to created threads
[Diagram: the problem → initial tasks → communication → combined tasks → final program]

15 Parallel Programming Models
Functional Decomposition
– Task parallelism
– Divide the computation, then associate the data
– Independent tasks of the same problem
Data Decomposition
– Same operation performed on different data
– Divide data into pieces, then associate computation
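
A minimal OpenMP sketch of ours contrasts the two decompositions (array names and the example tasks are assumptions; the model tasks echo the climate example on the next slide):

    #include <cstdio>
    #include <omp.h>

    #define N 1000000
    double a[N], b[N];

    int main() {
        // Data decomposition: the same operation applied to different pieces of the array.
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            b[i] = 2.0 * a[i];

        // Functional decomposition: independent tasks of the same problem run concurrently.
        #pragma omp parallel sections
        {
            #pragma omp section
            { std::printf("task A: e.g., update the atmosphere model\n"); }
            #pragma omp section
            { std::printf("task B: e.g., update the ocean model\n"); }
        }
        return 0;
    }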

16 Decomposition Methods
Functional Decomposition
– Focusing on computations can reveal structure in a problem
– Example: coupled simulation with atmosphere, ocean, land surface, and hydrology models
Domain Decomposition
– Focus on the largest or most frequently accessed data structure
– Data parallelism: same operation applied to all data
[Grid figure reprinted with permission of Dr. Phu V. Luong, Coastal and Hydraulics Laboratory, ERDC]

17 Pipelined Decomposition
Computation done in independent stages
Functional decomposition
– Threads are assigned a stage to compute
– Analogy: automobile assembly line
Data decomposition
– Each thread processes all stages of a single instance
– Analogy: one worker builds an entire car
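
A minimal two-stage pipeline sketch of ours, assuming a stage-per-thread (functional) decomposition with C++11 threads; the queue, stage names, and sentinel protocol are assumptions:

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    std::queue<int> stage_queue;     // frames handed from stage 1 to stage 2
    std::mutex m;
    std::condition_variable cv;
    const int SENTINEL = -1;         // signals end of input

    void stage1_produce(int frames) {     // e.g., "prelude": fetch and characterize frames
        for (int f = 0; f <= frames; f++) {
            int item = (f == frames) ? SENTINEL : f;
            { std::lock_guard<std::mutex> lk(m); stage_queue.push(item); }
            cv.notify_one();
        }
    }

    void stage2_consume() {               // e.g., "encoding": process each frame in order
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, []{ return !stage_queue.empty(); });
            int f = stage_queue.front(); stage_queue.pop();
            lk.unlock();
            if (f == SENTINEL) break;
            std::printf("encoded frame %d\n", f);
        }
    }

    int main() {
        std::thread t1(stage1_produce, 4), t2(stage2_consume);
        t1.join(); t2.join();
        return 0;
    }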

18 LAME Encoder Example
LAME MP3 encoder
– Open-source project
– Educational tool used for learning
The goals of the project are
– To improve the psychoacoustic quality of MP3 encoding
– To improve the speed of MP3 encoding

19 LAME Pipeline Strategy
[Pipeline diagram: threads T1–T4 work on frames N, N+1, N+2, … in staggered stages over time, with a hierarchical barrier enforcing stage order]
Stages per frame:
– Prelude: fetch next frame, frame characterization, set encode parameters
– Acoustics: psycho analysis, FFT long/short, filter assemblage
– Encoding: apply filtering, noise shaping, quantize & count bits
– Other: add frame header, check correctness, write to disk

20 Threaded Programming Methodology Design
What is the expected benefit? How do you achieve this with the least effort?
– How long would it take to thread?
– How much re-design/effort is required?
Rapid prototyping with OpenMP
Expected speedup on 2 processors, by Amdahl’s Law with ~96% of the runtime in the threaded region: Speedup(2P) = 100/(96/2 + 4) = ~1.92X

21 OpenMP
Fork-join parallelism:
– Master thread spawns a team of threads as needed
– Parallelism is added incrementally: the sequential program evolves into a parallel program
[Diagram: master thread forking and joining teams of threads at parallel regions]
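
The fork-join model is easy to see in a few lines. Below is a minimal sketch of ours, built with an OpenMP-enabled compiler such as the Intel compiler the deck assumes (or g++ -fopenmp):

    #include <cstdio>
    #include <omp.h>

    int main() {
        std::printf("master thread: serial region\n");

        #pragma omp parallel   // fork: a team of threads executes this block
        {
            std::printf("hello from thread %d of %d\n",
                        omp_get_thread_num(), omp_get_num_threads());
        }                      // join: the team disbands and the master continues

        std::printf("master thread: serial region again\n");
        return 0;
    }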

22 Threaded Programming Methodology Design
OpenMP prototype of the FindPrimes() loop:

    #pragma omp parallel for        // OpenMP creates threads here for this parallel region
    for (int i = start; i <= end; i += 2) {   // iterations of the for loop are divided among threads
        if (TestForPrime(i))
            globalPrimes[gPrimesFound++] = i;
        ShowProgress(i, range);
    }

23 Debugging for Correctness
Intel® Thread Checker pinpoints notorious threading bugs like data races, stalls and deadlocks
[Tool-flow diagram: the .exe and its DLLs are binary-instrumented; Intel® Thread Checker’s runtime data collector runs under the VTune™ Performance Analyzer and writes threadchecker.thr as the result file]

24 Threaded Programming Methodology Thread Checker

25 Threaded Programming Methodology Debugging for Correctness
A critical section protects the shared array reference:

    #pragma omp parallel for
    for (int i = start; i <= end; i += 2) {
        if (TestForPrime(i))
            #pragma omp critical            // creates a critical section for this reference
            globalPrimes[gPrimesFound++] = i;
        ShowProgress(i, range);
    }

Inside ShowProgress(), one critical section covers both shared references:

    #pragma omp critical
    {
        gProgress++;
        percentDone = (int)((float)gProgress / range * 200.0f + 0.5f);
    }

26 Common Performance Issues
Parallel Overhead
– Due to thread creation, scheduling, …
Synchronization
– Excessive use of global data, contention for the same synchronization object
Load Imbalance
– Improper distribution of parallel work
Granularity
– Not enough parallel work per parallel region

27 Threaded Programming Methodology Tuning for Performance
Thread Profiler pinpoints performance bottlenecks in threaded applications
[Tool-flow diagram: the source is compiler-instrumented (/Qopenmp_profile) or the .exe and DLLs are binary-instrumented; Thread Profiler’s runtime data collector runs under the VTune™ Performance Analyzer and writes the result file (Bistro.tp / guide.gvs)]

28 Thread Profiler for OpenMP

29 Thread Profiler for OpenMP Speedup Graph
Estimates threading speedup and potential speedup
– Based on Amdahl’s Law computation
– Gives upper and lower bound estimates

30 Threaded Programming Methodology Thread Profiler for OpenMP
[Timeline screenshot: serial vs. parallel regions]

31 Thread Profiler for OpenMP
[Per-thread screenshot: Thread 0, Thread 1, Thread 2, Thread 3]

32 Thread Profiler (for Explicit Threads)

33 Thread Profiler (for Explicit Threads)
Why so many transitions?

34 Performance
This implementation has implicit synchronization calls
This limits scaling performance due to the resulting context switches
Back to the design stage

35 Concepts to Consider
Parallel Overhead
Synchronization Issues
Load Balance
Processing Granularity

36 Parallel Overhead
Thread creation overhead
– Overhead increases rapidly as the number of active threads increases
Solution
– Use re-usable threads and thread pools (see the sketch below)
– Amortizes the cost of thread creation
– Keeps the number of active threads relatively constant
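
A minimal sketch of such a pool, assuming portable C++11 threads rather than the deck's Win32 explicit threads; the ThreadPool class, its method names, and the task counts are ours:

    #include <condition_variable>
    #include <cstdio>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    class ThreadPool {                  // minimal pool: fixed worker count, FIFO tasks
    public:
        explicit ThreadPool(unsigned n) {
            for (unsigned i = 0; i < n; i++)
                workers.emplace_back([this] { run(); });
        }
        ~ThreadPool() {                 // signal shutdown; workers drain the queue, then join
            { std::lock_guard<std::mutex> lk(m); done = true; }
            cv.notify_all();
            for (auto& w : workers) w.join();
        }
        void submit(std::function<void()> task) {
            { std::lock_guard<std::mutex> lk(m); tasks.push(std::move(task)); }
            cv.notify_one();
        }
    private:
        void run() {
            for (;;) {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [this] { return done || !tasks.empty(); });
                if (tasks.empty()) return;   // done and drained
                auto task = std::move(tasks.front()); tasks.pop();
                lk.unlock();
                task();                      // the same thread is reused for every task
            }
        }
        std::vector<std::thread> workers;
        std::queue<std::function<void()>> tasks;
        std::mutex m;
        std::condition_variable cv;
        bool done = false;
    };

    int main() {
        ThreadPool pool(4);             // threads created once, reused for all tasks
        for (int i = 0; i < 8; i++)
            pool.submit([i] { std::printf("task %d\n", i); });
        return 0;
    }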

37 Synchronization
Heap contention
– Allocation from the heap causes implicit synchronization
– Allocate on the stack or use thread-local storage
Atomic updates versus critical sections (see the sketch below)
– Some global data updates can use atomic operations (Interlocked family)
– Use atomic updates whenever possible
Critical sections versus mutual exclusion
– CRITICAL_SECTION objects reside in user space
– Use CRITICAL_SECTION objects when visibility across process boundaries is not required
– Introduces less overhead
– Has a spin-wait variant that is useful for some applications
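
To illustrate the trade-off, here is a portable sketch of ours using std::atomic for a single-variable update and a mutex-guarded section where the deck would use the Win32 Interlocked family and a CRITICAL_SECTION; variable names are assumptions:

    #include <atomic>
    #include <cstdio>
    #include <mutex>
    #include <thread>
    #include <vector>

    std::atomic<long> hits{0};   // atomic update: cheap, lock-free (Win32: InterlockedIncrement)
    long total = 0;
    std::mutex total_mutex;      // lock-based update (Win32: CRITICAL_SECTION)

    void worker() {
        for (int i = 0; i < 100000; i++) {
            hits.fetch_add(1);                        // single-variable update: atomic is enough
            std::lock_guard<std::mutex> lk(total_mutex);
            total += 1;                               // use a lock when the update spans more work
        }
    }

    int main() {
        std::vector<std::thread> team;
        for (int t = 0; t < 4; t++) team.emplace_back(worker);
        for (auto& th : team) th.join();
        std::printf("hits=%ld total=%ld\n", hits.load(), total);
        return 0;
    }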

38 Load Imbalance
Unequal work loads lead to idle threads and wasted time
[Timeline figure: per-thread busy vs. idle time]
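
One common remedy, sketched below with assumptions of ours (the cost model in work() and the chunk size are illustrative), is dynamic scheduling so threads that finish early pick up more iterations:

    #include <cstdio>
    #include <omp.h>

    // Simulated uneven work: cost grows with i, so a static block split
    // would leave the low-index threads idle while the last thread works.
    long work(int i) {
        long s = 0;
        for (int k = 0; k < i * 1000; k++) s += k;
        return s;
    }

    int main() {
        long total = 0;
        // schedule(dynamic, 16): threads grab 16-iteration chunks as they finish,
        // balancing the uneven per-iteration cost.
        #pragma omp parallel for schedule(dynamic, 16) reduction(+:total)
        for (int i = 0; i < 2000; i++)
            total += work(i);
        std::printf("total=%ld\n", total);
        return 0;
    }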

39 Granularity
[Figure: the same computation split into parallelizable and serial portions at coarse vs. fine grain; the coarser decompositions scale better (reported scalings: ~3X, ~2.5X, ~1.10X, ~1.05X)]
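
The effect is easy to reproduce. In this sketch of ours (array sizes and function names are assumptions), the coarse-grain version pays the fork/join cost once, while the fine-grain version pays it on every row:

    #include <omp.h>

    #define ROWS 1000
    #define COLS 1000
    double grid[ROWS][COLS];

    void coarse_grain() {
        // One parallel region; each thread gets whole rows: little overhead per unit of work.
        #pragma omp parallel for
        for (int i = 0; i < ROWS; i++)
            for (int j = 0; j < COLS; j++)
                grid[i][j] *= 2.0;
    }

    void fine_grain() {
        // A parallel region per row: fork/join and scheduling overhead, 1000 times over.
        for (int i = 0; i < ROWS; i++) {
            #pragma omp parallel for
            for (int j = 0; j < COLS; j++)
                grid[i][j] *= 2.0;
        }
    }

    int main() {
        coarse_grain();
        fine_grain();
        return 0;
    }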

40 Recap
Four-step development cycle for writing threaded code from serial, and the Intel® tools that support each step:
– Analysis
– Design (Introduce Threads)
– Debug for correctness
– Tune for performance
Threaded applications require multiple iterations of the design, debugging, and performance-tuning steps; this is an iterative process
Use the tools to improve productivity