ME964 High Performance Computing for Engineering Applications
"Part of the inhumanity of the computer is that, once it is competently programmed and working smoothly, it is completely honest." Isaac Asimov
Parallel Programming Patterns; One Slide Summary of ME964
April 7, 2011
© Dan Negrut, 2011, ME964 UW-Madison

Before We Get Started…
Last time:
- OpenMP wrap-up
- Variable scoping
- Synchronization issues
Today:
- Parallel programming patterns
- One-slide summary of ME964
Other issues:
- No class on April 12
- Assignment 7 due tonight
- Assignment 8 (last ME964 assignment) posted on the class website, due next Thursday
- Midterm exam on April 19; review session the evening before
- From now on, only guest lectures and such: time to concentrate on your projects

Recommended Reading
Mattson, Sanders, Massingill, Patterns for Parallel Programming, Addison Wesley, 2005. Used for this presentation.
- A good overview of challenges, best practices, and common techniques in all aspects of parallel programming
- The book is on reserve at Wendt Library

Objective
Get exposed to techniques and best practices that have emerged as useful in parallel programming practice. They are expected to facilitate and/or help you with:
- Thinking about and visualizing your problem as being solved in parallel
- Addressing functionality and performance issues in your parallel program design
- Discussing your design decisions with others
- Selecting an appropriate platform that helps you express and implement the parallel design for the solution you identified

Parallel Computing: When and Why
Parallel computing, prerequisites:
- The problem can be decomposed into sub-problems that can be solved independently at the same time
- The part of the problem that is concurrent is large enough to justify an approach that exploits this concurrency (recall Amdahl's law, restated below)
Parallel computing, goals:
- Solve problems in less time, and/or
- Solve bigger problems
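
As a refresher (not spelled out on the original slide), Amdahl's law bounds the speedup when a fraction f of the work is parallelizable over p processors; the symbols f and p are just the usual notation:

```latex
S(p) \;=\; \frac{1}{(1-f) + \dfrac{f}{p}}, \qquad \lim_{p\to\infty} S(p) \;=\; \frac{1}{1-f}
```

For example, if only 90% of the runtime is parallelizable (f = 0.9), the speedup can never exceed 10x, no matter how many processors are thrown at the problem.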

Parallel Computing, Caveats
Performance can be drastically reduced by many factors:
- Overhead of parallel processing
- Load imbalance among processing elements
- Inefficient data sharing patterns
- Saturation of critical resources such as memory bandwidth

Implementing a Parallel Solution to Your Problem: Key Steps
1) Find the concurrency in the problem
2) Structure the algorithm so that concurrency can be exploited
3) Implement the algorithm in a suitable programming environment
4) Tune the performance of the code on the target parallel system
NOTE: In reality, these steps have not been separated into levels of abstraction that can be dealt with independently.
(HK-UIUC)

What’s Next? Focus on this for a while 8 From “Patterns for Parallel Programming”

Finding and Nurturing Concurrency
Dependence kills concurrency. Dependencies need to be identified and managed:
- Dependencies prevent parallelism, at least of the "embarrassingly parallel" flavor
- Oftentimes, dependencies end up requiring synchronization barriers (not good…)
Concurrency has some caveats:
- In sequential execution, one step feeds its result to the next step, so there is a unique way of moving from A to Z
- In parallel execution, numeric accuracy may be affected by the ordering of steps that run in parallel with each other; the result can be platform dependent, OS dependent, possibly even dependent on the state of the system
Finding and exploiting concurrency often requires looking at the problem from a non-obvious angle.
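
As a small illustration of the numeric-accuracy caveat above, here is a minimal host-side example (not from the original slides) showing that floating-point addition is not associative, so regrouping operations across parallel steps can change the result:

```cuda
#include <cstdio>

int main() {
    // Summing the same three numbers in two different orders gives
    // two different single-precision results.
    float a = 1.0e8f, b = -1.0e8f, c = 1.0f;
    printf("(a + b) + c = %.1f\n", (a + b) + c);  // prints 1.0
    printf("a + (b + c) = %.1f\n", a + (b + c));  // prints 0.0: c is absorbed when added to b
    return 0;
}
```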

Finding Concurrency in Problems
Goal: identify a decomposition of the problem into sub-problems that can be solved simultaneously. In order to meet this goal:
- Perform a task decomposition to identify tasks that can execute concurrently
- Carry out a data decomposition to identify data local to each task
- Identify a way of grouping tasks and ordering the groups to satisfy temporal constraints
- Carry out an analysis of the data sharing patterns among the concurrent tasks to avoid race condition issues and optimize memory access
- Perform a design evaluation that assesses the quality of the choices made in all the steps

Finding Concurrency – The Process
[Diagram: Decomposition (Task Decomposition, Data Decomposition) feeds Dependence Analysis (Group Tasks, Order Tasks, Data Sharing), which feeds Design Evaluation]
This is typically an iterative process, like an optimization process that has to negotiate several constraints.

Find Concurrency 1: Decomposition Stage: Task Decomposition
Many large problems have natural independent tasks:
- The number of tasks used should be adjustable to the execution resources available
- Each task must include sufficient work to compensate for the overhead of managing its parallel execution
- Tasks should maximize reuse of sequential program code to minimize design/implementation effort
"In an ideal world, the compiler would find tasks for the programmer. Unfortunately, this almost never happens." - Mattson, Sanders, Massingill
(HK-UIUC)

Example: Task Decomposition - Square Matrix Multiplication
P = M * N, all matrices of size WIDTH x WIDTH:
- One natural task (sub-problem) produces one element of P
- All tasks can execute in parallel
[Figure: matrices M, N, P of width WIDTH]
(HK-UIUC)
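
A minimal CUDA sketch of this task decomposition, assuming square row-major matrices; the kernel name matmulNaive and the launch configuration are illustrative choices, not from the slide. Each thread carries out exactly one task, computing one element of P:

```cuda
// One thread per element of P; launch with a 2D grid of 2D blocks that
// covers the WIDTH x WIDTH output (e.g., 16x16 threads per block).
__global__ void matmulNaive(const float* M, const float* N, float* P, int width) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < width && col < width) {
        float sum = 0.0f;
        for (int k = 0; k < width; ++k)        // dot product of one row of M with one column of N
            sum += M[row * width + k] * N[k * width + col];
        P[row * width + col] = sum;
    }
}
```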

Find Concurrency 2: Decomposition Stage: Data Decomposition
The most compute-intensive parts of many large problems manipulate a large data structure:
- Similar operations are applied to different parts of the data structure, in a mostly independent manner
- This is what CUDA is optimized for
The data decomposition should lead to:
- Efficient data usage by each Unit of Execution (UE) within the partition
- Few dependencies between the UEs that work on different partitions
- Adjustable partitions that can be varied according to the hardware characteristics
(HK-UIUC)

Find Concurrency 3: Dependence Analysis - Task Grouping
Sometimes several tasks in a problem can be grouped to improve efficiency:
- Reduced synchronization overhead, since tasks grouped together can often proceed with little or no synchronization
- All tasks in the group can efficiently share data loaded into a common on-chip storage (Shared Memory)

Task Grouping Example - Square Matrix Multiplication
Group the tasks that calculate one sub-block of P:
- Extensive sharing of input data; memory bandwidth demand reduced by using Shared Memory
- All tasks in the group are synchronized in execution
[Figure: M, N, and P tiled into BLOCK_WIDTH x BLOCK_WIDTH sub-blocks; block indices bx, by and thread indices tx, ty identify the P sub-block Psub]
(HK-UIUC)

Find Concurrency 4: Dependence Analysis - Task Ordering
- Identify the data required by a group of tasks before they can execute
- Find the task group that creates that data (look upwind)
- Determine a temporal order that satisfies all data constraints
(HK-UIUC)

Task Ordering Example: Finite Element Analysis
[Flow diagram with stages: Node List; External Forces; Element Internal Force; Acceleration of each deformable beam; Update position and velocity of each beam; Next Time Step]

Find Concurrency 5: Dependence Analysis - Data Sharing
Data sharing can be a double-edged sword:
- An algorithm that calls for excessive data sharing can drastically reduce the advantage of parallel execution
- Localized sharing can improve memory bandwidth efficiency
- Use the execution of task groups to interleave with (mask) global memory data accesses
- Read-only sharing can usually be done at much higher efficiency than read-write sharing, which often requires a higher level of synchronization
(HK-UIUC)

Data Sharing Example: Matrix Multiplication on the GPU
- Each task group finishes using a given sub-block of N and M before moving on
- N and M sub-blocks are loaded into Shared Memory for use by all threads computing a P sub-block
- The amount of on-chip Shared Memory strictly limits the number of threads working on a P sub-block
- Read-only shared data can be efficiently accessed as Constant or Texture data (on the GPU), freeing up Shared Memory for other uses
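
A hedged CUDA sketch of the data-sharing strategy above; the kernel name, the TILE size, and the assumption that width is a multiple of TILE are illustrative choices. Each thread block stages one sub-block of M and one of N in Shared Memory, synchronizes, consumes them, and then moves to the next pair of sub-blocks:

```cuda
#define TILE 16   // assumed tile width; launch with TILE x TILE blocks and a (width/TILE)^2 grid

__global__ void matmulTiled(const float* M, const float* N, float* P, int width) {
    __shared__ float Ms[TILE][TILE];   // sub-block of M shared by the whole block
    __shared__ float Ns[TILE][TILE];   // sub-block of N shared by the whole block

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float sum = 0.0f;

    for (int t = 0; t < width / TILE; ++t) {
        // Cooperative load: each thread brings in one element of each sub-block.
        Ms[threadIdx.y][threadIdx.x] = M[row * width + t * TILE + threadIdx.x];
        Ns[threadIdx.y][threadIdx.x] = N[(t * TILE + threadIdx.y) * width + col];
        __syncthreads();               // everyone waits until the tiles are loaded

        for (int k = 0; k < TILE; ++k) // partial dot product using the shared tiles
            sum += Ms[threadIdx.y][k] * Ns[k][threadIdx.x];
        __syncthreads();               // done with these tiles before they get overwritten
    }
    P[row * width + col] = sum;
}
```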

Find Concurrency 6: Design Evaluation
Key questions to ask:
- How many Units of Execution (UEs) can be used?
- How are the data structures shared?
- Is there a lot of data dependency that leads to excessive synchronization needs?
- Is there enough work in each UE between synchronizations to make parallel execution worthwhile?
(HK-UIUC)

Implementing a Parallel Solution to Your Problem: Key Steps
1) Find the concurrency in the problem
2) Structure the algorithm so that concurrency can be exploited
3) Implement the algorithm in a suitable programming environment
4) Execute and tune the performance of the code on a parallel system

What’s Next? Focus on this for a while 23 From “Patterns for Parallel Programming”

Algorithm: Definition
A step-by-step procedure that is guaranteed to terminate, such that each step is precisely stated and can be carried out by a computer:
- Definiteness: each step is precisely stated
- Effective computability: each step can be carried out by a computer
- Finiteness: the procedure terminates
Multiple algorithms can be used to solve the same problem; some require fewer steps and some exhibit more parallelism.
(HK-UIUC)

Algorithm Structure - Strategies
- Organize by Tasks: Task Parallelism, Divide and Conquer
- Organize by Data Decomposition: Geometric Decomposition, Recursive Data
- Organize by Flow of Data: Pipeline, Event Driven
(HK-UIUC)

Choosing Algorithm Structure
[Decision tree: Algorithm Design
- Organize by Task: Linear -> Task Parallelism; Recursive -> Divide and Conquer
- Organize by Data: Linear -> Geometric Decomposition; Recursive -> Recursive Data Decomposition
- Organize by Data Flow: Regular -> Pipeline; Irregular -> Event Driven]
(HK-UIUC)

Alg. Struct. 1: Organize by Tasks ~ Linear: Parallel Tasks ~
- Common in the context of distributed memory models
- These are algorithms where the work can be represented as a collection of decoupled or loosely coupled tasks that can be executed in parallel
- The tasks don't even have to be identical
- Load balancing between UEs can be an issue (dynamic load balancing?)
- If there is no data dependency involved, there is no need for synchronization: the so-called "embarrassingly parallel" scenario

Alg. Struct. 1: Organize by Tasks ~ Linear: Parallel Tasks ~ [Cntd.]
Examples:
- Imagine a car that needs to be painted: one robot paints the front left door, another the rear left door, another the hood, etc. The car is parceled up, with a collection of UEs taking care of the subtasks
- Others: ray tracing, a good portion of the N-body problem, Monte Carlo type simulations

Alg. Struct. 2: Organize by Tasks ~ Recursive: Parallel Tasks (Divide & Conquer) ~
- Valid when you can break a problem into a set of decoupled or loosely coupled smaller problems, which in turn can be broken down further, etc.
- The pattern is applicable when you can solve the small problems (the leaves) concurrently and with little synchronization
- In some cases you need synchronization when dealing with this balanced-tree type of algorithm; it is often required by the merging step (assembling the result from the "sub-results")
- Examples: FFT, linear algebra problems (see the FLAME project), the vector reduce operation

Computational ordering can have major effects on memory bank conflicts and control flow divergence.
[Figure: array elements 0, 1, 2, 3, … combined in different orders]

Alg. Struct. 3: Organize by Data ~ Linear (Geometric Decomposition) ~
- This is the case where the UEs gather and work around a big chunk of data with little or no synchronization
- This is exactly the algorithmic approach best enabled by the GPU and CUDA
- Examples: matrix multiplication, matrix convolution, image processing

Alg. Struct. 4: Organize by Data ~ Recursive Data Scenario ~
- This scenario comes into play when the data you operate on is structured in a recursive fashion: balanced trees, graphs, lists
- Sometimes the data does not even have to be represented as such; see the prefix scan example, where you choose to look at the data as having a balanced tree structure
- Problems that seem inherently sequential can be approached in this framework
- This is typically associated with a net increase in the amount of work you have to do; work goes up from O(n) to O(n log(n)) (see for instance the Hillis and Steele algorithm)
- The key question is whether the parallelism gained brings you ahead of the sequential alternative

Example: The Prefix Scan ~ Reduction Step ~
For an array x of n = 2^M elements:

    for k = 0 to M-1
        offset = 2^k
        for j = 1 to 2^(M-k-1) in parallel do
            x[j*2^(k+1) - 1] = x[j*2^(k+1) - 1] + x[j*2^(k+1) - 2^k - 1]
        endfor
    endfor
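
For concreteness, a CUDA sketch of the reduction (up-sweep) step above; the kernel name and the restriction to a single block of n = 2^M elements held in shared memory are assumptions made for illustration:

```cuda
// Up-sweep step for one block of n = 2^M elements; launch with n threads
// and n*sizeof(float) bytes of dynamic shared memory.
__global__ void scanUpSweep(float* x, int n) {
    extern __shared__ float s[];
    int tid = threadIdx.x;
    if (tid < n) s[tid] = x[tid];
    __syncthreads();
    for (int offset = 1; offset < n; offset *= 2) {   // offset = 2^k
        int idx = (tid + 1) * 2 * offset - 1;         // j*2^(k+1) - 1, with j = tid + 1
        if (idx < n) s[idx] += s[idx - offset];       // add the element 2^k positions upstream
        __syncthreads();
    }
    if (tid < n) x[tid] = s[tid];
}
```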

Example: The Array Reduction (the bad choice)
[Figure: array elements combined over successive iterations]
(HK-UIUC)
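
A hedged sketch of what the "bad choice" of indexing can look like in CUDA (the kernel name is made up for illustration). As the stride grows, the active threads within each warp thin out, so the loop suffers from control-flow divergence; a variant that instead indexes with 2*stride*tid avoids the divergence but introduces shared-memory bank conflicts:

```cuda
// Interleaved-addressing reduction within one thread block (assumes
// blockDim.x is a power of two). Correct, but increasingly divergent as
// 'stride' grows: only every (2*stride)-th thread takes the branch.
__global__ void reduceInterleaved(const float* in, float* out, int n) {
    extern __shared__ float s[];
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    s[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();
    for (int stride = 1; stride < blockDim.x; stride *= 2) {
        if (tid % (2 * stride) == 0)
            s[tid] += s[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = s[0];   // one partial sum per block
}
```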

Alg. Struct. 5: Organize by Data Flow ~ Regular: Pipeline ~
- Tasks are parallel, but there is a high degree of synchronization (coordination) between the UEs
- The input of one UE is the output of an upwind UE (the "pipeline")
- The time elapsed between two pipeline ticks is dictated by the slowest stage of the pipeline (the bottleneck)
- Commonly employed by sequential chips; for complex problems, having a deep pipeline is attractive, since at each pipeline tick you output one set of results
- Examples: in a CPU, instruction fetch, decode, data fetch, instruction execution, and data write are pipelined; the Cray vector architecture drew heavily on this algorithmic structure when performing linear algebra operations

Alg. Struct. 6: Organize by Data Flow ~ Irregular: Event-Driven Scenarios ~
- The furthest away from what the GPU can support today; well supported by MIMD architectures
- You coordinate UEs through asynchronous events
- Critical aspects: load balancing (should be dynamic) and communication overhead, particularly in real-time applications
- Suitable for action-reaction type simulations
- Examples: computer games, traffic control algorithms, server operation (Amazon, Google)

Implementing a Parallel Solution to Your Problem: Key Steps
1) Find the concurrency in the problem
2) Structure the algorithm so that concurrency can be exploited
3) Implement the algorithm in a suitable programming environment
4) Execute and tune the performance of the code on a parallel system

What’s Comes Next? Focus on this for a while 38

Supporting Structures ~ Data and Program Models ~
- Program models: SPMD, Master/Worker, Loop Parallelism, Fork/Join
- Data models: Shared Data, Shared Queue, Distributed Array
These are the models for which parallel software/hardware combos provide good support nowadays. If you don't fall into one of the above, there'll be no sailing; you'll have to row.

Data Models
Shared Data:
- All threads share a major data structure
- This is what CUDA and GPU computing support best
Shared Queue:
- All threads see a "thread-safe" queue
- Very relevant in conjunction with Master/Worker scenarios
- OpenMP is very helpful here if your problem fits on one machine; if not, MPI can help
Distributed Array:
- Decomposed and distributed among threads
- Limited support in CUDA Shared Memory, but this is the direction libraries are going (Thrust, for instance)
- Good library support under MPI (this is how things get done in PETSc)
- OpenMP: doesn't apply

Program Models
Master/Worker:
- A master thread sets up a pool of worker threads and a bag of tasks
- Workers execute concurrently, removing tasks until done
- Common in OpenMP
Loop Parallelism:
- Loop iterations execute in parallel
- FORTRAN do-all (truly parallel), do-across (with dependence)
- Very common in OpenMP
Fork/Join:
- The most general way of creating threads (the POSIX standard)
- Can be regarded as a very low-level approach in which you use the OS to manage parallelism

Program Models
SPMD (Single Program, Multiple Data):
- All PEs (Processing Elements) execute the same program in parallel
- Each PE has its own data
- Each PE uses a unique ID to access its portion of the data
- Different PEs can follow different paths through the same code
SPMD is by far the most commonly used pattern for structuring parallel programs.

More on SPMD
- The dominant coding style of scalable parallel computing: MPI code is mostly developed in SPMD style; it is almost exclusively the pattern used in GPU computing; much OpenMP code is also SPMD
- Particularly suitable for algorithms based on data parallelism, geometric decomposition, and divide and conquer
- Main advantage: tasks and their interactions are visible in one piece of source code, with no need to correlate multiple sources

Typical SPMD Program Phases
- Initialize: establish localized data structures and communication channels
- Obtain a unique identifier: each thread acquires a unique identifier, typically in the range 0 to N-1, where N is the number of threads; OpenMP, MPI, and CUDA have built-in support for this
- Distribute data: decompose global data into chunks and localize them, or share/replicate the major data structure, using the thread ID to associate a subset of the data with each thread
- Run the core computation (more details on the next slide…)
- Finalize: reconcile the global data structure, prepare for the next major iteration

Core Computation Phase
Thread IDs are used to differentiate the behavior of threads:
- CUDA: threadIdx.x, threadIdx.y, threadIdx.z (also block IDs: blockIdx.x, etc.)
- MPI: the rank of a process
- OpenMP: omp_get_thread_num() gets the ID associated with a specific thread in a parallel region
- Use the thread ID in loop index calculations to split loop iterations among threads
- Use conditions based on the thread ID to branch to thread-specific actions
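
A minimal CUDA sketch of the "split loop iterations by thread ID" idea above; the kernel name and the grid-stride pattern are illustrative choices, not from the slide:

```cuda
// Each thread derives a unique global ID from its block and thread indices
// and then handles iterations i = id, id + stride, id + 2*stride, ...
__global__ void scaleArray(float* data, int n, float alpha) {
    int id     = blockIdx.x * blockDim.x + threadIdx.x;  // unique thread ID
    int stride = gridDim.x * blockDim.x;                 // total number of threads
    for (int i = id; i < n; i += stride)                 // loop split by thread ID
        data[i] *= alpha;
}
```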

Algorithm Structures [in columns] vs. Program Models [in rows]
[Table (Source: Mattson et al.): rows SPMD, Loop Parallel, Master/Worker, Fork/Join; columns Task Parallelism, Divide/Conquer, Geometric Decomposition, Recursive Data, Pipeline, Event-Based. Each cell rates the fit on a one-to-four smiley scale; four smilies is the best.]

Parallel Programming Support [in columns] vs. Program Models [in rows]
[Table: rows SPMD, Loop Parallel, Master/Slave, Fork/Join; columns OpenMP, MPI, CUDA. Each cell rates how well the programming environment supports the program model on the same smiley scale.]

ME964 On One Slide
- Sequential computing hit three walls: the power, memory, and ILP walls
- Moore's law scales for at least one more decade
- Parallel computing is poised to pick up where sequential computing left off
- Moore's law brought us powerful CPUs and GPUs; use OpenMP and/or CUDA (or OpenCL) to leverage this hardware
- Large problems do not always fit inside one workstation: not enough memory, or if enough memory, not enough number-crunching power
- Large problems are handled well by clusters or massively parallel computers; MPI helps you here
- When possible, use parallel libraries (Thrust, PETSc, TBB, MKL, etc.); however, writing your own code for your own small problem sometimes pays off nicely