Parallel Concepts: An Introduction

The Goal of Parallelization
Reduction of elapsed time of a program
Reduction in turnaround time of jobs
Overhead:
–total increase in cpu time
–communication
–synchronization
–additional work in algorithm
–non-parallel part of the program (one processor works, others spin idle)
Overhead vs. elapsed time is better expressed as Speedup and Efficiency
[Figure: elapsed time and cpu/communication overhead on 1, 2, 4 and 8 processors, showing the reduction in elapsed time]

Speedup and Efficiency
Both measure the parallelization properties of a program
Let T(p) be the elapsed time on p processors
The Speedup S(p) and the Efficiency E(p) are defined as:
S(p) = T(1)/T(p)
E(p) = S(p)/p
For ideal parallel speedup we get:
T(p) = T(1)/p
S(p) = T(1)/T(p) = p
E(p) = S(p)/p = 1 (or 100%)
Scalable programs remain efficient for large numbers of processors
[Figure: speedup vs. number of processors (ideal, super-linear, saturation, disaster) and efficiency vs. number of processors]
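
The following small C sketch (added for illustration, not part of the original slides) applies these definitions to a set of made-up elapsed-time measurements:
#include <stdio.h>

int main(void) {
    /* hypothetical measured elapsed times T(p) in seconds for p = 1, 2, 4, 8 */
    double T[] = { 100.0, 52.0, 28.0, 16.0 };
    int    p[] = { 1, 2, 4, 8 };
    int n = (int)(sizeof(p) / sizeof(p[0]));
    for (int i = 0; i < n; i++) {
        double S = T[0] / T[i];   /* S(p) = T(1)/T(p) */
        double E = S / p[i];      /* E(p) = S(p)/p    */
        printf("p=%d  S(p)=%.2f  E(p)=%.0f%%\n", p[i], S, E * 100.0);
    }
    return 0;
}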

Amdahl's Law
This rule states the following for parallel programs:
The non-parallel (serial) fraction s of the program includes the communication and synchronization overhead
Thus the maximum parallel Speedup S(p) for a program that has parallel fraction f:
(1) 1 = s + f        (program has serial and parallel fractions)
(2) T(1) = T(parallel) + T(serial) = T(1)*(f + s) = T(1)*(f + (1-f))
(3) T(p) = T(1)*(f/p + (1-f))
(4) S(p) = T(1)/T(p) = 1/(f/p + (1-f))
(5) as p -> infinity: S(p) < 1/(1-f)
The non-parallel fraction of the code (i.e. the overhead) imposes the upper limit on the scalability of the code
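
A worked example (not on the original slide) of the bound in formula (5), written in LaTeX and assuming a parallel fraction f = 0.95:
S(100) = \frac{1}{0.95/100 + 0.05} \approx 16.8,
\qquad \lim_{p \to \infty} S(p) = \frac{1}{1-f} = \frac{1}{0.05} = 20
Even with 100 processors, more than 80% of the ideal speedup is lost to the 5% serial fraction.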

Amdahl's Law: Time to Solution
Hypothetical program run time as a function of the number of processors, for several parallel fractions f. Note the log-log plot.
T(p) = T(1)/S(p)
S(p) = 1/(f/p + (1-f))
[Figure: log-log plot of run time vs. number of processors for several values of f]
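
To reproduce the numbers behind such a plot, here is a small C sketch (added for illustration; the T(1) = 100 s baseline and the parallel fractions are assumptions, not values from the slide):
#include <stdio.h>

int main(void) {
    double T1  = 100.0;                          /* assumed serial run time in seconds */
    double f[] = { 0.5, 0.9, 0.99, 0.999 };      /* assumed parallel fractions */
    int    p[] = { 1, 2, 4, 8, 16, 32, 64, 128 };
    int nf = (int)(sizeof(f) / sizeof(f[0]));
    int np = (int)(sizeof(p) / sizeof(p[0]));
    for (int i = 0; i < nf; i++) {
        printf("f=%.3f:", f[i]);
        for (int j = 0; j < np; j++) {
            double S = 1.0 / (f[i] / p[j] + (1.0 - f[i]));  /* Amdahl speedup S(p) */
            printf(" T(%d)=%.1fs", p[j], T1 / S);           /* time to solution    */
        }
        printf("\n");
    }
    return 0;
}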

Fine-Grained vs. Coarse-Grained
Fine-grain parallelism (typically loop level)
–can be done incrementally, one loop at a time
–does not require deep knowledge of the code
–a lot of loops have to be parallel for decent speedup
–potentially many synchronization points (at the end of each parallel loop)
Coarse-grain parallelism
–make larger loops parallel at a higher call-tree level, potentially enclosing many small loops
–more code is parallel at once
–fewer synchronization points, reducing overhead
–requires deeper knowledge of the code
[Figure: call tree rooted at MAIN, with coarse-grained parallelism applied near the root and fine-grained parallelism applied at the leaf-level loops]
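
The contrast can be made concrete with a short OpenMP sketch (added for illustration; the loops and array sizes are made up). The first pair of loops is fine-grained, the single enclosing parallel region is coarse-grained:
#include <stdio.h>

#define N 1000000
static double a[N], b[N];

int main(void) {
    /* Fine-grained: each loop is a separate parallel region;
       threads are forked, joined and synchronized per loop. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) a[i] = i * 0.5;
    #pragma omp parallel for
    for (int i = 0; i < N; i++) b[i] = a[i] + 1.0;

    /* Coarse-grained: one parallel region encloses both loops.
       nowait removes the barrier after the first loop; this is
       safe here only because both loops use the same static
       schedule, so each thread reads the a[i] it wrote itself. */
    #pragma omp parallel
    {
        #pragma omp for schedule(static) nowait
        for (int i = 0; i < N; i++) a[i] = i * 0.5;
        #pragma omp for schedule(static)
        for (int i = 0; i < N; i++) b[i] = a[i] + 1.0;
    }
    printf("b[N-1] = %f\n", b[N - 1]);
    return 0;
}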

Other Impediments to Scalability
Load imbalance:
–the time to complete a parallel execution of a code segment is determined by the longest-running thread
–unequal work load distribution leads to some processors being idle while others work too much
–with coarse-grain parallelization, more opportunities for load imbalance exist
Too many synchronization points:
–the compiler will put synchronization points at the start and exit of each parallel region
–if too many small loops have been made parallel, synchronization overhead will compromise scalability
[Figure: elapsed time of threads p0-p3 between start and finish, illustrating load imbalance]
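
One common remedy for load imbalance in loop-level parallelism is dynamic scheduling. A minimal OpenMP sketch (added for illustration; the work() function is a made-up stand-in for iterations of unequal cost):
#include <stdio.h>

/* hypothetical work whose cost grows with i, so a plain static
   block distribution would leave the low-index threads idle */
static double work(int i) {
    double s = 0.0;
    for (int k = 0; k <= i; k++) s += 1.0 / (k + 1.0);
    return s;
}

int main(void) {
    const int n = 20000;
    double total = 0.0;
    /* dynamic scheduling hands out chunks of 64 iterations on
       demand, evening out the per-thread work at run time */
    #pragma omp parallel for schedule(dynamic, 64) reduction(+:total)
    for (int i = 0; i < n; i++)
        total += work(i);
    printf("total = %f\n", total);
    return 0;
}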

Computing π with DPL
Notes:
–essentially sequential form
–automatic detection of parallelism
–automatic work sharing
–all variables shared by default
–number of processors specified outside of the code
compile with: f90 -apo -O3 -mips4 -mplist
–the mplist switch will show the intermediate representation
π = ∫[0,1] 4/(1+x^2) dx ≈ Σ(0<=i<N) 4/(N*(1+((i+0.5)/N)^2))
PROGRAM PIPROG
  ! the value of N was lost in the transcript; 1000000 is an assumed placeholder
  INTEGER, PARAMETER :: N = 1000000
  INTEGER            :: I
  REAL (KIND=8)      :: PI, W = 1.0/N
  PI = SUM( (/ (4.0*W/(1.0+((I+0.5)*W)**2), I=0,N-1) /) )
  PRINT *, PI
END

Computing π with Shared Memory
Notes:
–essentially sequential form
–automatic work sharing
–all variables shared by default
–directives to request parallel work distribution
–number of processors specified outside of the code
π = ∫[0,1] 4/(1+x^2) dx ≈ Σ(0<=i<N) 4/(N*(1+((i+0.5)/N)^2))
#include <stdio.h>
/* the value of n was lost in the transcript; 1000000 is an assumed placeholder */
#define n 1000000
int main(void) {
  double pi, l, ls = 0.0, w = 1.0/n;
  int i;
  #pragma omp parallel for private(i,l) reduction(+:ls)
  for(i=0; i<n; i++) {
    l = (i+0.5)*w;
    ls += 4.0/(1.0+l*l);
  }
  pi = ls*w;
  printf("pi is %f\n", pi);
  return 0;
}
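
A typical compile line for the shared-memory version above, assuming a GCC-style compiler and the hypothetical file name pi_omp.c (neither is from the original slide):
gcc -fopenmp -O2 pi_omp.c -o pi_omp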

Computing π with Message Passing
Notes:
–thread identification first
–explicit work sharing
–all variables are private
–explicit data exchange (reduce)
–all code is parallel
–number of processors is specified outside of the code
π = ∫[0,1] 4/(1+x^2) dx ≈ Σ(0<=i<N) 4/(N*(1+((i+0.5)/N)^2))
#include <stdio.h>
#include <mpi.h>
/* the value of N was lost in the transcript; 1000000 is an assumed placeholder */
#define N 1000000
int main(int argc, char **argv) {
  double pi, l, ls = 0.0, w = 1.0/N;
  int i, mid, nth;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &mid);
  MPI_Comm_size(MPI_COMM_WORLD, &nth);
  for(i=mid; i<N; i += nth) {
    l = (i+0.5)*w;
    ls += 4.0/(1.0+l*l);
  }
  MPI_Reduce(&ls, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
  if(mid == 0) printf("pi is %f\n", pi*w);
  MPI_Finalize();
  return 0;
}
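
A typical build and run for the message-passing version above, assuming a standard MPI installation (Open MPI or MPICH) and the hypothetical file name pi_mpi.c; exact commands vary by site:
mpicc -O2 pi_mpi.c -o pi_mpi
mpirun -np 4 ./pi_mpi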

Comparing Parallel Paradigms
Automatic parallelization combined with explicit Shared Variable programming (compiler directives) is used on machines with global memory
–Symmetric Multi-Processors, CC-NUMA, PVP
–These methods are collectively known as Shared Memory Programming (SMP)
–The SMP programming model works at loop level and at coarse-level parallelism: the coarse-level parallelism has to be specified explicitly; loop-level parallelism can be found by the compiler (implicitly)
–Explicit Message Passing methods are necessary on machines that have no global memory addressability: clusters of all sorts, NOW & COW
–Message Passing methods require coarse-level parallelism to be scalable
Choosing a programming model is largely a matter of the application, personal preference and the target machine; it has nothing to do with scalability.
Scalability limitations:
–communication overhead
–process synchronization
Scalability is mainly a function of the hardware and (your) implementation of the parallelism

Summary
The serial part or the communication overhead of the code limits the scalability of the code (Amdahl's Law)
Programs have to be >99% parallel to use large (>30 processor) machines
Several programming models are in use today:
–Shared Memory programming (SMP) (with automatic compiler parallelization, Data-Parallel and explicit Shared Memory models)
–Message Passing model
Choosing a programming model is largely a matter of the application, personal choice and the target machine. It has nothing to do with scalability.
–Don't confuse algorithm and implementation
Machines with a global address space can run applications based on both the SMP and the Message Passing programming models