Team 1: Aakanksha Gupta, Solomon Walker, Guanghong Wang

Presentation transcript:

Parallel Computing. Team 1: Aakanksha Gupta, Solomon Walker, Guanghong Wang

Topics What is Parallel Computing? Why use Parallel Computing? Concepts and Terminology; Parallel Computer Memory Architectures; Parallel Programming Models; Designing Parallel Programs; Parallel Examples

What is Parallel Computing Serial Computing: A problem is broken into a discrete series of instructions. Instructions are executed sequentially, one after another, on a single processor, and only one instruction may execute at any moment in time. Parallel Computing: A problem is broken into discrete parts that can be solved concurrently. Each part is further broken down into a series of instructions, and instructions from each part execute simultaneously on different processors. An overall control/coordination mechanism is employed.

Parallel Computing The compute resources are typically: The computational problem should be able to:

Why use Parallel Computing

Concepts and Terminology Von Neumann Architecture. Flynn's Classical Taxonomy. Amdahl's Law: speedup = 1 / (S + P/N), where P = parallel fraction, N = number of processors, and S = serial fraction.
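As a quick worked illustration (numbers chosen here, not from the slides): a program whose parallel fraction is P = 0.9 run on N = 8 processors gives

```latex
\text{speedup} = \frac{1}{S + P/N} = \frac{1}{0.1 + 0.9/8} = \frac{1}{0.2125} \approx 4.7
```

and as N grows without bound the speedup approaches 1/S = 10, so even a 90% parallel program cannot exceed a tenfold speedup no matter how many processors are added.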

Parallel Computer Memory Architectures Shared Memory, either Uniform Memory Access (UMA) or Non-Uniform Memory Access (NUMA); Distributed Memory.

Parallel Computer Memory Architectures Hybrid Distributed-Shared Memory

Parallel Programming Models Shared Memory Model (without threads): Processes/tasks share a common address space, which they read and write to asynchronously. Locks/semaphores are used to control access to the shared memory. Implementations: native operating system support for shared memory on UNIX systems (e.g., POSIX shared memory).
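To illustrate this model, here is a minimal sketch (an assumption, not taken from the slides) for a Linux/POSIX system: a parent and a forked child share one anonymously mapped counter and coordinate access with a process-shared semaphore. The counts and names are arbitrary; compile with something like cc shm_demo.c -pthread.

```c
/* Minimal sketch of the shared memory model without threads (assumes Linux/POSIX):
 * two processes read and write the same mapped memory, guarded by a semaphore. */
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Anonymous shared mappings are visible to both parent and child. */
    int *counter = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    sem_t *lock = mmap(NULL, sizeof(sem_t), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    *counter = 0;
    sem_init(lock, 1, 1);                 /* pshared = 1: shared across processes */

    pid_t pid = fork();
    for (int i = 0; i < 1000; i++) {      /* both processes run this loop */
        sem_wait(lock);                   /* the lock controls access to shared memory */
        (*counter)++;
        sem_post(lock);
    }
    if (pid == 0)
        return 0;                         /* child is done */
    wait(NULL);
    printf("counter = %d\n", *counter);   /* expect 2000 */
    return 0;
}
```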

Parallel Programming Models Threads Model: A type of shared memory programming in which a single "heavy weight" process has multiple "light weight", concurrent execution paths (threads). It is implemented either as a library of subroutines called from within parallel source code, or as a set of compiler directives embedded in serial or parallel source code. Implementations: POSIX Threads and OpenMP.
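A minimal sketch of the directive-based flavor (OpenMP), added here for illustration; the arrays and sizes are arbitrary. A single compiler directive turns the serial loop into work shared among threads. Compile with an OpenMP-capable compiler, e.g. cc -fopenmp dot.c.

```c
/* Minimal OpenMP sketch: one process, many threads sharing the arrays. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];
    double sum = 0.0;

    /* The directive splits the iterations among threads; the reduction
     * clause safely combines each thread's partial sum. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        b[i] = 2.0 * i;
        sum += a[i] * b[i];
    }
    printf("dot product = %g using up to %d threads\n",
           sum, omp_get_max_threads());
    return 0;
}
```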

Parallel Programming Models Distributed Memory / Message Passing Model: Multiple tasks can reside on the same physical machine and/or across an arbitrary number of machines. Tasks exchange data by sending and receiving messages, and data transfer usually requires cooperative operations to be performed by each process. Implementation: Message Passing Interface (MPI).
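A minimal MPI sketch, added for illustration: rank 0 performs a send that must be matched by a receive on rank 1, showing the cooperative nature of the transfer. Run with at least two processes, e.g. mpirun -np 2 ./a.out.

```c
/* Minimal message passing sketch with MPI: each task has its own memory,
 * and data moves only through explicit send/receive pairs. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double data[4] = {0};
    if (rank == 0) {
        for (int i = 0; i < 4; i++) data[i] = 1.5 * i;
        /* Cooperative operation: the send on rank 0 ... */
        MPI_Send(data, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* ... must be matched by a receive on rank 1. */
        MPI_Recv(data, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received: %g %g %g %g\n",
               data[0], data[1], data[2], data[3]);
    }

    MPI_Finalize();
    return 0;
}
```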

Parallel Programming Models Data Parallel Model: The address space is treated globally, and parallel work focuses on performing operations on a data set. A set of tasks works collectively on the same data structure, each task performing the same operation on its partition of the data. Implementations: Coarray Fortran, Unified Parallel C (UPC), Chapel.

Parallel Programming Models Hybrid Model: A combination of the message passing model (MPI) with the threads model (OpenMP). Threads perform computationally intensive kernels using local, on-node data, while communication between processes on different nodes occurs over the network using MPI.
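An illustrative hybrid sketch (an assumption, not taken from the slides): each MPI process computes a partial sum over its own block of indices using OpenMP threads, and only the small partial results cross the network via MPI. Compile with an MPI wrapper plus OpenMP support, e.g. mpicc -fopenmp hybrid.c.

```c
/* Minimal hybrid sketch: MPI between processes/nodes, OpenMP threads within each. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each MPI process owns a contiguous block of the global index range. */
    long lo = (long)rank * N / nprocs;
    long hi = (long)(rank + 1) * N / nprocs;

    /* Threads handle the compute-intensive kernel on local, on-node data. */
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (long i = lo; i < hi; i++)
        local += 1.0 / (1.0 + (double)i);

    /* Communication between processes goes through MPI. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %f\n", total);

    MPI_Finalize();
    return 0;
}
```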

Parallel Programming Models SPMD (Single Program, Multiple Data) and MPMD (Multiple Program, Multiple Data): "high level" programming models that can be built upon any combination of the previously described models.

Designing Parallel Programs

How to make serial programs parallel Parallelization can be Automatic, where a compiler or tool parallelizes the program where it can, or Manual, where the programmer denotes where and how parallelization should occur. Data dependencies between tasks determine what can or cannot be parallelized and in what order parallel tasks can run. Partitioning determines what work each processing unit handles and in what order (see the sketch below). Block: each processing unit is given one contiguous piece of the work and holds and works on it until it is complete. Cyclic: the problem is split into many discrete chunks of work, and when a chunk is complete the processing unit moves on to the next chunk.
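A plain-C sketch of the two partitioning schemes, added for illustration; do_work is a hypothetical stand-in for the real computation, and the outer loops over units only simulate what separate processing units would each execute.

```c
/* Minimal sketch contrasting block and cyclic partitioning of N work items
 * among P processing units. */
#include <stdio.h>

#define N 12
#define P 3

static void do_work(int unit, int item) {      /* hypothetical task */
    printf("unit %d handles item %d\n", unit, item);
}

int main(void) {
    /* Block partitioning: unit u gets one contiguous chunk of about N/P items. */
    int chunk = (N + P - 1) / P;                /* round up */
    for (int u = 0; u < P; u++)
        for (int i = u * chunk; i < (u + 1) * chunk && i < N; i++)
            do_work(u, i);

    /* Cyclic partitioning: unit u gets items u, u+P, u+2P, ... */
    for (int u = 0; u < P; u++)
        for (int i = u; i < N; i += P)
            do_work(u, i);
    return 0;
}
```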

Parallel Examples

In-depth look: Array Processing. Say you want to perform a function on every element of an array. You could process every array element in order, but you could also parallelize the task (both forms are sketched below). Block form: divide the array elements into contiguous groups based on how many processing units are available, then have each processing unit work on its chunk until completion; chunk size = (number of array elements) / (number of processing units). Cyclic form: treat each array element as a discrete unit of work; each processing unit computes a result for an element, then moves on to the next element that is not currently being worked on by another processing unit. This example is embarrassingly parallel, meaning it has little to no data dependency; data dependency is a major roadblock when designing most real-world parallel programs. Examples that are not embarrassingly parallel must also consider the timing of completing functions and keeping related tasks on the same processing unit. Complex parallel programs also need to consider granularity, the ratio of computation to communication: programs with a lot of data dependency spend more time communicating, which lowers that ratio (finer granularity).
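The same idea expressed with OpenMP loop scheduling, added for illustration: schedule(static) hands out contiguous blocks of roughly (array elements)/(number of threads) iterations, while schedule(dynamic) lets each thread grab the next unclaimed element, matching the cyclic form described above. f() is a hypothetical element-wise function.

```c
/* Illustrative OpenMP version of the array-processing example.
 * Compile with e.g. cc -fopenmp array.c -lm */
#include <math.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000

static double f(double x) { return sin(x) * cos(x); }   /* stand-in function */

int main(void) {
    static double a[N];

    /* Block form: each thread gets one contiguous chunk of the array. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++)
        a[i] = f((double)i);

    /* Cyclic form (as described above): each thread takes the next
     * element that no other thread has claimed yet. */
    #pragma omp parallel for schedule(dynamic)
    for (int i = 0; i < N; i++)
        a[i] = f((double)i);

    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}
```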

PI Calculation
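One common way to parallelize a pi calculation is Monte Carlo sampling, sketched below with OpenMP and per-thread random state; this is an illustrative assumption, not necessarily the method shown on the original slide. It is embarrassingly parallel: each sample is independent, and only the final hit counts are combined.

```c
/* Illustrative parallel pi estimate (Monte Carlo, OpenMP): each thread throws
 * random points at the unit square and counts hits inside the quarter circle. */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define SAMPLES 10000000L

int main(void) {
    long hits = 0;

    #pragma omp parallel reduction(+:hits)
    {
        /* Per-thread random state so threads do not share a seed. */
        unsigned int seed = 1234u + 17u * (unsigned)omp_get_thread_num();
        #pragma omp for
        for (long i = 0; i < SAMPLES; i++) {
            double x = (double)rand_r(&seed) / RAND_MAX;
            double y = (double)rand_r(&seed) / RAND_MAX;
            if (x * x + y * y <= 1.0)
                hits++;
        }
    }
    printf("pi is approximately %f\n", 4.0 * (double)hits / SAMPLES);
    return 0;
}
```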

Simple Heat Equation
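One common way to parallelize a simple heat equation is to update all grid points of one time step concurrently, since each update reads only values from the previous step; the 1-D OpenMP sketch below is an illustrative assumption, not necessarily the scheme from the original slide.

```c
/* Illustrative explicit 1-D heat equation solver with OpenMP: the updates
 * within one time step are independent and run in parallel. */
#include <omp.h>
#include <stdio.h>
#include <string.h>

#define NX 1000
#define STEPS 500

int main(void) {
    static double u[NX], unew[NX];
    const double alpha = 0.25;          /* diffusion coefficient * dt / dx^2 */

    u[NX / 2] = 100.0;                  /* initial hot spot in the middle */

    for (int step = 0; step < STEPS; step++) {
        /* Each interior point depends only on the previous time step. */
        #pragma omp parallel for
        for (int i = 1; i < NX - 1; i++)
            unew[i] = u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
        memcpy(u, unew, sizeof(u));
    }
    printf("u at the midpoint after %d steps: %f\n", STEPS, u[NX / 2]);
    return 0;
}
```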

References "Designing and Building Parallel Programs", Ian Foster, http://www.mcs.anl.gov/~itf/dbpp/. "Introduction to Parallel Computing", Ananth Grama, Anshul Gupta, George Karypis, Vipin Kumar, http://www-users.cs.umn.edu/~karypis/parbook/. "Overview of Recent Supercomputers", A.J. van der Steen, Jack Dongarra, OverviewRecentSupercomputers.2008.pdf.

Questions?