Lectures on Parallel and Distributed Computing

Dr. Wei Chen (陈慰), Professor, Tennessee State University
June 2017, Hosei University (法政大学)

Outline
Lecture 1: Introduction to parallel computing
Lecture 2: Parallel computational models
Lecture 3: Parallel algorithm design and analysis
Lecture 4: Distributed-memory programming with PVM/MPI
Lecture 5: Shared-memory programming with OpenMP
Lecture 6: Shared-memory programming with GPU
Lecture 7: Introduction to distributed systems
Lecture 8: Synchronous network algorithms
Lecture 9: Asynchronous shared-memory/network algorithms
Lecture 10: Application I
Lecture 11: Application II

References:
(1) Lectures 1-3: Joseph JaJa, “An Introduction to Parallel Algorithms,” Addison-Wesley, 1992.
(2) Lectures 4-6: Peter S. Pacheco, “An Introduction to Parallel Programming,” Morgan Kaufmann Publishers, 2011.
(3) Lectures 7-8: Nancy A. Lynch, “Distributed Algorithms,” Morgan Kaufmann Publishers, 1996.

Lecture 1: Introduction to Parallel Computing

Why Parallel Computing
Problems with large computational complexity
 Computationally hard problems (NP-complete problems) require exponential computing time.
Problems with large input size
 Quantum chemistry, statistical mechanics, relativistic physics, cosmology, fluid mechanics, biology, genetic engineering, …
 For example, it takes about 1 hour on a current computer to simulate 1 second of the reaction between a protein molecule and water molecules.

Why Parallel Computing
Physical limitation of CPU computational power
 Over the past 50 years, CPU speed has doubled roughly every 2.5 years, but there is a physical limitation: signals cannot travel faster than the speed of light, about 3×10^8 m/s. At a 10 GHz clock, a signal can travel at most about 3 cm per cycle, so the limit on CPU clock frequency is expected to be about 10 GHz.
To solve computationally hard problems
 Parallel processing
 DNA computers
 Quantum computers
 …

What is parallel computing
 Using a number of processors to process one task.
 Speeding up the processing by distributing it among the processors.
[Figure: one problem divided among multiple processors]
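
A minimal sketch of this idea (not from the original slides): the C program below distributes one task, summing an array, among the available processors using OpenMP, which Lecture 5 covers. The array size and its contents are arbitrary choices for illustration.

#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 1.0;                       /* sample data */

    /* One problem, many processors: the loop iterations are divided
       among the threads, and reduction(+) combines the partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.0f, using up to %d threads\n",
           sum, omp_get_max_threads());
    return 0;
}

Compile with, e.g., gcc -fopenmp; the work is split among however many cores the machine provides.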

Classification of parallel computers
Two kinds of classification
1. Flynn’s classification
 SISD (Single Instruction stream, Single Data stream)
 MISD (Multiple Instruction stream, Single Data stream)
 SIMD (Single Instruction stream, Multiple Data stream)
 MIMD (Multiple Instruction stream, Multiple Data stream)
2. Classification by memory type
 Shared memory
 Distributed memory

Flynn’s classification (1)
SISD (Single Instruction, Single Data) computer
 The von Neumann one-processor computer.
[Diagram: a control unit sends an instruction stream to a single processor, which exchanges a data stream with memory]

Flynn’s classification (2)
MISD (Multiple Instruction, Single Data) computer
 All processors share a common memory; each has its own control unit and executes its own instructions on the same data.
[Diagram: several control units, each sending its own instruction stream to its own processor; all processors operate on a single data stream from the shared memory]

Flynn’s classification (3)
SIMD (Single Instruction, Multiple Data) computer
 Processors execute the same instruction on different data.
 Operations of the processors are synchronized by a global clock.
[Diagram: one control unit broadcasts a single instruction stream to many processors, each with its own data stream, connected by shared memory or an interconnection network]
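
To make the SIMD style concrete (a sketch, not from the slides): on a modern CPU the closest analogue of a SIMD machine is a vectorized loop, where one instruction operates on several data elements at once. The omp simd pragma below asks the compiler to generate such vector instructions; the arrays are invented sample data.

#include <stdio.h>

#define N 8

int main(void) {
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[N];

    /* Single instruction, multiple data: the compiler may emit one
       vector add that covers several array elements per iteration. */
    #pragma omp simd
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    for (int i = 0; i < N; i++)
        printf("%g ", c[i]);
    printf("\n");
    return 0;
}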

Flynn’s classification (4)
MIMD (Multiple Instruction, Multiple Data) computer
 Each processor has its own control unit and executes different instructions on different data.
 Operations of the processors are executed asynchronously most of the time.
 Also called a distributed computing system.
[Diagram: several control units, each sending its own instruction stream to its own processor with its own data stream, connected by shared memory or an interconnection network]
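
A small C sketch of the MIMD style (the two task functions and their data are invented for this example): two POSIX threads execute different instruction streams on different data, and they run asynchronously.

#include <stdio.h>
#include <pthread.h>

/* First instruction stream: sum its own data. */
static void *sum_task(void *arg) {
    int *data = arg, s = 0;
    for (int i = 0; i < 4; i++)
        s += data[i];
    printf("sum thread: %d\n", s);
    return NULL;
}

/* Second instruction stream: find the maximum of different data. */
static void *max_task(void *arg) {
    int *data = arg, m = data[0];
    for (int i = 1; i < 4; i++)
        if (data[i] > m)
            m = data[i];
    printf("max thread: %d\n", m);
    return NULL;
}

int main(void) {
    int a[4] = {1, 2, 3, 4}, b[4] = {7, 5, 9, 2};
    pthread_t t1, t2;

    /* Each thread runs its own code on its own data, asynchronously. */
    pthread_create(&t1, NULL, sum_task, a);
    pthread_create(&t2, NULL, max_task, b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}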

Classification by memory types (1)
1. Parallel computers with a shared common memory
 Communication is based on the shared memory. For example, suppose processor i sends some data to processor j: first, processor i writes the data to the shared memory; then processor j reads the data from the same address of the shared memory.
[Diagram: several processors all connected to one shared memory]
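
A minimal C sketch of this write-then-read style of communication (the variable names and the value sent are invented): a writer thread models processor i storing data in shared memory, a reader thread models processor j reading it from the same location, and an atomic flag tells the reader when the data is ready.

#include <stdio.h>
#include <pthread.h>
#include <stdatomic.h>

static int shared_data;           /* the shared memory cell          */
static atomic_int ready = 0;      /* set once the data is in place   */

static void *writer(void *arg) {  /* models processor i */
    (void)arg;
    shared_data = 42;             /* write the data to shared memory */
    atomic_store_explicit(&ready, 1, memory_order_release);
    return NULL;
}

static void *reader(void *arg) {  /* models processor j */
    (void)arg;
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                         /* wait until the data is ready    */
    printf("received %d via shared memory\n", shared_data);
    return NULL;
}

int main(void) {
    pthread_t w, r;
    pthread_create(&r, NULL, reader, NULL);
    pthread_create(&w, NULL, writer, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return 0;
}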

Classification by memory types (2)
Features of parallel computers with shared common memory
 Programming is easy.
 Exclusive control is necessary for access to the same memory cell.
 Realization is difficult when the number of processors is large.
(Reason) The number of processors that can be connected to a shared memory is limited by physical factors such as the size and voltage of the units, and by the latency of memory access.
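
A brief C sketch of why exclusive control is needed (illustrative only): two threads increment the same shared counter, and a mutex serializes access to the shared cell so that no update is lost.

#include <stdio.h>
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* exclusive access to the cell */
        counter++;                    /* without the lock, concurrent */
        pthread_mutex_unlock(&lock);  /* updates could be lost        */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}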

Classification by memory types (3)
2. Parallel computers with distributed memory
 Communication is one-to-one, based on an interconnection network. For example, suppose processor i sends data to processor j: first, processor i issues a send command such as “send xxx to processor j,” then processor j obtains the data with a receive command.
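
A minimal MPI sketch of this send/receive style (MPI is the topic of Lecture 4; the message value is invented): rank 0 plays processor i and issues a send, rank 1 plays processor j and issues the matching receive.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, data;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                  /* processor i: send command    */
        data = 123;
        MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {           /* processor j: receive command */
        MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", data);
    }

    MPI_Finalize();
    return 0;
}

Run with at least two processes, e.g. mpicc send.c && mpiexec -n 2 ./a.out.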

Classification by memory types (4)
Features of parallel computers with distributed memory
 There are various architectures of interconnection networks. (Generally, the degree of connectivity is not large.)
 Programming is more difficult, since communication is one-to-one rather than through a shared memory.
 It is easy to increase the number of processors.

Types of parallel computers with distributed memory
Complete connection type
 Any two processors are directly connected.
 Features: strong communication ability, but not practical (each processor must be connected to all of the others).
Mesh connection type
 Processors are connected as a two-dimensional lattice.
 Features: each processor is connected to only a few neighbors, so it is easy to increase the number of processors, but the distance between processors is large (the diameter of a √n × √n mesh is 2(√n - 1)), so processor communication can become a bottleneck; see the sketch below.
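
A small C sketch of distances in a mesh (the mapping from processor number to grid position is an assumption made for this example): on a √n × √n lattice, a message from one processor to another must travel the Manhattan distance between their grid positions, which can be as large as 2(√n - 1).

#include <stdio.h>
#include <stdlib.h>

#define SIDE 4   /* a 4 x 4 mesh: n = 16 processors */

/* Assume processor p sits at row p / SIDE, column p % SIDE. */
static int mesh_distance(int p, int q) {
    int dr = abs(p / SIDE - q / SIDE);
    int dc = abs(p % SIDE - q % SIDE);
    return dr + dc;                  /* Manhattan distance = hop count */
}

int main(void) {
    /* Opposite corners are farthest apart: 2 * (SIDE - 1) hops. */
    printf("distance(0, 15) = %d\n", mesh_distance(0, 15));  /* 6 */
    printf("distance(5, 6)  = %d\n", mesh_distance(5, 6));   /* 1 */
    return 0;
}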

Types of parallel computers with distributed memory
Hypercube connection type
 Processors are connected as a hypercube: each processor is labeled with a binary number, and two processors are connected if and only if their labels differ in exactly one bit.
Features
 Small distance between processors: at most log n.
 Balanced communication load because of the symmetric structure.
 Easy to increase the number of processors.
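
A short C sketch of the hypercube labeling rule above (the helper function is invented for illustration): the neighbors of a processor are found by flipping each bit of its label in turn, and the distance between two processors is the Hamming distance between their labels, at most log2 n.

#include <stdio.h>

/* Hamming distance between two labels: the number of differing bits,
   which equals the hop count between the processors in the hypercube. */
static int hypercube_distance(unsigned x, unsigned y) {
    unsigned d = x ^ y;
    int count = 0;
    while (d) {
        count += d & 1u;
        d >>= 1;
    }
    return count;
}

int main(void) {
    const int dim = 3;               /* a 3-cube: n = 8 processors */
    unsigned p = 5;                  /* label 101 in binary        */

    /* Neighbors of p: flip each of the dim bits in turn. */
    printf("neighbors of %u:", p);
    for (int b = 0; b < dim; b++)
        printf(" %u", p ^ (1u << b));
    printf("\n");

    /* 101 vs 010 differ in all three bits, so the distance is 3. */
    printf("distance(5, 2) = %d\n", hypercube_distance(5u, 2u));
    return 0;
}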

Types of parallel computers with distributed memory
Other connection types
 Tree connection type, butterfly connection type, bus connection type.
Criteria for selecting an interconnection network
 Small diameter (the largest distance between processors), for small communication delay.
 Symmetric structure, for easily increasing the number of processors.
The type of interconnection network chosen depends on the application, the ability of the processors, the upper bound on the number of processors, and other factors.

Real parallel processing systems (1)
Early parallel computer: ILLIAC IV
 Built in 1972.
 SIMD type with distributed memory, consisting of 64 processors.
 Modified mesh connection type, equipped with a common data bus, a common control bus, and one control unit.

Real parallel processing systems (2)
Parallel computers of the 1990s
 Shared common memory type: workstations, SGI Origin2000, and others with 2-8 processors.
 Distributed memory type:

Name     Maker        Processors   Processing type   Network type
CM-2     TM           65,536       SIMD              hypercube
CM-5     TM           1,024        MIMD              fat tree
nCUBE2   NCUBE        8,192
iWarp    CMU, Intel   64                             2D torus
Paragon  Intel        4,096
SP-2     IBM          512                            HP switch
AP1000   Fujitsu
SR2201   Hitachi                                     crossbar

Real parallel processing systems (3)
Deep Blue
 Developed by IBM for chess only.
 Defeated the world chess champion, Garry Kasparov, in 1997.
 Based on the general-purpose parallel computer SP-2.
[Diagram: 32 RS/6000 nodes with memory, connected by a generalized-hypercube interconnection network; each node attaches 8 VLSI chess processors (“Deep Blue chips”) with their own memory through the RS/6000 bus interface and Microchannel bus]

K (京) Computer - Fujitsu
Architecture: 88,128 SPARC64 VIIIfx 2.0 GHz 8-core processors; 864 cabinets, each with 96 computing nodes and 6 I/O nodes; six-dimensional Tofu torus interconnect; Linux-based enhanced operating system; open-source Open MPI library; 12.6 MW; 10.51 petaflops; ranked #1 in 2011.

Titan - Oak Ridge National Laboratory
Architecture: 18,688 AMD Opteron 6274 16-core CPUs; Cray Linux; 8.2 MW; 17.59 petaflops; GPU-based; torus topology; ranked #1 in 2012.

Tianhe-1A (天河一号) - National Supercomputing Center, Tianjin
Architecture: 14,336 Xeon X5670 processors; 7,168 Nvidia Tesla M2050 GPUs; 2,048 FeiTeng 1000 SPARC-based processors; 4.7 petaflops (peak); 112 computer cabinets and 8 I/O cabinets; 11-D hypercube topology with InfiniBand QDR/DDR; Linux; ranked #2 in 2011.

Exercises
1. Give more details on how Deep Blue works (1 page).
2. Compare the top three supercomputers in the world in terms of number of processors, architecture, speed, and any other characteristics you can think of (1-2 pages).