
18-742 Spring 2011 Parallel Computer Architecture
Lecture 3: Programming Models and Architectures
Prof. Onur Mutlu, Carnegie Mellon University
© 2010 Onur Mutlu. Slides include material from Adve, Chiou, Falsafi, Hill, Lebeck, Patt, Reinhardt, Smith, Singh.

Announcements
Project topics
- Some ideas will be out
- Talk with Chris Craik and me
- However, you should do the literature search and come up with a concrete research project to advance the state of the art
  - You have already read many papers in 740/741, 447
Project proposal due: Jan 31

Last Lecture
- How to do the reviews
- Research project and proposal
- Flynn's Taxonomy
- Why Parallel Computers
- Amdahl's Law
- Superlinear Speedup
- Some Multiprocessor Metrics
- Parallel Computing Bottlenecks
  - Serial bottleneck
  - Bottlenecks in the parallel portion: synchronization, load imbalance, resource contention

Readings for Next Lecture
Required:
- Hill and Marty, "Amdahl's Law in the Multi-Core Era," IEEE Computer.
- Annavaram et al., "Mitigating Amdahl's Law Through EPI Throttling," ISCA.
- Suleman et al., "Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures," ASPLOS.
- Ipek et al., "Core Fusion: Accommodating Software Diversity in Chip Multiprocessors," ISCA.
Recommended:
- Olukotun et al., "The Case for a Single-Chip Multiprocessor," ASPLOS.
- Barroso et al., "Piranha: A Scalable Architecture Based on Single-Chip Multiprocessing," ISCA.
- Kongetira et al., "Niagara: A 32-Way Multithreaded SPARC Processor," IEEE Micro.
- Amdahl, "Validity of the single processor approach to achieving large scale computing capabilities," AFIPS.

Reviews
Due Friday (Jan 21):
- Seitz, "The Cosmic Cube," CACM.
- Suleman et al., "Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures," ASPLOS.

What Will We Cover in This Lecture?
- Hill, Jouppi, Sohi, "Multiprocessors and Multicomputers," in Readings in Computer Architecture.
- Culler, Singh, Gupta, Chapter 1 (Introduction) in "Parallel Computer Architecture: A Hardware/Software Approach."

MIMD Processing Overview

MIMD Processing
Loosely coupled multiprocessors
- No shared global memory address space
- Multicomputer network
  - Network-based multiprocessors
- Usually programmed via message passing
  - Explicit calls (send, receive) for communication
Tightly coupled multiprocessors
- Shared global memory address space
- Traditional multiprocessing: symmetric multiprocessing (SMP)
- Existing multi-core processors, multithreaded processors
- Programming model similar to uniprocessors (i.e., multitasking uniprocessor), except
  - Operations on shared data require synchronization

Main Issues in Tightly-Coupled MP
- Shared memory synchronization
  - Locks, atomic operations, transactional memory
- Cache consistency
  - More commonly called cache coherence
- Ordering of memory operations
  - From different processors: called the memory consistency model
- Resource sharing and partitioning
- Communication: interconnection networks
- Load imbalance
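As a concrete (not-from-the-slides) illustration of the first issue, here is a minimal C/pthreads sketch: two threads increment a shared counter, and the mutex is what makes the read-modify-write on shared data safe; without it, updates could be lost.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                           /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);                 /* enter critical section */
        counter++;                                 /* read-modify-write on shared data */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);            /* 2000000 only because of the lock */
    return 0;
}

An atomic fetch-and-add or a transactional region would serve the same purpose here; the lock is simply the most basic of the mechanisms the slide lists.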

Multithreading
- Coarse grained
  - Quantum based
  - Event based
- Fine grained
  - Cycle by cycle
  - Thornton, "CDC 6600: Design of a Computer."
  - Burton Smith, "A pipelined, shared resource MIMD computer," ICPP.
- Simultaneous
  - Can dispatch instructions from multiple threads at the same time
  - Good for improving execution unit utilization

Programming Models vs. Architectures

Programming Models vs. Architectures
Five major models:
- Shared memory
- Message passing
- Data parallel (SIMD)
- Dataflow
- Systolic
Hybrid models?

Shared Memory vs. Message Passing
- Are these programming models or execution models supported by the hardware architecture?
- Does a multiprocessor that is programmed with the "shared memory programming model" have to support a shared address space across processors?
- Does a multiprocessor that is programmed with the "message passing programming model" have to have no shared address space between processors?

Programming Models: Message Passing vs. Shared Memory
- Difference: how communication is achieved between tasks
- Message passing programming model
  - Explicit communication via messages
  - Analogy: telephone call or letter; no shared location accessible to all
- Shared memory programming model
  - Implicit communication via memory operations (load/store)
  - Analogy: bulletin board; post information at a shared space
- Suitability of the programming model depends on the problem to be solved. Issues affected by the model include:
  - Overhead, scalability, ease of programming, bugs, match to underlying hardware, ...

Message Passing vs. Shared Memory Hardware
- Difference: how task communication is supported in hardware
- Shared memory hardware (or machine model)
  - All processors see a global shared address space
    - Ability to access all memory from each processor
  - A write to a location is visible to the reads of other processors
- Message passing hardware (machine model)
  - No shared memory
  - Send and receive variants are the only method of communication between processors (much like networks of workstations today, i.e., clusters)
- Suitability of the hardware depends on the problem to be solved as well as the programming model.

Message Passing vs. Shared Memory Hardware
[Figure: three ways of assembling a machine from processor (P), memory (M), and I/O components, distinguished by where the pieces are joined and what model they are naturally programmed with: join at I/O (network), program with message passing; join at memory, program with shared memory; join at processor, program with dataflow/systolic or Single-Instruction Multiple-Data (SIMD), i.e., data parallel.]

Programming Model vs. Hardware
- For most of parallel computing history, there was no separation between programming model and hardware
  - Message passing: Caltech Cosmic Cube, Intel Hypercube, Intel Paragon
  - Shared memory: CMU C.mmp, Sequent Balance, SGI Origin
  - SIMD: ILLIAC IV, CM-1
- However, any hardware can really support any programming model
- Why?
  - Application -> compiler/library -> OS services -> hardware

Layers of Abstraction
The compiler/library/OS map the communication abstraction at the programming model layer to the communication primitives available at the hardware layer.

Programming Model vs. Architecture
Machine -> Programming Model
- Join at network, so program with message passing model
- Join at memory, so program with shared memory model
- Join at processor, so program with SIMD or data parallel
Programming Model -> Machine
- Message-passing programs on message-passing machine
- Shared-memory programs on shared-memory machine
- SIMD/data-parallel programs on SIMD/data-parallel machine
Isn't hardware basically the same?
- Processors, memory, interconnect (I/O)
- Why not have a generic parallel machine and program with the model that fits the problem?

A Generic Parallel Machine
- Separation of programming models from architectures
- All models require communication
- Node with processor(s), memory, communication assist
[Figure: four nodes (Node 0 through Node 3), each containing a processor (P) with cache ($), memory (Mem), and a communication assist (CA), connected by an interconnect.]

Simple Problem
for i = 1 to N
  A[i] = (A[i] + B[i]) * C[i]
  sum = sum + A[i]
How do I make this parallel?

Simple Problem
for i = 1 to N
  A[i] = (A[i] + B[i]) * C[i]
  sum = sum + A[i]
Split the loops
- Independent iterations
for i = 1 to N
  A[i] = (A[i] + B[i]) * C[i]
for i = 1 to N
  sum = sum + A[i]
Data flow graph?

Data Flow Graph
[Figure: data flow graph of the simple problem for A[0]..A[3]: for each i, A[i] and B[i] feed a + node whose output is multiplied by C[i]; the products are then combined by further + nodes to produce the sum.]
N-1 cycles to execute on N processors. What assumptions?

Partitioning of Data Flow Graph
[Figure: the same data flow graph partitioned across processors; each partition computes (A[i] + B[i]) * C[i] and a local partial sum, and a final + combines the partial results across partitions after a global synch.]

Shared (Physical) Memory
- Communication, sharing, and synchronization with store/load on shared variables
- Must map virtual pages to physical page frames
- Consider OS support for good mapping
[Figure: the machine physical address space seen by processors P0..Pn: each processor's address space has a private portion, mapped to per-processor private physical regions, and a shared portion mapped to common physical addresses, accessed with ordinary loads and stores.]
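To make the page-mapping point concrete, here is a minimal POSIX C sketch (mine, not the course's; the segment name /shared_portion is made up): the OS creates a shared segment and maps it into the caller's virtual address space, and any cooperating process that opens the same name gets page-table entries pointing at the same physical frames, so plain stores become communication.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical segment name; a second process would shm_open the same name.
       (May need -lrt when linking on older Linux systems.) */
    int fd = shm_open("/shared_portion", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }   /* one page */

    /* The OS maps this virtual page onto a physical frame shared with other mappers. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(shared, "hello");   /* an ordinary store, visible to every process mapping the segment */

    munmap(shared, 4096);
    close(fd);
    shm_unlink("/shared_portion");
    return 0;
}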

Shared (Physical) Memory on Generic MP
[Figure: the generic four-node machine with a single global address space partitioned across nodes: Node 0 holds addresses 0..N-1, Node 1 holds N..2N-1, Node 2 holds 2N..3N-1, Node 3 holds 3N..4N-1.]
Keep private data and frequently used shared data on the same node as the computation.

Return of The Simple Problem
private int i, my_start, my_end, mynode;
shared float A[N], B[N], C[N], sum;

for i = my_start to my_end
  A[i] = (A[i] + B[i]) * C[i]
GLOBAL_SYNCH;
if (mynode == 0)
  for i = 1 to N
    sum = sum + A[i]

Can run this on any shared memory machine.
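A runnable C/OpenMP rendering of the same idea (my sketch, not the course's code; the array size and input values are arbitrary): the parallel for hands each thread a my_start..my_end-style chunk, the implicit barrier at the end of the loop plays the role of GLOBAL_SYNCH, and one thread then accumulates the sum, mirroring the "if (mynode == 0)" above.

#include <stdio.h>

#define N 1024                       /* arbitrary problem size for the sketch */

float A[N], B[N], C[N];

int main(void)
{
    float sum = 0.0f;
    for (int i = 0; i < N; i++) { A[i] = 1.0f; B[i] = 2.0f; C[i] = 3.0f; }   /* made-up inputs */

    /* Independent iterations split across threads; the implicit barrier at the
       end of the construct is the GLOBAL_SYNCH of the pseudocode. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        A[i] = (A[i] + B[i]) * C[i];

    /* Serial accumulation by a single thread ("node 0"). */
    for (int i = 0; i < N; i++)
        sum += A[i];

    printf("sum = %f\n", sum);       /* expected: N * (1 + 2) * 3 = 9216 */
    return 0;
}

Compile with -fopenmp (gcc/clang); without it, the pragma is ignored and the code simply runs serially.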

Message Passing Architectures
- Cannot directly access memory on another node
- IBM SP-2, Intel Paragon
- Cluster of workstations
[Figure: four nodes, each with its own processor (P), cache ($), local memory (Mem) covering local addresses 0..N-1, and communication assist (CA), connected only by the interconnect; there is no shared address space.]

Message Passing Programming Model
- User-level send/receive abstraction
  - Local buffers (x, y), processes (Q, P), and tag (t)
  - Naming and synchronization
[Figure: process P executes "Send x, Q, t", sending from address x in its local address space; process Q executes "Recv y, P, t", receiving into address y in its local address space; the send and receive are matched by process and tag.]
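In MPI terms (a minimal sketch of mine, not part of the lecture), the local buffer, the destination/source process, and the tag map directly onto the arguments of MPI_Send and MPI_Recv, and matching is done on source and tag exactly as in the figure.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int tag = 7;                  /* the "t" in Send x, Q, t */
    if (rank == 0) {              /* plays the role of process P */
        double x = 3.14;          /* local buffer x */
        MPI_Send(&x, 1, MPI_DOUBLE, 1, tag, MPI_COMM_WORLD);         /* Send x, Q, t */
    } else if (rank == 1) {       /* plays the role of process Q */
        double y;                 /* local buffer y */
        MPI_Recv(&y, 1, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                                  /* Recv y, P, t */
        printf("rank 1 received %f\n", y);
    }

    MPI_Finalize();
    return 0;
}

Run with at least two ranks, e.g. mpirun -np 2 ./a.out.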

The Simple Problem Again
int i, my_start, my_end, mynode;
float A[N/P], B[N/P], C[N/P], sum;

for i = 1 to N/P
  A[i] = (A[i] + B[i]) * C[i]
  sum = sum + A[i]
if (mynode != 0)
  send (sum, 0);
if (mynode == 0)
  for i = 1 to P-1
    recv(tmp, i)
    sum = sum + tmp

Send/Recv communicates and synchronizes P processors.
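For reference, the combining step above is a reduction, so in MPI the explicit send/recv loop on node 0 can be replaced by a single collective. A small self-contained sketch (mine; the local chunk size and values are made up):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Stand-in for the per-node loop over the local A[1..N/P] chunk. */
    double local_sum = 0.0;
    for (int i = 0; i < 1000; i++)            /* made-up local chunk size */
        local_sum += (1.0 + 2.0) * 3.0;       /* (A[i] + B[i]) * C[i] with made-up values */

    /* One collective replaces the explicit send/recv loop on node 0. */
    double total = 0.0;
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %f over %d ranks\n", total, nprocs);

    MPI_Finalize();
    return 0;
}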

Separation of Architecture from Model
- At the lowest level, the shared memory model is all about sending and receiving messages
  - HW is specialized to expedite read/write messages using load and store instructions
- What programming model/abstraction is supported at the user level?
- Can I have a shared-memory abstraction on message passing HW? How efficient?
- Can I have a message passing abstraction on shared memory HW? How efficient?

Challenges in Mixing and Matching
- Assume the programming model is the same as the ABI (compiler/library -> OS -> hardware)
- Shared memory programming model on shared memory HW
  - How do you design a scalable runtime system/OS?
- Message passing programming model on message passing HW
  - How do you get good messaging performance?
- Shared memory programming model on message passing HW
  - How do you reduce the cost of messaging when there are frequent operations on shared data?
  - Li and Hudak, "Memory Coherence in Shared Virtual Memory Systems," ACM TOCS.
- Message passing programming model on shared memory HW
  - Convert send/receives to load/stores on shared buffers
  - How do you design scalable HW?
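As a rough illustration of the last case (a sketch of mine, not from the lecture), send/receive between two threads on a shared memory machine can be turned into loads and stores on a shared circular buffer; the C11 atomics supply the ordering a real implementation needs.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define SLOTS 16                          /* made-up buffer capacity (power of two) */

static int buf[SLOTS];                    /* the shared buffer messages travel through */
static atomic_uint head = 0, tail = 0;    /* producer advances head, consumer advances tail */

static void send_msg(int msg)             /* "send" = a store into shared memory */
{
    unsigned h = atomic_load_explicit(&head, memory_order_relaxed);
    while (h - atomic_load_explicit(&tail, memory_order_acquire) == SLOTS)
        ;                                 /* spin while the buffer is full */
    buf[h % SLOTS] = msg;
    atomic_store_explicit(&head, h + 1, memory_order_release);    /* publish the message */
}

static int recv_msg(void)                 /* "receive" = a load from shared memory */
{
    unsigned t = atomic_load_explicit(&tail, memory_order_relaxed);
    while (atomic_load_explicit(&head, memory_order_acquire) == t)
        ;                                 /* spin while the buffer is empty */
    int msg = buf[t % SLOTS];
    atomic_store_explicit(&tail, t + 1, memory_order_release);     /* free the slot */
    return msg;
}

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100; i++)
        send_msg(i);
    return NULL;
}

int main(void)
{
    pthread_t p;
    pthread_create(&p, NULL, producer, NULL);
    for (int i = 0; i < 100; i++)
        printf("%d\n", recv_msg());
    pthread_join(p, NULL);
    return 0;
}

This covers only the single-producer/single-consumer case; making such buffers fast for many communicating processors is where the scalability questions above come in.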

Data Parallel Programming Model
Programming model
- Operations are performed on each element of a large (regular) data structure (array, vector, matrix)
- Program is logically a single thread of control, carrying out a sequence of either sequential or parallel steps
The Simple Problem Strikes Back
  A = (A + B) * C
  sum = global_sum(A)
Language supports array assignment.
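C has no array assignment, so as a rough stand-in (my sketch, with made-up sizes and values) the two data parallel steps can be expressed with OpenMP worksharing and a reduction; each pragma says "apply this operation to every element," which is the spirit of the model.

#include <stdio.h>

#define N 1024                           /* arbitrary size for the sketch */

float A[N], B[N], C[N];

int main(void)
{
    for (int i = 0; i < N; i++) { A[i] = 1.0f; B[i] = 2.0f; C[i] = 3.0f; }   /* made-up data */

    /* "A = (A + B) * C": one logical step applied to every element. */
    #pragma omp parallel for simd
    for (int i = 0; i < N; i++)
        A[i] = (A[i] + B[i]) * C[i];

    /* "sum = global_sum(A)": a global reduction. */
    float sum = 0.0f;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += A[i];

    printf("sum = %f\n", sum);
    return 0;
}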

Data Parallel Hardware Architectures (I)
- Early architectures directly mirrored the programming model
- Single control processor (broadcast each instruction to an array/grid of processing elements)
  - Consolidates control
- Many processing elements controlled by the master
- Examples: Connection Machine, MPP
  - Batcher, "Architecture of a massively parallel processor," ISCA 1980. 16K bit-serial processing elements.
  - Tucker and Robertson, "Architecture and Applications of the Connection Machine," IEEE Computer 1988. 64K bit-serial processing elements.

Connection Machine

Data Parallel Hardware Architectures (II)
Later data parallel architectures
- Higher integration: SIMD units on chip along with caches
- More generic: multiple cooperating multiprocessors with vector units
- Specialized hardware support for global synchronization
  - E.g., barrier synchronization
Example: Connection Machine 5
- Hillis and Tucker, "The CM-5 Connection Machine: A Scalable Supercomputer," CACM.
- Consists of 32-bit SPARC processors
- Supports Message Passing and Data Parallel models
- Special control network for global synchronization

Review: Separation of Model and Architecture
Shared Memory
- Single shared address space
- Communicate, synchronize using load/store
- Can support message passing
Message Passing
- Send/Receive
- Communication + synchronization
- Can support shared memory
Data Parallel
- Lock-step execution on regular data structures
- Often requires global operations (sum, max, min, ...)
- Can be supported on either SM or MP

Review: A Generic Parallel Machine
- Separation of programming models from architectures
- All models require communication
- Node with processor(s), memory, communication assist
[Figure: the same generic machine as before: four nodes (Node 0 through Node 3), each with processor (P), cache ($), memory (Mem), and communication assist (CA), connected by an interconnect.]

Data Flow Programming Models and Architectures
- A program consists of data flow nodes
- A data flow node fires (is fetched and executed) when all its inputs are ready
  - i.e., when all inputs have tokens
- No artificial constraints, like sequencing instructions
- How do we know when operands are ready?
  - Matching store for operands (remember OoO execution?)
  - Large associative search!
- Later machines moved to coarser-grained dataflow (threads + dataflow within a thread)
  - Allowed registers and cache for local computation
  - Introduced messages (with operations and operands)