
1 MIMD Computers Module 4

2 PMS Notation (Bell & Newell, 1987) Similar to block notation, but uses single letters; a letter can be augmented with parentheses ( ) containing attributes

3 PMS Notation
P – Processor (decoding & execution)
M – Memory (registers, cache, main, secondary)
S – Switch (simple or complex)
L – Link (often a line, often omitted)
T – Transducer (I/O device changing representation)
K – Controller (generates microsteps for single operations applied externally)
D – Data processing (usually arithmetic, etc.)
C – Computer (P, M, others – a complete system)
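The letter-plus-attributes convention above can be sketched in code. This is a hypothetical representation (the `pms` helper and the attribute spellings are my own, not part of Bell & Newell's notation):

```python
# Minimal sketch: render a PMS component as a single letter optionally
# augmented with a parenthesized attribute list, e.g. M(kind:main; size:64MB).

def pms(letter, **attrs):
    """Render one PMS component; with no attributes, just the letter."""
    if not attrs:
        return letter
    inner = "; ".join(f"{k}:{v}" for k, v in attrs.items())
    return f"{letter}({inner})"

# A tiny uniprocessor described in PMS style: P - M - S - T
system = " - ".join([
    pms("P", function="decode/execute"),
    pms("M", kind="main", size="64MB"),
    pms("S", kind="bus"),
    pms("T", device="console"),
])
print(system)
```

A dash between components stands in for the (often omitted) L link.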

4 MIMD – Definition
Multiple data streams, multiple control (instruction) streams
Generally considered asynchronous
Multiprocessor: a single integrated system containing multiple PCs, each capable of executing an independent stream of instructions, but integrated for moving data among PCs, memory, and I/O

5 MIMD – General Usage
 Can use each PC for a different job – multiprogramming
 Our interest: using all PCs for one job

6 Granularity/Coupling
Coarse Grain/Loosely Coupled: infrequent data communication separated by long periods of independent computation
Fine Grain/Tightly Coupled: frequent data communications, usually in small amounts
Grain: determined at the program subroutine, basic block, statement, or machine level
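One way to make the coarse/fine distinction concrete is the ratio of compute time between communications to communication time. The sketch below uses made-up rates and operation counts purely for illustration:

```python
# Illustrative sketch (invented numbers): grain as the ratio of time spent
# computing between communications to the time spent communicating.

def grain_ratio(compute_ops, comm_words, ops_per_sec, words_per_sec):
    """Compute-to-communicate time ratio; larger => coarser grain."""
    t_compute = compute_ops / ops_per_sec
    t_comm = comm_words / words_per_sec
    return t_compute / t_comm

# Coarse grain: long independent computation, then a small exchange
coarse = grain_ratio(1_000_000, 10, 1e9, 1e8)
# Fine grain: the same small exchange after very little computation
fine = grain_ratio(1_000, 10, 1e9, 1e8)
print(coarse, fine)  # coarse-grained ratio is far larger
```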

7 Types of MIMD
Characterized by how data/information from one PC is made available to other PCs:
Shared memory
Message passing
 Fixed connection
 Distributed memory

8 Types of MIMD: Shared memory and message-passing multiprocessors

9 Switches
In distributed memory → interconnection network
Features:
Bandwidth: bytes per second
Bisection bandwidth: the bytes per second across an interface that partitions the network into two equal halves
Latency: total time from transmission to reception
Concurrency: number of independent connections that can be made (a bus has concurrency = 1)
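The metrics above can be computed for simple networks. A sketch under the assumption that every link carries `link_bw` bytes/sec (the function names are my own):

```python
# Sketch: switch metrics for two simple interconnects.

def ring_bisection_bw(n_nodes, link_bw):
    # Cutting a ring into two equal halves severs exactly 2 links,
    # regardless of n_nodes.
    return 2 * link_bw

def bus_metrics(link_bw):
    # A single shared bus: the bisection crosses the one bus, and only
    # one transfer can be in flight at a time (concurrency = 1).
    return {"bisection_bw": link_bw, "concurrency": 1}

print(ring_bisection_bw(8, 100e6))  # 2 links at 100 MB/s each
print(bus_metrics(100e6))
```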

10 Hybrid Computers – Mixed Type
Many shared-memory PCs have some local memory
NUMA: non-uniform memory access – some memory locations have larger access time
Shared-memory multiprocessor with private memories

11 Hybrid Computers – Mixed Type
Clusters: groups of shared-memory PCs plus memory, “separated” from other clusters; message passing between clusters

12 Shared Memory Features
 Interprocessor communication via R/W instructions
 Memory: may be physically distributed (banks), may have different access times, may collide in the switch
 Memory latency may be “long” and variable
 “Messages” through the switch are generally one “word”
 Randomization of requests may be used to reduce memory collisions
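The shared-memory model can be sketched with threads: communication happens purely through loads and stores to a common array (the lock stands in for the hardware's mechanisms for coordinating access):

```python
# Minimal sketch of the shared-memory model: threads communicate only
# by reading/writing a shared array; no explicit messages are sent.
import threading

shared = [0] * 4
lock = threading.Lock()

def worker(i):
    with lock:             # serialize the stores to avoid a data race
        shared[i] = i * i  # a plain write, visible to all other threads

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)  # every "processor" now sees the same memory: [0, 1, 4, 9]
```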

13 Message Passing Features
Also known as Distributed Memory
 Interprocessor communication via send/receive instructions
 R/W instructions refer to local memory only
 Data may be collected into long messages before sending
 Long transmissions may mask latency
 Global scheduling may be used to avoid message collisions
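By contrast, a message-passing sketch shares no variables at all: the only communication is an explicit send and a blocking receive. Here a `queue.Queue` between two threads stands in for an interconnect link:

```python
# Minimal sketch of the message-passing model: each side touches only
# its own local data; information crosses via send (put) / receive (get).
import threading
import queue

channel = queue.Queue()
result = []

def sender():
    # Data collected into one long message before sending (per the slide)
    channel.put(list(range(5)))

def receiver():
    msg = channel.get()        # blocking receive
    result.append(sum(msg))    # work on a local copy of the data

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t2.start()
t1.start()
t1.join()
t2.join()
print(result)  # [10]
```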

14 Ring Topology
Message-passing multiprocessors are distinguished from one another by their topology
Scale: number of links per PC – constant
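A quick sketch of why the ring's links-per-PC count is constant: every node connects to exactly its two cyclic neighbors, no matter how large the ring grows.

```python
# Sketch: in a ring of n PCs, node i links only to (i-1) mod n and
# (i+1) mod n, so links per PC = 2 for any n.

def ring_neighbors(node, n):
    return ((node - 1) % n, (node + 1) % n)

print(ring_neighbors(0, 8))  # (7, 1) -- wraparound at the "ends"
print(ring_neighbors(7, 8))  # (6, 0)
```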

15 Mesh Topology – Torus
Scale: number of links per PC – constant (unless?)
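The torus wraparound is what keeps the link count constant; in a plain (non-torus) mesh, border and corner nodes would have only 2 or 3 links, which is presumably the slide's "(unless?)". A sketch:

```python
# Sketch: in a 2-D torus every node has exactly 4 links, because the
# wraparound (mod arithmetic) eliminates the mesh's edge cases.

def torus_neighbors(x, y, rows, cols):
    return [((x - 1) % rows, y), ((x + 1) % rows, y),
            (x, (y - 1) % cols), (x, (y + 1) % cols)]

corner = torus_neighbors(0, 0, 4, 4)
print(len(corner))  # 4 links, even at a "corner" of the grid
print(corner)
```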

16 Hypercube
Scale: number of links per PC – increases logarithmically
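The logarithmic growth follows from the hypercube's addressing: in a d-dimensional cube of n = 2^d nodes, two nodes are linked exactly when their addresses differ in one bit, so each PC has d = log2(n) links.

```python
# Sketch: a node's hypercube neighbors are found by flipping each of
# its d address bits in turn, giving d = log2(n) links per PC.

def hypercube_neighbors(node, d):
    return [node ^ (1 << bit) for bit in range(d)]

print(hypercube_neighbors(0, 3))  # [1, 2, 4]
print(hypercube_neighbors(5, 3))  # [4, 7, 1]
```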

17 Specification of Network
L – Link (often omitted)
Direction: unidirectional or bidirectional
Bandwidth (B): in bytes/sec
Latency (R): start of send to delivery of first byte
Single PC-to-PC or pipelined (each link can be occupied)
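The B and R parameters above support a common first-order cost model: delivering an n-byte message takes roughly R (startup latency) plus n/B (transmission time). A sketch with illustrative numbers, which also shows why collecting data into one long message (slide 13) pays off:

```python
# Sketch: first-order message cost model, time ≈ R + n / B.

def message_time(n_bytes, latency_r, bandwidth_b):
    return latency_r + n_bytes / bandwidth_b

# One long message amortizes the startup latency R better than
# many short messages carrying the same total data:
long_msg = message_time(10_000, 1e-5, 1e8)        # one 10 KB send
short_msgs = 10 * message_time(1_000, 1e-5, 1e8)  # ten 1 KB sends
print(long_msg < short_msgs)  # True
```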