Flynn’s Architecture

SISD (single instruction stream, single data stream); SIMD (single instruction stream, multiple data streams); MISD (multiple instruction streams, single data stream); MIMD (multiple instruction streams, multiple data streams)

SISD Single Instruction, Single Data (SISD) describes an architecture in which a single processor (one CPU) executes exactly one instruction stream at a time and fetches or stores one data item at a time, operating on data held in a single memory unit.

SISD Most CPU designs based on the von Neumann architecture, from the earliest machines until recent times, follow the SISD model. A typical SISD machine is a non-pipelined architecture with general-purpose registers as well as dedicated special registers such as the Program Counter (PC), the Instruction Register (IR), the Memory Address Register (MAR) and the Memory Data Register (MDR).
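To make the role of these registers concrete, the following C sketch simulates a toy von Neumann fetch-execute loop. The single opcode, the memory contents and the accumulator are hypothetical and exist only to show how the PC, IR, MAR and MDR interact.

#include <stdio.h>
#include <stdint.h>

#define OP_ADDI 0x01   /* toy opcode: add the next memory byte to ACC */
#define OP_HALT 0xFF   /* toy opcode: stop */

int main(void) {
    uint8_t memory[8] = {OP_ADDI, 2, OP_ADDI, 3, OP_HALT};  /* toy program */
    uint8_t pc = 0;    /* Program Counter */
    uint8_t ir = 0;    /* Instruction Register */
    uint8_t mar = 0;   /* Memory Address Register */
    uint8_t mdr = 0;   /* Memory Data Register */
    uint8_t acc = 0;   /* a general-purpose accumulator */

    for (;;) {
        mar = pc;            /* address of the next instruction */
        mdr = memory[mar];   /* memory access goes through the MDR */
        ir = mdr;            /* latch the fetched instruction into the IR */
        pc++;

        if (ir == OP_HALT) break;
        if (ir == OP_ADDI) {                     /* execute */
            mar = pc; mdr = memory[mar]; pc++;   /* fetch the operand */
            acc = acc + mdr;
        }
    }
    printf("acc = %d\n", acc);   /* prints 5 (2 + 3) */
    return 0;
}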

SISD No instruction-level parallelism, no data-level parallelism. Example of an SISD processing architecture: a personal computer processing instructions and data on a single processor.
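A minimal C sketch of SISD-style execution, assuming an illustrative array and scale factor: each loop iteration issues one instruction on one data element, so there is no instruction or data parallelism.

#include <stdio.h>

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};

    /* One instruction stream, one data item at a time:
       every iteration multiplies a single element. */
    for (int i = 0; i < 4; i++)
        a[i] = a[i] * 2.0f;

    for (int i = 0; i < 4; i++)
        printf("%.1f ", a[i]);
    printf("\n");
    return 0;
}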

SISD

SIMD Single Instruction, Multiple Data (SIMD) describes an architecture with a single control unit (CU) and more than one processing unit (PU). It operates like a von Neumann machine in that a single instruction stream is executed, but the CU dispatches that stream to all of the PUs: the CU generates the control signals for every PU, and each PU executes the same operation on a different data stream.

SIMD The SIMD architecture, in effect, achieves data-level parallelism, just as a vector processor does. Examples of SIMD-based systems include IBM's AltiVec and SPE for PowerPC, HP's PA-RISC Multimedia Acceleration eXtensions (MAX), Intel's MMX, iwMMXt, SSE, SSE2, SSE3 and SSSE3, AMD's 3DNow!, etc.

SIMD Multiple data streams are processed in parallel under a single instruction stream. Example of a SIMD processing architecture: a graphics processor, where instructions for translation, rotation or other operations are applied to many data items at once. Arrays and matrices are also processed in SIMD fashion.
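As a concrete sketch of the SIMD model, this C fragment uses Intel SSE intrinsics (one of the extensions listed above); a single _mm_add_ps instruction adds four packed floats at once. The data values are illustrative only; compile on x86 with, for example, gcc -msse.

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    __m128 va = _mm_loadu_ps(a);      /* load four floats into one register */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);   /* one instruction, four additions */
    _mm_storeu_ps(c, vc);

    for (int i = 0; i < 4; i++)
        printf("%.1f ", c[i]);        /* 11.0 22.0 33.0 44.0 */
    printf("\n");
    return 0;
}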

SIMD

MISD Multiple Instruction, Single Data (MISD) describes a parallel architecture in which many functional units perform different operations, executing different instructions on the same data set. This type of architecture is used mainly in fault-tolerant computers, which execute the same instructions redundantly in order to detect and mask errors.

MISD Multiple instruction streams operate in parallel on a single data stream. Example processing architecture: critical missile-control processing, where a single data stream is processed on different processors so that any fault during processing can be detected and handled.
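A rough C sketch of the MISD idea, with a hypothetical fault-masking setup (not a real control system): three independently written routines act as three instruction streams over the same data item, and a majority vote masks a faulty result.

#include <stdio.h>

/* Three different "instruction streams" that should compute the same value. */
static int double_by_add(int x)   { return x + x; }
static int double_by_shift(int x) { return x << 1; }
static int double_by_mul(int x)   { return x * 2; }

/* Return the value agreed on by at least two of the three results. */
static int majority(int a, int b, int c) {
    if (a == b || a == c) return a;
    return b;   /* either b == c, or no majority exists */
}

int main(void) {
    int data = 21;   /* the single data stream */
    int r1 = double_by_add(data);
    int r2 = double_by_shift(data);
    int r3 = double_by_mul(data);
    printf("voted result: %d\n", majority(r1, r2, r3));   /* prints 42 */
    return 0;
}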

MISD

MIMD Multiple Instruction stream, Multiple Data stream (MIMD) describes the parallel architecture typical of multiprocessor computers. In a MIMD machine, each processor can asynchronously execute a different set of instructions, independently, on a different set of data.

MIMD The MIMD based computer systems can used the shared memory in a memory pool or work using distributed memory across heterogeneous network computers in a distributed environment. The MIMD architectures is primarily used in a number of application areas such as computer- aided design/computer-aided manufacturing, simulation, modeling, communication switches etc Multiple processing streams in parallel processiong on parallel data streams

MIMD Examples of MIMD processing architectures are supercomputers and distributed computing systems, with either distributed memory or a single shared memory.
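A minimal MIMD-style sketch in C using POSIX threads, with illustrative tasks and data: each thread runs its own instruction stream on its own data, asynchronously. Compile with, for example, gcc file.c -pthread.

#include <stdio.h>
#include <pthread.h>

static void *sum_task(void *arg) {          /* one instruction stream */
    int *v = arg;
    int s = 0;
    for (int i = 0; i < 4; i++) s += v[i];
    printf("sum thread: %d\n", s);
    return NULL;
}

static void *max_task(void *arg) {          /* a different instruction stream */
    int *v = arg;
    int m = v[0];
    for (int i = 1; i < 4; i++) if (v[i] > m) m = v[i];
    printf("max thread: %d\n", m);
    return NULL;
}

int main(void) {
    int data_a[4] = {1, 2, 3, 4};           /* one data stream */
    int data_b[4] = {7, 5, 9, 2};           /* another data stream */
    pthread_t t1, t2;

    pthread_create(&t1, NULL, sum_task, data_a);
    pthread_create(&t2, NULL, max_task, data_b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}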

MIMD

The MOMS taxonomy of parallel machines The SIMD/MIMD taxonomy leaves something to be desired, since there are many subclasses of MIMD that do not appear in the model, and one class (MISD) that appears in the model but not in real life. Gustafson (1990) proposes the following taxonomy.

The MOMS taxonomy of parallel machines

MOMS (Monolithic operations, monolithic storage): "apple-pie computing," i.e., conventional serial computing.

The MOMS taxonomy of parallel machines MODS (Monolithic operations, distributed storage). Examples: Connection Machine (CM-1, CM-2), MasPar.

The MOMS taxonomy of parallel machines DOMS (Distributed operations, monolithic storage). Examples: Sequent Balance and Symmetry, BBN Butterfly, Cray Y-MP.

The MOMS taxonomy of parallel machines DODS (Distributed operations, distributed storage). Examples: nCUBE, Intel iPSC, Meiko Computing Surface.