CHAPTER SEVEN PARALLEL PROCESSING © Prepared By: Razif Razali.



Contents
- Multiple Processor Organizations
- Symmetric Multiprocessors

Introduction
Traditionally, the computer has been viewed as a sequential machine. Most programming languages require the programmer to specify algorithms as sequences of instructions. A processor executes a program by executing machine instructions in sequence, one at a time, and each instruction is itself executed as a sequence of operations (fetch instruction, fetch operands, perform operation, store results).
A traditional way to increase system performance is to use multiple processors that execute in parallel to support a given workload. The two most common multiple-processor organizations are symmetric multiprocessors (SMPs) and clusters. More recently, nonuniform memory access (NUMA) systems have been introduced commercially.
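The sequential fetch-execute cycle described above can be sketched as a small interpreter loop. This is an illustrative toy, not a real instruction set: the LOAD/ADD/STORE/HALT opcodes and the single accumulator are assumptions made for the example.

```python
# A minimal sketch of the sequential fetch-execute cycle: one instruction
# is fetched, decoded, and executed at a time, in program order.
# The instruction set (LOAD/ADD/STORE/HALT) is invented for illustration.

def run(program):
    """Execute instructions one at a time: fetch, decode, execute, store."""
    memory = {}   # data memory: name -> value
    acc = 0       # single accumulator register
    pc = 0        # program counter
    while True:
        op, *args = program[pc]      # fetch and decode the next instruction
        pc += 1
        if op == "LOAD":             # fetch operand into the accumulator
            acc = args[0]
        elif op == "ADD":            # perform operation
            acc += args[0]
        elif op == "STORE":          # store result to memory
            memory[args[0]] = acc
        elif op == "HALT":
            return memory

result = run([("LOAD", 2), ("ADD", 3), ("STORE", "x"), ("HALT",)])
print(result["x"])  # 5
```

Every instruction here must wait for the previous one to finish; the parallel organizations below relax exactly that constraint.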

Multiple Processor Organization
Types of parallel processor systems:
- Single instruction, single data stream (SISD)
- Single instruction, multiple data stream (SIMD)
- Multiple instruction, single data stream (MISD)
- Multiple instruction, multiple data stream (MIMD)

Single Instruction, Single Data Stream (SISD)
A single processor executes a single instruction stream to operate on data stored in a single memory. Uniprocessors fall into this category.

Single Instruction, Multiple Data Stream (SIMD)
A single machine instruction controls the simultaneous execution of a number of processing elements on a lockstep basis. Each processing element has an associated data memory, so that each instruction is executed on a different set of data by a different processor. Examples: vector and array processors.
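The lockstep behaviour can be sketched in ordinary Python: one instruction at a time is broadcast to every "processing element", each holding its own private data item. This is a conceptual model only; real SIMD hardware applies the operation to all elements in the same clock cycle.

```python
# A sketch of SIMD lockstep execution: each "processing element" holds its
# own data item, and one instruction at a time is applied to all of them.

def simd_run(instructions, data):
    """Apply each instruction to every element's private data in lockstep."""
    for op in instructions:
        data = [op(x) for x in data]  # one instruction, many data streams
    return data

# One instruction stream (double, then increment) over four data elements.
out = simd_run([lambda x: x * 2, lambda x: x + 1], [1, 2, 3, 4])
print(out)  # [3, 5, 7, 9]
```

Note that all elements always see the same instruction; only the data differs, which is exactly the "single instruction, multiple data" property.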

MISD and MIMD
Multiple instruction, single data stream (MISD): a sequence of data is transmitted to a set of processors, each of which executes a different instruction sequence. This organization has never been implemented.
Multiple instruction, multiple data stream (MIMD): a set of processors simultaneously execute different instruction sequences on different sets of data. SMPs, clusters, and NUMA systems are all MIMD. MIMD is divided into two classes:
- Shared memory (tightly coupled)
- Distributed memory (loosely coupled)
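The shared-memory MIMD case can be sketched with threads: two threads run different instruction sequences on different data, and communicate through one shared structure. The worker names and the shared dictionary are assumptions made for the example.

```python
# A sketch of shared-memory (tightly coupled) MIMD: two threads execute
# different instruction sequences on different data sets, reading and
# writing a single shared memory (here, one dictionary guarded by a lock).
import threading

shared = {}                 # the shared memory
lock = threading.Lock()

def summer(data):           # one instruction sequence...
    total = sum(data)
    with lock:
        shared["sum"] = total

def maxer(data):            # ...and a different one, on different data
    m = max(data)
    with lock:
        shared["max"] = m

t1 = threading.Thread(target=summer, args=([1, 2, 3],))
t2 = threading.Thread(target=maxer, args=([10, 4, 7],))
t1.start(); t2.start()
t1.join(); t2.join()
print(shared)  # {'sum': 6, 'max': 10} (key order may vary)
```

In the distributed-memory (loosely coupled) case each processor would instead hold a private memory and exchange results by message passing rather than through a shared structure.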

[Figure: MIMD with shared memory]

[Figure: MIMD with distributed memory]

[Figure: Taxonomy of parallel processor architectures]

Symmetric Multiprocessor
A standalone computer with the following characteristics:
- There are two or more similar processors of comparable capability
- The processors share the same main memory and I/O facilities and are interconnected by a bus or other internal connection scheme, such that memory access time is approximately the same for each processor
- All processors share access to I/O devices, either through the same channels or through different channels that provide paths to the same devices
- All processors can perform the same functions (hence the term symmetric)
- The system is controlled by an integrated operating system that provides interaction between processors and their programs at the job, task, file, and data element levels

Symmetric Multiprocessor Advantages
- Performance: if some work can be done in parallel, a system with multiple processors will yield greater performance than one with a single processor of the same type
- Availability: since all processors can perform the same functions, failure of a single processor does not stop the system
- Incremental growth: users can enhance performance by adding additional processors
- Scaling: vendors can offer a range of products based on the number of processors
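The incremental-growth and symmetry points above can be illustrated by splitting one workload across a configurable number of identical workers. Thread counts stand in for processors here; this is a sketch of the idea, and real SMP speedup requires CPU-bound work running on separate cores.

```python
# A sketch of incremental growth on a symmetric system: the same workload
# is divided across n identical workers, and because every worker can
# perform the same function, the result is independent of the worker count.
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    """The identical function every worker performs on its share of the data."""
    return sum(x * x for x in chunk)

def run_on(n_workers, data):
    """Split data into n_workers interleaved chunks and process in parallel."""
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(work, chunks))

data = list(range(100))
# The answer is identical with 1, 2, or 4 workers; only throughput changes.
print(run_on(1, data) == run_on(4, data))  # True
```

The same symmetry underlies the availability advantage: if one worker is removed, the remaining ones can still perform every function, just more slowly.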

[Figure: Block diagram of a tightly coupled multiprocessor]

Conclusions
Types of parallel processor systems:
- Single instruction, single data stream (SISD)
- Single instruction, multiple data stream (SIMD)
- Multiple instruction, single data stream (MISD)
- Multiple instruction, multiple data stream (MIMD)
Symmetric multiprocessors (SMPs)

END OF CHAPTER SEVEN