Computer Architecture Parallel Processing


Computer Architecture: Parallel Processing
Ola Flygt, Växjö University
http://w3.msi.vxu.se/users/ofl/
Ola.Flygt@msi.vxu.se
+46 470 70 86 49

Outline (CH03)
- Basic concepts
- Types and levels of parallelism
- Classification of parallel architecture
- Basic parallel techniques
- Relationships between languages and parallel architecture

Basic concepts
The concept of program:
- an ordered set of instructions (programmer's view)
- an executable file (operating system's view)

The concept of process
From the OS's view, a process relates to execution.
Process creation:
- setting up the process description
- allocating an address space
- loading the program into the allocated address space
- passing the process description to the scheduler
Process states: ready to run, running, wait

Process spawning (independent processes)
[diagram: process tree of independently spawned processes]

The concept of thread
- a thread is a smaller chunk of code (lightweight)
- threads are created within, and belong to, a process
- for parallel thread processing, scheduling is performed on a per-thread basis
- finer grain, so less overhead when switching from thread to thread

Single-threaded and multi-threaded (dependent) processes
[diagram: thread tree showing a process and the threads within it]

Three basic methods for creating and terminating threads
1. unsynchronized creation and unsynchronized termination
   (calling library functions: CREATE_THREAD, START_THREAD)
2. unsynchronized creation and synchronized termination
   (FORK and JOIN)
3. synchronized creation and synchronized termination
   (COBEGIN and COEND)

Processes and threads in languages: black-box view
[diagrams: a thread T spawning threads T0 … Tn with FORK/JOIN, and the equivalent COBEGIN … COEND structure]

The concept of concurrent execution (N-client, 1-server)
- non-pre-emptive
- pre-emptive: time-shared or prioritized (by priority)
[diagram: N clients queuing for a single server]

Parallel execution (N-client, N-server model)
- synchronous or asynchronous servers

Concurrent and parallel programming languages
[table: classification of programming languages]

Types and levels of parallelism
Available and utilized parallelism:
- available: in the program or in the problem solution
- utilized: during execution
Types of available parallelism:
- functional: arises from the logic of a problem solution
- data: arises from data structures

Available and utilized levels of functional parallelism

  Available level         Utilized level
  User (program) level    User level (2)
  Procedure level         Process level (2)
  Loop level              Thread level (1)
  Instruction level       Instruction level (1)

(1) exploited by architectures
(2) exploited by means of operating systems

Utilization of functional parallelism
Available parallelism can be utilized by:
- the architecture: instruction-level parallel architectures
- compilers: parallel optimizing compilers
- the operating system: multitasking

Concurrent execution models
- User level: multiprogramming, time sharing
- Process level: multitasking
- Thread level: multithreading

Utilization of data parallelism
- by using a data-parallel architecture
- by converting it into functional parallelism (e.g. loop constructs)

Classification of parallel architectures
Flynn's classification:
- SISD (Single Instruction, Single Data)
- SIMD (Single Instruction, Multiple Data)
- MISD (Multiple Instruction, Single Data)
- MIMD (Multiple Instruction, Multiple Data)

Parallel architectures (classification tree)
- Data-parallel architectures (DPs, Part III): vector, associative and neural, SIMD, systolic architectures
- Function-parallel architectures:
  - Instruction-level PAs (ILPs, Part II): pipelined, VLIW, superscalar processors
  - Thread-level and process-level PAs (MIMDs, Part IV): distributed-memory MIMD (multicomputers), shared-memory MIMD (multiprocessors)

Basic parallel techniques
Pipelining (time):
- a number of functional units are employed in sequence to perform a single computation
- each computation is broken into a number of steps
Replication (space):
- a number of functional units perform multiple computations simultaneously
- more processors, more memory, more I/O, more computers

Relation between basic techniques and architectures
[table: pipelining and replication mapped to the architecture classes]

Relationships between languages and parallel architecture
SPMD (Single Procedure, Multiple Data):
- a loop is split into N threads that work on different invocations of the same loop
- threads can execute the same code at different speeds
- the parallel threads synchronize at the end of the loop (barrier synchronization)
- uses MIMD
Data-parallel languages:
- e.g. DAP Fortran: C = A + B (A, B and C are arrays)
- use SIMD
Other types, such as vector machines or systolic arrays, do not require any specific language support.

Synchronization mechanisms
- test_and_set
- semaphore
- conditional critical region
- monitor
- rendezvous
- send/receive message
- broadcast, shift
- net_send / net_receive (processor form)
- remote procedure calls

Using semaphores to handle mutual exclusion
[diagram: two processes busy-wait on semaphore S; each executes P(S), enters its critical region to access the shared data structure, then executes V(S)]

Parallel distributed computing
- Ada: uses the rendezvous concept, which combines features of RPC and monitors
- PVM (Parallel Virtual Machine): supports workstation clusters
- MPI (Message-Passing Interface): a programming interface for parallel computers

Summary of forms of parallelism
[table]