Introduction to Parallel Processing


Introduction to Parallel Processing (CH03, TECH Computer Science)
3.1 Basic concepts
3.2 Types and levels of parallelism
3.3 Classification of parallel architecture
3.4 Basic parallel techniques
3.5 Relationships between languages and parallel architecture

3.1 Basic concepts
The concept of a program:
- an ordered set of instructions (the programmer's view)
- an executable file (the operating system's view)

The concept of a process
From the OS's view, a process relates to execution.
Process creation:
- setting up the process description
- allocating an address space
- loading the program into the allocated address space
- passing the process description to the scheduler
Process states:
- ready to run
- running
- waiting

Process spawning (independent processes)
Figure 3.3 Process spawning: a parent process (A) spawns child processes (B, C), which may in turn spawn further processes (D, E).

3.1.3 The concept of thread
- threads are smaller chunks of code (lightweight)
- threads are created within, and belong to, a process
- for parallel thread processing, scheduling is performed on a per-thread basis
- finer grain, less overhead when switching from thread to thread

A process may be single-threaded or multi-threaded (its threads are dependent).
Figure 3.5 Thread tree: a process and the threads belonging to it.

Three basic methods for creating and terminating threads:
1. unsynchronized creation and unsynchronized termination
   - calling library functions: CREATE_THREAD, START_THREAD
2. unsynchronized creation and synchronized termination
   - FORK and JOIN
3. synchronized creation and synchronized termination
   - COBEGIN and COEND
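The second method (FORK/JOIN) can be sketched with Python's standard threading module, where `Thread.start()` plays the role of FORK and `Thread.join()` plays the role of JOIN. The worker function and its squaring computation are illustrative choices, not part of the original slides.

```python
import threading

results = {}

def worker(tid):
    # each forked thread computes independently of its siblings
    results[tid] = tid * tid

# FORK: threads are created and started one by one (unsynchronized creation)
threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()

# JOIN: the parent blocks here until every child has terminated
# (synchronized termination)
for t in threads:
    t.join()

print(sorted(results.items()))  # [(0, 0), (1, 1), (2, 4)]
```

COBEGIN/COEND (method 3) differs only in that creation is also synchronized: all threads are launched together as one construct and the construct completes when all of them have finished.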

3.1.4 Processes and threads in languages
Black-box view (T: thread):
(a) FORK/JOIN: thread T0 forks threads T1, T2, ..., Tn and later joins them; FORK and JOIN pairs may be nested.
(b) COBEGIN/COEND: threads T0 ... Tn are created together at COBEGIN and all terminate at COEND.

3.1.5 The concept of concurrent execution (N-client, 1-server)
The server may schedule its clients:
- non-preemptively, or
- preemptively: time-shared or prioritized (by priority)

Parallel execution: the N-client, N-server model; clients and servers may interact synchronously or asynchronously.

Concurrent and parallel programming languages
Table 3.1 Classification of programming languages.

3.2 Types and levels of parallelism
Available and utilized parallelism:
- available: in the program or in the problem solution
- utilized: during execution
Types of available parallelism:
- functional: arises from the logic of a problem solution
- data: arises from data structures

Figure 3.11 Available and utilized levels of functional parallelism:
- User (program) level → User level
- Procedure level → Process level
- Loop level → Thread level
- Instruction level → Instruction level
The instruction level is exploited by architectures; the upper levels are exploited by means of operating systems.

3.2.4 Utilization of functional parallelism
Available parallelism can be utilized by:
- architectures: instruction-level parallel architectures
- compilers: parallel optimizing compilers
- operating systems: multitasking

3.2.5 Concurrent execution models
- User level: multiprogramming, time sharing
- Process level: multitasking
- Thread level: multithreading

3.2.6 Utilization of data parallelism by using data-parallel architecture
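Data parallelism means applying the same operation to every element of a data structure, with each application independent of the others. A minimal Python sketch (the element-wise addition and the thread pool are illustrative choices; a true data-parallel architecture would do this in lockstep hardware):

```python
from concurrent.futures import ThreadPoolExecutor

A = [1, 2, 3, 4]
B = [10, 20, 30, 40]

# the same operation (+) is applied to every element pair;
# each addition is independent, so the pairs may be processed concurrently
with ThreadPoolExecutor() as pool:
    C = list(pool.map(lambda ab: ab[0] + ab[1], zip(A, B)))

print(C)  # [11, 22, 33, 44]
```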

3.3 Classification of parallel architectures
Flynn's classification:
- SISD (Single Instruction stream, Single Data stream)
- SIMD (Single Instruction stream, Multiple Data streams)
- MISD (Multiple Instruction streams, Single Data stream)
- MIMD (Multiple Instruction streams, Multiple Data streams)

Parallel architectures (PAs) fall into two branches:
- Data-parallel architectures (Part II): vector architectures, associative and neural architectures, SIMDs, systolic architectures (DPs)
- Function-parallel architectures:
  - Instruction-level PAs (ILPs, Part III): pipelined processors, VLIWs, superscalar processors
  - Thread-level PAs (Part IV)
  - Process-level PAs (MIMDs): distributed-memory MIMD (multi-computer), shared-memory MIMD (multiprocessor)

3.4 Basic parallel techniques
Pipelining (time):
- a number of functional units are employed in sequence to perform a single computation
- each computation proceeds through a number of steps
Replication (space):
- a number of functional units perform multiple computations simultaneously
- more processors, more memory, more I/O, more computers
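The two techniques can be contrasted in a small Python sketch: pipelining chains functional units in sequence over a stream of inputs, while replication applies identical units side by side. The stage functions (add one, then double) are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

# Pipelining (time): functional units applied in sequence;
# each input flows through every stage
def stage1(items):
    for x in items:
        yield x + 1        # first functional unit

def stage2(items):
    for x in items:
        yield x * 2        # second functional unit

pipelined = list(stage2(stage1(range(4))))
print(pipelined)  # [2, 4, 6, 8]

# Replication (space): several identical functional units work
# on different inputs simultaneously
with ThreadPoolExecutor(max_workers=4) as pool:
    replicated = list(pool.map(lambda x: (x + 1) * 2, range(4)))
print(replicated)  # [2, 4, 6, 8]
```

Both arrangements compute the same results; pipelining gains throughput by overlapping stages in time, replication by spreading work across space.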

3.5 Relationships between languages and parallel architecture
SPMD (Single Program, Multiple Data):
- a loop is split into N threads that work on different invocations of the same loop
- the threads can execute the same code at different speeds
- the parallel threads are synchronized at the end of the loop (barrier synchronization)
- uses MIMD
Data-parallel languages:
- e.g. DAP Fortran: C = A + B
- use SIMD
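The SPMD pattern with barrier synchronization can be sketched using Python's `threading.Barrier`: every thread runs the same code on its own slice of the loop range, and none proceeds past the loop until all have arrived. The partial-sum workload and the slice size are illustrative assumptions.

```python
import threading

N = 4
barrier = threading.Barrier(N)
data = list(range(8))
partial = [0] * N

def spmd_worker(tid):
    # every thread executes the same code, on its own loop invocations;
    # threads may run at different speeds
    lo, hi = tid * 2, tid * 2 + 2
    partial[tid] = sum(data[lo:hi])
    # barrier synchronization at the end of the loop:
    # wait() returns only once all N threads have reached it
    barrier.wait()

threads = [threading.Thread(target=spmd_worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(partial))  # 28
```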

Synchronization mechanisms
Figure 3.17 Progress of language constructs used for synchronization: test-and-set → semaphore → conditional critical region → monitor → rendezvous; on the message-passing side: send/receive message, broadcast, shift, net_send/net_receive (processor form), remote procedure calls.

Using a binary semaphore (with busy waiting) or a queue semaphore
Figure 3.18 Using a semaphore to solve the mutual exclusion problem: processes P1 and P2 each execute P(S) to enter their critical region, access the shared data structure, and execute V(S) on leaving.
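The scheme of Figure 3.18 can be sketched with Python's `threading.Semaphore`: `acquire()` corresponds to P(S) and `release()` to V(S), and the shared counter stands in for the shared data structure (the counter and iteration count are illustrative assumptions).

```python
import threading

S = threading.Semaphore(1)   # binary semaphore guarding the shared data
counter = 0                  # the shared data structure

def process():
    global counter
    for _ in range(10_000):
        S.acquire()          # P(S): enter the critical region
        counter += 1         # only one thread at a time is in here
        S.release()          # V(S): leave the critical region

p1 = threading.Thread(target=process)
p2 = threading.Thread(target=process)
p1.start(); p2.start()
p1.join(); p2.join()

print(counter)  # 20000 — no updates lost, mutual exclusion held
```

Without the P(S)/V(S) pair the two threads could interleave their read-modify-write sequences and lose updates.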

Parallel distributed computing:
- Ada: uses the rendezvous concept, which combines features of RPC and monitors
- PVM (Parallel Virtual Machine): supports workstation clusters
- MPI (Message-Passing Interface): a programming interface for parallel computers
- CORBA?
- Windows NT?
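The send/receive message-passing style underlying PVM and MPI can be sketched with a plain Python queue standing in for the communication channel (this is a conceptual stand-in, not the actual MPI or PVM API; the sentinel-based shutdown is an illustrative choice):

```python
import queue
import threading

channel = queue.Queue()   # stands in for a message-passing channel

def sender():
    for i in range(3):
        channel.put(("msg", i))   # send: transmit a message
    channel.put(None)             # sentinel: no more messages

received = []

def receiver():
    while True:
        msg = channel.get()       # receive: blocks until a message arrives
        if msg is None:
            break
        received.append(msg)

ts = threading.Thread(target=sender)
tr = threading.Thread(target=receiver)
ts.start(); tr.start()
ts.join(); tr.join()

print(received)  # [('msg', 0), ('msg', 1), ('msg', 2)]
```

In real MPI the processes would run on separate nodes with no shared memory, and the channel would be the interconnect; the blocking-receive semantics are the same.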

Summary of forms of parallelism See Table 3.3