Parallel execution. Programming Language Design and Implementation (4th Edition) by T. Pratt and M. Zelkowitz, Prentice Hall, 2001, Section 11.2.1.



2 Parallel programming principles
Variable definitions. Variables may be either mutable or definitional. Mutable variables are the ordinary variables of most sequential languages: values may be assigned to them and changed during program execution. A definitional variable may be assigned a value only once.
Parallel composition. We need to add a parallel statement, which causes additional threads of control to begin executing.
Program structure. A parallel program may be transformational, transforming its input data into an appropriate output value, or reactive, reacting to external stimuli called events.
Communication. The parts of a parallel program must communicate with one another, typically via shared memory (common data objects accessed by each parallel task) or via messages.
Synchronization. A parallel program must be able to order the execution of its various threads of control.
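The mutable/definitional distinction can be sketched in Python with a hypothetical single-assignment cell (the class name and its set/get API are invented for illustration; they are not from the text):

```python
class Definitional:
    """Single-assignment cell: the value may be set once, then never changed."""
    _UNSET = object()

    def __init__(self):
        self._value = Definitional._UNSET

    def set(self, value):
        if self._value is not Definitional._UNSET:
            raise RuntimeError("definitional variable already assigned")
        self._value = value

    def get(self):
        if self._value is Definitional._UNSET:
            raise RuntimeError("definitional variable not yet assigned")
        return self._value

x = Definitional()
x.set(42)        # the one permitted assignment
print(x.get())   # -> 42
# x.set(7) would raise RuntimeError: the value can never be changed
```

Definitional variables matter for parallelism because a value that can never change needs no locking: any thread may read it safely once it is set.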

3 Impact of slow memories
Historically the CPU was fast while disk, printer, and tape were slow. What to do while waiting for an I/O device? Run another program. Even today, although machines and memory are much faster, there is still a factor of 10^5 or more between the speed of the CPU and the speed of accessing information on disk. For example:
 Instruction time: 50 nanoseconds
 Disk access: 10 milliseconds = 10,000,000 nanoseconds
so roughly 200,000 instructions could execute during a single disk access.

4 Multiprogramming
Now: multiple processors, networks of machines, multiple tasks running simultaneously. Problems:
1. How to switch among parts effectively?
2. How to pass information between two segments?
Context switching of environments permits concurrent execution of separate programs.

5 Parallel constructs
Two approaches (of many):
1. and statement (programming language level)
2. fork function (UNIX) (operating system level)
and:
Syntax: statement1 and statement2 and statement3
Semantics: all statements execute in parallel; execution proceeds to the statement following the and only after all parallel parts terminate.
S1;
S1 and S2 and S3;
S4   -- S4 runs after S1, S2, and S3 terminate
Implementation: cactus stack
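A rough Python sketch of the and statement's semantics, using threads (the helper name run_in_parallel is invented; the cactus-stack implementation the slide mentions is not modeled here):

```python
import threading

def run_in_parallel(*statements):
    """Approximate 'S1 and S2 and S3': start each statement in its own
    thread of control, then wait for all of them before continuing."""
    threads = [threading.Thread(target=s) for s in statements]
    for t in threads:
        t.start()
    for t in threads:
        t.join()   # control passes the 'and' only when all parts terminate

results = []
run_in_parallel(lambda: results.append("S1"),
                lambda: results.append("S2"),
                lambda: results.append("S3"))
# the statement after the 'and' runs here; completion order varies,
# but all three parallel parts are guaranteed to have finished
print(sorted(results))
```

The joins are the essential point: the statement after the and cannot begin until every parallel part has terminated.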

6 Parallel storage management
Use multiple stacks. Can use one heap.

7 “and” statement execution
After L1, push activation records for S1, S2, and S3, each onto its own stack; each stack is independent. How to implement? Heap storage for each activation record is one way.
2. fork() function:
S1;
fork();
if I am the parent process:
    do the main task;
    sleep until the child process terminates
if I am the child process:
    exec the new process
S2   -- S2 executes when both parent and child have completed the actions above
The parent process and the child process execute independently.
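The fork() pattern above can be sketched with Python's os.fork on a POSIX system (the printed messages are illustrative; a real child would usually exec a new program where this one simply exits):

```python
import os

# POSIX-only sketch of the fork() pattern described above.
pid = os.fork()            # after this call, two processes run the same code
if pid > 0:
    # parent process: do the main task, then sleep until the child terminates
    print("parent: main task")
    os.waitpid(pid, 0)     # block until the child process exits
    print("parent: child finished; S2 can run now")
else:
    # child process: would typically exec() a new program; here it just exits
    print("child: new task")
    os._exit(0)
```

os.waitpid plays the role of "sleep until child process terminates" in the slide's pseudocode; only after it returns does the parent proceed to S2.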

8 Tasks
A task differs little from an ordinary subprogram:
 it has independent execution (its own thread of control)
 it requires synchronization and communication with other tasks; we will look at communication later (semaphores)
 it has a separate address space for its own activation record

9 Ada tasks
task Name is
   -- Declarations for synchronization and communication
end;
task body Name is
   -- Usual local declarations, as found in any subprogram
begin
   -- Sequence of statements
end;
The syntax is the same as Ada packages.
Declaring a task type:
task type Terminal is
   -- Rest of definition in the same form as above
end;
Creating task data:
A : Terminal;
B, C : Terminal;
“Allocating” task objects begins their execution.
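A loose Python analog of that last point, where merely creating a task object starts it running (the Terminal class here is a hypothetical stand-in built on threads; it imitates the behavior, not Ada's semantics):

```python
import threading
import queue

class Terminal(threading.Thread):
    """Rough analog of the Ada task type above: constructing an instance
    immediately starts its independent thread of control."""
    def __init__(self, task_name, log):
        super().__init__()
        self.task_name = task_name
        self.log = log
        self.start()              # execution begins at object creation, as in Ada

    def run(self):
        self.log.put(self.task_name + " running")

log = queue.Queue()
a = Terminal("A", log)            # like 'A : Terminal;' -- starts running at once
b = Terminal("B", log)            # like 'B, C : Terminal;' (C omitted for brevity)
a.join()
b.join()
print(sorted(log.queue))
```

Calling start() inside __init__ is what mimics Ada's rule that declaring a task object activates it; ordinary Python threads must be started explicitly.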

10 Coroutines
Normal procedure activation works as last-in first-out (LIFO) execution: call procedure, do action, exit procedure. Coroutines differ from parallel execution in that there is a single thread of control. Consider the following example: an input process reads from 3 different files, and an output process writes to 4 different files.

11 Execution of each process

Read process                  Write process
while true do begin           while true do begin
   read(A,I)                     resume input(I)
   resume output(I)              write(W,I)
   read(B,I)                     resume input(I)
   resume output(I)              write(X,I)
   read(C,I)                     resume input(I)
   resume output(I)              write(Y,I)
end                              resume input(I)
                                 write(Z,I)
                              end

If each process views the other as a subroutine, we call both of these processes coroutines.
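The resume-based exchange can be approximated with a Python generator, whose suspend/resume behavior mirrors a coroutine at the point of each yield (the data and names are illustrative; only the reader side is written as a coroutine here, with the consuming loop playing the writer):

```python
def reader(sources):
    """Input coroutine: reads round-robin from several sources. Each yield
    acts like 'resume output(I)': the reader suspends, handing item I over,
    and continues from this exact point when it is next resumed."""
    for chunk in zip(*sources):       # one item from each of A, B, C per cycle
        for item in chunk:
            yield item

A = [1, 4]
B = [2, 5]
C = [3, 6]
written = []
for item in reader([A, B, C]):        # each loop iteration 'resumes' the reader
    written.append(item)              # the writer's half of the exchange
print(written)                        # -> [1, 2, 3, 4, 5, 6]
```

As on the slide, neither side is subordinate: the reader suspends mid-loop and resumes where it left off, rather than being called fresh each time like a subroutine.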

12 Implementation of coroutines (figure: the instructions executed by a resume output, shown for the initial execution and a second execution)

13 Coroutine data storage
Build both activation records together (much like variant records). For a resume statement: pick up the return address of the target coroutine from its activation record, and save the current address as the new return point in the resuming coroutine's activation record. The activation record for input holds the read process's resume address; the activation record for output holds the write process's resume address.

Guarded commands. Programming Language Design and Implementation (4th Edition) by T. Pratt and M. Zelkowitz, Prentice Hall, 2001, Section

15 Nondeterministic Execution
Program execution is usually a sequential, deterministic process: S1, S2, S3, ... Problem: find a path from point A to point B.

16 Usual deterministic algorithms
Algorithm 1: move right to the correct column, then move up to the correct row.
Algorithm 2: move up to the correct row, then move right to the correct column.
You have no other way to think about the problem, and no other constructs to help. But there is another, nondeterministic approach: move right or up until at the correct row and column, then move straight to B.
The idea comes from Dijkstra. Use guards (→ and □) on statements: P → S means S is executable if P is true.

17 Guarded IF statement
if p1 → s1
□ p2 → s2
□ p3 → s3
...
fi
Semantics:
1. Some pi must be true.
2. Choose any pi that is true and execute si.
3. If all pi are false, the program halts and fails.
Note that if p then s1 else s2 is just:
if p → s1 □ not(p) → s2 fi
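A small Python sketch of the guarded if semantics, assuming guards and statements are passed as callables (the helper name guarded_if is invented for illustration):

```python
import random

def guarded_if(arms):
    """arms: list of (guard, statement) pairs. Nondeterministically execute
    one statement whose guard is true; halt and fail if no guard holds."""
    open_arms = [s for p, s in arms if p()]
    if not open_arms:
        raise RuntimeError("guarded if: all guards false, program fails")
    random.choice(open_arms)()    # any statement with a true guard may be chosen

x = 5
result = []
guarded_if([(lambda: x > 0, lambda: result.append("positive")),
            (lambda: x < 0, lambda: result.append("negative"))])
print(result)                     # -> ['positive']
```

The random.choice is the nondeterminism: when several guards are true at once, the construct is free to pick any of them, and a correct program must work whichever is picked.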

18 Guarded repetition
do p1 → s1
□ p2 → s2
□ p3 → s3
...
od
Semantics:
1. If all pi are false, go to the next statement.
2. Otherwise, choose any pi that is true and execute si.
3. Repeat execution of the guarded do statement.
Random walk algorithm:
do current_row not B's row → move up one row
□ current_column not B's column → move right one column
od
The solution must work, yet you cannot know a priori the exact sequence of moves the program will produce.
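The random-walk loop can be sketched in Python, choosing any true-guarded move at random (the coordinates and helper name are illustrative):

```python
import random

def random_walk(start, goal):
    """Guarded do: while some guard is true, pick any true-guarded move;
    when all guards are false, the loop exits (we have arrived at B)."""
    row, col = start
    path = [(row, col)]
    while True:
        moves = []
        if row != goal[0]:
            moves.append("up")        # guard: current_row not B's row
        if col != goal[1]:
            moves.append("right")     # guard: current_column not B's column
        if not moves:                 # all guards false: leave the do...od
            return path
        if random.choice(moves) == "up":
            row += 1
        else:
            col += 1
        path.append((row, col))

path = random_walk((0, 0), (2, 3))
print(path[-1])   # always ends at B = (2, 3), but the route varies run to run
```

Every run terminates at B after exactly row-distance + column-distance steps; only the interleaving of "up" and "right" moves is nondeterministic, which is precisely the slide's point.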

19 Guarded commands in Ada
select   -- select statement
   when condition1 => statement1
or
   when condition2 => statement2
...
or
   when conditionN => statementN
else
   statementN+1
end select;
The use of this will become apparent when we discuss synchronization and the Ada rendezvous.