Parallel execution
Programming Language Design and Implementation (4th Edition) by T. Pratt and M. Zelkowitz, Prentice Hall, 2001, Section 11.2.1


Parallel programming principles

Variable definitions. Variables may be either mutable or definitional. Mutable variables are the common variables declared in most sequential languages: values may be assigned to them and changed during program execution. A definitional variable may be assigned a value only once.
Parallel composition. We need to add a parallel statement, which causes additional threads of control to begin executing.
Program structure. Programs may be transformational, transforming the input data into an appropriate output value, or reactive, where the program reacts to external stimuli called events.
Communication. Parallel programs must communicate with one another. Such communication will typically be via shared memory, with common data objects accessed by each parallel program, or via messages.
Synchronization. A parallel program must be able to order the execution of its various threads of control.
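The mutable/definitional distinction can be sketched in Python with a hypothetical write-once cell (the class name and methods are illustrative, not from the text):

```python
class Definitional:
    """A write-once cell: a sketch of a definitional variable."""
    _UNSET = object()

    def __init__(self):
        self._value = Definitional._UNSET

    def set(self, value):
        # A definitional variable may be assigned only once.
        if self._value is not Definitional._UNSET:
            raise RuntimeError("definitional variable already assigned")
        self._value = value

    def get(self):
        if self._value is Definitional._UNSET:
            raise RuntimeError("definitional variable not yet assigned")
        return self._value

x = Definitional()
x.set(42)            # first assignment succeeds
print(x.get())       # 42
try:
    x.set(7)         # second assignment is rejected
except RuntimeError as e:
    print("rejected:", e)
```

A mutable variable is just an ordinary Python name; the point of the definitional form is that, once bound, its value can be read safely from any thread without synchronization.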

Impact of slow memories

Historically the CPU was fast while disk, printer, and tape were slow. What to do while waiting for an I/O device? Run another program. Even today, although machines and memory are much faster, there is still a factor of 10^5 or more between the speed of the CPU and the speed of accessing information from disk. For example:
Instruction time: 50 nanoseconds
Disk access: 10 milliseconds = 10,000,000 nanoseconds
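The quoted ratio follows directly from the two figures above:

```python
instruction_ns = 50            # one instruction: 50 nanoseconds
disk_access_ns = 10_000_000    # one disk access: 10 ms = 10,000,000 ns

ratio = disk_access_ns // instruction_ns
print(ratio)   # 200000 -- about 2 * 10^5 instructions per disk access
```

So the CPU could execute roughly two hundred thousand instructions in the time of a single disk access, which is why the system runs another program instead of idling.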

Multiprogramming

Now: multiple processors, networks of machines, multiple tasks running simultaneously.
Problems:
1. How to switch among parts effectively?
2. How to pass information between 2 segments?
Context switching of environments permits concurrent execution of separate programs.

Parallel constructs

Two approaches (of many):
1. and statement (programming language level)
2. fork function (UNIX) (operating system level)

and:
Syntax: statement1 and statement2 and statement3
Semantics: all statements execute in parallel. Execution continues with the statement following the and only after all parallel parts terminate.

S1;
S1 and S2 and S3;
S4;    -- S4 runs after S1, S2, and S3 all terminate

Implementation: cactus stack.
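The and semantics (start all branches in parallel, continue only after every one terminates) can be sketched with Python threads; the statement bodies here are stand-ins:

```python
import threading

results = []
lock = threading.Lock()

def statement(name):
    # Stand-in for the body of S1 / S2 / S3.
    with lock:
        results.append(name)

# S1 and S2 and S3: start all three threads of control ...
threads = [threading.Thread(target=statement, args=(s,))
           for s in ("S1", "S2", "S3")]
for t in threads:
    t.start()
# ... and join every one before moving past the "and"
for t in threads:
    t.join()

# S4 executes only after S1, S2, S3 have all terminated
print(sorted(results))   # ['S1', 'S2', 'S3']
```

The joins implement the rule that control reaches the statement after the and only when all parallel parts are done; the order inside `results` before sorting is nondeterministic.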

Parallel storage management

Use multiple stacks, one per thread of control. A single heap can be shared.

“and” statement execution

After L1, push activation records for S1, S2, and S3 onto the stack. Each stack is independent. How to implement? Heap storage for each activation record is one way.

2. fork() function:

{
   S1;
   fork();
   if I am the parent process do {
      main task;
      sleep until the child process terminates
   }
   if I am the child process do {
      exec new process
   }
   S2;   -- S2 executes when both parent and child have completed the actions above
}

Both the parent process and the child process execute independently.
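A minimal runnable version of the fork() pattern in Python (assumes a POSIX system; os.fork is unavailable on Windows):

```python
import os
import sys

# S1: work done before the fork
print("S1: before fork")

pid = os.fork()
if pid == 0:
    # Child process: this is where a new program would be exec'ed
    print("child: doing child task")
    sys.exit(0)          # the child must exit, never fall through
else:
    # Parent process: do the main task, then sleep until the child terminates
    print("parent: doing main task")
    os.waitpid(pid, 0)

# S2: only the parent reaches this point, after the child has exited
print("S2: after both finish")
```

`os.waitpid` is the "sleep until the child process terminates" step; without it the parent could reach S2 while the child is still running.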

Tasks

A task differs little from an ordinary subprogram, except that it:
- executes independently (its own thread of control)
- requires synchronization and communication with other tasks (we will look at communication later, with semaphores)
- has a separate address space for its own activation record

Ada tasks

task Name is
   -- Declarations for synchronization and communication
end;

task body Name is
   -- Usual local declarations, as found in any subprogram
begin
   -- Sequence of statements
end;

The syntax is the same as for Ada packages.

Initiating a task via a task type:

task type Terminal is
   -- Rest of definition in the same form as above
end;

Creating task data:

A: Terminal;
B, C: Terminal;

"Allocating" task objects starts their execution.

Coroutines

Normal procedure activation works as last-in first-out (LIFO) execution: call procedure, do action, exit procedure. Coroutines differ from parallel execution in that there is a single thread of control. Consider the following example:
- an input process reads from 3 different files
- an output process writes to 4 different files

Execution of each process

Read process:
while true do
begin
   read(A, I);  resume output(I);
   read(B, I);  resume output(I);
   read(C, I);  resume output(I);
end

Write process:
while true do
begin
   resume input(I);  write(W, I);
   resume input(I);  write(X, I);
   resume input(I);  write(Y, I);
   resume input(I);  write(Z, I);
end

If each process views the other as a subroutine, we call both of these processes coroutines.
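The resume pattern can be sketched with Python generators, which suspend at yield and resume where they left off; file I/O is replaced with lists so the sketch is self-contained:

```python
def read_process(sources):
    # "Input" coroutine: after each read, yield (resume output) with the item.
    for src in sources:          # read(A, I); read(B, I); read(C, I)
        for item in src:
            yield item           # resume output(I)

def write_process(items, sinks):
    # "Output" coroutine: pulling the next item resumes the input process.
    out = [[] for _ in sinks]
    for i, item in enumerate(items):      # resume input(I)
        out[i % len(sinks)].append(item)  # write(W/X/Y/Z, I), round-robin
    return out

A, B, C = [1, 2], [3], [4, 5]
written = write_process(read_process([A, B, C]), ["W", "X", "Y", "Z"])
print(written)   # [[1, 5], [2], [3], [4]]
```

Each `next` on the generator is one resume: control transfers between the two routines while remaining a single thread, which is exactly what distinguishes coroutines from parallel execution.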

Implementation of coroutines

[Figure: the instruction sequences of the two processes, with resume arrows transferring control between them; the initial execution and second execution follow different resume paths.]

Coroutine data storage

Build both activation records together (much like variant records). For a resume statement: pick up the return address of the other coroutine from its activation record, and save the current address as the new return point in this coroutine's activation record.

[Figure: activation record for the input (read) process and for the output (write) process, each holding a resume address.]

Guarded commands
Programming Language Design and Implementation (4th Edition) by T. Pratt and M. Zelkowitz, Prentice Hall, 2001, Section 11.2.2

Nondeterministic execution

Program execution is usually a sequential, deterministic process: S1, S2, S3, ...

Problem: find a path from point A to point B on a grid.

Usual deterministic algorithms

Algorithm 1: move right to the correct column, then move up to the correct row.
Algorithm 2: move up to the correct row, then move right to the correct column.

You have no other way to think about the problem, no other constructs to help. But there is another, nondeterministic approach: move right or up until at the correct row and column, then move straight to B. The idea came from Dijkstra in 1972.

Use guards (→ and □) on a statement: P → S means S is executable if P is true.

Guarded if statement

if p1 → s1 □ p2 → s2 □ p3 → s3 □ ... fi

Semantics:
1. Some pi must be true.
2. Choose any pi that is true and execute si.
3. If all pi are false, the program halts and fails.

Note that if p then s1 else s2 is just:
if p → s1 □ not(p) → s2 fi
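A sketch of the guarded if in Python: each guard is a (predicate, action) pair, any true guard is chosen nondeterministically, and the construct fails when no guard is true (the function name and pair encoding are illustrative):

```python
import random

def guarded_if(guards):
    # guards: list of (predicate, action) pairs, i.e. p_i -> s_i
    ready = [action for predicate, action in guards if predicate()]
    if not ready:
        # all p_i false: the program halts and fails
        raise RuntimeError("guarded if: all guards false")
    random.choice(ready)()   # choose ANY true guard, nondeterministically

x = 5
out = []
guarded_if([
    (lambda: x > 0, lambda: out.append("positive")),
    (lambda: x < 0, lambda: out.append("negative")),
])
print(out)   # ['positive']
```

When several guards are true at once, `random.choice` models the "choose any" semantics: the language promises only that some true guard's statement runs, not which one.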

Guarded repetition

do p1 → s1 □ p2 → s2 □ p3 → s3 □ ... od

Semantics:
1. If all pi are false, go to the next statement.
2. Otherwise, choose any pi that is true and execute si.
3. Repeat execution of the guarded do statement.

Random walk algorithm:

do current_row ≠ B's row → move up one row
 □ current_column ≠ B's column → move right one column
od

The solution must work, yet you cannot know a priori the exact sequence of moves the program will produce.
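The random walk runs directly as a guarded do loop; this sketch assumes B lies up and to the right of the start, as in the original problem:

```python
import random

def random_walk(start, goal):
    # Guarded do: while some guard is true, fire any true guard at random.
    row, col = start
    goal_row, goal_col = goal
    steps = 0
    while True:
        guards = []
        if row != goal_row:       # current_row /= B's row -> move up
            guards.append("up")
        if col != goal_col:       # current_column /= B's column -> move right
            guards.append("right")
        if not guards:            # all guards false: exit the do ... od
            return steps
        if random.choice(guards) == "up":
            row += 1
        else:
            col += 1
        steps += 1

print(random_walk((0, 0), (3, 4)))   # always 7 steps, in some random interleaving
```

Every run reaches B in exactly (row distance + column distance) moves, but the interleaving of "up" and "right" differs between runs: the result is deterministic even though the path is not.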

Guarded commands in Ada

select   -- select statement
   when condition1 => statement1;
or
   when condition2 => statement2;
...
or
   when conditionN => statementN;
else
   statementN+1;
end select;

The use of this will become apparent when we discuss synchronization and the Ada rendezvous.