2. Processes and Interactions


2.1 The Process Notion
2.2 Defining and Instantiating Processes
  – Precedence Relations
  – Implicit Process Creation
  – Dynamic Creation With fork And join
  – Explicit Process Declarations
2.3 Basic Process Interactions
  – Competition: The Critical Section Problem
  – Cooperation
2.4 Semaphores
  – Semaphore Operations and Data
  – Mutual Exclusion
  – Producer/Consumer Situations
2.5 Event Synchronization
CompSci 143A Spring, 2013

Processes
A process is the activity of executing a program on a CPU; it is also called a task.
Conceptually, each process has its own CPU and all processes run concurrently.
  – Physical concurrency = parallelism (requires multiple CPUs)
  – Logical concurrency = time-sharing one CPU
Processes cooperate (shared memory, messages, synchronization) and compete for resources.

Advantages of Process Structure
  – Hardware-independent solutions: processes cooperate and compete correctly, regardless of the number of CPUs.
  – Structuring mechanism: tasks are isolated, with well-defined interfaces.

Defining/Instantiating Processes
We need to:
  – Define what each process does
  – Specify precedence relations: when processes start and stop executing, relative to each other
  – Create processes

Specifying Precedence Relations
  – Process-flow graphs (unrestricted)
  – Properly nested expressions/graphs (also known as series-parallel graphs)

Process Flow Graphs
  – Directed graphs
  – Edges represent processes
  – Vertices represent initiation and termination of processes

Examples of Precedence Relationships (Process Flow Graphs): Figure 2-1

Process Flow Graphs
The expression (a + b) * (c + d) - (e / f) gives rise to the graph in Figure 2-2.

(Unrestricted) Process Flow Graphs
Any directed acyclic graph (DAG) corresponds to an unrestricted process flow graph, and conversely.
This may be too general (like an unrestricted goto in sequential programming).

Properly Nested Expressions
Two primitives, which can be nested:
  – Serial execution, expressed as S(p1, p2, …): execute p1, then p2, then …
  – Parallel execution, expressed as P(p1, p2, …): execute p1, p2, … concurrently
A graph is properly nested if it corresponds to a properly nested expression.

Examples of Precedence Relationships (Process Flow Graphs): Figure 2-1

Properly Nested Process Flow Graphs
  – (c) corresponds to the properly nested expression S(p1, P(p2, S(p3, P(p4, p5)), p6), P(p7, p8))
  – (d) is not properly nested (proof: text, page 44)

Process Creation
  – Implicit process creation: cobegin // coend, forall statement
  – Explicit process creation: fork/join
  – Explicit process declarations/classes

Implicit Process Creation
Processes are created dynamically using language constructs; the process is not explicitly declared or initiated.
  – cobegin/coend statement
  – Data parallelism: the forall statement

Cobegin/coend Statement
Syntax:
  cobegin C1 // C2 // … // Cn coend
Meaning:
  – All Ci may proceed concurrently.
  – The statement following cobegin/coend proceeds only when all of the Ci terminate.
cobegin/coend statements have the same expressive power as the S/P notation:
  – S(a,b) ≡ a; b (sequential execution by default)
  – P(a,b) ≡ cobegin a // b coend
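cobegin/coend is not a C construct, but its effect can be sketched with POSIX threads: spawn one thread per branch, then join them all before proceeding. The branch bodies and names below are illustrative only.

```c
#include <pthread.h>

/* Each "branch" of the cobegin is one thread function. */
static int r1, r2;

static void *branch_a(void *arg) { r1 = 10; return NULL; }
static void *branch_b(void *arg) { r2 = 32; return NULL; }

/* cobegin branch_a // branch_b coend */
void cobegin_demo(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, branch_a, NULL);
    pthread_create(&t2, NULL, branch_b, NULL);
    /* "coend": the caller proceeds only after all branches terminate */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
}
```

The joins are what give cobegin/coend its meaning: without them the statement after the construct could run before the branches finish.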

cobegin/coend Example (Figure 2-4)
  cobegin
    Time_Date //
    Mail //
    { Edit; { Compile; Load; Execute } } //
    { Edit; cobegin Print // Web coend }
  coend

Data Parallelism
The same code is applied to different data.
The forall statement:
  forall (parameters) statements
  – Parameters specify a set of data items.
  – Statements are executed for each item concurrently.

Example of forall Statement: Matrix Multiply
  forall ( i:1..n, j:1..m ) {
    A[i][j] = 0;
    for ( k=1; k<=r; ++k )
      A[i][j] = A[i][j] + B[i][k]*C[k][j];
  }
Each inner product is computed sequentially; all inner products are computed in parallel.
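C has no forall, but the same shape can be sketched with one POSIX thread per row of the result (a coarser grain than one thread per element). The sizes and names below are made up for illustration.

```c
#include <pthread.h>

#define N 2   /* rows of A and B */
#define R 3   /* columns of B, rows of C */
#define M 2   /* columns of A and C */

static double A[N][M], B[N][R], C[R][M];

/* One thread computes one row of A: the body of the forall. */
static void *row_worker(void *arg) {
    int i = (int)(long)arg;
    for (int j = 0; j < M; j++) {
        A[i][j] = 0;
        for (int k = 0; k < R; k++)
            A[i][j] += B[i][k] * C[k][j];
    }
    return NULL;
}

/* forall (i:0..N-1): spawn all workers, then wait for all of them. */
void parallel_matmul(void) {
    pthread_t t[N];
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, row_worker, (void *)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
}
```

Note that the rows can safely run in parallel because each worker writes only its own row of A.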

Explicit Process Creation
  – Using fork/join
  – Explicit process declarations/classes

Explicit Process Creation: fork/join
  – cobegin/coend is limited to properly nested graphs.
  – forall is limited to data parallelism.
  – fork/join can express arbitrary functional parallelism (any process flow graph).

The fork and join Primitives
  – Syntax: fork x. Meaning: create a new process that begins executing at label x.
  – Syntax: join t,y. Meaning: t = t-1; if (t==0) goto y;
The join operation must be indivisible. (Why?)

fork/join Example (the graph in Figure 2-1(d))
  t1 = 2; t2 = 3;
  p1; fork L2; fork L5; fork L7; quit;
  L2: p2; fork L3; fork L4; quit;
  L5: p5; join t1,L6; quit;
  L7: p7; join t2,L8; quit;
  L4: p4; join t1,L6; quit;
  L3: p3; join t2,L8; quit;
  L6: p6; join t2,L8; quit;
  L8: p8; quit;
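The indivisibility required of join corresponds to an atomic fetch-and-decrement: if two processes decremented t non-atomically, both could miss t reaching 0 and the continuation would never run. A minimal sketch with C11 atomics and POSIX threads (names illustrative):

```c
#include <stdatomic.h>
#include <pthread.h>

/* join t,y: atomically decrement t; the thread that brings it
 * to 0 is the one that continues at label y. */
static atomic_int t1;
static atomic_int continued;   /* counts how many threads "reached y" */

static void *worker(void *arg) {
    /* ... process body pi ... */
    if (atomic_fetch_sub(&t1, 1) == 1) {
        /* we were the last arrival: execute the continuation */
        atomic_fetch_add(&continued, 1);
    }
    return NULL;
}

void join_demo(int nworkers) {
    pthread_t th[16];           /* nworkers assumed <= 16 */
    atomic_store(&t1, nworkers);
    atomic_store(&continued, 0);
    for (int i = 0; i < nworkers; i++)
        pthread_create(&th[i], NULL, worker, NULL);
    for (int i = 0; i < nworkers; i++)
        pthread_join(th[i], NULL);
}
```

Exactly one worker observes the counter hitting zero, which is precisely the guarantee the slide's "must be indivisible" remark is about.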

The Unix fork Statement
  procid = fork()
  – Replicates the calling process.
  – Parent and child are identical except for the value of procid (0 in the child, the child's process id in the parent).
  – Use procid to diverge parent and child:
    if (procid == 0) do_child_processing(); else do_parent_processing();
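A runnable version of the slide's pattern, with the parent waiting for the child and reading back its exit status (the status value 7 is arbitrary):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child, diverge on fork()'s return value, and return
 * the child's exit status as seen by the parent. */
int fork_demo(void) {
    pid_t procid = fork();
    if (procid == 0) {
        /* child branch: procid == 0 */
        _exit(7);
    }
    /* parent branch: procid is the child's pid */
    int status = 0;
    waitpid(procid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```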

Explicit Process Declarations
  – Designate a piece of code as a unit of execution.
  – Facilitates program structuring.
  – Instantiate statically (like cobegin) or dynamically (like fork).

Explicit Process Declarations
  process p
    process p1
      declarations_for_p1
    begin ... end
    process type p2
      declarations_for_p2
    begin ... end
  begin
    ...
    q = new p2;
    ...
  end

Thread Creation in Java
Define a runnable class:
  class MyRunnable implements Runnable { … public void run() { … } }
Instantiate the runnable, then instantiate and start a thread that runs it:
  Runnable r = new MyRunnable();
  Thread t = new Thread(r);
  t.start();

Process Interactions
  – Competition/mutual exclusion. Example: two processes both want to access the same resource.
  – Cooperation. Example: Producer → Buffer → Consumer

Process Interactions
Competition: The Critical Section Problem
  x = 0;
  cobegin
    p1: … x = x + 1; …
    //
    p2: … x = x + 1; …
  coend
After both processes execute, we should have x == 2.

The Critical Section Problem
Interleaved execution (due to parallel processing or context switching):
  p1: R1 = x;
  p2: R2 = x;
  p1: R1 = R1 + 1;
  p2: R2 = R2 + 1;
  p1: x = R1;
  p2: x = R2;
x has only been incremented once; the first update (x = R1) is lost.
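The lost update can be replayed deterministically in plain C by executing the slide's interleaving step by step in one thread:

```c
/* Deterministic replay of the lost-update interleaving above. */
int lost_update(void) {
    int x = 0;
    int R1, R2;
    R1 = x;        /* p1 reads x (0) */
    R2 = x;        /* p2 reads the same stale value (0) */
    R1 = R1 + 1;
    R2 = R2 + 1;
    x = R1;        /* p1 writes 1 */
    x = R2;        /* p2 also writes 1: p1's update is lost */
    return x;      /* 1, not the expected 2 */
}
```

In a real concurrent run this interleaving happens nondeterministically, which is what makes race conditions so hard to reproduce.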

The Critical Section Problem
Problem statement:
  cobegin
    p1: while(1) { CS_1; program_1; }
    //
    p2: while(1) { CS_2; program_2; }
    //
    ...
    pn: while(1) { CS_n; program_n; }
  coend
Guarantee mutual exclusion: at any time, at most one process may be executing within its critical section (CS_i).

The Critical Section Problem
In addition to mutual exclusion, prevent mutual blocking:
1. A process outside of its CS must not prevent other processes from entering their CSs. (No "dog in the manger")
2. A process must not be able to repeatedly reenter its CS and starve other processes. (Fairness)
3. Processes must not block each other forever. (No deadlock)
4. Processes must not repeatedly yield to each other ("after you"–"after you"). (No livelock)

The Critical Section Problem
Solving the problem is subtle. We will examine a few incorrect solutions before describing a correct one: Peterson's algorithm.

Algorithm 1
Use a single turn variable:
  int turn = 1;
  cobegin
    p1: while (1) {
      while (turn != 1);  /* wait */
      CS_1; turn = 2; program_1;
    }
    //
    p2: while (1) {
      while (turn != 2);  /* wait */
      CS_2; turn = 1; program_2;
    }
  coend
Violates blocking requirement (1): "dog in the manger."

Algorithm 2
Use two variables: c1 = 1 when p1 wants to enter its CS; c2 = 1 when p2 wants to enter its CS.
  int c1 = 0, c2 = 0;
  cobegin
    p1: while (1) {
      c1 = 1;
      while (c2);  /* wait */
      CS_1; c1 = 0; program_1;
    }
    //
    p2: while (1) {
      c2 = 1;
      while (c1);  /* wait */
      CS_2; c2 = 0; program_2;
    }
  coend
Violates blocking requirement (3), deadlock: both processes may wait forever.

Algorithm 3
Like Algorithm 2, but reset the intent variables (c1 and c2) each time:
  int c1 = 0, c2 = 0;
  cobegin
    p1: while (1) {
      c1 = 1;
      if (c2) c1 = 0;  /* go back, try again */
      else { CS_1; c1 = 0; program_1; }
    }
    //
    p2: while (1) {
      c2 = 1;
      if (c1) c2 = 0;  /* go back, try again */
      else { CS_2; c2 = 0; program_2; }
    }
  coend
Violates blocking requirements (2) and (4): fairness and livelock.

Peterson's Algorithm
  – Processes indicate intent to enter the CS as in Algorithms 2 and 3 (using the c1 and c2 variables).
  – After a process indicates its intent to enter, it (politely) tells the other process that it will wait (using the willWait variable).
  – It then waits until one of the following two conditions is true:
    – The other process is not trying to enter; or
    – The other process has said that it will wait (by changing the value of willWait).

Peterson's Algorithm
  int c1 = 0, c2 = 0, willWait;
  cobegin
    p1: while (1) {
      c1 = 1; willWait = 1;
      while (c2 && (willWait == 1));  /* wait */
      CS_1; c1 = 0; program_1;
    }
    //
    p2: while (1) {
      c2 = 1; willWait = 2;
      while (c1 && (willWait == 2));  /* wait */
      CS_2; c2 = 0; program_2;
    }
  coend
Guarantees mutual exclusion and no blocking. Assumes there are only 2 processes.
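On modern hardware Peterson's algorithm also needs sequentially consistent memory ordering, which plain C variables do not give. A sketch using C11 atomics (whose loads and stores default to seq_cst) with an increment as the critical section; the iteration count and names are arbitrary:

```c
#include <stdatomic.h>
#include <pthread.h>

static atomic_int c1, c2, willWait;
static int shared_counter;          /* protected by the CS */

static void *proc1(void *arg) {
    for (int i = 0; i < 100000; i++) {
        atomic_store(&c1, 1);
        atomic_store(&willWait, 1);
        while (atomic_load(&c2) && atomic_load(&willWait) == 1)
            ;                       /* busy-wait */
        shared_counter++;           /* critical section */
        atomic_store(&c1, 0);
    }
    return NULL;
}

static void *proc2(void *arg) {
    for (int i = 0; i < 100000; i++) {
        atomic_store(&c2, 1);
        atomic_store(&willWait, 2);
        while (atomic_load(&c1) && atomic_load(&willWait) == 2)
            ;                       /* busy-wait */
        shared_counter++;           /* critical section */
        atomic_store(&c2, 0);
    }
    return NULL;
}

int peterson_demo(void) {
    pthread_t t1, t2;
    shared_counter = 0;
    pthread_create(&t1, NULL, proc1, NULL);
    pthread_create(&t2, NULL, proc2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_counter;          /* no increments lost */
}
```

With weaker orderings (or non-atomic variables) the compiler and CPU may reorder the intent and willWait writes, breaking mutual exclusion.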

Another Algorithm for the Critical Section Problem: the Bakery Algorithm
Based on "taking a number," as in a bakery or post office:
  – A process chooses a number larger than the number held by all other processes.
  – The process waits until the number it holds is smaller than the number held by any other process trying to get into the critical section.

Code for Bakery Algorithm (First Cut)
  int number[n];  // shared array; all entries initially 0
  // Code for process i. Variables j and x are local (non-shared).
  while (1) {
    program_i;
    // Step 1: choose a number
    x = 0;
    for (j = 0; j < n; j++)
      if (j != i) x = max(x, number[j]);
    number[i] = x + 1;
    // Step 2: wait until the chosen number is the smallest outstanding number
    for (j = 0; j < n; j++)
      if (j != i)
        wait until ((number[j] == 0) or (number[i] < number[j]));
    CS_i;
    number[i] = 0;
  }

Bakery Algorithm, continued
Complication: there could be ties in Step 1, which would cause a deadlock. (Why?)
Solution: if two processes pick the same number, give priority to the process with the lower process number.

Correct Code for Bakery Algorithm
  int number[n];  // shared array; all entries initially 0
  // Code for process i. Variables j and x are local (non-shared).
  while (1) {
    program_i;
    // Step 1: choose a number
    x = 0;
    for (j = 0; j < n; j++)
      if (j != i) x = max(x, number[j]);
    number[i] = x + 1;
    // Step 2: wait until the chosen number is the smallest outstanding number
    for (j = 0; j < n; j++)
      if (j != i)
        wait until ((number[j] == 0) or
                    (number[i] < number[j]) or
                    ((number[i] == number[j]) and (i < j)));
    CS_i;
    number[i] = 0;
  }

Software Solutions to the Critical Section Problem
Drawbacks:
  – Difficult to program and to verify.
  – Processes loop while waiting (busy-wait), which wastes CPU time.
  – Applicable only to the critical section problem (competition for a resource); does not address cooperation among processes.
Alternative solution: special programming constructs (semaphores, events, monitors, …).

Semaphores
A semaphore s is a nonnegative integer with operations P and V defined on it.
Semantics:
  – P(s): if s > 0, decrement s and proceed; else wait until s > 0, then decrement s and proceed.
  – V(s): increment s by 1.
Equivalent semantics:
  – P(s): while (s < 1) /* wait */; s = s - 1;
  – V(s): s = s + 1;
The operations P and V are atomic (indivisible).
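POSIX provides exactly this abstraction: an unnamed `sem_t` with `sem_wait` as P and `sem_post` as V. A minimal single-threaded sketch of the mapping:

```c
#include <semaphore.h>

/* P(s) = sem_wait(&s), V(s) = sem_post(&s). */
int semaphore_demo(void) {
    sem_t s;
    sem_init(&s, 0, 1);   /* semaphore s = 1 */
    sem_wait(&s);         /* P(s): s becomes 0; would block if s were 0 */
    sem_post(&s);         /* V(s): s back to 1 */
    int val = 0;
    sem_getvalue(&s, &val);
    sem_destroy(&s);
    return val;
}
```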

Notes on Semaphores
Invented by Dijkstra. As we will see in Chapter 4, the waiting in the P operation can be implemented by blocking the process or by busy-waiting.
Etymology:
  – P(s), often written Wait(s); think "Pause": "P" from "passeren" ("pass" in Dutch) or from "prolaag," combining "proberen" ("try") and "verlagen" ("decrease").
  – V(s), often written Signal(s); think of the "V for Victory" two-finger salute: "V" from "vrijgeven" ("release") or "verhogen" ("increase").

Mutual Exclusion with Semaphores
  semaphore mutex = 1;
  cobegin
    p1: while (1) { P(mutex); CS_1; V(mutex); program_1; }
    //
    p2: while (1) { P(mutex); CS_2; V(mutex); program_2; }
    //
    ...
    pn: while (1) { P(mutex); CS_n; V(mutex); program_n; }
  coend
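The same pattern with POSIX semaphores and threads: a semaphore initialized to 1 brackets the critical section, so no increments are lost. Thread and iteration counts are arbitrary.

```c
#include <semaphore.h>
#include <pthread.h>

/* A counting semaphore initialized to 1 acts as a mutex. */
static sem_t mutex;
static int counter;

static void *worker(void *arg) {
    for (int i = 0; i < 50000; i++) {
        sem_wait(&mutex);   /* P(mutex) */
        counter++;          /* critical section */
        sem_post(&mutex);   /* V(mutex) */
    }
    return NULL;
}

int mutex_demo(int nthreads) {
    pthread_t t[8];         /* nthreads assumed <= 8 */
    counter = 0;
    sem_init(&mutex, 0, 1);
    for (int i = 0; i < nthreads; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < nthreads; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&mutex);
    return counter;
}
```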

Cooperation
Cooperating processes must also synchronize. Example: p1 waits for a signal from p2 before p1 proceeds.
Classic generic scenario: Producer → Buffer → Consumer

Signal/Wait with Semaphores
  semaphore s = 0;
  cobegin
    p1: ... P(s); /* wait for signal */ ...
    //
    p2: ... V(s); /* send signal */ ...
  coend
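A runnable sketch of the signal/wait pattern: the semaphore starts at 0, so the waiter's P blocks until the signaler's V. Names and the value 42 are illustrative.

```c
#include <semaphore.h>
#include <pthread.h>

static sem_t s;
static int ready;        /* written by the signaler before V(s) */
static int observed;     /* what the waiter sees after P(s)     */

static void *waiter(void *arg) {
    sem_wait(&s);        /* P(s): wait for the signal */
    observed = ready;
    return NULL;
}

static void *signaler(void *arg) {
    ready = 42;
    sem_post(&s);        /* V(s): send the signal */
    return NULL;
}

int signal_demo(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 0);  /* semaphore s = 0 */
    pthread_create(&t1, NULL, waiter, NULL);
    pthread_create(&t2, NULL, signaler, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&s);
    return observed;
}
```

Because the write to `ready` happens before the post, and the waiter reads it only after its wait returns, the waiter is guaranteed to see the signaler's value.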

Bounded Buffer Problem
  semaphore e = n, f = 0, b = 1;
  cobegin
    Producer: while (1) {
      Produce_next_record;
      P(e); P(b); Add_to_buf; V(b); V(f);
    }
    //
    Consumer: while (1) {
      P(f); P(b); Take_from_buf; V(b); V(e);
      Process_record;
    }
  coend
(e counts empty slots, f counts full slots, b is a mutex for the buffer.)
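The slide's three-semaphore scheme, filled in as a runnable C sketch with a circular buffer. The buffer size, item count, and the "process a record" step (summing items) are made up for the test.

```c
#include <semaphore.h>
#include <pthread.h>

#define NSLOTS 4
#define NITEMS 1000

static int buf[NSLOTS], in, out;
static long consumed_sum;
static sem_t e, f, b;   /* empty slots, full slots, buffer mutex */

static void *producer(void *arg) {
    for (int i = 1; i <= NITEMS; i++) {          /* Produce_next_record */
        sem_wait(&e); sem_wait(&b);
        buf[in] = i; in = (in + 1) % NSLOTS;     /* Add_to_buf */
        sem_post(&b); sem_post(&f);
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < NITEMS; i++) {
        sem_wait(&f); sem_wait(&b);
        int item = buf[out]; out = (out + 1) % NSLOTS;  /* Take_from_buf */
        sem_post(&b); sem_post(&e);
        consumed_sum += item;                    /* Process_record */
    }
    return NULL;
}

long bounded_buffer_demo(void) {
    pthread_t tp, tc;
    in = out = 0; consumed_sum = 0;
    sem_init(&e, 0, NSLOTS); sem_init(&f, 0, 0); sem_init(&b, 0, 1);
    pthread_create(&tp, NULL, producer, NULL);
    pthread_create(&tc, NULL, consumer, NULL);
    pthread_join(tp, NULL); pthread_join(tc, NULL);
    sem_destroy(&e); sem_destroy(&f); sem_destroy(&b);
    return consumed_sum;
}
```

Note the P(e)/P(b) ordering: acquiring the mutex b before waiting on e could deadlock, since a full producer would hold b while the consumer needs it to empty a slot.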

Events
An event designates a change in the system state that is of interest to a process.
  – Usually triggers some action, and is usually considered to take no time.
  – Principally generated through interrupts and traps (end of an I/O operation, expiration of a timer, machine error, invalid address, …).
  – Can also be used for process interaction.
  – Can be synchronous or asynchronous.

Synchronous Events
A process explicitly waits for the occurrence of a specific event or set of events generated by another process.
Constructs:
  – Ways to define events
  – E.post (generate an event)
  – E.wait (wait until the event is posted)
Can be implemented with semaphores. Can be "memoryless" (a posted event disappears if no process is waiting).
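One way to sketch E.post/E.wait in C is a mutex plus a condition variable. This hypothetical `event_t` is the non-memoryless variant: a post is remembered, so a later wait returns immediately.

```c
#include <pthread.h>
#include <stdbool.h>

/* A minimal E.post / E.wait sketch (remembers a post). */
typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  c;
    bool posted;
} event_t;

void event_init(event_t *e) {
    pthread_mutex_init(&e->m, NULL);
    pthread_cond_init(&e->c, NULL);
    e->posted = false;
}

void event_post(event_t *e) {          /* E.post */
    pthread_mutex_lock(&e->m);
    e->posted = true;
    pthread_cond_broadcast(&e->c);     /* wake all waiters */
    pthread_mutex_unlock(&e->m);
}

void event_wait(event_t *e) {          /* E.wait */
    pthread_mutex_lock(&e->m);
    while (!e->posted)
        pthread_cond_wait(&e->c, &e->m);
    pthread_mutex_unlock(&e->m);
}
```

A memoryless event would skip the `posted` flag and rely on the broadcast alone, so a post with no waiter simply disappears.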

Asynchronous Events
  – Must also be defined and posted.
  – The process does not explicitly wait.
  – The process provides event handlers.
  – Handlers are invoked whenever the event is posted.

Event Synchronization in UNIX
Processes can signal conditions using asynchronous events: kill(pid, signal).
Possible signals: SIGHUP, SIGILL, SIGFPE, SIGKILL, …
A process calls sigaction() to specify what should happen when a signal arrives. It may:
  – catch the signal with a specified signal handler, or
  – ignore the signal.
Default action: the process is killed.
A process can also handle signals synchronously, by blocking itself until the next signal arrives (the pause() call).

Case Study: Event Synchronization in Windows 2000
WaitForSingleObject or WaitForMultipleObjects: the process blocks until the object is signaled.
  Object type    Signaled when:
  process        all threads complete
  thread         thread terminates
  semaphore      semaphore is incremented
  mutex          mutex is released
  event          event is posted
  timer          timer expires
  file           I/O operation terminates
  queue          item is placed on the queue

History
Originally developed by Steve Franklin.
Modified by Michael Dillencourt: Summer 2007, Spring 2009, Winter 2010, Summer 2012.