Steve’s Concurrency Slides


What is Synchronization?
The ability of two or more serial processes to interact during their execution to achieve a common goal.
Recognition that today's applications require multiple interacting processes:
  Client/server and multi-tiered architectures
  Inter-process communication via TCP/IP
Fundamental concern: addressing concurrency by controlling access to shared information
  Historically supported in database systems
  Currently available in many programming languages

Thread Synchronization
Suppose X and Y are concurrently executing in the same address space. What are the possibilities?
What does the behavior in the diagram represent? Synchronous execution:
  X does the first part of the task
  Y's next part depends on X
  X's third part depends on Y
The threads must coordinate the execution of their effort.
(Diagram: threads X and Y handing off work across steps 1, 2, 3.)

Thread Synchronization
Now, what does the behavior in the diagram represent? Asynchronous execution:
  X does the first part of the task
  Y does the second part concurrently with X doing the third part
What are the issues?
  Will the second part still finish after the third part?
  Will the second part now finish before the third part?
  What happens if variables are shared?
(Diagram: threads X and Y overlapping across steps 1, 2, 3.)

What are Potential Problems without Synchronization?
Data inconsistency
  The lost-update problem
  Impact on the correctness of the executing software
Deadlock
  Two processes each hold a unique resource and want the resource of the other process
  The processes wait forever
Non-determinacy of computations
  The behavior of a computation differs across executions
  Two processes produce different results when executed more than once on the same data

What are Classic Synchronization Techniques?
Goal: safe access to shared variables and resources. Two approaches:
Critical sections
  Define a segment of code as a critical section
  Once execution enters the code segment it cannot be interrupted by the scheduler
  The critical section has a release point at which interrupts may again occur
Semaphores
  Proposed in the 1960s by E. Dijkstra
  Uses ADTs to design and implement behavior that guarantees consistency and freedom from deadlock

Critical Sections
Two processes share the "balance" data item.
Remember, assignment is not atomic: it may correspond to multiple assembly instructions.
What is the potential problem?

    shared double balance;

    Code for p1:
        ...
        balance = balance + amount;

    Code for p2:
        ...
        balance = balance - amount;

(Diagram: p1 and p2 both access the shared balance.)

Critical Sections
Recall the code below. What happens if the time slice expires partway through one of these instruction sequences and an interrupt is generated?

    shared double balance;

    Code for p1:
        ...
        balance = balance + amount;

    Code for p2:
        ...
        balance = balance - amount;

The same statements at the instruction level:

    Code for p1:                Code for p2:
        load  R1, balance           load  R1, balance
        load  R2, amount            load  R2, amount
        add   R1, R2                sub   R1, R2
        store R1, balance           store R1, balance

Critical Sections (continued)
There is a race to execute the critical sections.
  The sections may be different code in different processes, so the race cannot be detected with static analysis.
  The results of multiple executions are not determinate.
  We need an OS mechanism to resolve the race.
If p1 wins, R1 and R2 are added and written to balance, which is okay.
If p2 wins, its changed balance differs from the copy held by p1, which then adds and writes the wrong value.

    Code for p1:                Code for p2:
        load  R1, balance           load  R1, balance
        load  R2, amount            load  R2, amount
        add   R1, R2                sub   R1, R2
        store R1, balance           store R1, balance
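
For comparison, here is a minimal user-space sketch of this lost-update race using POSIX threads rather than the slide's two processes; the iteration count and the choice of threads are illustrative, not part of the slide.

    /* race.c -- lost-update race on a shared balance.
     * Build: cc -pthread race.c -o race
     * With perfect interleaving the final balance would be 0.0;
     * lost updates usually leave some other value. */
    #include <pthread.h>
    #include <stdio.h>

    static double balance = 0.0;          /* shared, unprotected */
    static const double amount = 1.0;

    static void *p1(void *arg) {          /* balance = balance + amount */
        for (int i = 0; i < 1000000; i++)
            balance = balance + amount;   /* load, add, store -- not atomic */
        return NULL;
    }

    static void *p2(void *arg) {          /* balance = balance - amount */
        for (int i = 0; i < 1000000; i++)
            balance = balance - amount;   /* load, sub, store -- not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, p1, NULL);
        pthread_create(&t2, NULL, p2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("final balance = %f\n", balance);
        return 0;
    }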

One Solution: Disabling Interrupts
In the code below, interrupts are disabled around the update. What are the potential problems?
  Interrupts could be disabled arbitrarily long.
  We are trusting software engineers: unintentional (infinite loop) or malicious code can keep interrupts off.
  Concurrent programming languages enforce such primitives at compile time.
  We really only want to prevent p1 and p2 from interfering with one another.
Another solution: use a shared "lock" variable.

    shared double balance;

    Code for p1:
        disableInterrupts();
        balance = balance + amount;
        enableInterrupts();

    Code for p2:
        disableInterrupts();
        balance = balance - amount;
        enableInterrupts();

Using a Lock Variable
A shared lock flag is initialized to FALSE. The first process to become active:
  Sets lock to TRUE
  Enters the critical section
  Changes balance
  Sets lock to FALSE
  Leaves the critical section
Where are the two trouble spots?

    shared boolean lock = FALSE;
    shared double balance;

    Code for p1:
        /* Acquire the lock */
        while (lock) ;
        /* What if an interrupt occurs here? */
        lock = TRUE;
        /* Execute critical section */
        balance = balance + amount;
        /* Release lock */
        lock = FALSE;

    Code for p2: (analogous)

(The slide marks two points in the acquire/release sequence where an interrupt causes trouble; the gap between the while test and lock = TRUE is the critical one.)

Using a Lock Variable

    shared boolean lock = FALSE;
    shared double balance;

    Code for p1:
        /* Acquire the lock */
        while (lock) ;
        lock = TRUE;
        /* Execute critical section */
        balance = balance + amount;
        /* Release lock */
        lock = FALSE;

    Code for p2:
        /* Acquire the lock */
        while (lock) ;
        lock = TRUE;
        /* Execute critical section */
        balance = balance - amount;
        /* Release lock */
        lock = FALSE;

(Timeline diagram: p1 and p2 alternate via interrupts; after p1 sets lock = TRUE, p2 is blocked at its while loop until p1 sets lock = FALSE.)

Lock Manipulation
Consider the following enter/exit primitives, which will be used to ensure that no interrupt occurs while the lock itself is being manipulated. How does the code below ensure that interrupts are disabled at the right moments?

    enter(lock) {
        disableInterrupts();
        /* Spin while lock is TRUE */
        while (lock) {
            /* Let interrupts occur */
            enableInterrupts();
            disableInterrupts();
        }
        lock = TRUE;
        enableInterrupts();
    }

    exit(lock) {
        disableInterrupts();
        lock = FALSE;
        enableInterrupts();
    }

What occurs inside the while loop? Remember, we need a place where interrupts can occur during the attempt to acquire the lock!
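
User programs cannot disable interrupts, so these primitives really need kernel support. As a hedged user-space analogue, the same acquire/release shape can be sketched with a C11 atomic test-and-set; the name leave() (avoiding a collision with the C library's exit) and the use of sched_yield() in place of "let interrupts occur" are illustrative choices, not the slide's code.

    /* spinlock.c -- user-space analogue of enter()/exit() with C11 atomics.
     * Build: cc -pthread spinlock.c -o spinlock */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;
    static double balance = 200.0;

    static void enter(void) {
        /* Atomic test-and-set: spins while another thread holds the lock.
         * Yielding plays the role of briefly re-enabling interrupts. */
        while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
            sched_yield();
    }

    static void leave(void) {
        atomic_flag_clear_explicit(&lock, memory_order_release);
    }

    static void *deposit(void *arg) {
        enter();
        balance = balance + 50.0;   /* critical section */
        leave();
        return NULL;
    }

    static void *withdraw(void *arg) {
        enter();
        balance = balance - 75.0;   /* critical section */
        leave();
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, deposit, NULL);
        pthread_create(&t2, NULL, withdraw, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("final balance = %.2f\n", balance);   /* always 175.00 */
        return 0;
    }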

Deadlock
Why and how does deadlock occur?

    shared boolean lock1 = FALSE, lock2 = FALSE;
    shared list L;

    Code for p1:
        ...
        /* Enter CS to delete element */
        enter(lock1);
        <delete element>;
        <intermediate computation>;
        /* Enter CS to update length */
        enter(lock2);
        <update length>;
        /* Exit both CSs */
        exit(lock1);
        exit(lock2);
        ...

    Code for p2:
        ...
        /* Enter CS to update length */
        enter(lock2);
        <update length>;
        <intermediate computation>;
        /* Enter CS to add element */
        enter(lock1);
        <add element>;
        /* Exit both CSs */
        exit(lock2);
        exit(lock1);
        ...

p1 holds lock1 and waits for lock2 while p2 holds lock2 and waits for lock1, so neither can proceed.
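
A hedged sketch of the same lock-ordering problem with POSIX mutexes; the sleep() calls stand in for the slide's <intermediate computation> and merely widen the window so the hang is easy to reproduce.

    /* deadlock.c -- two threads acquire lock1 and lock2 in opposite orders.
     * Build: cc -pthread deadlock.c -o deadlock
     * The program usually hangs: each thread holds one mutex and waits
     * forever for the other. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

    static void *p1(void *arg) {
        pthread_mutex_lock(&lock1);      /* enter(lock1) */
        sleep(1);                        /* <intermediate computation> */
        pthread_mutex_lock(&lock2);      /* enter(lock2): blocks if p2 holds it */
        /* <update list and length> */
        pthread_mutex_unlock(&lock2);
        pthread_mutex_unlock(&lock1);
        return NULL;
    }

    static void *p2(void *arg) {
        pthread_mutex_lock(&lock2);      /* enter(lock2) */
        sleep(1);                        /* <intermediate computation> */
        pthread_mutex_lock(&lock1);      /* enter(lock1): blocks if p1 holds it */
        /* <update length and list> */
        pthread_mutex_unlock(&lock1);
        pthread_mutex_unlock(&lock2);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, p1, NULL);
        pthread_create(&t2, NULL, p2, NULL);
        pthread_join(t1, NULL);          /* never returns once both hold one lock */
        pthread_join(t2, NULL);
        puts("no deadlock this run");
        return 0;
    }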

Dijkstra Semaphore
Dijkstra's classic paper describes several software attempts to solve the problem (see Exercise 4, Section 8.6).
  Dijkstra first found a software solution,
  then proposed a simpler hardware-based solution.
What is a semaphore s?
  A nonnegative integer variable
  It can only be changed or tested by two indivisible operations:

    V(s): [s = s + 1]
    P(s): [while (s == 0) {wait}; s = s - 1]

  For V(s), no interrupt is possible.
  For P(s), an interrupt is possible only during the {wait}.

How Do P and V Work?
What is the initial value of a binary semaphore?
  The initial value is set as part of the shared variable declaration, semaphore mutex = 1;
  followed by the spawning of two or more processes.
What does it mean when mutex = 0?
  Some process X executed P(mutex) while mutex was 1.
What about other processes trying P(mutex)?
  The others wait until process X does V(mutex).
  If X is interrupted before its V, the others still wait.

    V(mutex): [mutex = mutex + 1]
    P(mutex): [while (mutex == 0) {wait}; mutex = mutex - 1]
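
P and V map naturally onto POSIX counting semaphores, with sem_wait playing the role of P and sem_post the role of V. A minimal sketch of a binary semaphore used this way (the file name and build flag are illustrative):

    /* pv.c -- P/V expressed with POSIX semaphores.
     * Build: cc -pthread pv.c -o pv */
    #include <semaphore.h>

    static sem_t mutex;

    int main(void) {
        sem_init(&mutex, 0, 1);   /* semaphore mutex = 1 (0 = shared among threads) */

        sem_wait(&mutex);         /* P(mutex): blocks while the value is 0, then decrements */
        /* critical section */
        sem_post(&mutex);         /* V(mutex): increments, waking a waiter if any */

        sem_destroy(&mutex);
        return 0;
    }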

Using Semaphores to Solve the Canonical Problem

    semaphore mutex = 1;   /* Shared variable */
    fork(proc_0, 0);
    fork(proc_1, 0);

    proc_0() {
        while (TRUE) {
            <compute section>;
            P(mutex);
            <critical section>;
            V(mutex);
        }
    }

    proc_1() {
        while (TRUE) {
            <compute section>;
            P(mutex);
            <critical section>;
            V(mutex);
        }
    }

What if the first P(mutex) is called by proc_0?
  mutex = 1, the while test is false, so mutex becomes 0.
What if an interrupt occurs in the critical section of proc_0?
  proc_1() waits until proc_0 resumes; when the critical section finishes, V(mutex) sets the value back to 1.

Shared Account Problem

    semaphore mutex = 1;   /* Shared variable */
    fork(proc_0, 0);
    fork(proc_1, 0);

    proc_0() {
        ...
        /* Enter the CS */
        P(mutex);
        balance += amount;
        V(mutex);
    }

    proc_1() {
        ...
        /* Enter the CS */
        P(mutex);
        balance -= amount;
        V(mutex);
    }

Assume the following values: balance = 200, proc_0's amount = 50, proc_1's amount = 75.
What are the possible results? Is there a lost update?

Shared Account Problem (continued)
Assume balance = 200, proc_0's amount = 50, proc_1's amount = 75.

If proc_0 executes P first:
  If proc_0 is interrupted, proc_1 waits at its P.
  Eventually proc_0 resumes: balance = 250, and proc_0 executes V.
  proc_1 then executes P, sets balance = 175, and executes V.

If proc_1 executes P first:
  If proc_1 is interrupted, proc_0 waits at its P.
  Eventually proc_1 resumes: balance = 125, and proc_1 executes V.
  proc_0 then executes P, sets balance = 175, and executes V.

Either way the final balance is 175; there is no lost update.
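
The account scenario can be sketched as a runnable program with POSIX threads and a semaphore; the starting balance of 200 and the amounts 50 and 75 come from the slide, while the use of threads instead of forked processes is an illustrative simplification.

    /* account.c -- shared account protected by a binary semaphore.
     * Build: cc -pthread account.c -o account */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;
    static double balance = 200.0;

    static void *proc_0(void *arg) {
        sem_wait(&mutex);            /* P(mutex) */
        balance += 50.0;             /* critical section */
        sem_post(&mutex);            /* V(mutex) */
        return NULL;
    }

    static void *proc_1(void *arg) {
        sem_wait(&mutex);            /* P(mutex) */
        balance -= 75.0;             /* critical section */
        sem_post(&mutex);            /* V(mutex) */
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        sem_init(&mutex, 0, 1);      /* semaphore mutex = 1 */
        pthread_create(&t0, NULL, proc_0, NULL);
        pthread_create(&t1, NULL, proc_1, NULL);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("final balance = %.2f\n", balance);   /* 175.00 in either order */
        sem_destroy(&mutex);
        return 0;
    }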

Two Shared Variables
What execution order guarantees that x and y are read only after they are written?

    semaphore s1 = 0;
    semaphore s2 = 0;
    fork(proc_A, 0);
    fork(proc_B, 0);

    proc_A() {
        while (TRUE) {
            <compute section A1>;
            update(x);
            /* Signal proc_B */
            V(s1);
            <compute section A2>;
            /* Wait for proc_B */
            P(s2);
            retrieve(y);
        }
    }

    proc_B() {
        while (TRUE) {
            /* Wait for proc_A */
            P(s1);
            retrieve(x);
            <compute section B1>;
            update(y);
            /* Signal proc_A */
            V(s2);
            <compute section B2>;
        }
    }

Execution Process: Two Shared Variables

    integer x, y;
    semaphore s1 = 0;
    semaphore s2 = 0;

    proc_A() {
        while (TRUE) {
            <compute section A1>;
            update(x);
            /* Signal proc_B */
            V(s1);
            <compute section A2>;
            /* Wait for proc_B */
            P(s2);
            retrieve(y);
        }
    }

    proc_B() {
        while (TRUE) {
            /* Wait for proc_A */
            P(s1);
            retrieve(x);
            <compute section B1>;
            update(y);
            /* Signal proc_A */
            V(s2);
            <compute section B2>;
        }
    }

P(s1) will not complete until V(s1) sets s1 to 1, so x is written before it is read.
P(s2) will not complete until V(s2) sets s2 to 1, so y is written before it is read.
It does not matter where an interrupt occurs: written data is read only after the semaphore is secured.
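
A hedged, runnable sketch of the same signalling pattern with POSIX threads; update() and retrieve() are reduced to plain assignments and prints, and the value 42 is arbitrary.

    /* ordering.c -- two semaphores guarantee x and y are read only after
     * they are written.  Build: cc -pthread ordering.c -o ordering */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t s1, s2;             /* both initialized to 0 */
    static int x, y;

    static void *proc_A(void *arg) {
        x = 42;                      /* update(x) */
        sem_post(&s1);               /* V(s1): signal proc_B that x is ready */
        /* <compute section A2> */
        sem_wait(&s2);               /* P(s2): wait until proc_B has written y */
        printf("A read y = %d\n", y);    /* retrieve(y) */
        return NULL;
    }

    static void *proc_B(void *arg) {
        sem_wait(&s1);               /* P(s1): wait until proc_A has written x */
        printf("B read x = %d\n", x);    /* retrieve(x) */
        y = x + 1;                   /* update(y) */
        sem_post(&s2);               /* V(s2): signal proc_A that y is ready */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        sem_init(&s1, 0, 0);
        sem_init(&s2, 0, 0);
        pthread_create(&a, NULL, proc_A, NULL);
        pthread_create(&b, NULL, proc_B, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }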

Bounded Buffer
Producer/consumer model: processes communicate through a fixed pool of N buffers.
  The producer obtains an empty buffer from the empty pool.
  The producer fills the buffer and places it in the full pool.
  The consumer obtains information by picking a buffer from the full pool.
  The consumer copies the information out of the buffer.
  The consumer places the buffer back in the empty pool.
N is a fixed number of buffers; use counting and binary semaphores.
(Diagram: producer and consumer exchanging buffers via the empty pool and the full pool.)

Bounded Buffer

    semaphore mutex = 1;   /* A binary semaphore */
    semaphore full = 0;    /* A general (counting) semaphore */
    semaphore empty = N;   /* A general (counting) semaphore */
    buf_type buffer[N];
    fork(producer, 0);
    fork(consumer, 0);

    producer() {
        buf_type *next, *here;
        while (TRUE) {
            produce_item(next);
            /* Claim an empty buffer */
            P(empty);
            P(mutex);
            here = obtain(empty);
            V(mutex);
            copy_buffer(next, here);
            release(here, fullPool);
            /* Signal a full buffer */
            V(full);
        }
    }

    consumer() {
        buf_type *next, *here;
        while (TRUE) {
            /* Claim a full buffer */
            P(full);
            P(mutex);
            here = obtain(full);
            V(mutex);
            copy_buffer(here, next);
            release(here, emptyPool);
            /* Signal an empty buffer */
            V(empty);
            consume_item(next);
        }
    }

Execution Process: Bounded Buffer
The same producer/consumer code as above, annotated:
  Producers want empty buffers; consumers want full buffers.
  A distinct empty buffer goes to only one producer: P(empty) claims it, and the P(mutex)/V(mutex) pair in the producer protects changes to the empty pool so only one process accesses it at a time.
  The P(mutex)/V(mutex) pair in the consumer likewise protects changes to the full pool so only one process accesses it at a time.
  V(full) announces that a full buffer is now available.
  V(empty) announces that an empty buffer is now available.
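
Finally, a compact runnable sketch of the bounded buffer with POSIX semaphores and threads. It simplifies the slide's obtain/release pool operations to a circular buffer with in/out indices; N, the item type, and the item count are illustrative.

    /* bounded_buffer.c -- producer/consumer over N slots.
     * Build: cc -pthread bounded_buffer.c -o bb */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 4                       /* fixed number of buffers (illustrative) */

    static int buffer[N];
    static int in = 0, out = 0;       /* indices into the buffer pool */
    static sem_t mutex;               /* binary semaphore: protects in/out */
    static sem_t full;                /* counting semaphore: # of full buffers  */
    static sem_t empty;               /* counting semaphore: # of empty buffers */

    static void *producer(void *arg) {
        for (int item = 0; item < 10; item++) {
            sem_wait(&empty);         /* P(empty): claim an empty buffer */
            sem_wait(&mutex);         /* P(mutex): protect the pool bookkeeping */
            buffer[in] = item;
            in = (in + 1) % N;
            sem_post(&mutex);         /* V(mutex) */
            sem_post(&full);          /* V(full): a full buffer is now available */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        for (int i = 0; i < 10; i++) {
            sem_wait(&full);          /* P(full): claim a full buffer */
            sem_wait(&mutex);         /* P(mutex): protect the pool bookkeeping */
            int item = buffer[out];
            out = (out + 1) % N;
            sem_post(&mutex);         /* V(mutex) */
            sem_post(&empty);         /* V(empty): an empty buffer is now available */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&mutex, 0, 1);
        sem_init(&full, 0, 0);
        sem_init(&empty, 0, N);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }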