CIS 720 Message Passing.

Message passing systems
- Send statements
- Receive statements

Process naming
- P ! m(para_list): send message m with parameters para_list to process P.  Also written send(m(para_list), P).
- P ? m(para_list): receive message m from process P and assign the received parameters to the variables in para_list.  Also written receive(m(para_list), P).

Channel naming
- send(m(para_list), ch)  or  ch ! m(para_list)
- receive(m(para_list), ch)  or  ch ? m(para_list)

Amount of synchronization and buffering

[Figure: the transfer of a message from sender A to the receiver, with the intermediate events numbered 1-6; the send variants on the next slide are characterized by which of these events they involve.]

- Minimal synchronization: 1
- Non-blocking send: 1 + 6
- Blocking send: 1 + 2 + 6
- Reliable blocking send: 1 + 2 + 5 + 6
- Synchronous send: 1 + 2 + 3 + 4 + 5 + 6

Synchronous communication
- Send statement A ! m(expr): blocks until the matching receive statement is also enabled.
- Receive statement B ? m(var): blocks until the matching send statement is also enabled.
- When a matching send and receive are both enabled, the communication takes place and var is assigned the value of expr.
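
As a concrete, runnable illustration (not part of the original slides; all names below are invented), an unbuffered Go channel gives exactly this rendezvous behaviour: the send blocks until a receiver is ready, and the receive blocks until a sender is ready.

package main

import "fmt"

func main() {
    c := make(chan int) // unbuffered: send and receive must rendezvous

    go func() { // plays the role of the sender "A ! m(expr)"
        y := 2
        c <- y // blocks here until main reaches "<-c"
    }()

    w := <-c                   // plays the role of the receiver "B ? m(var)"
    fmt.Println("received", w) // prints: received 2
}

A buffered channel, make(chan int, k), instead approximates the non-blocking send of the previous slide: the sender blocks only when k messages are already outstanding.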

A pair of send and receive statements is matching if:
- the send statement appears in the process named in the receive statement;
- the receive statement appears in the process named in the send statement;
- var := expr is a valid assignment statement;
- the types of the messages match.

P1:            P2:
  y := 2         z := 1
  P2 ! y         P1 ? w
  P2 ? y         P1 ! w + z

P1:            P2:              P3:
  y := 2         z := 1           x := 1
  P2 ! y         P1 ? w           P2 ? x
  P3 ? y         P3 ! w + z       P1 ! x
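
A Go rendering of this three-process example (a sketch; the channel names are mine): the value travels P1 → P2 → P3 → P1 and P1 finishes with y = 3.

package main

import "fmt"

func main() {
    c12 := make(chan int) // P1 -> P2
    c23 := make(chan int) // P2 -> P3
    c31 := make(chan int) // P3 -> P1

    go func() { // P2
        z := 1
        w := <-c12   // P1 ? w
        c23 <- w + z // P3 ! w + z
    }()

    go func() { // P3
        x := <-c23 // P2 ? x
        c31 <- x   // P1 ! x
    }()

    // P1
    y := 2
    c12 <- y                           // P2 ! y
    y = <-c31                          // P3 ? y
    fmt.Println("P1 ends with y =", y) // 3
}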

P1:            P2:
  y := 2         z := 1
  P2 ! y         P1 ! w
  P2 ? y         P1 ? w + z

Deadlock: both processes attempt to send first, so neither send ever finds an enabled matching receive and both block forever.
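
The same trap, sketched in Go with invented names: with unbuffered channels, if both sides try to send before either receives, every goroutine blocks and the run aborts with the runtime's "fatal error: all goroutines are asleep - deadlock!".

package main

import "fmt"

func main() {
    c12 := make(chan int) // P1 -> P2
    c21 := make(chan int) // P2 -> P1

    go func() { // P2: sends first, then would receive
        c21 <- 1
        fmt.Println("P2 got", <-c12)
    }()

    // P1: also sends first -- neither send can ever complete
    c12 <- 2
    fmt.Println("P1 got", <-c21)
}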

Array copying program

Copy A[0..N-1] in P1 into B[0..N-1] in P2.  i = 0 (read by both processes, incremented only by P1).

P1:  do i < n → P2 ! A[i]; i = i + 1 od

P2:  do i < n → P1 ? B[i] od

Array copying program (continued)

This could deadlock: when i = n-1, P2 may re-evaluate its guard "i < n" before P1 increments i, so P2 commits to another P1 ? B[i]; P1 then sets i to n and terminates, and P2 waits forever for a send that never comes.

Array copying program (fixed)

A[0..N-1], B[0..N-1]
i = 0; j = 0

P1:  do i < n → P2 ! A[i]; i = i + 1 od

P2:  do j < n → P1 ? B[j]; j = j + 1 od

Each process now tests and advances only its own counter, so the race on i disappears.
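
A runnable Go sketch of the corrected program (array contents and names are invented): each side advances only its own index, and the unbuffered channel keeps the two loops in lock step.

package main

import "fmt"

func main() {
    a := []int{3, 1, 4, 1, 5} // P1's array A[0..n-1]
    b := make([]int, len(a))  // P2's array B[0..n-1]
    c := make(chan int)       // unbuffered channel P1 -> P2

    go func() { // P1: do i < n -> P2 ! A[i]; i = i + 1 od
        for i := 0; i < len(a); i++ {
            c <- a[i]
        }
    }()

    // P2: do j < n -> P1 ? B[j]; j = j + 1 od
    for j := 0; j < len(b); j++ {
        b[j] = <-c
    }
    fmt.Println(b) // [3 1 4 1 5]
}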

Satisfaction condition

For every pair of matching send and receive statements, annotated as

P1:             P2:
  {P}             {Q}
  P2 ! expr       P1 ? var
  {U}             {V}

one must show the satisfaction condition  {P /\ Q} var := expr {U /\ V}.

P1:                    P2:
  {true}                 {true}
  y := 2                 z := 1
  {y = 2}                {z = 1}
  P2 ! y                 P1 ? w
  {y = 2}                {z = 1 /\ w = 2}
  P2 ? y                 P1 ! w + z
  {y = 3}                {z = 1 /\ w = 2}

Satisfaction conditions for the two matching pairs:
  {y = 2 /\ z = 1}  w := y      {w = 2 /\ z = 1}
  {w = 2 /\ z = 1}  y := w + z  {y = 3}

Guarded Communication

P ? x /\ bool → action

- The guard evaluates to true if bool is true and executing P ? x would not cause delay.
- The guard evaluates to false if bool is false.
- The guard blocks if bool is true but executing P ? x would cause delay.
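
One way to get this effect in runnable form (this mapping is mine, not the slides') is Go's select: channel readiness plays the "? x" part of the guard, and the boolean part can be encoded by substituting a nil channel, on which communication blocks forever, whenever the boolean is false.

package main

import "fmt"

// guarded returns ch when ok is true and nil otherwise. A select case
// on a nil channel can never proceed, which mimics a false boolean guard.
func guarded(ch chan int, ok bool) chan int {
    if !ok {
        return nil
    }
    return ch
}

func main() {
    req := make(chan int)
    rel := make(chan int)
    busy := false

    go func() { req <- 42 }()

    // Roughly: req ? x /\ !busy -> grant  []  rel ? x /\ busy -> release
    select {
    case x := <-guarded(req, !busy): // enabled: !busy holds and a sender is ready
        fmt.Println("granted", x)
    case x := <-guarded(rel, busy): // disabled while busy is false
        fmt.Println("released", x)
    }
}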

Mutual exclusion

C:  do
      P1 ? req → P1 ? rel
    []
      P2 ? req → P2 ? rel
    od

P1:  do  C ! req;  cs;  C ! rel  od

P2:  do  C ! req;  cs;  C ! rel  od
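
A Go sketch of the coordinator (the slides give C one channel pair per process; a single shared pair is used here to keep the sketch short, and all names are invented): C admits one process at a time and refuses further requests until it has seen the matching release.

package main

import (
    "fmt"
    "sync"
)

func main() {
    req := make(chan struct{}) // "C ! req"
    rel := make(chan struct{}) // "C ! rel"

    go func() { // coordinator C
        for {
            <-req // P_i ? req: admit one process ...
            <-rel // P_i ? rel: ... and wait for it to leave its critical section
        }
    }()

    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go func(id int) { // P_id
            defer wg.Done()
            req <- struct{}{} // C ! req
            fmt.Println("process", id, "in critical section")
            rel <- struct{}{} // C ! rel
        }(i)
    }
    wg.Wait()
}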

Example

P1:  v1 = 1; v2 = ?          P2:  v3 = ?; v4 = 2

Each process must send its own value to the other and receive the other's value (the guarded if statements are on the next slide).

Example (continued)

P1:                        P2:
  v1 = 1; v2 = ?             v3 = ?; v4 = 2
  if                         if
    P2 ! v1 → P2 ? v2          P1 ! v4 → P1 ? v3
  []                         []
    P2 ? v2 → P2 ! v1          P1 ? v3 → P1 ! v4
  fi                         fi
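
Rendered in Go (a sketch with invented names), the same idea uses select so that each side is willing either to send first or to receive first; whichever pair of matching operations the runtime picks goes ahead, and the exchange cannot deadlock.

package main

import "fmt"

func main() {
    c12 := make(chan int) // P1 -> P2
    c21 := make(chan int) // P2 -> P1

    done := make(chan struct{})
    go func() { // P2: holds v4 = 2, wants v3
        v4, v3 := 2, 0
        select {
        case c21 <- v4: // offer to send first ...
            v3 = <-c12
        case v3 = <-c12: // ... or to receive first
            c21 <- v4
        }
        fmt.Println("P2 got", v3)
        close(done)
    }()

    // P1: holds v1 = 1, wants v2
    v1, v2 := 1, 0
    select {
    case c12 <- v1:
        v2 = <-c21
    case v2 = <-c21:
        c12 <- v1
    }
    fmt.Println("P1 got", v2)
    <-done
}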

Computing the minimum value

D:  num = 0; m = Max;
    do
      A ? v → m = min(m, v); num++
    []
      B ? v → m = min(m, v); num++
    []
      C ? v → m = min(m, v); num++
    []
      num = 3 → A ! m; B ! m; C ! m
    od

A:  D ! a; D ? a
B:  D ! b; D ? b
C:  D ! c; D ? c
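
A Go sketch of the coordinator D (names and the initial values are invented): D folds the three incoming values with min and then sends the result back to every process.

package main

import (
    "fmt"
    "math"
)

func main() {
    toD := make(chan int)    // A, B, C send their values to D
    fromD := make(chan int)  // D broadcasts the minimum back
    values := []int{7, 3, 9} // the values held by A, B, C

    go func() { // D
        m := math.MaxInt // "m = Max"
        for num := 0; num < 3; num++ {
            if v := <-toD; v < m { // X ? v -> m = min(m, v); num++
                m = v
            }
        }
        for i := 0; i < 3; i++ { // num = 3 -> A ! m; B ! m; C ! m
            fromD <- m
        }
    }()

    done := make(chan struct{})
    for _, v := range values { // A, B, C: "D ! value; D ? m"
        go func(v int) {
            toD <- v
            fmt.Println("held", v, "- minimum is", <-fromD)
            done <- struct{}{}
        }(v)
    }
    for i := 0; i < 3; i++ {
        <-done
    }
}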

Computing the minimum value (no coordinator; C is symmetric to A and B)

A:  a = initial value; num1 = 0
    do
      B ? v → a = min(a, v); num1++
    []
      C ? v → a = min(a, v); num1++
    []
      B ! a → skip; num1++
    []
      C ! a → skip; num1++
    []
      num1 = 4 → exit
    od

B:  b = initial value; num2 = 0
    do
      A ? v → b = min(b, v); num2++
    []
      C ? v → b = min(b, v); num2++
    []
      A ! b → skip; num2++
    []
      C ! b → skip; num2++
    []
      num2 = 4 → exit
    od

Centralized Semaphore

C:  do
      A ? p → A ? v
    []
      B ? p → B ? v
    od

A:  do  C ! p;  cs;  C ! v  od

B:  do  C ! p;  cs;  C ! v  od

Centralized Semaphore (with an explicit counter and guarded alternatives)

C:  sem = 1
    do
      sem > 0; A ? p → sem--
    []
      sem > 0; B ? p → sem--
    []
      sem = 0; A ? v → sem++
    []
      sem = 0; B ? v → sem++
    od

A:  do  C ! p;  cs;  C ! v  od

B:  do  C ! p;  cs;  C ! v  od
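
A Go sketch of the guarded coordinator (names are mine): the server owns sem, and each select alternative is enabled only while its boolean guard holds, using the same nil-channel trick as in the guarded-communication sketch above.

package main

import (
    "fmt"
    "sync"
)

// when returns ch if guard is true, nil otherwise; a nil channel
// disables its select case, mimicking "guard; ch ? msg -> ...".
func when(guard bool, ch chan struct{}) chan struct{} {
    if !guard {
        return nil
    }
    return ch
}

func main() {
    p := make(chan struct{}) // "C ! p" (acquire)
    v := make(chan struct{}) // "C ! v" (release)

    go func() { // semaphore server C with sem = 1
        sem := 1
        for {
            select {
            case <-when(sem > 0, p): // sem > 0; ? p -> sem--
                sem--
            case <-when(sem == 0, v): // sem = 0; ? v -> sem++
                sem++
            }
        }
    }()

    var wg sync.WaitGroup
    for i := 1; i <= 2; i++ { // processes A and B
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            p <- struct{}{} // C ! p
            fmt.Println("process", id, "in critical section")
            v <- struct{}{} // C ! v
        }(i)
    }
    wg.Wait()
}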

Dining philosophers problem

do hungry →
     acquire left and right forks;
     eat;
     release forks;
     sleep
od