Programming Language Principles
Lecture 29: Concurrent Programming
Prepared by Manuel E. Bermúdez, Ph.D., Associate Professor, University of Florida

A Little History
Since the beginning, differences in device speed have caused lots of processor idle time. The first concurrent systems introduced interrupt-driven I/O:
– The system keeps track of which processes are waiting for I/O to complete.
– The OS hands off the I/O task and assigns the processor to a runnable process.
– When the I/O task is complete, the device sends an interrupt signal to the OS, which then schedules the waiting process for processor time.

Main problem: Synchronization
If an interrupt occurs while the OS is updating a data structure, another process may see the data in an inconsistent state: a race condition.

A Concurrency Application (WWW)
A modern-day web browser parses HTML and creates many threads. The user can access new items, edit bookmarks, etc., while the browser renders portions of a page. The browser "forks" off processes to format and render frames.

Sample Web Browser Code
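The transcript does not preserve the code from this slide. As a stand-in, here is a hypothetical Java sketch of the idea described above: the browser forks one thread per frame so that rendering proceeds while the user keeps interacting (class and method names are illustrative, not from the lecture).

  import java.util.ArrayList;
  import java.util.List;

  public class BrowserSketch {
    public static void main(String[] args) throws InterruptedException {
      List<String> frames = List.of("header", "news", "sidebar");
      List<Thread> renderers = new ArrayList<>();
      for (String frame : frames) {
        Thread t = new Thread(() -> render(frame));   // fork one renderer per frame
        t.start();
        renderers.add(t);
      }
      handleUserInput();                              // the UI stays responsive meanwhile
      for (Thread t : renderers) t.join();            // wait for all frames to finish
    }

    static void render(String frame) { System.out.println("rendering " + frame); }
    static void handleUserInput() { System.out.println("user edits bookmarks ..."); }
  }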

Communication and Synchronization
Main issues in concurrency:
– Synchronization: control order of events.
– Communication: signaling information from one process to another.
Two main communication models:
– Message-passing model: threads communicate by explicitly sending messages. Action taken upon receipt.
– Shared-memory model: threads communicate by reading/storing data in a common location.

Communication and Synchronization
In either communication model, synchronization is performed by either:
– Busy-wait: run a loop, checking repeatedly until a certain condition holds.
– Blocking: relinquish the processor and leave a note. Later, some other thread that changes the condition will request a "wakeup".

Co-begin Construct in Algol 68
  begin a := 3, b := 4 end          (comma: either order)
  par begin a := 3, b := 4 end      ('par': can run in parallel)

Parallel Loops in SR
  co (i := 5 to 10)
    p(a, b, i)
  oc
Six instances of p can run in parallel. In SR, the safety of concurrent execution is the programmer's responsibility. Access to global variables from within p needs to be synchronized.

Launch-At-Elaboration in Ada
  procedure P is
    task T is
      ...
    end T;
  begin -- P
    ...
  end P;
If P is recursive, many instances of T are created, and run concurrently. When control reaches the end of P, it waits for the appropriate instance of T to complete, before returning.

Fork/Join in Modula-3
  t := Fork(c);
  ...
  Join(t);
Fork creates a new thread, and starts executing procedure c. Fork returns a reference to a "thread closure" (just an address, actually). Join(t) waits for c to complete.
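For comparison, a minimal Java analogue of the same pattern (a sketch; Thread.start and Thread.join play the roles of Fork and Join):

  public class ForkJoinAnalog {
    public static void main(String[] args) throws InterruptedException {
      Runnable c = () -> System.out.println("running c in its own thread");
      Thread t = new Thread(c);   // roughly "t := Fork(c)"
      t.start();
      // ... the caller continues concurrently with c ...
      t.join();                   // roughly "Join(t)": wait for c to complete
    }
  }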

Structured vs. Unstructured Threads
(a) Co-begin, parallel loops, and launch-at-elaboration are always properly nested.
(b) Fork/join might not be. It is a more general construct, allowing arbitrary patterns of control flow.

Threads, Calls and Replies
(a) Ordinary subroutine call.
(b) A subroutine call is equivalent to a busy-wait until the subroutine returns.
(c) Early reply: the forked thread continues after "returning" to the caller; the reply signals that the parent thread can proceed.
(d) The fork can be postponed until the reply is signaled.

Need for Early Reply
Typically, the forked thread will be autonomous, but only after some initialization.
Example: web page rendering. Fork a thread to render an image. Before proceeding, we need size information on the image. So we fork a new thread to begin rendering, wait until it "early-replies" with the image size, and then proceed.
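Java has no built-in early reply, but the effect can be approximated: the renderer thread completes a future with the image size as soon as the size is known (the "early reply"), then keeps rendering, while the parent blocks only until that reply arrives. A minimal sketch, with illustrative names and made-up dimensions:

  import java.util.concurrent.CompletableFuture;

  public class EarlyReplyDemo {
    public static void main(String[] args) throws Exception {
      CompletableFuture<int[]> sizeReply = new CompletableFuture<>();

      Thread renderer = new Thread(() -> {
        int width = 640, height = 480;                    // pretend the image header was parsed
        sizeReply.complete(new int[] { width, height });  // the "early reply"
        // ... continue rendering the image pixels (the autonomous part) ...
      });
      renderer.start();

      int[] size = sizeReply.get();   // the parent waits only for the early reply
      System.out.println("Can lay out the page: image is " + size[0] + "x" + size[1]);
      // page layout proceeds while the renderer thread is still working
    }
  }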

Thread Implementation
OS kernel manages processes and processors.
Thread scheduler manages threads and processes.

Uniprocessor Scheduling
One thread is current. Runnable threads are on ready_list. Other threads are waiting for various conditions. Scheduler moves threads to/from these lists.

Uniprocessor Scheduling (cont'd)
– reschedule: get a runnable thread; transfer to it.
– yield: place current_thread on ready_list; reschedule.
– sleep_on: place current_thread on specified queue Q; reschedule.
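These operations can be simulated in Java. The sketch below is not the lecture's implementation; Tcb, readyList and the per-thread Semaphore are illustrative devices that model a uniprocessor by letting exactly one simulated thread hold a "run" permit at a time, and it assumes the ready list is non-empty when reschedule is called.

  import java.util.ArrayDeque;
  import java.util.Deque;
  import java.util.concurrent.Semaphore;

  class MiniScheduler {
    static final class Tcb {                      // thread control block
      final Semaphore run = new Semaphore(0);     // a permit means "you may run"
    }

    private final Deque<Tcb> readyList = new ArrayDeque<>();
    private Tcb current;

    // reschedule: get a runnable thread and transfer control to it.
    void reschedule() throws InterruptedException {
      Tcb me = current;
      current = readyList.removeFirst();
      current.run.release();    // let the chosen thread proceed
      me.run.acquire();         // block until control is transferred back to us
    }

    // yield: put the current thread back on the ready list, then reschedule.
    void yieldCurrent() throws InterruptedException {
      readyList.addLast(current);
      reschedule();
    }

    // sleep_on: put the current thread on condition queue q, then reschedule.
    // Some other thread later moves it from q back to the ready list to wake it.
    void sleepOn(Deque<Tcb> q) throws InterruptedException {
      q.addLast(current);
      reschedule();
    }
  }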

Uniprocessor Scheduling (cont'd)
Preemption: when switching threads, schedulers typically ask the hardware clock to deliver a "wakeup" call sometime in the future. This ensures fairness: the current thread can't hog the processor for too long.
Wakeup signals come at arbitrary times, causing race conditions. It is customary for the scheduler to disable signals until the transition (reschedule, sleep_on) is complete.

Shared Memory Model
Two common forms of synchronization:
– Mutual exclusion: only one thread can execute a critical region of the code at a time.
– Condition synchronization: thread(s) wait until a certain condition holds.
Don't want to over-synchronize: the goal is to prevent bad race conditions.

Busy-wait Synchronization
Typically, the condition is of the form "location X contains value Y". A process in the busy-wait state enters a loop, cycling until the condition holds.
Busy-wait mutual exclusion is harder:
– It usually requires an atomic (non-interruptible) machine instruction, such as test_and_set.
The technique consumes CPU cycles while the process waits.
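A sketch of busy-wait mutual exclusion in Java: AtomicBoolean.getAndSet plays the role of the atomic test_and_set instruction (the SpinLock name is illustrative; the scheduler-based techniques below are usually preferable).

  import java.util.concurrent.atomic.AtomicBoolean;

  class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
      while (locked.getAndSet(true)) {   // test_and_set: atomically read old value, set to true
        Thread.onSpinWait();             // spin, consuming CPU cycles, until the lock is free
      }
    }

    void unlock() {
      locked.set(false);
    }
  }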

Scheduler-Based Synchronization
Three techniques:
– Semaphores
– Monitors
– Conditional critical regions
Advantage: the process doesn't cycle.
– Instead, it is removed from the ready list.
– It is not returned to the ready list until its condition is true.

Semaphores
Oldest technique, invented by Dijkstra. A counter with two operations:
  P(Semaphore s) {
    await s > 0 then s = s - 1;   /* decrement must be atomic */
  }
  V(Semaphore s) {
    s = s + 1;                    /* must be atomic */
  }

Semaphores (cont'd)
The value of the semaphore is the number of units of the resource that are available. P waits until a resource is available (s > 0) and immediately claims it. V simply makes a resource available.

Semaphores (cont'd)
To avoid busy-waiting, a semaphore may have an associated queue of processes.
– If a process performs a P operation on a semaphore which has the value zero, the process is added to the semaphore's queue.
– When another process increments the semaphore by performing a V operation, if there are processes on the queue, one of them is removed from the queue and resumes execution.

Sample Use of Semaphores
A thread A needs information from two databases before proceeding. Access to these two DBs is controlled by two separate threads, B and C. A does the following:
– Initialises a semaphore S with init(S, -1).
– Posts a DBDataRequest to both threads (B, C), including a reference to the semaphore.
– Immediately does P(S) and blocks.

Sample Use of Semaphores (cont'd)
Threads B and C take their time to obtain the information. As B and C conclude, they each do a V(S). Only after both B and C have done their V(S) (and S becomes positive) will A be able to continue.
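This scenario maps directly onto java.util.concurrent.Semaphore, which allows a negative initial permit count; the sketch below uses illustrative names, with acquire and release standing in for P(S) and V(S).

  import java.util.concurrent.Semaphore;

  public class TwoDatabases {
    public static void main(String[] args) throws InterruptedException {
      // Initialised to -1, so two release() calls (B's and C's V operations)
      // are needed before acquire() (A's P operation) can succeed.
      Semaphore s = new Semaphore(-1);

      Runnable dbWorker = () -> {
        // ... obtain the requested information (omitted) ...
        s.release();                       // V(S)
      };

      new Thread(dbWorker, "B").start();
      new Thread(dbWorker, "C").start();

      s.acquire();                         // P(S): blocks until both B and C have replied
      System.out.println("A: both answers have arrived, continuing");
    }
  }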

Monitors
A module with operations, an internal state, and condition variables.
– Only one operation in a given monitor can be active at one time.
– A thread that calls a busy monitor is delayed until the monitor becomes free.
– An operation can wait on a condition variable.
– An operation can signal a condition variable, causing one of the waiting threads to resume.

Monitor for a Bounded Buffer
Two operations: insert, remove.
Two conditions: empty_slot, full_slot.
– insert waits until there's an empty slot, fills it, and signals full_slot.
– remove waits until there's a full slot, removes it, and signals empty_slot.
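A minimal monitor-style bounded buffer in Java (a sketch, not the lecture's code): synchronized methods give the "one operation active at a time" property, and wait/notifyAll stand in for the empty_slot and full_slot condition variables, since Java objects have a single implicit condition queue.

  class BoundedBuffer<T> {
    private final Object[] slots;
    private int head = 0, tail = 0, fullSlots = 0;

    BoundedBuffer(int size) { slots = new Object[size]; }

    synchronized void insert(T item) throws InterruptedException {
      while (fullSlots == slots.length) wait();   // wait for an empty slot
      slots[tail] = item;
      tail = (tail + 1) % slots.length;
      fullSlots++;
      notifyAll();                                // "signal full_slot"
    }

    @SuppressWarnings("unchecked")
    synchronized T remove() throws InterruptedException {
      while (fullSlots == 0) wait();              // wait for a full slot
      T item = (T) slots[head];
      slots[head] = null;
      head = (head + 1) % slots.length;
      fullSlots--;
      notifyAll();                                // "signal empty_slot"
      return item;
    }
  }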

Conditional Critical Regions
A syntactically delimited section of code in which there is access to a protected variable. A Boolean condition is specified; control cannot enter the region unless it holds. Only code in the region statement can access the protected variable.
Any thread that reaches the region must wait until:
– the condition is true, and
– no other thread is in a region for the same protected variable.

Sample Conditional Critical Region
Protected variable: buffer
– insert allowed only if full_slots < SIZE
– remove allowed only if full_slots > 0
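Java has no region construct, but the same effect can be approximated with an explicit lock plus a condition: the lock marks the boundaries of the region for buffer, and the Boolean condition is re-checked before the body runs. A rough sketch with illustrative names (remove would be symmetric, guarded by fullSlots > 0):

  import java.util.concurrent.locks.Condition;
  import java.util.concurrent.locks.ReentrantLock;

  class RegionBuffer {
    static final int SIZE = 8;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition changed = lock.newCondition();
    private final int[] buffer = new int[SIZE];
    private int fullSlots = 0, tail = 0;

    void insert(int x) throws InterruptedException {
      lock.lock();                        // enter a region for 'buffer'
      try {
        while (!(fullSlots < SIZE))       // the region's Boolean condition
          changed.await();
        buffer[tail] = x;
        tail = (tail + 1) % SIZE;
        fullSlots++;
        changed.signalAll();              // let waiting regions re-check their conditions
      } finally {
        lock.unlock();                    // leave the region
      }
    }
  }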

Bounded Buffer in Ada
Explicit manager task: buffer.
Task contains buffer size, number of full slots, contents, and two operations: insert, remove.

Bounded Buffer in Ada (cont'd)
Ada uses message passing: the select statement accepts (if conditions hold) messages from threads requesting operations:
  buffer.insert(3);
  buffer.remove(x);
