1 Programming Language Principles Lecture 29 Prepared by Manuel E. Bermúdez, Ph.D. Associate Professor University of Florida Concurrent Programming

2 A Little History Since the beginning, differences in device speed caused lots of processor idle time. The first concurrent systems introduced interrupt-driven I/O: –The system keeps track of which processes are waiting for I/O to complete. –The OS hands off the I/O task and assigns the processor to a runnable process. –When the I/O task is complete, the process sends an interrupt signal to the OS, which then schedules the process for processor time.

3 Main problem: Synchronization If interrupt occurs while OS is updating a data structure, another process may see the data in an inconsistent state: a race condition.
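The race on shared data can be sketched in Python (a toy counter, not the OS structure itself — the names safe_increment and counter are invented for illustration): two threads perform a read-modify-write on the same datum, and a lock is what keeps one thread from observing or clobbering a half-finished update.

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    """Increment the shared counter n times, holding the lock so no
    other thread sees (or overwrites) a half-finished update."""
    global counter
    for _ in range(n):
        with lock:           # without this, two threads could both read
            counter += 1     # the old value and lose one increment

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock held around each update, no increment is lost.
```

Dropping the `with lock:` line reintroduces exactly the race condition described above: the final count becomes timing-dependent.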

4 A Concurrency Application (WWW) A modern-day web browser parses HTML commands, and creates many threads. User can access new items, edit bookmarks, etc. while browser renders portions of a page. Browser “forks” off processes to format and render frames.

5 Sample Web Browser Code
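The code on this slide does not survive in the transcript. As a hedged stand-in, the following Python sketch (the render_frame helper and the frame count are invented) shows the shape of the idea: fork one worker thread per frame, while the main thread stays free for user interaction.

```python
import threading

rendered = []                      # frames finished so far
rendered_lock = threading.Lock()

def render_frame(frame_id):
    """Hypothetical stand-in for formatting and rendering one frame."""
    with rendered_lock:
        rendered.append(frame_id)

# The "browser" forks one worker per frame; meanwhile the main thread
# could keep handling bookmarks, clicks, etc. (elided here).
workers = [threading.Thread(target=render_frame, args=(i,))
           for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()                       # all frames rendered
```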

6 Communication and Synchronization Main issues in concurrency. Synchronization: control order of events. Communication: signaling information from one process to another. Two main communication models: –Message-passing model: threads communicate by explicitly sending messages. Action taken upon receipt. –Shared-memory model: threads communicate by reading/storing data in a common location.

7 Communication and Synchronization In either communication model, –synchronization performed by either: Busy-wait: run loop, keep checking until a certain condition holds. Blocking: relinquish processor, leave a note. Later, some other thread that changes the condition will request a “wakeup”.
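The two synchronization styles can be sketched in Python, assuming a single producer that makes the condition come true: the busy-wait consumer spins re-checking a flag, while the blocking consumer parks on a threading.Event and is woken by set() (the "leave a note / wakeup" protocol).

```python
import threading, time

flag = False                       # shared condition: "data is ready"
event = threading.Event()          # blocking version of the same condition

def busy_wait_consumer(out):
    while not flag:                # run a loop, keep checking...
        time.sleep(0)              # ...yield the processor, then re-check
    out.append("busy")

def blocking_consumer(out):
    event.wait()                   # relinquish the processor; the runtime
    out.append("blocked")          # delivers the wakeup after set()

results = []
t1 = threading.Thread(target=busy_wait_consumer, args=(results,))
t2 = threading.Thread(target=blocking_consumer, args=(results,))
t1.start(); t2.start()
flag = True                        # the "wakeup": change the condition...
event.set()                        # ...and signal the blocked thread
t1.join(); t2.join()
```

The busy-wait version consumes CPU cycles while it waits; the event version costs nothing until the signal arrives.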

8 Co-begin Construct in Algol 68 begin a := 3, b := 4 end (comma: the assignments may execute in either order) par begin a := 3, b := 4 end (‘par’: the assignments can run in parallel)

9 Parallel Loops in SR co (i := 5 to 10) p(a, b, i) oc Six instances of p can run in parallel. In SR, safety of concurrent execution is the programmer’s responsibility: access to global variables from within p needs to be synchronized.

10 Launch-At-Elaboration in Ada procedure P is task T is... end T; begin -- P... end P; If P is recursive, many instances of T are created, and run concurrently. When control reaches the end of P, it waits for the appropriate instance of T to complete, before returning.

11 Fork/Join in Modula-3 t := Fork(c);... Join(t); Fork: creates a new thread, and starts executing procedure c. Fork returns a reference to a “thread closure” (just an address, actually). Join(t) waits for c to complete.
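Python's threading module offers the same fork/join pattern, shown here as a sketch (the procedure c and its result list are invented): Thread(...).start() plays the role of Fork, and join() waits for the thread's procedure to complete.

```python
import threading

def c(result):
    """Stand-in for the procedure passed to Fork."""
    result.append(42)

result = []
t = threading.Thread(target=c, args=(result,))  # Fork(c): create a thread
t.start()                                       # ...and start executing c
# ... the parent can do other work here, concurrently with c ...
t.join()                                        # Join(t): wait for c to finish
```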

12 Structured vs. Unstructured Threads (a) Co-begin, parallel loops, and launch-at-elaboration are always properly nested. (b) Fork/join need not be: it is a more general construct, allowing arbitrary patterns of control flow.

13 Threads, Calls and Replies (a) Ordinary subroutine call. (b) A subroutine call is equivalent to a busy-wait until the subroutine returns. (c) Early reply: the forked thread continues after “returning” to the caller, i.e., after signaling that the parent thread can proceed. (d) The fork can be postponed until the reply is signaled.

14 Need for Early Reply Typically, the forked thread will be autonomous, but only after some initialization. Example: web page rendering. Fork thread to render an image. Before proceeding, need size information on the image. Fork new thread to begin rendering, wait until it “early-replies” with image size info, and then proceed.
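A hedged Python sketch of early reply, using a threading.Event as the reply signal (the ImageRenderer class and the fixed 640×480 size are invented): the parent blocks only until the size is published, then proceeds with layout while rendering continues in the forked thread.

```python
import threading

class ImageRenderer:
    """Forked renderer that 'early-replies' with the image size,
    then keeps working after the parent has resumed."""
    def __init__(self):
        self.size = None
        self.size_ready = threading.Event()  # the early reply signal
        self.done = threading.Event()

    def run(self):
        self.size = (640, 480)    # hypothetical: read the image header
        self.size_ready.set()     # early reply: parent may lay out the page
        # ... the long rendering work would continue here ...
        self.done.set()

r = ImageRenderer()
threading.Thread(target=r.run).start()
r.size_ready.wait()               # parent blocks only until the reply
width, height = r.size            # safe: published before the reply
r.done.wait()                     # (only so this sketch exits cleanly)
```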

15 Thread Implementation The OS kernel manages processes and processors; the thread scheduler implements threads on top of processes.

16 Uniprocessor Scheduling One thread is current. Runnable threads are on ready_list. Other threads are waiting for various conditions. Scheduler moves threads to/from these lists.

17 Uniprocessor Scheduling (cont’d) reschedule: get a runnable thread and transfer control to it. yield: place current_thread on the ready_list, then reschedule. sleep_on: place current_thread on a specified queue Q, then reschedule.
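A toy cooperative version of this machinery, assuming Python generators stand in for threads: next(gen) is the "transfer", a generator's yield gives the processor back, and ready_list is a FIFO queue. Real schedulers also maintain sleep_on queues and handle preemption, both omitted here.

```python
from collections import deque

class Scheduler:
    """Toy uniprocessor scheduler: 'threads' are generators that yield
    to give up the processor; ready_list holds the runnable ones."""
    def __init__(self):
        self.ready_list = deque()

    def spawn(self, gen):
        self.ready_list.append(gen)

    def run(self):
        while self.ready_list:              # reschedule: pick a runnable thread
            current = self.ready_list.popleft()
            try:
                next(current)               # transfer to it until it yields
                self.ready_list.append(current)  # yield: back on ready_list
            except StopIteration:
                pass                        # thread finished; drop it

trace = []
def worker(name, steps):
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield                               # voluntarily yield the processor

s = Scheduler()
s.spawn(worker("A", 2))
s.spawn(worker("B", 2))
s.run()
# Round-robin interleaving: A0, B0, A1, B1
```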

18 Uniprocessor Scheduling (cont’d) Preemption: when switching threads, schedulers typically ask hardware clock to deliver a “wakeup” call sometime in the future. This ensures fairness: the current thread can’t hog the processor for too long. Wakeup signals come at arbitrary times, causing race conditions. Customary for scheduler to disable signals until transition ( reschedule, sleep_on ) is complete.

19 Shared Memory Model Two common forms of synchronization: –Mutual exclusion: Only one thread can execute a critical region of the code at a time. –Condition Synchronization: Thread(s) wait until a certain condition holds. Don’t want to over-synchronize: goal is to prevent bad race conditions.

20 Busy-wait Synchronization Typically, the condition is of the form –“location X contains value Y”. Process in busy-wait state enters a loop, cycles until condition holds. Busy-wait mutual exclusion is harder: –Usually requires an atomic machine instruction (non-interruptible), such as test_and_set. Technique consumes CPU cycles while process waits.
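Python exposes no test_and_set instruction, so this sketch borrows Lock.acquire(blocking=False) to play that role: it atomically tests whether the flag is free and sets it, reporting success. The spin loop around it is busy-wait mutual exclusion (the SpinLock class and counter are invented for illustration).

```python
import threading

class SpinLock:
    """Busy-wait mutual exclusion. acquire(blocking=False) stands in for
    an atomic test_and_set: test the flag and set it in one step."""
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass                  # spin: burn cycles until the flag is free

    def release(self):
        self._flag.release()

spin = SpinLock()
total = 0

def add(n):
    global total
    for _ in range(10_000):
        spin.acquire()            # enter critical region
        total += n
        spin.release()            # leave critical region

ts = [threading.Thread(target=add, args=(1,)) for _ in range(2)]
for t in ts:
    t.start()
for t in ts:
    t.join()
```

As the slide notes, the waiting thread consumes CPU the whole time it spins; scheduler-based techniques below avoid that cost.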

21 Scheduler-Based Synchronization Three techniques: –Semaphores –Monitors –Conditional critical regions. Advantage: process doesn’t cycle. –Instead, it is removed from the ready list. –Is not returned to the ready list until its condition is true.

22 Semaphores Oldest technique, invented by Dijkstra. A counter with two operations: P(Semaphore s) { await s > 0 then s = s-1; /* decrement must be atomic */ } V(Semaphore s) { s = s+1; /* must be atomic */ }

23 Semaphores (cont’d) The value of the semaphore is the number of units of the resource that are available. P waits until a resource is available (s > 0) and immediately claims it. V simply makes a resource available.

24 Semaphores (cont’d) To avoid busy-waiting, a semaphore may have an associated queue of processes. –If a process performs a P operation on a semaphore which has the value zero, the process is added to the semaphore's queue. – When another process increments the semaphore by performing a V operation, if there are processes on the queue, one of them is removed from the queue and resumes execution.
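Python's threading.Semaphore implements exactly this blocking discipline: acquire is P (the caller blocks on the semaphore's queue when the count is zero), release is V (a queued thread resumes). A sketch with an invented two-unit resource pool:

```python
import threading

pool = threading.Semaphore(2)     # two units of the resource available
in_use = []
in_use_lock = threading.Lock()
peak = 0                          # most units ever held at once

def use_resource(i):
    global peak
    with pool:                    # P: block until a unit is free, claim it
        with in_use_lock:
            in_use.append(i)
            peak = max(peak, len(in_use))
        with in_use_lock:
            in_use.remove(i)
                                  # leaving the 'with pool' block is V

ts = [threading.Thread(target=use_resource, args=(i,)) for i in range(5)]
for t in ts:
    t.start()
for t in ts:
    t.join()
# peak never exceeds 2: at most two threads held a unit at a time
```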

25 Sample Use of Semaphores A thread A needs information from two databases before proceeding. Access to these two DBs is controlled by two separate threads, B and C. A does the following: –Initialises a semaphore S with init(S,-1). –Posts a DBDataRequest to both threads (B, C), including a reference to the semaphore. –Immediately does P(S) and blocks.

26 Sample Use of Semaphores (cont’d) Threads B and C take their time to obtain the information. As B and C conclude, they each do a V(S). Only after both B and C have done their V(S) (and S becomes positive), will A be able to continue.
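Python's Semaphore cannot be initialised to -1, so a sketch of the same pattern starts S at 0 and has A perform one P per expected V instead; the effect is identical (the db_worker threads and their values are invented).

```python
import threading

s = threading.Semaphore(0)        # Python semaphores start at >= 0, so
                                  # A does one P per expected V instead
results = {}

def db_worker(name, value):
    """Threads B and C: obtain their information, then V the semaphore."""
    results[name] = value         # hypothetical database lookup
    s.release()                   # V(S)

threading.Thread(target=db_worker, args=("B", 10)).start()
threading.Thread(target=db_worker, args=("C", 20)).start()

s.acquire()                       # P(S): block until one reply arrives...
s.acquire()                       # ...and P again for the second
combined = results["B"] + results["C"]   # A continues with both answers
```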

27 Monitors A module with operations, an internal state, and condition variables. Only one operation in a given monitor can be active at one time. A thread that calls a busy monitor is delayed until the monitor becomes free. An operation can wait on a condition variable. An operation can signal a condition variable, causing one of the waiting threads to resume.

28 Monitor for a Bounded Buffer Two operations: insert, remove. Two conditions: empty_slot, full_slot. insert waits until there’s an empty slot, fills it, and signals full_slot. remove waits until there’s a full slot, empties it, and signals empty_slot.
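A Python sketch of this monitor: one lock enforces the one-operation-at-a-time rule, and two Condition variables play empty_slot and full_slot (the buffer size and test values are invented).

```python
import threading

class BoundedBuffer:
    """Monitor sketch: the lock makes insert/remove mutually exclusive;
    the two condition variables are empty_slot and full_slot."""
    def __init__(self, size):
        self.data = []
        self.size = size
        self.lock = threading.Lock()
        self.empty_slot = threading.Condition(self.lock)
        self.full_slot = threading.Condition(self.lock)

    def insert(self, item):
        with self.lock:
            while len(self.data) == self.size:
                self.empty_slot.wait()   # wait for an empty slot
            self.data.append(item)
            self.full_slot.notify()      # signal: a slot is now full

    def remove(self):
        with self.lock:
            while not self.data:
                self.full_slot.wait()    # wait for a full slot
            item = self.data.pop(0)
            self.empty_slot.notify()     # signal: a slot is now empty
            return item

buf = BoundedBuffer(2)
out = []
consumer = threading.Thread(
    target=lambda: out.extend(buf.remove() for _ in range(3)))
consumer.start()
for x in (1, 2, 3):
    buf.insert(x)                        # third insert waits for a slot
consumer.join()
```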

29 Conditional Critical Regions A syntactically delimited section of code in which there is access to a protected variable. A Boolean condition is specified; control cannot enter the region unless it holds. Only code in the region statement can access the protected variable. Any thread that reaches the region must wait until: –The condition is true. –No other thread is in a region for the same protected variable.

30 Sample Conditional Critical Region Protected variable: buffer insert allowed only if full_slots < SIZE remove allowed only if full_slots > 0

31 Bounded Buffer in Ada Explicit manager task: buffer. Task contains buffer size, number of full slots, contents, and two operations: insert, remove

32 Bounded Buffer in Ada (cont’d) Ada uses message passing: the select statement accepts (if conditions hold) messages from threads requesting operations: buffer.insert(3); buffer.remove(x);
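Python has no Ada rendezvous, but a hedged analogue keeps the message-passing flavour: a manager thread receives request messages on a queue and, like the select statement, only services a request when its condition holds (all names here — buffer_task, the reply queues, the "stop" message — are invented).

```python
import threading, queue

requests = queue.Queue()          # messages to the buffer manager task

def buffer_task(size):
    """Manager thread: like Ada's select loop, it accepts insert/remove
    requests and services a remove only when a full slot exists."""
    contents = []
    pending_removes = []          # removes waiting for a full slot
    while True:
        op, arg = requests.get()
        if op == "stop":
            return
        if op == "insert" and len(contents) < size:
            contents.append(arg)
            if pending_removes:   # a waiting remove can now proceed
                pending_removes.pop(0).put(contents.pop(0))
        elif op == "remove":
            if contents:
                arg.put(contents.pop(0))   # arg is the caller's reply box
            else:
                pending_removes.append(arg)

mgr = threading.Thread(target=buffer_task, args=(4,))
mgr.start()
requests.put(("insert", 3))       # like buffer.insert(3);
reply = queue.Queue()
requests.put(("remove", reply))   # like buffer.remove(x);
x = reply.get()
requests.put(("stop", None))
mgr.join()
```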


