1 Process Synchronization Andrew Case Slides adapted from Mohamed Zahran, Clark Barrett, Jinyang Li, Randy Bryant and Dave O’Hallaron.



2 Topics Sharing data Race conditions Mutual exclusion and semaphores Sample problems Deadlocks

3 Multi-Threaded Process
A process can have multiple threads
 Each thread has its own logical control flow (PC, registers, etc.)
 Each thread shares the same code, data, and kernel context
[Figure: per-thread contexts (data registers, condition codes, SP1/PC1 and SP2/PC2) with stack 1 for Thread 1 (main thread) and stack 2 for Thread 2 (peer thread); shared code and data (read-only code/data, read/write data, run-time heap, shared libraries); shared kernel context (VM structures, descriptor table, brk pointer)]

4 Threads Memory Model
Conceptual model:
 Multiple threads run within the context of a single process
 Each thread has its own separate thread context
 Thread ID, stack, stack pointer, PC, condition codes, and GP registers
 All threads share the remaining process context
 Code, data, heap, and shared library segments of the process virtual address space
 Open files and installed handlers
Operationally, this model is not strictly enforced:
 Register values are protected, but…
 Any thread can read and write the entire virtual address space (including any other thread's stack)

5 Mapping Variable Instances to Memory
Global variables
 Def: variable declared outside of a function
 Virtual memory contains exactly one instance of any global variable
Local variables
 Def: variable declared inside a function without the static attribute
 Each thread stack contains one instance of each local variable
Local static variables
 Def: variable declared inside a function with the static attribute
 Virtual memory contains exactly one instance of any local static variable

6 Mapping Variable Instances to Memory

char **ptr; /* global */

int main()
{
    int i;
    pthread_t tid;
    char *msgs[2] = {
        "Hello from foo",
        "Hello from bar"
    };
    ptr = msgs;
    for (i = 0; i < 2; i++)
        pthread_create(&tid, NULL, thread, (void *)i);
    Pthread_exit(NULL);
}

/* thread routine */
void *thread(void *vargp)
{
    int myid = (int)vargp;
    static int cnt = 0;
    printf("[%d]: %s (cnt=%d)\n", myid, ptr[myid], ++cnt);
}

 Global var ptr: 1 instance in 'data' memory
 Local static var cnt: 1 instance in 'data' memory
 Local vars i, tid, msgs: 1 instance each in the main thread's stack
 Local var myid: 2 instances (one in peer thread 0's stack, one in peer thread 1's stack)

linux> ./a.out
[0]: Hello from foo (cnt=1)
[1]: Hello from bar (cnt=2)

7 Problem: Race Conditions
A race condition occurs when the correctness of a program depends on one thread or process reaching point x before another thread or process reaches point y.
Any ordering of instructions from multiple threads may occur. (Example: race.c)

8 Race Condition: Example

int main()
{
    ...
    int cnt;
    for (cnt = 0; cnt < 100; cnt++) {
        /* send cnt to each thread */
        pthread_create(&tid, NULL, thread, &cnt);
    }
}

void *thread(void *p)
{
    int i = *((int *)p); /* cast to an int ptr and de-ref */
    pthread_detach(pthread_self());
    save_value(i);       /* do something with the count */
    return NULL;
}

Race test:
 If there is no race, then each thread would get a different value of cnt
 The set of saved values would consist of one copy each of 0 through 99
How can we fix it?

9 Race Condition: Example
Trace of one possible interleaving, tracking cnt and i (in the worker thread):

Main thread:
1:  int cnt;
2:  for (cnt = 0; cnt < 100; cnt++) {
3:      pthread_create(&tid, NULL, thread, &cnt);
4:  }

Worker thread:
5:  void *thread(void *p) {
6:      int i = *((int *)p);
7:      pthread_detach(pthread_self());
8:      save_value(i);
9:      return NULL;
10: }

Problem: i in the worker thread should have been set to the value cnt held when the thread was created, but the main thread may have already incremented cnt by the time line 6 runs.

10 Experimental Results
The race can really happen!
[Figure: histograms of the number of times each value is saved — panels: No Race, Multicore server, Single core laptop]

11 Race Condition: Fixed!

int main()
{
    ...
    int cnt;
    for (cnt = 0; cnt < 100; cnt++) {
        int *j = malloc(sizeof(int)); /* create separate var */
        *j = cnt;                     /* for each thread */
        pthread_create(&tid, NULL, thread, j);
    }
}

void *thread(void *p)
{
    int i = *((int *)p);
    free(p); /* free it when done */
    pthread_detach(pthread_self());
    save_value(i);
    return NULL;
}

12 Race Condition: Problem
Trace of the fixed version, tracking cnt (global), *j, and i (worker):

Main thread:
1:  int cnt;
2:  for (cnt = 0; cnt < 100; cnt++) {
3:      int *j = malloc(sizeof(int));
4:      *j = cnt;
5:      pthread_create(&tid, NULL, thread, j);
6:  }

Worker thread:
7:  void *thread(void *p) {
8:      int i = *((int *)p);
9:      free(p);
10:     pthread_detach(pthread_self());
11:     save_value(i);
12:     return NULL;
13: }

Each thread now reads its count through its own heap copy (*p), so later increments of cnt in the main thread cannot change it.

13 Topics Sharing data Race conditions Mutual exclusion and semaphores Sample problems Deadlocks

14 Improper Synchronization: badcnt.c

volatile int cnt = 0; /* global */

int main(int argc, char **argv)
{
    int niters = atoi(argv[1]);
    pthread_t tid1, tid2;

    pthread_create(&tid1, NULL, thread, &niters);
    pthread_create(&tid2, NULL, thread, &niters);
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);

    /* Check result */
    if (cnt != (2 * niters))
        printf("BOOM! cnt=%d\n", cnt);
    else
        printf("OK cnt=%d\n", cnt);
    exit(0);
}

/* Thread routine */
void *thread(void *vargp)
{
    /* local data */
    int i;
    int niters = *((int *)vargp);
    for (i = 0; i < niters; i++)
        cnt++; /* shared data */
    return NULL;
}

linux> ./badcnt 10000
OK cnt=20000
linux> ./badcnt 10000
BOOM! cnt=13051

cnt should equal 20,000. What went wrong?

15 Assembly Code for Counter Loop
C code for the counter loop in thread i:

for (i = 0; i < niters; i++)
    cnt++; /* changing shared data */

Corresponding assembly code:

        movl (%rdi), %ecx
        movl $0, %edx
        cmpl %ecx, %edx
        jge  .L13
.L11:                          # Head (loop iteration)
        movl cnt(%rip), %eax   # Load cnt      \
        incl %eax              # Update cnt     } critical region: changing shared data
        movl %eax, cnt(%rip)   # Store cnt     /
        incl %edx              # Tail (loop iteration)
        cmpl %ecx, %edx
        jl   .L11
.L13:

16 Concurrent Execution
Any sequentially consistent interleaving is possible:
 X_i denotes that thread i executes instruction X
 %eax_i is the content of %eax in thread i's context
(H = head, L = Load, U = Update, S = Store, T = tail)

  instr   %eax_1   %eax_2   cnt
  H1       -        -        0
  L1       0        -        0
  U1       1        -        0
  S1       1        -        1
  H2       -        -        1
  L2       -        1        1
  U2       -        2        1
  S2       -        2        2
  T2       -        2        2
  T1       1        -        2

OK: thread 1's critical section completes before thread 2's begins, so cnt = 2.

17 Concurrent Execution (cont)
Race condition: two threads increment the counter, but the result is 1 instead of 2.
(H = head, L = Load, U = Update, S = Store, T = tail)

  instr   %eax_1   %eax_2   cnt
  H1       -        -        0
  L1       0        -        0
  U1       1        -        0
  H2       -        -        0
  L2       -        0        0
  S1       1        -        1
  T1       1        -        1
  U2       -        1        1
  S2       -        1        1
  T2       -        1        1

Oops! Thread 2 loaded cnt before thread 1 stored it, so one increment was lost.

18 Concurrent Execution (cont)
How about this ordering?
(H = head, L = Load, U = Update, S = Store, T = tail)

  instr   %eax_1   %eax_2   cnt
  H1       -        -        0
  L1       0        -        0
  H2       -        -        0
  L2       -        0        0
  U2       -        1        0
  S2       -        1        1
  U1       1        -        1
  S1       1        -        1
  T1       1        -        1
  T2       -        1        1

Oops! cnt = 1 again. How do we fix it?!

19 Enforcing Mutual Exclusion
Answer: we must synchronize the execution of the threads so that critical regions run sequentially, i.e., we need to guarantee mutually exclusive access to critical regions.
Classic solution:
 Semaphores (Dijkstra)
Other approaches (out of our scope):
 Pthreads mutexes and condition variables
 Monitors (Java)
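As a side-by-side with the semaphore approach developed next, here is a hedged sketch (not from the slides) of the same counter loop protected by one of the "out of scope" alternatives, a Pthreads mutex. The `run_counter` wrapper is invented for this illustration:

```c
#include <pthread.h>

static volatile int cnt = 0;
static pthread_mutex_t cnt_lock = PTHREAD_MUTEX_INITIALIZER;

static void *thread(void *vargp) {
    int niters = *(int *)vargp;
    for (int i = 0; i < niters; i++) {
        pthread_mutex_lock(&cnt_lock);   /* enter critical region */
        cnt++;
        pthread_mutex_unlock(&cnt_lock); /* leave critical region */
    }
    return NULL;
}

/* Run two threads of niters increments each; return the final cnt. */
int run_counter(int niters) {
    pthread_t tid1, tid2;
    cnt = 0;
    pthread_create(&tid1, NULL, thread, &niters);
    pthread_create(&tid2, NULL, thread, &niters);
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);
    return cnt;
}
```

With the lock in place, `run_counter(10000)` reliably yields 20000 instead of the "BOOM" results above.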

20 Semaphores
Semaphore: non-negative global integer synchronization variable, manipulated by P and V operations:
 P(s): [ while (s == 0) wait(); s--; ]
  Dutch for "Proberen" (to test)
  Used to gain access to a shared resource
  AKA: down, wait
 V(s): [ s++; ]
  Dutch for "Verhogen" (to increment)
  Used to release access to a shared resource
  AKA: up, post
[…] indicates an 'atomic' operation performed by the OS:
 Guarantees each bracketed sequence runs as if it were one instruction
 Only one P or V operation at a time can modify s
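The bracketed P/V definitions above can be modeled in ordinary Pthreads code. This is only an illustrative model of the semantics — a real sem_t is supplied by the OS/libc, and the `sem`/`P`/`V` names here are invented for the sketch:

```c
#include <pthread.h>

typedef struct {
    int value;                /* the non-negative counter s */
    pthread_mutex_t lock;     /* makes P and V atomic w.r.t. each other */
    pthread_cond_t nonzero;   /* waiters sleep here while s == 0 */
} sem;

void sem_init_model(sem *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void P(sem *s) {              /* [ while (s == 0) wait(); s--; ] */
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void V(sem *s) {              /* [ s++; ] plus waking one waiter */
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}
```

The mutex is what makes each bracketed sequence behave "as if it was one instruction": only one P or V can be inside the lock at a time.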

21 Using Semaphores for Mutual Exclusion
Basic idea:
 Associate a unique semaphore mutex with each shared resource (e.g., a variable)
 Surround the corresponding critical sections with P(mutex)/wait() and V(mutex)/post() operations
Terminology:
 Binary semaphore: semaphore whose value is always 0 or 1
 Mutex: binary semaphore used for mutual exclusion
 P()/wait() operation: "locking" the mutex
 V()/post() operation: "unlocking" or "releasing" the mutex
 "Holding" a mutex/resource: locked and not yet unlocked

22 C Semaphore Operations
POSIX semaphore functions (used with Pthreads):

#include <semaphore.h>

/* creating a semaphore */
int sem_init(sem_t *sem, int pshared, unsigned int val); /* s = val */

/* using a semaphore */
int sem_wait(sem_t *sem); /* P(s): try to claim a resource */
int sem_post(sem_t *sem); /* V(s): release a resource */

 pshared = 0: semaphore shared between threads (as opposed to processes)
 val: number of resources available (how many threads can access the data at any given time)
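A minimal usage sketch of these three calls in sequence; the `demo_semaphore` wrapper is invented for illustration and is not part of the API (on Linux, sem_getvalue can read the counter back):

```c
#include <semaphore.h>

/* Create a semaphore with one resource, claim it, release it,
   and return the final counter value (expected: 1). */
int demo_semaphore(void) {
    sem_t s;
    int val = -1;
    sem_init(&s, 0, 1); /* pshared = 0, s = 1 */
    sem_wait(&s);       /* P(s): claim the resource, s becomes 0 */
    sem_post(&s);       /* V(s): release it, s back to 1 */
    sem_getvalue(&s, &val);
    sem_destroy(&s);
    return val;
}
```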

23 Improper Synchronization: badcnt.c

volatile int cnt = 0; /* global */

int main(int argc, char **argv)
{
    int niters = atoi(argv[1]);
    pthread_t tid1, tid2;

    pthread_create(&tid1, NULL, thread, &niters);
    pthread_create(&tid2, NULL, thread, &niters);
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);

    /* Check result */
    if (cnt != (2 * niters))
        printf("BOOM! cnt=%d\n", cnt);
    else
        printf("OK cnt=%d\n", cnt);
    exit(0);
}

/* Thread routine */
void *thread(void *vargp)
{
    /* local data */
    int i, niters = *((int *)vargp);
    for (i = 0; i < niters; i++)
        cnt++; /* shared data */
    return NULL;
}

How can we fix this using semaphores?

24 Proper Synchronization: goodcnt.c
Define and initialize a mutex for the shared variable cnt:

volatile int cnt = 0;   /* Counter */
sem_t mutex;            /* Semaphore that protects cnt */
sem_init(&mutex, 0, 1); /* mutex = 1 (only 1 access at a time) */

Surround the critical section with P and V:

for (i = 0; i < niters; i++) {
    sem_wait(&mutex); /* P() */
    cnt++;
    sem_post(&mutex); /* V() */
}

linux> ./goodcnt 10000
OK cnt=20000
linux> ./goodcnt 10000
OK cnt=20000

Note: much slower than badcnt.c

25 Sample Problems Readers-writers problem Producer-consumer problem

26 Readers-Writers Problem
Problem statement:
 Reader threads only read the object
 Writer threads modify the object
 Writers must have exclusive access to the object
 An unlimited number of readers can access the object
Examples:
 Online airline reservation system
 Multithreaded caching Web proxy
 Banking software

27 Variants of Readers-Writers
First readers-writers problem (favors readers):
 No reader should be kept waiting unless a writer has already been granted permission to use the object
 A reader that arrives after a waiting writer gets priority over the writer
Second readers-writers problem (favors writers):
 Once a writer is ready to write, it performs its write as soon as possible
 A reader that arrives after a writer must wait, even if the writer is also waiting
Starvation (where a thread waits indefinitely) is possible in both cases.

28 Solution to First Readers-Writers Problem

int readcnt;    /* Initially 0 */
sem_t mutex, w; /* Both set to 1 */

Readers:
void reader(void)
{
    while (1) {
        /* am I the first reader in? */
        /* if so, try to lock w to exclude the writer */

        /* do some reading */

        /* am I the last reader out? */
        /* if so, unlock w */
    }
}

Writers:
void writer(void)
{
    while (1) {
        /* try to lock w to exclude readers */

        /* do some writing */

        /* unlock w */
    }
}

29 Solution to First Readers-Writers Problem

int readcnt;    /* Initially 0 */
sem_t mutex, w; /* Both initially 1 */

Readers:
void reader(void)
{
    while (1) {
        sem_wait(&mutex);
        readcnt++;
        if (readcnt == 1) /* First in */
            sem_wait(&w);
        sem_post(&mutex);

        /* Reading happens here */

        sem_wait(&mutex);
        readcnt--;
        if (readcnt == 0) /* Last out */
            sem_post(&w);
        sem_post(&mutex);
    }
}

Writers:
void writer(void)
{
    while (1) {
        sem_wait(&w);

        /* Writing here */

        sem_post(&w);
    }
}

(rw1.c)

30 Sample Problems Readers-writers problem Producer-consumer problem

31 Producer-Consumer Problem
[Figure: producer thread → shared buffer → consumer thread]
Common synchronization pattern:
 Producer waits for an empty slot, inserts an item in the buffer, and notifies the consumer
 Consumer waits for an item, removes it from the buffer, and notifies the producer
Examples:
 Multimedia processing:
  Producer creates MPEG video frames, consumer renders them
 Event-driven graphical user interfaces:
  Producer detects mouse clicks, mouse movements, and keyboard hits and inserts corresponding events in the buffer
  Consumer retrieves events from the buffer and paints the display

32 Producer-Consumer on 1-element Buffer

#include "csapp.h"
#define NITERS 5

void *producer(void *arg);
void *consumer(void *arg);

struct {
    int buf;     /* shared var */
    sem_t full;  /* sems */
    sem_t empty;
} shared;

int main()
{
    pthread_t tid_producer;
    pthread_t tid_consumer;

    /* Initialize the semaphores */
    sem_init(&shared.empty, 0, 1);
    sem_init(&shared.full, 0, 0);

    /* Create threads and wait */
    pthread_create(&tid_producer, NULL, producer, NULL);
    pthread_create(&tid_consumer, NULL, consumer, NULL);
    pthread_join(tid_producer, NULL);
    pthread_join(tid_consumer, NULL);

    exit(0);
}

33 Producer-Consumer on 1-element Buffer
Initially: empty == 1, full == 0

Producer thread:
void *producer(void *arg)
{
    int i, item;
    for (i = 0; i < NITERS; i++) {
        /* Produce item */
        item = i;
        printf("produced %d\n", item);

        /* Write item to buf */
        sem_wait(&shared.empty);
        shared.buf = item;
        sem_post(&shared.full);
    }
    return NULL;
}

Consumer thread:
void *consumer(void *arg)
{
    int i, item;
    for (i = 0; i < NITERS; i++) {
        /* Read item from buf */
        sem_wait(&shared.full);
        item = shared.buf;
        sem_post(&shared.empty);

        /* Consume item */
        printf("consumed %d\n", item);
    }
    return NULL;
}

34 Producer-Consumer on an n-element Buffer
Requires a mutex and two counting semaphores:
 mutex: enforces mutually exclusive access to the buffer
 slots: counts the available slots in the buffer
 items: counts the available items in the buffer
Implemented using a shared buffer
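The three semaphores above can be sketched in code in the style of CS:APP's "sbuf" package. The struct layout and helper names below follow that style but are a hedged reconstruction for illustration, not the book's exact code:

```c
#include <semaphore.h>
#include <stdlib.h>

typedef struct {
    int *buf;     /* buffer array */
    int n;        /* capacity (number of slots) */
    int front;    /* buf[(front+1) % n] is the first item */
    int rear;     /* buf[rear % n] is the last item */
    sem_t mutex;  /* enforces mutually exclusive access to the buffer */
    sem_t slots;  /* counts available slots */
    sem_t items;  /* counts available items */
} sbuf_t;

void sbuf_init(sbuf_t *sp, int n) {
    sp->buf = calloc(n, sizeof(int));
    sp->n = n;
    sp->front = sp->rear = 0;
    sem_init(&sp->mutex, 0, 1); /* binary semaphore for locking */
    sem_init(&sp->slots, 0, n); /* initially n empty slots */
    sem_init(&sp->items, 0, 0); /* initially 0 items */
}

void sbuf_insert(sbuf_t *sp, int item) {
    sem_wait(&sp->slots);  /* wait for a free slot */
    sem_wait(&sp->mutex);
    sp->buf[(++sp->rear) % sp->n] = item;
    sem_post(&sp->mutex);
    sem_post(&sp->items);  /* announce the new item */
}

int sbuf_remove(sbuf_t *sp) {
    int item;
    sem_wait(&sp->items);  /* wait for an available item */
    sem_wait(&sp->mutex);
    item = sp->buf[(++sp->front) % sp->n];
    sem_post(&sp->mutex);
    sem_post(&sp->slots);  /* announce the freed slot */
    return item;
}
```

Note the ordering: each side waits on its counting semaphore before taking the mutex, so a full (or empty) buffer blocks producers (or consumers) without holding the lock.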

35 Producer-Consumer w/ Shared Buffer
[Figure: multiple producer threads and multiple consumer threads sharing one buffer]
Producer-consumer pattern:
 Producer inserts an item in the buffer (waits if the buffer is full)
 Consumer removes an item from the buffer (waits if the buffer is empty)
Examples:
 Network server
  Producer threads read from sockets and put client requests on the buffer
  Consumer threads process clients' requests from the buffer

36 Topics Sharing data Race conditions Mutual exclusion and semaphores Sample problems Deadlocks

37 Problem: Deadlocks
Def: A process is deadlocked if it is waiting for a condition that will never occur.
Typical scenario:
 Process 1 and Process 2 both need the same two resources (A and B)
 Process 1 acquires A, waits for B
 Process 2 acquires B, waits for A
 Both will wait forever!

38 Deadlocking With Semaphores

int main()
{
    pthread_t tid[2];
    sem_init(&mutex[0], 0, 1); /* mutex[0] = 1 */
    sem_init(&mutex[1], 0, 1); /* mutex[1] = 1 */
    pthread_create(&tid[0], NULL, worker, (void *) 0); /* Locks 0 then 1 */
    pthread_create(&tid[1], NULL, worker, (void *) 1); /* Locks 1 then 0 */
    pthread_join(tid[0], NULL);
    pthread_join(tid[1], NULL);
    exit(0);
}

void *worker(void *x)
{
    int id = (int) x;
    sem_wait(&mutex[id]);
    sem_wait(&mutex[1-id]);
    /* Do something... */
    sem_post(&mutex[id]);
    sem_post(&mutex[1-id]);
    return NULL;
}

Tid[0]: P(s0); P(s1); … V(s0); V(s1);
Tid[1]: P(s1); P(s0); … V(s1); V(s0);

39 Avoiding Deadlock with Semaphores

int main()
{
    pthread_t tid[2];
    sem_init(&mutex[0], 0, 1); /* mutex[0] = 1 */
    sem_init(&mutex[1], 0, 1); /* mutex[1] = 1 */
    pthread_create(&tid[0], NULL, worker, (void *) 0); /* Locks 0 then 1 */
    pthread_create(&tid[1], NULL, worker, (void *) 0); /* Locks 0 then 1 */
    pthread_join(tid[0], NULL);
    pthread_join(tid[1], NULL);
    exit(0);
}

void *worker(void *x)
{
    int id = (int) x;
    sem_wait(&mutex[id]);
    sem_wait(&mutex[1-id]);
    /* Do something... */
    sem_post(&mutex[1-id]);
    sem_post(&mutex[id]);
    return NULL;
}

Tid[0]: P(s0); P(s1); … V(s1); V(s0);
Tid[1]: P(s0); P(s1); … V(s1); V(s0);

 Acquire shared resources in the same order
 Release resources in the reverse order
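One common way to enforce the "same order" rule when the two locks aren't known in advance is to order them by address. A hedged sketch (the `lock_pair`/`unlock_pair` helpers are invented for illustration, here using Pthreads mutexes):

```c
#include <pthread.h>

/* Acquire two mutexes in a globally consistent order: the one with
   the lower address first. Every thread that uses lock_pair agrees
   on the order, so the circular wait needed for deadlock cannot form. */
void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if (a > b) { pthread_mutex_t *t = a; a = b; b = t; }
    pthread_mutex_lock(a); /* lower address first */
    pthread_mutex_lock(b);
}

/* Release both; release order does not affect deadlock freedom. */
void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}
```

With this helper, worker threads can pass the two mutexes in either order and still acquire them consistently.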

40 Threads Summary
Threads are growing in popularity.
Pros:
 Somewhat cheaper than processes
 Easy to share data between threads
Cons:
 Easy to share data between threads
 Easy to introduce subtle synchronization errors
 Shared variables must be protected to ensure mutually exclusive access
Semaphores provide one solution.