Programming with Threads


Programming with Threads

Topics:
- Shared variables
- The need for synchronization
- Synchronizing with semaphores
- Thread safety and reentrancy
- Races and deadlocks

Shared Variables in Threaded C Programs

Question: Which variables in a threaded C program are shared?
The answer is not as simple as "global variables are shared" and "stack
variables are private". It requires answers to the following questions:
- What is the memory model for threads?
- How are variables mapped to memory instances?
- How many threads reference each of these instances?

Threads Memory Model

Conceptual model:
- Each thread runs in the context of a process.
- Each thread has its own separate thread context: thread ID, stack,
  stack pointer, program counter, condition codes, and general-purpose
  registers.
- All threads share the remaining process context: code, data, heap,
  and shared library code; open files and installed handlers.

Operationally, this model is not strictly enforced:
- While register values are truly separate and protected...
- Any thread can read and write the stack of any other thread.

Practice Problem Problem 13.6, p. 871 (handout)

Shared Variable Analysis

Which variables are shared?

    Variable    Referenced by    Referenced by     Referenced by
    instance    main thread?     peer thread 0?    peer thread 1?
    ptr         yes              yes               yes
    cnt         no               yes               yes
    i.m         yes              no                no
    msgs.m      yes              yes               yes
    myid.p0     no               yes               no
    myid.p1     no               no                yes

ptr, cnt, and msgs are shared.
i and myid are not shared (i is passed by value).

demo: sharing.c (on cs01)

Synchronization

An improperly synchronized program: badcnt.c (p. 872).
Examine the count function in badcnt.s.

Compiled with the force-mem option (which forces memory operands to be
copied to registers before performing arithmetic), cnt++ becomes three
instructions:

    movl _cnt,%eax
    incl %eax
    movl %eax,_cnt

Compiled without force-mem, it may be a single instruction:

    incl _cnt

Assembly Code for Counter Loop

C code for the counter loop:

    for (i = 0; i < NITERS; i++)
        cnt++;

Corresponding asm code (gcc -fforce-mem):

    .L9:                        # Head (Hi)
        movl -4(%ebp),%eax
        cmpl $99999999,%eax
        jle .L12
        jmp .L10
    .L12:
        movl _cnt,%eax          # Load cnt   (Li)
        incl %eax               # Update cnt (Ui)
        movl %eax,_cnt          # Store cnt  (Si)
    .L11:                       # Tail (Ti)
        leal 1(%eax),%edx
        movl %edx,-4(%ebp)
        jmp .L9
    .L10:

Concurrent Execution

Any sequential interleaving is possible, but some orderings are
incorrect. (%eax_i is the contents of %eax in thread i's context;
H = Head, L = Load, U = Update, S = Store, T = Tail.)

    i (thread)   instr_i   %eax1   %eax2   cnt
    1            H1        -       -       0
    1            L1        0       -       0
    1            U1        1       -       0
    1            S1        1       -       1
    2            H2        -       -       1
    2            L2        -       1       1
    2            U2        -       2       1
    2            S2        -       2       2
    2            T2        -       2       2
    1            T1        1       -       2     OK

Concurrent Execution (cont.)

An incorrect ordering: two threads increment the counter, but the
result is 1 instead of 2.

    i (thread)   instr_i   %eax1   %eax2   cnt
    1            H1        -       -       0
    1            L1        0       -       0
    1            U1        1       -       0
    2            H2        -       -       0
    2            L2        -       0       0
    1            S1        1       -       1
    1            T1        1       -       1
    2            U2        -       1       1
    2            S2        -       1       1
    2            T2        -       1       1     Oops!

Concurrent Execution (cont.)

How about this ordering: H1, L1, H2, L2, U2, S2, U1, S1, T1, T2?
Work through the table: both threads load cnt = 0 before either stores,
so both store 1, and the final value of cnt is 1 rather than 2.

We can clarify our understanding of concurrent execution with the help
of progress graphs.

Progress Graphs

A progress graph depicts the execution state of concurrent threads.
Each axis corresponds to the sequential order of instructions in one
thread (thread 1: H1, L1, U1, S1, T1; thread 2: H2, L2, U2, S2, T2).
Each point corresponds to a possible execution state. For example,
(L1, S2) denotes the state where thread 1 has completed L1 and thread 2
has completed S2.

Trajectories in Progress Graphs

A trajectory is a sequence of legal state transitions that describes
one possible concurrent execution of the threads.

Example: H1, L1, U1, H2, L2, S1, T1, U2, S2, T2

Critical Sections and Unsafe Regions

L, U, and S form a critical section with respect to the shared variable
cnt. Instructions in critical sections should not be interleaved. The
set of states where such interleaving would occur forms an unsafe
region in the progress graph.

Safe and Unsafe Trajectories

Def: A trajectory is safe iff it does not cross into the unsafe region.
Claim: A trajectory is correct iff it is safe.

Semaphores

Question: How can we guarantee a safe trajectory?
We must synchronize the threads so that they never enter an unsafe
state.

Classic solution: Dijkstra's P and V operations on semaphores.
- A semaphore is a non-negative integer synchronization variable; the
  value of s indicates the number of resources available.
- P(s): [ while (s == 0) wait(); s--; ]
  P(s) requests resource s. (Dutch "Proberen", to test.)
- V(s): [ s++; ]
  V(s) releases resource s. (Dutch "Verhogen", to increment.)

The OS guarantees that the operations between brackets [ ] are atomic
(executed indivisibly):
- Only one P or V operation at a time can modify s.
- When the while loop in P terminates, only that P can decrement s.

Safe Sharing with Semaphores

Here is how we would use P and V operations to synchronize the threads
that update cnt:

    /* Semaphore s is initially 1 */

    /* Thread routine */
    void *count(void *arg)
    {
        int i;

        for (i = 0; i < NITERS; i++) {
            P(s);
            cnt++;
            V(s);
        }
        return NULL;
    }

Safe Sharing with Semaphores (cont.)

Provide mutually exclusive access to the shared variable by surrounding
the critical section with P and V operations on semaphore s (initially
set to 1). The semaphore invariant creates a forbidden region that
encloses the unsafe region and is never touched by any trajectory.

Semaphore Wrappers

    /* Initialize semaphore sem to value */
    /* pshared=0 if thread, pshared=1 if process */
    void Sem_init(sem_t *sem, int pshared, unsigned int value)
    {
        if (sem_init(sem, pshared, value) < 0)
            unix_error("Sem_init");
    }

    /* P operation on semaphore sem */
    void P(sem_t *sem)
    {
        if (sem_wait(sem))
            unix_error("P");
    }

    /* V operation on semaphore sem */
    void V(sem_t *sem)
    {
        if (sem_post(sem))
            unix_error("V");
    }

Demo: goodcnt.c (demo, cs01, not in text)

Signaling with Semaphores

Common synchronization pattern (producer thread -> shared buffer ->
consumer thread):
- The producer waits for a slot, inserts an item in the buffer, and
  "signals" the consumer.
- The consumer waits for an item, removes it from the buffer, and
  "signals" the producer.
- "Signals" in this context have nothing to do with Unix signals.

Examples:
- Multimedia processing: the producer creates MPEG video frames; the
  consumer renders the frames.
- Event-driven graphical user interfaces: the producer (person)
  generates mouse clicks, mouse movements, and keyboard hits, and
  inserts the corresponding events in the buffer; the consumer
  (computer) retrieves events from the buffer and paints the display.

Examples

Demo: Shared FIFO Buffer
- sbuf.h/sbuf.c - shared FIFO buffer (p. 880-882)
- sbufdriver.c - test driver

Demo: Concurrent Echo Server
- echoservert_pre.c (p. 884)
- echo_cnt.c (p. 885)

Thread-Safe and Reentrant Functions

Functions called from a thread must be thread-safe.

A function is reentrant iff it:
- accesses NO shared variables, and
- does not require synchronization operations (or initialization).

A function can be thread-safe but not reentrant:
- a thread-safe function can access shared variables,
- but it must use semaphores to protect those accesses,
- which is less efficient than a reentrant implementation.

(Venn diagram: reentrant functions are a subset of thread-safe
functions, which in turn are a subset of all functions; the remainder
are thread-unsafe.)

Thread Safety

Thread-unsafe functions include functions that:
1. fail to protect shared variables,
2. rely on persistent state across invocations,
3. return a pointer to a static variable, or
4. call thread-unsafe functions.

Thread-Unsafe Functions (Class 1)

Failing to protect shared variables.
- Fix: use P and V semaphore operations.
- Issue: synchronization operations will slow down the code.
- Example: goodcnt.c

Thread-Unsafe Functions (Class 2)

Relying on persistent state across multiple function invocations.
Example: a pseudo-random number generator that relies on static state.

    /* rand - return pseudo-random integer in 0..32767 */
    static unsigned int next = 1;

    int rand(void)
    {
        next = (next * 1103515245) + 12345;
        return (unsigned int)(next / 65536) % 32768;
    }

    /* srand - set seed for rand() */
    void srand(unsigned int seed)
    {
        next = seed;
    }

    /* usage */
    srand(20);
    int x = rand();

Thread-Unsafe Functions (Class 2, fixed)

Fix: rewrite the function so that the caller passes in all the
necessary state.

    /* rand - return pseudo-random integer in 0..32767 */
    int rand(unsigned int *next)
    {
        *next = (*next * 1103515245) + 12345;
        return (unsigned int)(*next / 65536) % 32768;
    }

    /* usage */
    unsigned int seed = 20;
    int x = rand(&seed);

Thread-Unsafe Functions (Class 3)

Returning a pointer to a static variable.

    char *asctime(struct tm *clock)
    {
        static char buf[50];
        ...
        return buf;
    }

Fixes:
1. Reentrant version: the caller passes a pointer to buf.
   Issue: requires changes in both caller and callee.

        /* see 'man asctime' */
        buf = Malloc(...);
        asctime_r(&tm, buf);

2. Thread-safe version: a lock-and-copy wrapper.
   Issue: the caller must free the returned memory.

        char *asctime_t(struct tm *clock)
        {
            char *buf = Malloc(...);
            P(&mutex);
            char *p = asctime(clock);
            strcpy(buf, p);
            V(&mutex);
            return buf;
        }

Thread-Unsafe Functions (Class 4)

Calling thread-unsafe functions.
- Calling one thread-unsafe function makes an entire function
  thread-unsafe.
- Fix: modify the function so that it calls only thread-safe functions.

Thread-Safe Library Functions

Most functions in the Standard C library (at the back of your K&R text)
are thread-safe. Examples: malloc, free, printf, scanf.

Exceptions: asctime, ctime, gmtime, localtime
- These four functions return pointers to static objects that may be
  overwritten by other calls.
Exception: rand

Reentrant versions exist: asctime_r, rand_r, ...

Most Unix system calls are thread-safe, with a few exceptions:

    Thread-unsafe function    Reentrant version
    gethostbyaddr             gethostbyaddr_r
    gethostbyname             gethostbyname_r
    inet_ntoa                 (none)

Races

A race occurs when the correctness of the program depends on one thread
reaching point x before another thread reaches point y.
(demo: run several times)

    /* race between creating the next thread and
       accessing the variable i */
    int main()
    {
        pthread_t tid[N];
        int i;

        for (i = 0; i < N; i++)
            Pthread_create(&tid[i], NULL, thread, &i);
        for (i = 0; i < N; i++)
            Pthread_join(tid[i], NULL);
        exit(0);
    }

    /* thread routine */
    void *thread(void *vargp)
    {
        int myid = *((int *)vargp);
        printf("Hello from thread %d\n", myid);
        return NULL;
    }

Deadlocks

Locking introduces the potential for deadlock: waiting for a condition
that will never be true.

Consider two semaphores a and b, both initially 1, where thread 1
executes P(a), P(b), ..., V(a), V(b) while thread 2 executes P(b),
P(a), ..., V(b), V(a). The forbidden regions for a and b overlap in the
progress graph, creating a deadlock region. Any trajectory that enters
the deadlock region will eventually reach the deadlock state, where
each thread waits forever for a or b to become nonzero. Other
trajectories luck out and skirt the deadlock region.

Threads Summary

Threads provide another mechanism for writing concurrent programs.
Threads are growing in popularity:
- somewhat cheaper than processes,
- easy to share data between threads.

However, the ease of sharing has a cost:
- it is easy to introduce subtle synchronization errors.

Tread carefully with threads!

For more info: D. Butenhof, Programming with POSIX Threads,
Addison-Wesley, 1997.