CS510 Concurrent Systems Jonathan Walpole. Introduction to Concurrency.



Why is Concurrency Important?
-Scaling up vs scaling out
-Concurrent hardware vs concurrent software

Sequential Programming
Sequential programming with processes
-Private memory
 -a program
 -data
 -stack
 -heap
-CPU context
 -program counter
 -stack pointer
 -registers

Sequential Programming Example
int i = 0
i = i + 1
print i
What output do you expect? Why?
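
A minimal C translation of this slide (a sketch; the variable name and output format are just illustrative) always prints 1, because in a single-threaded program the only updates to memory come from the program's own statements:

#include <stdio.h>

int main(void) {
    int i = 0;
    i = i + 1;          /* one update, nothing else can interfere */
    printf("%d\n", i);  /* deterministically prints 1 */
    return 0;
}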

Concurrent Programming
Concurrent programming with threads
-Shared memory
 -a program
 -data
 -heap
-Private stack for each thread
-Private CPU context for each thread
 -program counter
 -stack pointer
 -registers

Concurrent Threads Example
int i = 0
Thread 1: i = i + 1
print i
What output do you expect with 1 thread? Why?

Concurrent Threads Example
int i = 0
Thread 1: i = i + 1
Thread 2: i = i + 1
print i
What output do you expect with 2 threads? Why?
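
A sketch of the two-thread version with POSIX threads (the iteration count and function name are illustrative; many increments are used so the lost updates become visible). With no synchronization the printed total is usually less than 2000000:

#include <pthread.h>
#include <stdio.h>

static int i = 0;                 /* shared, unprotected */

static void *increment(void *arg) {
    for (int n = 0; n < 1000000; n++)
        i = i + 1;                /* load, add, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%d\n", i);            /* usually less than 2000000 */
    return 0;
}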

Race Conditions
How is i = i + 1 implemented?
-load i into a register
-increment the register
-store the register value back to i
Registers are part of each thread's private CPU context

Race Conditions
Thread 1:
-load i to regn
-inc regn
-store regn to i
Thread 2:
-load i to regn
-inc regn
-store regn to i

Critical Sections
What is the danger in the previous example?
How can we make it safe?
What price do we pay?

Mutual Exclusion

How can we implement it?

Locks
-Each piece of shared data has a unique lock associated with it
-Threads acquire the lock before accessing the data
-Threads release the lock when they are finished with the data
-The lock can only be held by one thread at a time
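
A sketch of this discipline applied to the earlier counter, using a POSIX mutex as the lock (pthread_mutex_lock/unlock stand in for the generic acquire/release; the names are illustrative). With the lock in place the two-thread example always prints 2000000:

#include <pthread.h>
#include <stdio.h>

static int i = 0;
static pthread_mutex_t i_lock = PTHREAD_MUTEX_INITIALIZER;  /* the lock for i */

static void *increment(void *arg) {
    for (int n = 0; n < 1000000; n++) {
        pthread_mutex_lock(&i_lock);      /* acquire before touching i */
        i = i + 1;                        /* critical section */
        pthread_mutex_unlock(&i_lock);    /* release when finished */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%d\n", i);                    /* now always 2000000 */
    return 0;
}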

Locks - Implementation
How can we implement a lock?
-How do we test to see if it's held?
-How do we acquire it?
-How do we release it?
-How do we block/wait if it is already held when we test?

Does this work?
bool lock = false

while (lock == true) ;    /* wait */
lock = true;              /* lock */
critical section
lock = false;             /* unlock */

Atomicity
Lock and unlock operations must be atomic
Modern hardware provides a few simple atomic instructions that can be used to build atomic lock and unlock primitives

Atomic Instructions
-Atomic test-and-set (TSL)
-Compare-and-swap (CAS)
-Load-linked / store-conditional (ll/sc)
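
As a small illustration (assuming GCC/Clang __atomic builtins; ll/sc has no direct C-level builtin, the compiler emits those instructions on ARM and RISC-V), here is the earlier i = i + 1 update made safe with compare-and-swap:

/* Retry the load-modify-CAS sequence until no other thread
 * intervenes between the load and the store. */
void atomic_increment(int *p) {
    int old, desired;
    do {
        old = __atomic_load_n(p, __ATOMIC_RELAXED);
        desired = old + 1;
    } while (!__atomic_compare_exchange_n(p, &old, desired, 0,
                                          __ATOMIC_RELAXED, __ATOMIC_RELAXED));
}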

Atomic Test and Set
TSL performs the following in a single atomic step:
-set lock and return its previous value
Using TSL in a lock operation:
-if the return value is false then you got the lock
-if the return value is true then you did not
-either way, the lock is set

Spin Locks
while TSL(lock) ;    /* spin while return value is true */
critical section
lock = false
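
A compilable sketch of this spin lock (an assumption of the sketch: GCC/Clang's __atomic_test_and_set plays the role of the TSL instruction, and __atomic_clear is the plain store that releases the lock):

#include <stdbool.h>

static bool lock = false;              /* false = free, true = held */

static void spin_acquire(void) {
    /* TSL: atomically set lock and return its previous value;
     * keep spinning as long as the previous value was true. */
    while (__atomic_test_and_set(&lock, __ATOMIC_ACQUIRE))
        ;                              /* busy-wait */
}

static void spin_release(void) {
    __atomic_clear(&lock, __ATOMIC_RELEASE);   /* lock = false */
}

A critical section then reads spin_acquire(); ... ; spin_release(), mirroring the slide.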

Spin Locks
What price do we pay for mutual exclusion?
How well will this work on a uniprocessor?

Blocking Locks
How can we avoid wasting CPU cycles?
How can we implement sleep and wakeup?
-context switch when acquire finds the lock held
-check and potential wakeup on lock release
-system calls to acquire and release the lock
But how can we make these system calls atomic?
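
One concrete answer on Linux is the futex system call, which lets a thread sleep in the kernel only when the lock is really held and be woken on release. The sketch below is a heavily simplified assumption-laden illustration (GCC __atomic builtins, Linux-only SYS_futex, modeled on the well-known "Futexes Are Tricky" design rather than anything in these slides):

#define _GNU_SOURCE
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>

/* Lock state: 0 = unlocked, 1 = locked, 2 = locked with possible waiters. */

static void futex_wait(uint32_t *addr, uint32_t expected) {
    syscall(SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
}

static void futex_wake(uint32_t *addr, int nthreads) {
    syscall(SYS_futex, addr, FUTEX_WAKE, nthreads, NULL, NULL, 0);
}

void mutex_lock(uint32_t *m) {
    uint32_t zero = 0;
    /* Fast path: 0 -> 1 with one atomic compare-and-swap, no syscall. */
    if (__atomic_compare_exchange_n(m, &zero, 1, 0,
                                    __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
        return;
    /* Slow path: mark the lock contended (2) and sleep in the kernel
     * until it is released, then try again. */
    for (;;) {
        if (__atomic_exchange_n(m, 2, __ATOMIC_ACQUIRE) == 0)
            return;                /* grabbed it while marking it contended */
        futex_wait(m, 2);          /* sleeps only if *m is still 2 */
    }
}

void mutex_unlock(uint32_t *m) {
    /* If the old value was 2, a thread may be sleeping: wake one. */
    if (__atomic_exchange_n(m, 0, __ATOMIC_RELEASE) == 2)
        futex_wake(m, 1);
}

The kernel re-checks the lock word inside FUTEX_WAIT before sleeping, which is what makes the "test, then sleep" sequence effectively atomic.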

Blocking Locks
Is this better than a spinlock on a uniprocessor?
Is this better than a spinlock on a multiprocessor?
When would you use a spinlock vs a blocking lock on a multiprocessor?

Tricky Issues With Locks

Global variables:
 char buf[n]
 int InP = 0    // place to add
 int OutP = 0   // place to get
 int count

thread producer {
  while (1) {
    // Produce char c
    if (count == n) {
      sleep(full)
    }
    buf[InP] = c
    InP = InP + 1 mod n
    count++
    if (count == 1)
      wakeup(empty)
  }
}

thread consumer {
  while (1) {
    if (count == 0) {
      sleep(empty)
    }
    c = buf[OutP]
    OutP = OutP + 1 mod n
    count--
    if (count == n-1)
      wakeup(full)
    // Consume char
  }
}

Conditional Waiting
-Sleeping while holding the lock leads to deadlock
-Releasing the lock and then sleeping opens up a window for a race
-Need to atomically release the lock and sleep

Semaphores
Semaphore S has a value, S.val, and a thread list, S.list.

Down(S):
 S.val = S.val - 1
 if S.val < 0
   add calling thread to S.list; sleep;

Up(S):
 S.val = S.val + 1
 if S.val <= 0
   remove a thread T from S.list; wakeup(T);
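
A user-level sketch of down and up built from a pthread mutex and condition variable (an assumption of this sketch: the value never goes negative and the condition variable's wait queue plays the role of S.list, so the bookkeeping differs slightly from the slide's version):

#include <pthread.h>

typedef struct {
    int val;                      /* number of available "permits" */
    pthread_mutex_t m;
    pthread_cond_t  cv;           /* its queue plays the role of S.list */
} my_sem;

void down(my_sem *s) {
    pthread_mutex_lock(&s->m);
    while (s->val == 0)           /* nothing available: sleep */
        pthread_cond_wait(&s->cv, &s->m);
    s->val--;
    pthread_mutex_unlock(&s->m);
}

void up(my_sem *s) {
    pthread_mutex_lock(&s->m);
    s->val++;
    pthread_cond_signal(&s->cv);  /* wake one sleeping thread, if any */
    pthread_mutex_unlock(&s->m);
}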

Semaphores
Down and up are assumed to be atomic
How can we implement them?
-on a uniprocessor?
-on a multiprocessor?

Semaphores in Producer-Consumer

Global variables:
 semaphore full_buffs = 0;
 semaphore empty_buffs = n;
 char buf[n];
 int InP, OutP;

thread producer {
  while (1) {
    // Produce char c...
    down(empty_buffs)
    buf[InP] = c
    InP = InP + 1 mod n
    up(full_buffs)
  }
}

thread consumer {
  while (1) {
    down(full_buffs)
    c = buf[OutP]
    OutP = OutP + 1 mod n
    up(empty_buffs)
    // Consume char...
  }
}
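
For comparison, a runnable POSIX translation of the same structure using sem_t (assumptions: one producer and one consumer, as on the slide, so InP and OutP need no additional locking; the buffer size and the character stream are purely illustrative):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 8
static char buf[N];
static int InP = 0, OutP = 0;
static sem_t empty_buffs, full_buffs;

static void *producer(void *arg) {
    for (char c = 'a'; c <= 'z'; c++) {
        sem_wait(&empty_buffs);      /* down(empty_buffs) */
        buf[InP] = c;
        InP = (InP + 1) % N;
        sem_post(&full_buffs);       /* up(full_buffs) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int n = 0; n < 26; n++) {
        sem_wait(&full_buffs);       /* down(full_buffs) */
        char c = buf[OutP];
        OutP = (OutP + 1) % N;
        sem_post(&empty_buffs);      /* up(empty_buffs) */
        putchar(c);
    }
    putchar('\n');
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_buffs, 0, N);    /* N empty slots initially */
    sem_init(&full_buffs, 0, 0);     /* no full slots initially */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}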

Monitors and Condition Variables
Correct synchronization is tricky
What synchronization rules can we automatically enforce?
-encapsulation and mutual exclusion
-conditional waiting

Condition Variables
Condition variables (cv) for use within monitors

cv.wait(mon-mutex)
-thread blocked (queued) until condition holds
-must not block while holding the mutex!
-the monitor's mutex must be released!
-the monitor mutex need not be specified by the programmer if the compiler is enforcing mutual exclusion

cv.signal()
-signals the condition and unblocks (dequeues) a thread
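
A hand-built "monitor" using pthreads gives a concrete picture (assumptions of the sketch: the pthread mutex plays the monitor mutex, pthread_cond_wait releases it atomically while the caller sleeps, which is exactly the atomic release-and-sleep required above; the names put/get and the buffer size are illustrative):

#include <pthread.h>

#define N 8
static char slots[N];
static int in = 0, out = 0, count = 0;
static pthread_mutex_t mon = PTHREAD_MUTEX_INITIALIZER;    /* monitor mutex */
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

void put(char c) {
    pthread_mutex_lock(&mon);                 /* enter the monitor */
    while (count == N)                        /* re-test the condition */
        pthread_cond_wait(&not_full, &mon);   /* releases mon while asleep */
    slots[in] = c;
    in = (in + 1) % N;
    count++;
    pthread_cond_signal(&not_empty);          /* cv.signal() */
    pthread_mutex_unlock(&mon);               /* leave the monitor */
}

char get(void) {
    pthread_mutex_lock(&mon);
    while (count == 0)
        pthread_cond_wait(&not_empty, &mon);
    char c = slots[out];
    out = (out + 1) % N;
    count--;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&mon);
    return c;
}

The while loops around pthread_cond_wait are what make this correct under the Mesa semantics discussed on the following slides: a woken thread re-tests the condition instead of assuming it still holds.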

Condition Variables - Semantics
What can I assume about the state of the shared data?
-when I wake up from a wait?
-when I issue a signal?

Hoare Semantics
-Signaling thread hands the monitor mutex directly to the signaled thread
-Signaled thread can assume the condition tested by the signaling thread holds

Mesa Semantics
-Signaled thread eventually wakes up, but the signaling thread and other threads may have run in the meantime
-Signaled thread cannot assume the condition tested by the signaling thread holds
 -signals are a hint
-Broadcast signal makes sense with Mesa semantics, but not with Hoare semantics

Memory Invariance
A thread executing a sequential program can assume that memory only changes as a result of the program statements
-can reason about correctness based on pre- and post-conditions and program logic
A thread executing a concurrent program must take into account the points at which memory invariants may be broken
-what points are those?