Lecture 10 Locks.


Scheduling Control: Mutex/Lock

Basic usage:

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

pthread_mutex_lock(&lock);
x = x + 1; // or whatever your critical section is
pthread_mutex_unlock(&lock);

Other variants:

int pthread_mutex_trylock(pthread_mutex_t *mutex);
int pthread_mutex_timedlock(pthread_mutex_t *mutex, const struct timespec *abs_timeout);
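pthread_mutex_trylock() returns immediately instead of blocking: 0 on success, EBUSY if the lock is already held. A minimal sketch of how it might be used to stay busy instead of waiting (do_other_work() is a made-up placeholder, not part of the slides):

#include <pthread.h>
#include <errno.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void do_other_work(void) {
    // placeholder: e.g., drain a queue of pending requests
}

void try_update(int *x) {
    while (pthread_mutex_trylock(&lock) == EBUSY)
        do_other_work();   // lock is busy: do something useful, then retry
    *x = *x + 1;           // critical section
    pthread_mutex_unlock(&lock);
}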

#include <stdio.h>
#include "mythreads.h"
#include <stdlib.h>
#include <pthread.h>

int max;                  // shared global variable
volatile int counter = 0;

void *mythread(void *arg) {
    char *letter = arg;
    int i;                // stack
    printf("%s: begin\n", letter);
    for (i = 0; i < max; i++) {
        counter = counter + 1;
    }
    printf("%s: done\n", letter);
    return NULL;
}

int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "usage: ...\n");
        exit(1);
    }
    max = atoi(argv[1]);

    pthread_t p1, p2;
    printf("main: begin [counter = %d] [%x]\n", counter, (unsigned int) &counter);
    Pthread_create(&p1, NULL, mythread, "A");
    Pthread_create(&p2, NULL, mythread, "B");
    // join waits for the threads to finish
    Pthread_join(p1, NULL);
    Pthread_join(p2, NULL);
    printf("main: done\n [counter: %d]\n [should: %d]\n", counter, max*2);
    return 0;
}

Always the same result? Even on a uni-processor machine, no: the update to counter is not atomic. We need a lock.

Controlling Interrupts

void lock()   { DisableInterrupts(); }
void unlock() { EnableInterrupts(); }

Would it work? Problems:
- We have to trust the application
- Does not work on multiprocessors
- Inefficient
- Only used in limited contexts

Evaluating Locks
- Correctness
- Fairness
- Performance

typedef struct __lock_t { int flag; } lock_t;

void init(lock_t *mutex) {
    // 0 -> lock is available, 1 -> held
    mutex->flag = 0;
}

void lock(lock_t *mutex) {
    while (mutex->flag == 1) // TEST the flag
        ;                    // spin-wait (do nothing)
    mutex->flag = 1;         // now SET it!
}

void unlock(lock_t *mutex) {
    mutex->flag = 0;
}

Test And Set (Atomic Exchange)

int TestAndSet(int *ptr, int new) {
    int old = *ptr; // fetch old value at ptr
    *ptr = new;     // store 'new' into ptr
    return old;     // return the old value
}

typedef struct __lock_t { int flag; } lock_t;

void init(lock_t *mutex) {
    // 0 -> lock is available, 1 -> held
    mutex->flag = 0;
}

void lock(lock_t *mutex) {
    while (TestAndSet(&mutex->flag, 1) == 1)
        ; // spin-wait (do nothing)
}

void unlock(lock_t *mutex) {
    mutex->flag = 0;
}

Evaluating Spin Locks
- Correctness: yes
- Fairness: no
- Performance: bad on a single CPU; reasonable if the number of threads roughly equals the number of CPUs

Compare-And-Swap

int CompareAndSwap(int *ptr, int expected, int new) {
    int actual = *ptr;
    if (actual == expected)
        *ptr = new;
    return actual;
}

typedef struct __lock_t { int flag; } lock_t;

void init(lock_t *mutex) {
    // 0 -> lock is available, 1 -> held
    mutex->flag = 0;
}

void lock(lock_t *mutex) {
    while (CompareAndSwap(&mutex->flag, 0, 1) == 1)
        ; // spin-wait (do nothing)
}

void unlock(lock_t *mutex) {
    mutex->flag = 0;
}

Load-Linked and Store-Conditional

int LoadLinked(int *ptr) {
    return *ptr;
}

int StoreConditional(int *ptr, int value) {
    if (no update to *ptr since the LoadLinked to it) {
        *ptr = value;
        return 1; // success!
    } else {
        return 0; // failed to update
    }
}

void lock(lock_t *mutex) {
    while (1) {
        while (LoadLinked(&mutex->flag) == 1)
            ; // spin until it's zero
        if (StoreConditional(&mutex->flag, 1) == 1)
            return; // if set-it-to-1 succeeded: all done
        // otherwise: try it all over again
    }
}

void unlock(lock_t *mutex) {
    mutex->flag = 0;
}

Fetch-And-Add and Ticket Locks

int FetchAndAdd(int *ptr) {
    int old = *ptr;
    *ptr = old + 1;
    return old;
}

typedef struct __lock_t {
    int ticket;
    int turn;
} lock_t;

void lock_init(lock_t *lock) {
    lock->ticket = 0;
    lock->turn = 0;
}

void lock(lock_t *lock) {
    int myturn = FetchAndAdd(&lock->ticket);
    while (lock->turn != myturn)
        ; // spin
}

void unlock(lock_t *lock) {
    FetchAndAdd(&lock->turn);
}

Spinning is Bad
- Imagine two threads on a single processor
- Imagine N threads on a single processor

void init() {
    flag = 0;
}

void lock() {
    while (TestAndSet(&flag, 1) == 1)
        yield(); // give up the CPU
}

void unlock() {
    flag = 0;
}

Sleeping Instead Of Spinning

On Solaris, the OS provides two calls:
- park() puts the calling thread to sleep
- unpark(threadID) wakes the particular thread designated by threadID

typedef struct __lock_t {
    int flag;
    int guard;
    queue_t *q;
} lock_t;

void lock_init(lock_t *m) {
    m->flag = 0;
    m->guard = 0;
    queue_init(m->q);
}

void lock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ; // acquire guard lock by spinning
    if (m->flag == 0) {
        m->flag = 1; // lock is acquired
        m->guard = 0;
    } else {
        queue_add(m->q, gettid());
        m->guard = 0; // release guard before sleeping
        park();
    }
}

void unlock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ; // acquire guard lock by spinning
    if (queue_empty(m->q))
        m->flag = 0; // let go of lock; no one wants it
    else
        unpark(queue_remove(m->q)); // hold lock (for next thread!)
    m->guard = 0;
}

void lock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ; // acquire guard lock by spinning
    if (m->flag == 0) {
        m->flag = 1; // lock is acquired
        m->guard = 0;
    } else {
        queue_add(m->q, gettid());
        setpark(); // new code
        m->guard = 0;
        park();
    }
}

Different Supports

On Linux, the OS provides two calls:
- futex_wait(address, expected) puts the calling thread to sleep, assuming the value at address equals expected. If it does not, the call returns immediately.
- futex_wake(address) wakes one thread that is waiting on the queue.

Two-Phase Locks: spin for a while hoping to acquire the lock; if that fails, fall back to sleeping (e.g., via futex).
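The two-phase idea can be sketched directly on top of the real futex(2) system call: spin a bounded number of times, then sleep. A minimal sketch assuming C11 atomics; SPIN_LIMIT and the twophase_ names are made up for illustration, and this is not the glibc code shown next:

#define _GNU_SOURCE
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdatomic.h>

#define SPIN_LIMIT 100 // hypothetical tuning knob

typedef struct { atomic_int state; } twophase_lock_t; // 0 = free, 1 = held

static void futex_wait(atomic_int *addr, int expected) {
    // sleeps only if *addr still equals expected
    syscall(SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
}

static void futex_wake(atomic_int *addr) {
    syscall(SYS_futex, addr, FUTEX_WAKE, 1, NULL, NULL, 0);
}

void twophase_lock(twophase_lock_t *l) {
    // phase 1: spin a bounded number of times hoping the lock frees up
    for (int i = 0; i < SPIN_LIMIT; i++) {
        int expected = 0;
        if (atomic_compare_exchange_strong(&l->state, &expected, 1))
            return;
    }
    // phase 2: sleep while the lock is held, retry when woken
    int expected = 0;
    while (!atomic_compare_exchange_strong(&l->state, &expected, 1)) {
        futex_wait(&l->state, 1); // returns immediately if state != 1
        expected = 0;
    }
}

void twophase_unlock(twophase_lock_t *l) {
    atomic_store(&l->state, 0);
    futex_wake(&l->state); // wakes one waiter; real locks skip this
                           // when no one is waiting
}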

void lock(lock_t *m) {
    int v;
    /* Bit 31 was clear, we got the mutex (fastpath) */
    if (atomic_bit_test_set(m, 31) == 0)
        return;
    atomic_increment(m);
    while (1) {
        if (atomic_bit_test_set(m, 31) == 0) {
            atomic_decrement(m);
            return;
        }
        /* We have to wait now. First make sure the futex value
           we are monitoring is truly negative (i.e. locked). */
        v = *m;
        if (v >= 0)
            continue;
        futex_wait(m, v);
    }
}

void unlock(lock_t *m) {
    /* Adding 0x80000000 to the counter results in 0 if and
       only if there are no other interested threads */
    if (atomic_add_zero(m, 0x80000000))
        return;
    /* There are other threads waiting for this mutex,
       wake one of them up. */
    futex_wake(m);
}

Concurrent Counters

typedef struct __counter_t { int value; } counter_t;

void init(counter_t *c)      { c->value = 0; }
void increment(counter_t *c) { c->value++; }
void decrement(counter_t *c) { c->value--; }
int  get(counter_t *c)       { return c->value; }
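This counter has no synchronization, so concurrent increments race. The obvious fix is to wrap every operation in a single lock; a minimal sketch using the pthread mutex API from earlier (the locked_ names are mine, not from the slides). Its poor scalability under contention is what motivates the approximate counter below:

#include <pthread.h>

typedef struct __locked_counter_t {
    int value;
    pthread_mutex_t lock;
} locked_counter_t;

void locked_init(locked_counter_t *c) {
    c->value = 0;
    pthread_mutex_init(&c->lock, NULL);
}

void locked_increment(locked_counter_t *c) {
    pthread_mutex_lock(&c->lock);
    c->value++;
    pthread_mutex_unlock(&c->lock);
}

int locked_get(locked_counter_t *c) {
    pthread_mutex_lock(&c->lock);
    int v = c->value;
    pthread_mutex_unlock(&c->lock);
    return v;
}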

typedef struct __counter_t {
    int global;                     // global count
    pthread_mutex_t glock;          // global lock
    int local[NUMCPUS];             // local count (per cpu)
    pthread_mutex_t llock[NUMCPUS]; // ... and locks
    int threshold;                  // update frequency
} counter_t;

void init(counter_t *c, int threshold) {
    c->threshold = threshold;
    c->global = 0;
    pthread_mutex_init(&c->glock, NULL);
    int i;
    for (i = 0; i < NUMCPUS; i++) {
        c->local[i] = 0;
        pthread_mutex_init(&c->llock[i], NULL);
    }
}

void update(counter_t *c, int threadID, int amt) {
    pthread_mutex_lock(&c->llock[threadID]);
    c->local[threadID] += amt; // assumes amt > 0
    if (c->local[threadID] >= c->threshold) {
        pthread_mutex_lock(&c->glock);
        c->global += c->local[threadID];
        pthread_mutex_unlock(&c->glock);
        c->local[threadID] = 0;
    }
    pthread_mutex_unlock(&c->llock[threadID]);
}

int get(counter_t *c) {
    int val = c->global;
    return val; // only approximate!
}

Concurrent Linked Lists

typedef struct __node_t {
    int key;
    struct __node_t *next;
} node_t;

typedef struct __list_t {
    node_t *head;
    pthread_mutex_t lock;
} list_t;

void List_Init(list_t *L) {
    L->head = NULL;
    pthread_mutex_init(&L->lock, NULL);
}

int List_Insert(list_t *L, int key) {
    pthread_mutex_lock(&L->lock);
    node_t *new = malloc(sizeof(node_t));
    if (new == NULL) {
        perror("malloc");
        pthread_mutex_unlock(&L->lock);
        return -1; // fail
    }
    new->key = key;
    new->next = L->head;
    L->head = new;
    pthread_mutex_unlock(&L->lock);
    return 0; // success
}

int List_Lookup(list_t *L, int key) {
    pthread_mutex_lock(&L->lock);
    node_t *curr = L->head;
    while (curr) {
        if (curr->key == key) {
            pthread_mutex_unlock(&L->lock);
            return 0; // success
        }
        curr = curr->next;
    }
    pthread_mutex_unlock(&L->lock);
    return -1; // failure
}

void List_Insert(list_t *L, int key) {
    // synchronization not needed
    node_t *new = malloc(sizeof(node_t));
    if (new == NULL) {
        perror("malloc");
        return;
    }
    new->key = key;
    // just lock critical section
    pthread_mutex_lock(&L->lock);
    new->next = L->head;
    L->head = new;
    pthread_mutex_unlock(&L->lock);
}

int List_Lookup(list_t *L, int key) {
    int rv = -1;
    pthread_mutex_lock(&L->lock);
    node_t *curr = L->head;
    while (curr) {
        if (curr->key == key) {
            rv = 0;
            break;
        }
        curr = curr->next;
    }
    pthread_mutex_unlock(&L->lock);
    return rv; // now both success and failure take the same exit path
}

Others
- Hand-over-hand locking for lists (see the sketch below)
- Concurrent queues
- Concurrent hash tables
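Hand-over-hand locking (lock coupling) gives every node its own lock; a traversal acquires the next node's lock before releasing the current one, so at most two locks are held at a time. A minimal lookup sketch, assuming a per-node-lock variant of the list above (the hh_ names are made up for illustration):

#include <pthread.h>

typedef struct __hh_node_t {
    int key;
    struct __hh_node_t *next;
    pthread_mutex_t lock;     // one lock per node
} hh_node_t;

typedef struct __hh_list_t {
    hh_node_t *head;
    pthread_mutex_t headlock; // protects the head pointer
} hh_list_t;

int HH_Lookup(hh_list_t *L, int key) {
    pthread_mutex_lock(&L->headlock);
    hh_node_t *curr = L->head;
    if (curr == NULL) {
        pthread_mutex_unlock(&L->headlock);
        return -1;            // empty list
    }
    pthread_mutex_lock(&curr->lock); // lock first node, then release head
    pthread_mutex_unlock(&L->headlock);

    while (curr) {
        if (curr->key == key) {
            pthread_mutex_unlock(&curr->lock);
            return 0;         // found
        }
        hh_node_t *next = curr->next;
        if (next)
            pthread_mutex_lock(&next->lock); // grab the next lock...
        pthread_mutex_unlock(&curr->lock);   // ...before dropping this one
        curr = next;
    }
    return -1;                // not found
}

In practice the extra lock traffic often makes this slower than the single-lock list unless lists are long and contention is high.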

Next: Condition Variables and Semaphores