1
Lecture 9 Synchronization
2
Introduction Variable Visibility Code Reordering
Variable Visibility
Each thread has its own stack, and each thread may work with its own cached copy of a shared primitive variable. Modifying one copy does not guarantee that the other copies are updated too!
Code Reordering
The compiler may reorder code so that it can execute faster. The compiler guarantees safe reordering for non-concurrent execution. In concurrent systems the compiler preserves happens-before relationships; within these limitations it can still optimize the code, to a lesser extent.
Java Mutual Exclusion Tools
A collection of mechanisms in the Java RTE that ensure mutual exclusion. They allow exclusive access to critical sections of code. They can also establish a happens-before relationship between the actions of threads.
3
Variable Thread Visibility: Volatile
public class Test extends Thread {
    boolean keepRunning = true;

    public static void main(String[] args) throws InterruptedException {
        Test t = new Test();
        t.start();
        Thread.sleep(1000);
        System.out.println(System.currentTimeMillis() + ": keepRunning is false");
        t.keepRunning = false;
    }

    public void run() {
        int x = 10;
        while (keepRunning)
            x++;
        System.out.println("x:" + x);
    }
}

What is the problem here? The while loop may run forever even though keepRunning has been set to false! Why? Each thread may keep a cached copy of the shared variable keepRunning (e.g. in CPU registers or cache). Modifying keepRunning in the main thread is therefore not guaranteed to become visible to the Test thread – even though it is the same variable!
4
Variable Thread Visibility: Volatile
public class Test extends Thread {
    volatile boolean keepRunning = true;

    public static void main(String[] args) throws InterruptedException {
        Test t = new Test();
        t.start();
        Thread.sleep(1000);
        System.out.println(System.currentTimeMillis() + ": keepRunning is false");
        t.keepRunning = false;
    }

    public void run() {
        int x = 10;
        while (keepRunning)
            x++;
        System.out.println("x:" + x);
    }
}

Solution? Add the volatile keyword to the keepRunning field (see the field declaration above). It forces the JVM to keep all copies of keepRunning up to date. Result? The code executes correctly, and the loop ends as expected once keepRunning is set to false.
5
Happens-Before Relationship
If X must run before Y, then X must have a happens-before relationship with Y. This ensures that the compiler does not reorder the code in that section. Happens-before defines a partial ordering on all actions within the program. To guarantee that the thread executing action Y can see the results of action X, whether or not X and Y occur in different threads, there must be a happens-before relationship between X and Y. In the absence of a happens-before ordering between two operations, the JVM is free to reorder them as it wants. Happens-before ensures the following partial ordering: ordering of actions in 'time', and ordering of reads and writes to memory. Two threads performing writes and reads to memory can be consistent with each other in terms of clock time, but might not see each other's changes consistently (memory consistency errors) unless they have a happens-before relationship. In the illustrative image there is no happens-before relation between X and Y.
6
Code Reordering in Concurrent Systems
The Java Memory Model is specified in terms of actions: reads and writes to variables, locks and unlocks of monitors, starting and joining with threads. Happens-before relation: if thread T1 is executing action A1 and thread T2 is executing action A2, and there is a happens-before relation between A1 and A2, then T2 is guaranteed to see the results of A1 before it starts executing A2! In the absence of a happens-before ordering between two operations, the JVM is free to reorder them as it pleases!
7
Java Built-in Happens-Before Rules
Program Order: each action in a thread happens before every action in that thread that comes later in the program order.
Monitor Lock: an unlock on a monitor lock happens before every subsequent lock on that same monitor lock.
Volatile Variable: a write to a volatile field happens before every subsequent read of that same field.
Thread Start: a call to Thread.start() on a thread happens before every action in the started thread.
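As a small, hedged sketch of the Volatile Variable rule (the class and field names here are illustrative assumptions, not from the slides): the write to the volatile field ready happens before any subsequent read of it, which by transitivity also makes the earlier plain write to data visible to the reading thread.

class FlagExample {
    int data = 0;                   // plain, non-volatile field
    volatile boolean ready = false; // volatile field

    void writer() {                 // runs on thread T1
        data = 42;                  // happens before the volatile write (program order)
        ready = true;               // volatile write
    }

    void reader() {                 // runs on thread T2
        if (ready) {                // volatile read: happens-after the write of ready
            System.out.println(data); // guaranteed to print 42, by transitivity
        }
    }
}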
8
Java Built-in Happens-Before Rules: Continued
Thread Termination: any action in a thread happens before any other thread detects that the thread has terminated, either by successfully returning from Thread.join() or by Thread.isAlive() returning false.
Interruption: a thread calling interrupt() on another thread happens before the interrupted thread detects the interrupt, either by having InterruptedException thrown or by invoking isInterrupted() or interrupted().
Object Deletion – Finalizer: the end of a constructor for an object happens before the start of the finalizer for that object. Note: finalize() is called by the garbage collector on an object when garbage collection determines that there are no more references to the object – in order to delete it.
Transitivity: if A happens before B, and B happens before C, then A happens before C.
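As a minimal, hedged sketch of the Thread Termination rule (the class and field names are illustrative assumptions): the worker thread writes a plain field, and because join() returns only after the worker terminates, the main thread is guaranteed to see that write.

class JoinVisibility {
    static int result = 0; // plain, non-volatile field

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> result = 42); // action inside the worker thread
        worker.start();
        worker.join();               // happens-after every action in worker
        System.out.println(result);  // guaranteed to print 42
    }
}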
9
Balking: Enforcing Preconditions
A method fails if its precondition does not hold. It checks whether the precondition holds before executing the function body; if the check fails, the function may throw an exception or return an error value. Checking the precondition is done with conditional statements: if-statements, switch-cases. Example: removing the first element of a list using removeFirst(). Precondition: isEmpty() == false. If the precondition fails, the function throws an exception; otherwise the function executes and the first element is removed.
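A minimal, hedged sketch of balking for the removeFirst() example above (the BalkingList class and its internals are illustrative assumptions, not part of the slides):

import java.util.LinkedList;
import java.util.NoSuchElementException;

class BalkingList<E> {
    private final LinkedList<E> list = new LinkedList<>();

    public synchronized void addLast(E e) { list.addLast(e); }

    public synchronized E removeFirst() {
        if (list.isEmpty())                                     // balking: check the precondition
            throw new NoSuchElementException("list is empty");  // fail immediately, do not wait
        return list.removeFirst();                              // precondition holds: execute
    }
}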
10
Guarded Suspension: Enforcing Preconditions
Done in three steps: the condition is checked; if it holds, proceed with the function execution. If the condition does not hold, suspend the method invocation. Once the condition holds, resume the suspended method so it executes its code; execution resumes at the statement following the one that suspended it!
Suspension: to suspend a method, we suspend its executing thread, by calling the wait() method.
Resuming: to resume method execution, we notify its executing thread, by calling the notify() or notifyAll() methods.
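A minimal, hedged skeleton of the pattern (the GuardedObject class, its condition field, and the method names are illustrative assumptions):

class GuardedObject {
    private boolean condition = false;

    public synchronized void guardedAction() throws InterruptedException {
        while (!condition)   // step 1: check the precondition
            wait();          // step 2: suspend the calling thread until notified
        // step 3: resumed here once the condition holds – execute the guarded code
    }

    public synchronized void makeConditionHold() {
        condition = true;
        notifyAll();         // wake every waiting thread so it can re-check the condition
    }
}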
11
Guarded Suspension: Bounded Blocking Queue Example
Bounded queue – a queue that accepts at most a maximum number of values. Empty: remove() fails by returning null. Full: add() fails by returning false. Bounded blocking queue – blocks until the precondition holds: when empty, remove() blocks until add() is executed; when full, add() blocks until remove() is executed. Why?
12
Guarded Suspension: Bounded Blocking Queue Implementation
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedBlockingQueue<E> {
    private final Queue<E> queue = new LinkedList<E>();
    private final int capacity;
    private final AtomicInteger count = new AtomicInteger(0);

    public BoundedBlockingQueue(int capacity) {
        if (capacity <= 0)
            throw new IllegalArgumentException("illegal queue capacity");
        this.capacity = capacity;
    }

    public int size() { return count.get(); }

    public synchronized void add(E e) throws InterruptedException {
        if (e == null)
            throw new NullPointerException("illegal null element received");
        while (count.get() == capacity)
            wait();                      // queue is full: suspend until an element is removed
        queue.add(e);
        if (count.getAndIncrement() == 0)
            notifyAll();                 // queue was empty: wake threads blocked in remove()
    }

    public synchronized E remove() throws InterruptedException {
        while (count.get() == 0)
            wait();                      // queue is empty: suspend until an element is added
        E e = queue.remove();
        if (count.getAndDecrement() == this.capacity)
            notifyAll();                 // queue was full: wake threads blocked in add()
        return e;
    }
}

add(): in which case will the wait() call be executed? In which case will the notifyAll() call be executed?
remove(): in which case will the wait() call be executed? In which case will the notifyAll() call be executed?
13
Implementation Rules: Guarded Suspension
Guarded Wait: blocking a thread with wait() is done only while waiting for a precondition to be met.
Guard Atomicity: condition checks are placed in a while loop.
Multiple Guard Atomicity: multiple conditions to be waited for must be placed in the same loop.
Waking Threads: liveness is ensured by waking up waiting threads.
Multiple Waiting Threads: when multiple threads wait for a single condition, use notifyAll(); this ensures that no thread misses an occurring event.
Following these rules ensures a successful implementation of guarded suspension.
14
Monitors, Mutex, and Locks: Definitions
A monitor is a synchronization construct that gives threads:
Mutual exclusion – using locks: lock() to lock the lock, unlock() to unlock it.
Cooperation – the ability to make threads wait for a certain condition to be true. It helps threads cooperate with one another to work towards a common goal: threads are moved to the wait state and are notified once another thread signals them. This is done using wait() to move them to the wait state and notify() to resume their execution.
Mutex – mut(ual) ex(clusion): a lock. Mutexes help threads work independently on shared data without interfering with one another.
15
Acquiring the lock: Illustration
1. A new thread enters the "entry set" in an attempt to execute a synchronized function.
2. The thread attempts to acquire the lock.
3. If it fails to acquire the lock, it moves to the wait set.
4. Once the lock is released by another thread, it attempts to acquire the lock again.
5. If it succeeds in step 2 or in step 4, it executes the code, releases the lock, and exits.
Note: the owner is, at any given moment, the thread that has successfully acquired the lock.
16
Java Mutual Exclusion Tools
There are several tools provided by Java to allow mutual exclusion. Mutual exclusion refers to code, not to objects!
Synchronized keyword: can be added to a method header or wrapped around a scope of code; the code inside the scope is executed by only a single thread at any given time.
Monitors: each object in Java has a built-in mutex (lock). Mutual exclusion is enforced by acquiring the lock (lock.lock()) and, once done, releasing it (lock.unlock()). Cooperation is done using lock.wait() and lock.notify().
17
synchronized keyword: ensures mutual exclusion for a section of code. Mutual exclusion means exclusive access for one thread to the marked code. Example: public synchronized int add() – only one thread is allowed to execute add() at any given time. It solves both the visibility and the reordering problems: a write to memory during a synchronized code section is guaranteed to be visible to all read operations following it. Using synchronized we do not access the lock explicitly; implicitly, to enter the critical section, this is locked. Locking this is possible only if this is not already locked – otherwise we wait!
18
synchronized keyword: EvenCounter Class Example
/* A simple counter class, which keeps an even counter value */
class EvenCounter {
    /* the internal state counter */
    private int counter = 0;

    /* default constructor */
    public EvenCounter() { }

    int getCounter() { return counter; }

    void setCounter(int count) { counter = count; }

    /* increment the counter.
     * return the current counter value */
    public synchronized int increment() {
        setCounter(getCounter() + 1);
        return getCounter();
    }
}

Adding the synchronized keyword to a method ensures mutual exclusion of its code.
19
synchronized keyword: EvenCounter Class Example
/* A simple counter class, which keeps an even counter value */
class EvenCounter {
    /* the internal state counter */
    private int counter = 0;

    /* default constructor */
    public EvenCounter() { }

    int getCounter() { return counter; }

    void setCounter(int count) { counter = count; }

    /* increment the counter.
     * return the current counter value */
    public int increment() {
        synchronized (this) {
            setCounter(getCounter() + 1);
            return getCounter();
        }
    }
}

Adding synchronized to a method header is the same as synchronizing on this for the complete method scope. Here the synchronized block spans the body of increment(); any access to this scope by threads is done sequentially, not in parallel.
20
Java Locks: ReentrantLock
A reentrant mutual exclusion lock has the same basic behavior and semantics as the implicit monitor lock accessed using synchronized methods and statements, with several extended capabilities beyond synchronized.
Characteristics: it is owned by the thread that last successfully locked it and has not yet unlocked it. A thread invoking lock() will return, successfully acquiring the lock, when the lock is not owned by another thread. Reentrant means that lock() returns immediately if the current thread already owns the lock; each successful lock() by the owning thread increments a hold count, and the lock is fully released only after unlock() has been invoked as many times as lock().
Fairness feature: the constructor for this class accepts an optional fairness parameter. When set true, under contention, the lock favors granting access to the longest-waiting thread. Otherwise, the lock does not guarantee any particular access order.
ReentrantLock key features: the ability to time out while waiting for the lock; the power to create a fair lock; an API to query the threads waiting for the lock; the flexibility to try for the lock without blocking. A small sketch of these features appears below.
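A minimal, hedged sketch of these ReentrantLock features (the class name, method names, and timeout value are illustrative assumptions):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class ReentrantLockFeatures {
    private final ReentrantLock lock = new ReentrantLock(true); // fair lock: longest-waiting thread is favored

    void tryWork() throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) { // try for the lock, give up after a timeout
            try {
                // critical section
            } finally {
                lock.unlock();
            }
        } else {
            // could not acquire the lock in time – do something else instead of blocking
        }
    }

    int waiting() {
        return lock.getQueueLength(); // estimate of the number of threads waiting for this lock
    }
}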
21
Java Locks: ReentrantLock Example
import java.util.concurrent.locks.ReentrantLock;

/* A simple counter class, which keeps an even counter value */
class EvenCounter {
    /* the internal state counter */
    private int counter = 0;
    private static ReentrantLock lock = new ReentrantLock();

    /* default constructor, getCounter() and setCounter() as in the previous slides */
    int getCounter() { return counter; }
    void setCounter(int count) { counter = count; }

    public int increment() {
        lock.lock();
        try {
            setCounter(getCounter() + 1);
        } finally {
            lock.unlock();
        }
        return getCounter();
    }
}

The lock.lock() call ensures mutual exclusion by giving the executing thread the lock. Always unlock in a finally block: in case of exceptions, the lock is always released!
22
Java Semaphore: Non-reentrant Lock
A semaphore is a construct that holds a finite number of permits – it allows bounded concurrent access! Each attempt by a thread to access the critical section is made by acquiring one or more permits. If there are enough permits to give, access is granted; otherwise access is denied by making the thread wait.
Syntax:
Semaphore semaphore = new Semaphore(5); – an example semaphore that contains 5 permits.
semaphore.acquire(); – reduces the permit count by one if the number of permits is positive; if the count is zero, it blocks the acquiring thread.
semaphore.release(); – increases the permit count by one; if the count was zero, it unblocks blocked threads, allowing them to attempt acquire again.
Not re-entrant: a thread that calls acquire() twice must call release() twice!
A semaphore can be released by a thread other than the one that acquired it – this means no lock ownership! (See the sketch below.)
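A minimal, hedged sketch of the no-ownership property (using a zero-permit semaphore as a one-shot signal; the class and variable names are illustrative assumptions):

import java.util.concurrent.Semaphore;

class SemaphoreSignal {
    public static void main(String[] args) throws InterruptedException {
        Semaphore done = new Semaphore(0);       // no permits: acquire() will block

        Thread worker = new Thread(() -> {
            // ... perform some work ...
            done.release();                      // a thread that never acquired may still release
        });
        worker.start();

        done.acquire();                          // main thread blocks here until the worker releases
        System.out.println("worker finished");
    }
}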
23
Semaphore: Implementation
class Semaphore {
    private final int permits;  // the maximum number of permits
    private int free;           // the number of unused permits

    public Semaphore(int permits) {
        this.permits = permits;
        this.free = permits;
    }

    // precondition: free > 0 (there is an available permit); effect: free - 1
    public synchronized void acquire() throws InterruptedException {
        while (free <= 0)
            this.wait();
        free--;
    }

    // precondition: free < permits (there is a used permit); effect: free + 1
    public synchronized void release() {
        if (free < permits) {
            free++;
            this.notifyAll();
        }
    }
}
24
Why Semaphore? Why not just mutex?
Where are non-binary semaphores used? They are used when counting is important!
Resource Management (Example 1): the semaphore value indicates the number of resources available in our resource pool. To obtain control of a resource, a task must first take the semaphore – decrementing its value. When the count reaches zero there are no free resources. When a task finishes with the resource it returns the semaphore – incrementing its value.
Event Counting (Example 2): an event handler gives the semaphore each time an event occurs – incrementing its value – and a handler task takes the semaphore each time it processes an event – decrementing its value. The count value is the difference between the number of events that have occurred and the number that have been processed! In this case it is desirable for the count value to be zero when the semaphore is created. (A sketch of this scheme follows.)
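A minimal, hedged sketch of the event-counting scheme (the class name, the pending queue, and the method names are illustrative assumptions):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

class EventCounting {
    private final Semaphore events = new Semaphore(0);          // count starts at zero: no pending events
    private final Queue<String> pending = new ConcurrentLinkedQueue<>();

    void onEvent(String event) {      // called by the event handler
        pending.add(event);
        events.release();             // "give" the semaphore: one more event has occurred
    }

    void handlerLoop() throws InterruptedException {
        while (true) {
            events.acquire();         // "take" the semaphore: wait until an event is pending
            String event = pending.poll();
            // ... process the event ...
        }
    }
}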
25
Semaphore: Usage Example
import java.util.concurrent.Semaphore;

public class TwoResourcesInOurPool {
    private Semaphore semaphore = new Semaphore(2);

    public boolean doSomething() throws InterruptedException {
        semaphore.acquire();
        try {
            // acquire resource
            // do something
            return true;
        } finally {
            // release resource
            semaphore.release();
        }
    }
}

A simple example that limits resource acquisition to two, when we have a pool of only 2 resources. Useful for thread pools, socket pools, or any other resource pool.
26
Java Locks: ReentrantReadWriteLock
Allows readers and writers to access a shared resource under different policies: a reading policy for threads not attempting to mutate the value (read only), and a writing policy for threads attempting to mutate the value (modify state).
Reading policy: multiple readers may access the resource concurrently. Readers must wait if a writer acquires the lock. This prevents writer starvation, but may allow reader starvation! This makes the lock suitable for few writes but many reads.
Writing policy: one writer may access the resource at any given time, and only if no readers are reading from it.
27
ReentrantReadWriteLock: Usage Example
// defining, and retrieving, both locks
ReadWriteLock rwLock = new ReentrantReadWriteLock();
Lock readLock = rwLock.readLock();
Lock writeLock = rwLock.writeLock();

// to read
readLock.lock();
try {
    // reading data
} finally {
    readLock.unlock();
}

// to write
writeLock.lock();
try {
    // update data
} finally {
    writeLock.unlock();
}
28
ReentrantReadWriteLock: Implementation
interface ReadWriteLock {
    public ReentrantReadWriteLock.WriteLock writeLock();
    public ReentrantReadWriteLock.ReadLock readLock();
}

class ReentrantReadWriteLock implements ReadWriteLock {
    // Variables maintaining the read and write lock counts – shared fields accessible by the inner classes ReadLock and WriteLock
    private int readLockCount;
    private int writeLockCount;

    /* readLock and writeLock instances of the inner classes */
    private final ReentrantReadWriteLock.ReadLock readerLock;
    private final ReentrantReadWriteLock.WriteLock writerLock;

    /* getter functions for readLock and writeLock */
    public ReentrantReadWriteLock.WriteLock writeLock() { return writerLock; }
    public ReentrantReadWriteLock.ReadLock readLock() { return readerLock; }

    /** Constructor – creates the ReadLock and WriteLock instances */
    public ReentrantReadWriteLock() {
        readerLock = new ReadLock();
        writerLock = new WriteLock();
    }

    public class ReadLock { /* see the next slide */ }
    public class WriteLock { /* see the next slide */ }
}
29
ReadLock Inner-Class Implementation
// More than one thread can acquire the readLock at a time,
// provided no other thread holds the writeLock at the same time.
// Note: both inner locks synchronize and wait on the enclosing ReentrantReadWriteLock instance,
// so readers and writers share one monitor for the counters and for wait/notify.
public class ReadLock {
    public void lock() {
        synchronized (ReentrantReadWriteLock.this) {
            // if some other thread holds the write lock, the current thread waits
            while (writeLockCount != 0) {
                try {
                    ReentrantReadWriteLock.this.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            readLockCount++;
        }
    }

    public void unlock() {
        synchronized (ReentrantReadWriteLock.this) {
            readLockCount--; // decrement readLockCount
            // if readLockCount has become 0, all threads waiting to write are notified and can acquire the lock
            if (readLockCount == 0)
                ReentrantReadWriteLock.this.notifyAll();
        }
    }
}
30
WriteLock Inner-Class Implementation
// Only one thread can acquire the writeLock at a time.
// The writeLock can only be obtained if no other thread holds a read or write lock at that time.
public class WriteLock {
    public void lock() {
        synchronized (ReentrantReadWriteLock.this) {
            // if some other thread holds a read or write lock, the current thread waits
            while (writeLockCount != 0 || readLockCount != 0) {
                try {
                    ReentrantReadWriteLock.this.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            writeLockCount++;
        }
    }

    public void unlock() {
        synchronized (ReentrantReadWriteLock.this) {
            writeLockCount--;
            ReentrantReadWriteLock.this.notifyAll(); // wake waiting readers and writers
        }
    }
}
31
Java Monitors Cost: Speed & Memory
Memory Cost: a thread may cache copies of memory in its own memory space, e.g. CPU registers and CPU cache. After the thread exits a synchronized block, it must synchronize these cached copies with main memory.
Blocking Performance Loss: threads must wait for each other, due to conditions. Waiting time is wasted; a thread cannot advance while it is waiting!
32
Java Atomic Instructions
These instructions happen all at once: the scheduler cannot stop a thread in the middle of an atomic operation, only before or after it. Most basic CPU operations are atomic: add, mov, etc. CPUs also offer a set of atomic instructions for multi-threading, for example CompareAndSet (cas). Java has several classes that are atomic, implemented using compareAndSet. These classes begin with Atomic: AtomicInteger, AtomicLong, AtomicBoolean, etc.
33
CompareAndSet Implementation
/**
 * Atomically sets the value to the given updated value
 * if the current value == the expected value.
 * Returns true if successful. A false return indicates that
 * the actual value was not equal to the expected value.
 */
public final boolean compareAndSet(int expect, int update) {
    if (value != expect)
        return false;
    value = update;
    return true;
}

We wish to change the value to update. Before we do that, we check whether value still equals expect. This ensures that no other execution of compareAndSet() has modified value before we could. If the value has been modified, we return a failure indication; otherwise we modify the value to the new one and return a success indication. This is atomic at the CPU level.
34
AtomicInteger: getAndIncrement() Implementation
/**
 * Atomically increments the current value by one.
 * Returns the previous value.
 */
public final int getAndIncrement() {
    for (;;) {
        int current = get();
        int next = current + 1;
        if (compareAndSet(current, next))
            return current;
    }
}

To increment the value of the AtomicInteger by one: store the current and next values in local variables. Executing compareAndSet yields one of two results: failure – no change was made, and another iteration of the loop is applied; success – the value has changed, and the method ends with the return statement.
35
EvenCounter implementation using AtomicInteger
import java.util.concurrent.atomic.AtomicInteger;

class EvenCounter {
    private AtomicInteger counter = new AtomicInteger(0);

    public int increment() {
        int val;
        do {
            val = counter.get();
        } while (!counter.compareAndSet(val, val + 2));
        return val + 2; // the value this call set
    }

    public int get() {
        return counter.get();
    }
}

If there are n threads t_1,...,t_n that each attempt to invoke increment() once, all at the same time, then, without loss of generality: t_1 will enter the loop once, t_2 at most twice, ..., and t_n at most n times.
36
Atomic Instructions: Advantages
Threads are given a choice: when their requested action cannot be performed, they get to choose, in code, what to do next, instead of simply being blocked as in blocking algorithms.
No thread suspension: the code runs significantly faster than its synchronized counterpart; threads are never blocked, so no system calls are used!
Reduced thread latency: latency is the time wasted between the moment a requested action becomes possible and the moment the thread actually performs it. Since threads are not suspended in non-blocking algorithms, they do not have to pay the expensive, slow reactivation overhead; when a requested action becomes possible, they can respond faster and thus reduce their response latency.