Problems with Locks Andrew Whitaker CSE451

Introduction
- Locks are hard to use correctly
  - Incorrect use can lead to safety, liveness, and performance problems (a lock-ordering deadlock is sketched below)
- Locks can't always be used
  - Interrupt handlers
- Locks lead to poor software modularity…
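A hedged sketch of the liveness hazard mentioned above: the classic lock-ordering deadlock. The class, lock names, and helper are invented for illustration; this is not code from the lecture.

    public class DeadlockSketch {
        private static final Object lockA = new Object();
        private static final Object lockB = new Object();

        public static void main(String[] args) {
            // Thread 1 takes lockA then lockB; Thread 2 takes them in the
            // opposite order. Each can end up holding one lock while waiting
            // forever for the other -- a liveness failure, not a crash.
            Thread t1 = new Thread(() -> {
                synchronized (lockA) {
                    pause();
                    synchronized (lockB) { /* work */ }
                }
            });
            Thread t2 = new Thread(() -> {
                synchronized (lockB) {
                    pause();
                    synchronized (lockA) { /* work */ }
                }
            });
            t1.start();
            t2.start();
        }

        // Widen the race window so the deadlock is easy to reproduce.
        private static void pause() {
            try { Thread.sleep(10); } catch (InterruptedException e) { }
        }
    }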

Software Engineering Conundrum

    public interface ThreadSafeHashTable {
        public void insert(Object key, Object value);
        public void delete(Object key);
    }

- There is no good way to atomically move an entry between two hash tables
  - Impossible if locking is done internally (inside each method)
  - Possible if locking is done externally, on the hash table objects
  - But this violates modularity
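A hedged sketch of the external-locking workaround the slide describes. The helper class and method are assumptions (they are not part of the interface), and the approach only works if every caller agrees on the same lock ordering:

    // Caller-side locking makes the move atomic, but now every user of the
    // tables must know and follow this locking protocol -- the modularity
    // violation the slide points at. (Callers that lock in the opposite
    // order would also reintroduce the deadlock risk shown earlier.)
    class TableUtils {
        static void moveEntry(ThreadSafeHashTable from, ThreadSafeHashTable to,
                              Object key, Object value) {
            synchronized (from) {
                synchronized (to) {
                    from.delete(key);
                    to.insert(key, value);
                }
            }
        }
    }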

Potential Ways to Avoid Locking
- Cheat: omit locks when it is "obviously safe" to do so
- Non-blocking algorithms (see the sketch below)
- Transactional Memory (research!)
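For the non-blocking option, a minimal sketch of the underlying idea using java.util.concurrent.atomic; the class is invented for illustration and is not from the lecture:

    import java.util.concurrent.atomic.AtomicInteger;

    // A lock-free counter: instead of taking a lock, retry a compare-and-set
    // until the update succeeds. A thread preempted mid-update cannot block
    // the others, which avoids the liveness hazards of locks.
    public class LockFreeCounter {
        private final AtomicInteger value = new AtomicInteger(0);

        public int increment() {
            while (true) {
                int current = value.get();
                if (value.compareAndSet(current, current + 1)) {
                    return current + 1;
                }
                // CAS failed: another thread got there first; retry.
            }
        }
    }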

A (Seemingly) Simple Example

What does this program print?

    public class VisibilityExample extends Thread {
        private static int x = 1;
        private static int y = 1;
        private static boolean ready = false;

        public static void main(String[] args) {
            Thread t = new VisibilityExample();
            t.start();

            // initialize some stuff…
            x = 2;
            y = 2;
            ready = true;
        }

        public void run() {
            while (!ready)
                Thread.yield();  // give up the processor
            System.out.println("x= " + x + " y= " + y);
        }
    }

Answer
It's a race condition. Many different outputs are possible:
- x=2, y=2
- x=1, y=2
- x=2, y=1
- x=1, y=1
Or the program may print nothing at all: the ready loop runs forever.

What's Going on Here?
Processor caches ($) can get out of sync.
[Figure: two CPUs, each with its own cache, connected to a shared memory]

A Mental Model
As a mental model, imagine that every thread/processor has its own copy of every variable. Yikes!

    // Not real code; for illustration purposes only
    public class Example extends Thread {
        private static final int NUM_PROCESSORS = 4;
        private static int[] x = new int[NUM_PROCESSORS];
        private static int[] y = new int[NUM_PROCESSORS];
        private static boolean[] ready = new boolean[NUM_PROCESSORS];
        // …
    }

Simplified View of Cache Consistency Strategies
[Figure: a spectrum of consistency models arranged by the amount of reordering they allow. Sequential consistency sits at one end; relaxed models, which are fast and scalable, sit at the other. Java lives up near the relaxed end.]

Sequential Consistency
- All processors agree on a total order of memory accesses
- Reads and writes are propagated "immediately"
- Behaves like shuffling cards: operations from different processors interleave, but each processor's own operations stay in program order
- "Simple but slow"
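A small litmus test (invented for illustration, not from the slides) of what sequential consistency rules out; typically many runs are needed to observe the relaxed outcome:

    // Under sequential consistency, some interleaving of the four statements
    // must explain the result, so at least one thread must observe the
    // other's write: r1 == 0 && r2 == 0 is impossible. Under the relaxed
    // ordering that real hardware and the JVM provide (with no
    // synchronization), that "impossible" outcome can actually show up.
    public class LitmusTest {
        static int a = 0, b = 0;
        static int r1, r2;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> { a = 1; r1 = b; });
            Thread t2 = new Thread(() -> { b = 1; r2 = a; });
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println("r1=" + r1 + " r2=" + r2);
        }
    }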

Why Relaxed Consistency Models?
- Hardware perspective: consistency operations are expensive
  - The writing processor must invalidate every other processor's cached copy
  - A reading processor must re-validate its cached state
- Compiler perspective: optimizations frequently re-arrange memory operations to hide latency
  - These re-orderings are guaranteed to be transparent, but only on a single processor

Relaxed Consistency Models
- Better performance: updates are published lazily
- But: an incomprehensible programming model

Hardware Support: Memory Fences (Barriers)
- A fence limits the amount of reordering in the system: memory operations cannot be moved across the fence
- Several variants:
  - Write fences
  - Read fences
  - Read/write (total) fences
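In Java these fences normally stay hidden behind synchronized and volatile, but since Java 9 the JDK exposes explicit fence methods on java.lang.invoke.VarHandle. The sketch below is illustrative only: it shows where the write and read fences would sit in the earlier example, but the plain read of ready in the loop is still a data race, so real code should use volatile (next slides) or synchronized.

    import java.lang.invoke.VarHandle;

    class FenceSketch {
        static int x = 1, y = 1;
        static boolean ready = false;

        static void writer() {
            x = 2;
            y = 2;
            VarHandle.releaseFence();   // a "write fence": the writes to x and y
            ready = true;               // cannot move below this point
        }

        static void reader() {
            while (!ready) {            // illustrative only; still a data race
                Thread.yield();
            }
            VarHandle.acquireFence();   // a "read fence": the reads of x and y
            System.out.println("x=" + x + " y=" + y);   // cannot move above it
        }
        // VarHandle.fullFence() would be the read/write (total) variant.
    }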

Release Consistency
- Observation: concurrent programs usually use proper synchronization
  - "All shared, mutable state must be properly synchronized"
- Thus, it suffices to sync up memory during synchronization operations
- Big performance win: the number of cache coherence operations scales with the amount of synchronization, not with the number of loads and stores

Simple Example

    synchronized (this) {   // fetch current values on lock acquire
        x++;
        y++;
    }                       // publish new values on lock release

- Within the critical section, updates can be re-ordered
- Without publication, updates may never be visible
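A slightly fuller, hedged sketch of the same idea (the class and fields are invented): both the writer and the reader synchronize on the same lock, so the writer's release publishes the updates and the reader's acquire fetches them.

    public class PairCounter {
        private int x = 0;
        private int y = 0;

        public synchronized void increment() {
            x++;       // within the critical section these updates
            y++;       // may be re-ordered; that is fine
        }              // releasing the lock publishes the new values

        public synchronized int[] read() {
            // acquiring the lock fetches the current values;
            // an unsynchronized reader might see stale ones
            return new int[] { x, y };
        }
    }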

Java Volatile Variables
- Java synchronized does double duty:
  - It provides mutual exclusion (atomicity)
  - It ensures safe publication of updates
- Volatile variables provide safe publication without mutual exclusion:

      volatile int x = 7;
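Applied to the earlier VisibilityExample, a minimal fix is to declare ready volatile: a reader that sees ready == true is then also guaranteed to see the writes to x and y made before it. The fixed class below is a sketch, not code from the slides:

    public class VisibilityExampleFixed extends Thread {
        private static int x = 1;
        private static int y = 1;
        private static volatile boolean ready = false;   // the only change

        public static void main(String[] args) {
            Thread t = new VisibilityExampleFixed();
            t.start();

            x = 2;
            y = 2;
            ready = true;    // the volatile write publishes x and y as well
        }

        public void run() {
            while (!ready)
                Thread.yield();
            System.out.println("x= " + x + " y= " + y);  // now always x= 2 y= 2
        }
    }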

More on Volatile
- Updates to volatile fields are propagated immediately ("Don't cache me!")
- Effectively, this activates sequential consistency for that variable
- Volatile serves as a fence to both the compiler and the hardware: memory operations are not re-ordered around a volatile access

Rule #1, Revised
All shared, mutable state must be properly synchronized: with a synchronized statement, an Atomic variable, or volatile.
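As a closing illustration (a sketch, not from the slides), here is a shared counter protected in each of the three ways the rule lists; AtomicInteger comes from java.util.concurrent.atomic:

    import java.util.concurrent.atomic.AtomicInteger;

    public class ThreeWays {
        // 1. synchronized: mutual exclusion plus safe publication
        private int a = 0;
        public synchronized void incrementA() { a++; }

        // 2. Atomic variable: lock-free atomic update plus safe publication
        private final AtomicInteger b = new AtomicInteger(0);
        public void incrementB() { b.incrementAndGet(); }

        // 3. volatile: safe publication only -- fine for a flag, but NOT for
        //    "c++", because that read-modify-write would not be atomic
        private volatile boolean done = false;
        public void finish() { done = true; }
        public boolean isDone() { return done; }
    }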