Atomicity CS 2110 – Fall 2017.

Parallel Programming Thus Far
- Parallel programs can be faster and more efficient
- Problem: race conditions
- Solution: synchronization
- Are there more efficient ways to ensure the correctness of parallel programs? Answer: it depends on what you are trying to do

Selling Widgets

    public class WidgetStore {
        private int numWidgets;

        /** Produce widgets. */
        public void produce() {…}

        /** Sell all the widgets. */
        public void sell() {…}

        /** Display a widget if there are any available. */
        public void display() {…}
    }

If sell() and display() run in different threads, display() might continue displaying widgets after all the widgets are sold!
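The slide elides the method bodies. A minimal sketch with hypothetical bodies (these bodies are assumptions for illustration, not the course code) shows why the interleaving of sell() and display() matters:

    // Hypothetical bodies, for illustration only.
    public class WidgetStore {
        private int numWidgets;

        /** Produce one widget. */
        public void produce() { numWidgets++; }

        /** Sell all the widgets. */
        public void sell() { numWidgets = 0; }

        /** Display a widget if there are any available. */
        public void display() {
            if (numWidgets > 0) {
                // Another thread may call sell() right after the check above,
                // or this thread may be reading a stale cached value of numWidgets.
                System.out.println("widget!");
            }
        }
    }

If another thread calls sell() between the numWidgets > 0 check and the println, display() shows a widget that no longer exists; with per-thread caching, display() may not even see the write made by sell().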

Caching
- Data is stored in caches: small, fast storage units
- Only written to main memory occasionally
- Huge efficiency gains!
- Each CPU has its own cache
- Each thread maintains its own cache entries
- Huge concurrency headaches!

keyword volatile

    public class WidgetStore {
        private volatile int numWidgets;

        /** Produce widgets. */
        public void produce() {…}

        /** Sell all the widgets. */
        public void sell() {…}

        /** Display a widget if there are any available. */
        public void display() {…}
    }

- Variables declared volatile are not kept in the cache
- All writes go directly to main memory
- All reads come directly from main memory
- Note: volatile actually also gives some ordering (serialization) guarantees, but let's not worry about that
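A minimal sketch (not from the slides) of the visibility problem volatile addresses: if the flag below were not volatile, the reader thread could spin forever on a stale cached value; declaring it volatile guarantees the writer's update becomes visible.

    // Sketch: visibility with and without volatile. Names are illustrative.
    public class VisibilityDemo {
        // Remove "volatile" and the reader may never observe the update to done.
        private static volatile boolean done = false;

        public static void main(String[] args) throws InterruptedException {
            Thread reader = new Thread(() -> {
                while (!done) { /* spin until the writer's update becomes visible */ }
                System.out.println("saw done = true");
            });
            reader.start();
            Thread.sleep(100);   // give the reader time to start spinning
            done = true;         // volatile write: flushed to main memory, visible to reader
            reader.join();
        }
    }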

Handling Writes

    int numWidgets = 0;

    Thread 1 (produce): numWidgets++;
    Thread 2 (sell):    numWidgets--;

What is the final value of numWidgets? It can be -1, 0, or 1!

Handling Writes

    volatile int numWidgets = 0;

    Thread 1 (produce): numWidgets++;
    Thread 2 (sell):    numWidgets--;

What is the final value of numWidgets? It can still be -1, 0, or 1!

- Volatility prevents data from being out of sync for long because of caching
- If one thread only reads the value, volatile would be necessary (to guarantee you have the latest value) and sufficient
- It doesn't guarantee atomicity, so writes are still a problem
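A minimal sketch (not from the slides) that makes the non-atomicity visible: two threads increment and decrement a volatile counter the same number of times, yet the result is usually nonzero because updates are lost.

    // Sketch: volatile does not make ++ and -- atomic. Names and counts are illustrative.
    public class LostUpdateDemo {
        private static volatile int numWidgets = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread produce = new Thread(() -> {
                for (int k = 0; k < 100_000; k++) numWidgets++;   // read-modify-write, not atomic
            });
            Thread sell = new Thread(() -> {
                for (int k = 0; k < 100_000; k++) numWidgets--;
            });
            produce.start(); sell.start();
            produce.join(); sell.join();
            // Would be 0 if ++ and -- were atomic; lost updates usually make it nonzero.
            System.out.println("numWidgets = " + numWidgets);
        }
    }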

The Problem with Writes…

Initially, i = 0. Time flows downward.

    Thread 1                          Thread 2
    tmp = load i;    // loads 0
                                      tmp = load i;    // loads 0
    tmp = tmp + 1;
    store tmp to i;  // stores 1
                                      tmp = tmp - 1;
                                      store tmp to i;  // stores -1

Finally, i = -1

Concurrent Writes

Solution 1: synchronized
- It works
- But locks can be slow

    private int numWidgets;

    public void produce() {
        ...
        synchronized (this) {
            numWidgets++;
        }
    }

Solution 2: atomic values
- Less powerful
- More efficient

    private AtomicInteger numWidgets;

    public void produce() {
        ...
        numWidgets.incrementAndGet();
    }
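The slide shows only fragments. A minimal sketch of how the atomic-value version might look as a complete class (the sell() and count() methods here are assumptions for illustration, not from the slides):

    import java.util.concurrent.atomic.AtomicInteger;

    // Sketch: the "Solution 2" fragment filled out into a complete class.
    public class WidgetStore {
        private final AtomicInteger numWidgets = new AtomicInteger(0);

        /** Produce one widget. */
        public void produce() {
            numWidgets.incrementAndGet();   // atomic read-modify-write, no lock needed
        }

        /** Sell one widget. */
        public void sell() {
            numWidgets.decrementAndGet();
        }

        /** Return the current number of widgets. */
        public int count() {
            return numWidgets.get();
        }
    }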

Atomic Values
- Package java.util.concurrent.atomic defines a toolkit of classes that implement atomic values
- Atomic values support lock-free, thread-safe programming on single variables
- Classes: AtomicInteger, AtomicReference<E>, …
- Atomic values extend the idea of volatile:
  - method get(): reads the current value, like a volatile read
  - method set(newValue): writes the value, like a volatile write
  - plus new atomic operations
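A small sketch (not from the slides) of this basic API, using AtomicInteger and AtomicReference:

    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.atomic.AtomicReference;

    // Sketch: get/set behave like volatile reads/writes; compareAndSet is a new atomic operation.
    public class AtomicBasics {
        public static void main(String[] args) {
            AtomicInteger count = new AtomicInteger(0);
            count.set(10);                       // like a volatile write
            int c = count.get();                 // like a volatile read
            System.out.println(c);               // 10

            AtomicReference<String> name = new AtomicReference<>("alice");
            name.compareAndSet("alice", "bob");  // atomic: succeeds only if still "alice"
            System.out.println(name.get());      // bob
        }
    }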

Compare and Set (CAS)

    boolean compareAndSet(expectedValue, newValue)

- If the value doesn't equal expectedValue, returns false
- If equal, stores newValue in the value and returns true
- Executes as a single atomic action!
- Supported by many processors – as hardware instructions
- Does not use locks!

    AtomicInteger n = new AtomicInteger(5);
    n.compareAndSet(3, 6); // returns false – no change
    n.compareAndSet(5, 7); // returns true – value is now 7

Incrementing with CAS

    /** Increment n by one. Other threads use n too. */
    public static void increment(AtomicInteger n) {
        int i = n.get();
        while (!n.compareAndSet(i, i + 1)) {
            i = n.get();
        }
    }

AtomicInteger has increment methods that do this for you:

    public int incrementAndGet()
    public int addAndGet(int delta)
    public int updateAndGet(IntUnaryOperator updateFunction)
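A short sketch (not from the slides) showing the built-in methods in action; updateAndGet runs its own CAS retry loop internally, so the lambda may be re-evaluated under contention.

    import java.util.concurrent.atomic.AtomicInteger;

    // Sketch: the built-in methods replace the hand-written CAS loop above.
    public class IncrementDemo {
        public static void main(String[] args) {
            AtomicInteger n = new AtomicInteger(5);
            System.out.println(n.incrementAndGet());          // 6
            System.out.println(n.addAndGet(10));              // 16
            // The lambda must be side-effect-free, since it may be retried.
            System.out.println(n.updateAndGet(x -> x * 2));   // 32
        }
    }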

Locks with CAS

With synchronized:

    public class WidgetStore {
        private int numWidgets;

        /** Produce widgets. */
        public synchronized void produce() {…}
    }

With CAS (a spinlock):

    public class WidgetStore {
        private int numWidgets;
        private AtomicBoolean lock = new AtomicBoolean(false);

        /** Produce widgets. */
        public void produce() {
            while (!lock.compareAndSet(false, true)) {}
            …
            lock.set(false);
        }
    }

This is a spinlock: there is no lock list or wait list of blocked threads; a thread that wants the lock just spins until the CAS succeeds.
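The same CAS idea can be pulled out into its own class. A minimal sketch, assuming an AtomicBoolean flag (this SpinLock class is illustrative, not from the course code):

    import java.util.concurrent.atomic.AtomicBoolean;

    // Sketch: a reusable spinlock built from a single CAS flag.
    public class SpinLock {
        private final AtomicBoolean locked = new AtomicBoolean(false);

        /** Spin until the lock is acquired. */
        public void lock() {
            while (!locked.compareAndSet(false, true)) {
                // busy-wait: no wait list, the thread just keeps retrying
            }
        }

        /** Release the lock. */
        public void unlock() {
            locked.set(false);
        }
    }

A caller would bracket its critical section with lock() and unlock(), e.g. lock.lock(); try { numWidgets++; } finally { lock.unlock(); }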

Lock-Free Data Structures
- Usable by many concurrent threads using only atomic actions – no locks!
- Compare and swap is your best friend
- But it only atomically updates one variable at a time!
- Let's look at one: a lock-free binary search tree [Ellen et al., 2010] http://www.cs.vu.nl//~tcs/cm/cds/ellen.pdf
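The Ellen et al. tree is fairly involved. As a simpler illustration of the same idea, here is a sketch of a classic lock-free stack (a Treiber stack) built from a single AtomicReference and CAS retries; this is not from the slides, just an example of the technique.

    import java.util.concurrent.atomic.AtomicReference;

    // Sketch: a lock-free stack. Every update is a CAS on the single "top" reference,
    // retried until it succeeds; no thread ever holds a lock.
    public class LockFreeStack<E> {
        private static class Node<E> {
            final E value;
            Node<E> next;
            Node(E value) { this.value = value; }
        }

        private final AtomicReference<Node<E>> top = new AtomicReference<>();

        /** Push value, retrying until the CAS on top succeeds. */
        public void push(E value) {
            Node<E> node = new Node<>(value);
            do {
                node.next = top.get();
            } while (!top.compareAndSet(node.next, node));
        }

        /** Pop and return the top value, or null if the stack is empty. */
        public E pop() {
            Node<E> oldTop;
            do {
                oldTop = top.get();
                if (oldTop == null) return null;
            } while (!top.compareAndSet(oldTop, oldTop.next));
            return oldTop.value;
        }
    }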

More Concurrency
- Concurrency is actually an OS-level concern
- Different platforms have different concurrency APIs
- Programming languages provide abstractions
- There are lots of techniques for writing concurrent programs (a sketch of one of them appears below):
  - locks (e.g., synchronized), mutexes
  - atomic operations
  - semaphores
  - condition variables
  - transactional memory
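As one small example from the list above, a sketch using java.util.concurrent.Semaphore to let at most three threads into a section at a time (the class name and the counts are illustrative assumptions, not from the slides):

    import java.util.concurrent.Semaphore;

    // Sketch: a counting semaphore limits how many threads run a section concurrently.
    public class SemaphoreDemo {
        private static final Semaphore permits = new Semaphore(3);

        public static void main(String[] args) {
            for (int t = 0; t < 10; t++) {
                new Thread(() -> {
                    try {
                        permits.acquire();   // blocks while 3 threads already hold permits
                        try {
                            System.out.println(Thread.currentThread().getName() + " in bounded section");
                        } finally {
                            permits.release();
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }
    }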