Shared Counters and Parallelism Companion slides for The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit.


Shared Counters and Parallelism Companion slides for The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit

Art of Multiprocessor Programming 2 A Shared Pool Put –Inserts object –blocks if full Remove –Removes & returns an object –blocks if empty public interface Pool { public void put(Object x); public Object remove(); } Unordered set of objects

Art of Multiprocessor Programming 3 put Simple Locking Implementation put

Art of Multiprocessor Programming 4 put Simple Locking Implementation put Problem: hot-spot contention

Art of Multiprocessor Programming 5 put Simple Locking Implementation Problem: hot-spot contention Problem: sequential bottleneck put

Art of Multiprocessor Programming 6 put Simple Locking Implementation Problem: hot-spot contention Problem: sequential bottleneck put Solution: Queue Lock

Art of Multiprocessor Programming 7 put Simple Locking Implementation Problem: hot-spot contention Problem: sequential bottleneck put Solution: Queue Lock Solution ???
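For concreteness, a minimal single-lock pool along the lines criticized here might look as follows; this is a sketch for illustration, not the slides' code, and the class name LockedPool and the fixed capacity are assumptions. Every put and remove serializes on the same lock, which is exactly the hot-spot and the sequential bottleneck above.

import java.util.ArrayDeque;
import java.util.Deque;

public class LockedPool implements Pool {
  private final Deque<Object> items = new ArrayDeque<>();
  private final int capacity;

  public LockedPool(int capacity) { this.capacity = capacity; }

  public synchronized void put(Object x) {
    while (items.size() == capacity) {        // blocks if full
      try { wait(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); return; }
    }
    items.push(x);
    notifyAll();                              // wake up blocked removers
  }

  public synchronized Object remove() {
    while (items.isEmpty()) {                 // blocks if empty
      try { wait(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); return null; }
    }
    Object x = items.pop();
    notifyAll();                              // wake up blocked putters
    return x;
  }
}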

Art of Multiprocessor Programming 8 Counting Implementation remove put

Art of Multiprocessor Programming 9 Counting Implementation Only the counters are sequential remove put
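One way to read this picture, as a sketch rather than the chapter's code (CountedPool, the one-item slots, and the java.util.concurrent classes are assumptions): a shared put counter and a shared remove counter hand out slot indices into an array, so only the two counters are shared by everyone, while each slot is touched by at most a few threads.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class CountedPool implements Pool {
  private final BlockingQueue<Object>[] slots;          // each slot holds at most one item
  private final AtomicInteger putCounter = new AtomicInteger();
  private final AtomicInteger removeCounter = new AtomicInteger();

  @SuppressWarnings("unchecked")
  public CountedPool(int nSlots) {
    slots = new BlockingQueue[nSlots];
    for (int i = 0; i < nSlots; i++) slots[i] = new ArrayBlockingQueue<>(1);
  }

  public void put(Object x) {
    int slot = Math.floorMod(putCounter.getAndIncrement(), slots.length);    // take a number
    try { slots[slot].put(x); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
  }

  public Object remove() {
    int slot = Math.floorMod(removeCounter.getAndIncrement(), slots.length); // take a number
    try { return slots[slot].take(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); return null; }
  }
}

The rest of the chapter is about replacing those two AtomicInteger hot spots with counters that scale.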

Art of Multiprocessor Programming 10 Shared Counter

Art of Multiprocessor Programming 11 Shared Counter No duplication

Art of Multiprocessor Programming 12 Shared Counter No duplication No Omission

Art of Multiprocessor Programming 13 Shared Counter Not necessarily linearizable No duplication No Omission

Art of Multiprocessor Programming 14 Shared Counters Can we build a shared counter with –Low memory contention, and –Real parallelism? Locking –Can use queue locks to reduce contention –No help with parallelism issue …

Art of Multiprocessor Programming 15 Software Combining Tree 4 Contention: All spinning local Parallelism: Potential n/log n speedup

Art of Multiprocessor Programming 16 Combining Trees 0

Art of Multiprocessor Programming 17 Combining Trees 0 +3

Art of Multiprocessor Programming 18 Combining Trees

Art of Multiprocessor Programming 19 Combining Trees Two threads meet, combine sums

Art of Multiprocessor Programming 20 Combining Trees Two threads meet, combine sums +5

Art of Multiprocessor Programming 21 Combining Trees Combined sum added to root

Art of Multiprocessor Programming 22 Combining Trees Result returned to children

Art of Multiprocessor Programming 23 Combining Trees Results returned to threads

Art of Multiprocessor Programming 24 Devil in the Details What if –threads don’t arrive at the same time? Wait for a partner to show up? –How long to wait? –Waiting times add up … Instead –Use multi-phase algorithm –Try to wait in parallel …

Art of Multiprocessor Programming 25 Combining Status enum CStatus{ IDLE, FIRST, SECOND, DONE, ROOT};

Art of Multiprocessor Programming 26 Combining Status enum CStatus{ IDLE, FIRST, SECOND, DONE, ROOT}; Nothing going on

Art of Multiprocessor Programming 27 Combining Status enum CStatus{ IDLE, FIRST, SECOND, DONE, ROOT}; 1st thread in search of a partner for combining; will return soon to check for a 2nd thread

Art of Multiprocessor Programming 28 Combining Status enum CStatus{ IDLE, FIRST, SECOND, DONE, ROOT}; 2nd thread arrived with value for combining

Art of Multiprocessor Programming 29 Combining Status enum CStatus{ IDLE, FIRST, SECOND, DONE, ROOT}; 1st thread has completed operation & deposited result for 2nd thread

Art of Multiprocessor Programming 30 Combining Status enum CStatus{ IDLE, FIRST, SECOND, DONE, ROOT}; Special case: root node

Art of Multiprocessor Programming 31 Node Synchronization Short-term –Synchronized methods –Consistency during method call Long-term –Boolean locked field –Consistency across calls

Art of Multiprocessor Programming 32 Phases Precombining –Set up combining rendez-vous Combining –Collect and combine operations Operation –Hand off to higher thread Distribution –Distribute results to waiting threads

Art of Multiprocessor Programming 33 Precombining Phase 0 Examine status IDLE

Art of Multiprocessor Programming 34 Precombining Phase 0 0 If IDLE, promise to return to look for partner FIRST

Art of Multiprocessor Programming 35 Precombining Phase 0 At ROOT, turn back FIRST

Art of Multiprocessor Programming 36 Precombining Phase 0 FIRST

Art of Multiprocessor Programming 37 Precombining Phase 0 0 SECOND If FIRST, I’m willing to combine, but lock for now

Art of Multiprocessor Programming 38 Code Tree class –In charge of navigation Node class –Combining state –Synchronization state –Bookkeeping
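Before the per-phase code, a rough sketch of the state each Node carries, inferred from the slides that follow (the constructors and any field not named in the code slides are assumptions):

public class Node {
  enum CStatus { IDLE, FIRST, SECOND, DONE, ROOT }

  boolean locked;     // long-term lock, held across calls while a combining rendezvous is in progress
  CStatus cStatus;    // combining state of this node
  int firstValue;     // value deposited by the 1st (active) thread
  int secondValue;    // value deposited by the 2nd (passive) thread
  int result;         // at the root: the counter itself; elsewhere: result handed back to the 2nd thread
  Node parent;        // null at the root

  Node()            { cStatus = CStatus.ROOT; }                       // root node
  Node(Node parent) { this.parent = parent; cStatus = CStatus.IDLE; } // interior or leaf node
}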

Art of Multiprocessor Programming 39 Precombining Navigation Node node = myLeaf; while (node.precombine()) { node = node.parent; } Node stop = node;

Art of Multiprocessor Programming 40 Precombining Navigation Node node = myLeaf; while (node.precombine()) { node = node.parent; } Node stop = node; Start at leaf

Art of Multiprocessor Programming 41 Precombining Navigation Node node = myLeaf; while (node.precombine()) { node = node.parent; } Node stop = node; Move up while instructed to do so

Art of Multiprocessor Programming 42 Precombining Navigation Node node = myLeaf; while (node.precombine()) { node = node.parent; } Node stop = node; Remember where we stopped

Art of Multiprocessor Programming 43 Precombining Node synchronized boolean precombine() { while (locked) wait(); switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException(); } }

Art of Multiprocessor Programming 44 synchronized boolean precombine() { while (locked) wait(); switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Precombining Node Short-term synchronization

Art of Multiprocessor Programming 45 synchronized boolean precombine() { while (locked) wait(); switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Synchronization Wait while node is locked

Art of Multiprocessor Programming 46 synchronized boolean precombine() { while (locked) wait(); switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Precombining Node Check combining status

Art of Multiprocessor Programming 47 Node was IDLE synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } I will return to look for combining value

Art of Multiprocessor Programming 48 Precombining Node synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Continue up the tree

Art of Multiprocessor Programming 49 I’m the 2 nd Thread synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } If 1 st thread has promised to return, lock node so it won’t leave without me

Art of Multiprocessor Programming 50 Precombining Node synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Prepare to deposit 2 nd value

Art of Multiprocessor Programming 51 Precombining Node synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } End of phase 1, don’t continue up tree

Art of Multiprocessor Programming 52 Node is the Root synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } If root, phase 1 ends, don’t continue up tree

Art of Multiprocessor Programming 53 Precombining Node synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Always check for unexpected values!

Art of Multiprocessor Programming 54 Combining Phase 0 0 SECOND 1st thread locked out until 2nd provides value +3

Art of Multiprocessor Programming 55 Combining Phase 0 0 SECOND 2nd thread deposits value to be combined, unlocks node, & waits … 2 +3 zzz

Art of Multiprocessor Programming 56 Combining Phase SECOND 1st thread moves up the tree with combined value … zzz

Art of Multiprocessor Programming 57 Combining (reloaded) 2nd thread has not yet deposited value … FIRST

Art of Multiprocessor Programming 58 Combining (reloaded) 0 +3 FIRST 1st thread is alone, locks out late partner

Art of Multiprocessor Programming 59 Combining (reloaded) 0 +3 FIRST Stop at root

Art of Multiprocessor Programming 60 Combining (reloaded) 0 +3 FIRST 2nd thread’s phase 1 visit locked out

Art of Multiprocessor Programming 61 Combining Navigation node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; }

Art of Multiprocessor Programming 62 Combining Navigation node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; } Start at leaf

Art of Multiprocessor Programming 63 Combining Navigation node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; } Add 1

Art of Multiprocessor Programming 64 Combining Navigation node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; } Revisit nodes visited in phase 1

Art of Multiprocessor Programming 65 Combining Navigation node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; } Accumulate combined values, if any

Art of Multiprocessor Programming 66 node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; } Combining Navigation We will retraverse path in reverse order …

Art of Multiprocessor Programming 67 Combining Navigation node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; } Move up the tree

Art of Multiprocessor Programming 68 Combining Phase Node synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … }

Art of Multiprocessor Programming 69 synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … } Combining Phase Node Wait until node is unlocked

Art of Multiprocessor Programming 70 synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … } Combining Phase Node Lock out late attempts to combine

Art of Multiprocessor Programming 71 synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … } Combining Phase Node Remember our contribution

Art of Multiprocessor Programming 72 synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … } Combining Phase Node Check status

Art of Multiprocessor Programming 73 synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … } Combining Phase Node 1 st thread is alone

Art of Multiprocessor Programming 74 synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … } Combining Node Combine with 2 nd thread

Art of Multiprocessor Programming 75 Operation Phase Add combined value to root, start back down (phase 4) zzz

Art of Multiprocessor Programming 76 Operation Phase (reloaded) 5 Leave value to be combined … SECOND 2

Art of Multiprocessor Programming 77 Operation Phase (reloaded) 5 +2 Unlock, and wait … SECOND 2 zzz

Art of Multiprocessor Programming 78 Operation Phase Navigation prior = stop.op(combined);

Art of Multiprocessor Programming 79 Operation Phase Navigation prior = stop.op(combined); Get result of combining

Art of Multiprocessor Programming 80 Operation Phase Node synchronized int op(int combined) { switch (cStatus) { case ROOT: int oldValue = result; result += combined; return oldValue; case SECOND: secondValue = combined; locked = false; notifyAll(); while (cStatus != CStatus.DONE) wait(); locked = false; notifyAll(); cStatus = CStatus.IDLE; return result; default: …

Art of Multiprocessor Programming 81 synchronized int op(int combined) { switch (cStatus) { case ROOT: int oldValue = result; result += combined; return oldValue; case SECOND: secondValue = combined; locked = false; notifyAll(); while (cStatus != CStatus.DONE) wait(); locked = false; notifyAll(); cStatus = CStatus.IDLE; return result; default: … At Root Add sum to root, return prior value

Art of Multiprocessor Programming 82 synchronized int op(int combined) { switch (cStatus) { case ROOT: int oldValue = result; result += combined; return oldValue; case SECOND: secondValue = combined; locked = false; notifyAll(); while (cStatus != CStatus.DONE) wait(); locked = false; notifyAll(); cStatus = CStatus.IDLE; return result; default: … Intermediate Node Deposit value for later combining …

Art of Multiprocessor Programming 83 synchronized int op(int combined) { switch (cStatus) { case ROOT: int oldValue = result; result += combined; return oldValue; case SECOND: secondValue = combined; locked = false; notifyAll(); while (cStatus != CStatus.DONE) wait(); locked = false; notifyAll(); cStatus = CStatus.IDLE; return result; default: … Intermediate Node Unlock node, notify 1 st thread

Art of Multiprocessor Programming 84 synchronized int op(int combined) { switch (cStatus) { case ROOT: int oldValue = result; result += combined; return oldValue; case SECOND: secondValue = combined; locked = false; notifyAll(); while (cStatus != CStatus.DONE) wait(); locked = false; notifyAll(); cStatus = CStatus.IDLE; return result; default: … Intermediate Node Wait for 1 st thread to deliver results

Art of Multiprocessor Programming 85 synchronized int op(int combined) { switch (cStatus) { case ROOT: int oldValue = result; result += combined; return oldValue; case SECOND: secondValue = combined; locked = false; notifyAll(); while (cStatus != CStatus.DONE) wait(); locked = false; notifyAll(); cStatus = CStatus.IDLE; return result; default: … Intermediate Node Unlock node & return

Art of Multiprocessor Programming 86 Distribution Phase 5 0 zzz Move down with result SECOND

Art of Multiprocessor Programming 87 Distribution Phase 5 zzz Leave result for 2nd thread & lock node SECOND 2

Art of Multiprocessor Programming 88 Distribution Phase 5 0 zzz Move result back down tree SECOND 2

Art of Multiprocessor Programming 89 Distribution Phase 5 2nd thread awakens, unlocks, takes value IDLE 3

Art of Multiprocessor Programming 90 Distribution Phase Navigation while (!stack.empty()) { node = stack.pop(); node.distribute(prior); } return prior;

Art of Multiprocessor Programming 91 Distribution Phase Navigation while (!stack.empty()) { node = stack.pop(); node.distribute(prior); } return prior; Traverse path in reverse order

Art of Multiprocessor Programming 92 Distribution Phase Navigation while (!stack.empty()) { node = stack.pop(); node.distribute(prior); } return prior; Distribute results to waiting 2nd threads

Art of Multiprocessor Programming 93 Distribution Phase Navigation while (!stack.empty()) { node = stack.pop(); node.distribute(prior); } return prior; Return result to caller

Art of Multiprocessor Programming 94 Distribution Phase synchronized void distribute(int prior) { switch (cStatus) { case FIRST: cStatus = CStatus.IDLE; locked = false; notifyAll(); return; case SECOND: result = prior + firstValue; cStatus = CStatus.DONE; notifyAll(); return; default: …

Art of Multiprocessor Programming 95 Distribution Phase synchronized void distribute(int prior) { switch (cStatus) { case FIRST: cStatus = CStatus.IDLE; locked = false; notifyAll(); return; case SECOND: result = prior + firstValue; cStatus = CStatus.DONE; notifyAll(); return; default: … No combining, unlock node & reset

Art of Multiprocessor Programming 96 Distribution Phase synchronized void distribute(int prior) { switch (cStatus) { case FIRST: cStatus = CStatus.IDLE; locked = false; notifyAll(); return; case SECOND: result = prior + firstValue; cStatus = CStatus.DONE; notifyAll(); return; default: … Notify 2 nd thread that result is available
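Stringing the four phases together, a getAndIncrement built from the navigation fragments above might read as follows; this is a sketch, and the CombiningTree wrapper, the thread-to-leaf mapping, and the ArrayDeque used as the stack are assumptions (it also assumes the Node methods propagate InterruptedException from wait()).

import java.util.ArrayDeque;
import java.util.Deque;

public class CombiningTree {
  Node[] leaf;   // one leaf per pair of threads; tree construction omitted

  public int getAndIncrement() throws InterruptedException {
    Node myLeaf = leaf[(int) (Thread.currentThread().getId() % leaf.length)]; // assumed mapping
    Deque<Node> stack = new ArrayDeque<>();

    // Phase 1: precombining - move up, reserving rendezvous points, until told to stop.
    Node node = myLeaf;
    while (node.precombine()) node = node.parent;
    Node stop = node;

    // Phase 2: combining - revisit the same path, accumulating values left by partners.
    node = myLeaf;
    int combined = 1;                       // my own increment
    while (node != stop) {
      combined = node.combine(combined);
      stack.push(node);
      node = node.parent;
    }

    // Phase 3: operation - add at the root, or deposit for the 1st thread and wait for its result.
    int prior = stop.op(combined);

    // Phase 4: distribution - walk back down, handing results to waiting 2nd threads.
    while (!stack.isEmpty()) {
      node = stack.pop();
      node.distribute(prior);
    }
    return prior;
  }
}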

Art of Multiprocessor Programming 97 Bad News: High Latency Log n

Art of Multiprocessor Programming 98 Good News: Real Parallelism (figure contrasts n threads making progress in parallel with 1 thread at a time)

Art of Multiprocessor Programming 99 Throughput Puzzles Ideal circumstances –All n threads move together, combine –n increments in O(log n) time Worst circumstances –All n threads slightly skewed, locked out –n increments in O(n · log n) time

Art of Multiprocessor Programming 100 Index Distribution Benchmark void indexBench(int iters, int work) { int i = 0; while (i < iters) { i = r.getAndIncrement(); Thread.sleep(random() % work); } }

Art of Multiprocessor Programming 101 Index Distribution Benchmark void indexBench(int iters, int work) { while (int i < iters) { i = r.getAndIncrement(); Thread.sleep(random() % work); }} How many iterations

Art of Multiprocessor Programming 102 Index Distribution Benchmark void indexBench(int iters, int work) { while (int i < iters) { i = r.getAndIncrement(); Thread.sleep(random() % work); }} Expected time between incrementing counter

Art of Multiprocessor Programming 103 Index Distribution Benchmark void indexBench(int iters, int work) { while (int i < iters) { i = r.getAndIncrement(); Thread.sleep(random() % work); }} Take a number

Art of Multiprocessor Programming 104 Index Distribution Benchmark void indexBench(int iters, int work) { while (int i < iters) { i = r.getAndIncrement(); Thread.sleep(random() % work); }} Pretend to work (more work, less concurrency)
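As compilable Java, the benchmark loop might look like this; the AtomicInteger field r and ThreadLocalRandom are stand-ins for whichever counter implementation and random source are actually under test.

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

public class IndexBench {
  private final AtomicInteger r = new AtomicInteger();   // counter under test

  void indexBench(int iters, int work) throws InterruptedException {
    int i = 0;
    while (i < iters) {
      i = r.getAndIncrement();                                                // take a number
      if (work > 0) Thread.sleep(ThreadLocalRandom.current().nextInt(work));  // pretend to work
    }
  }
}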

Art of Multiprocessor Programming 105 Performance Benchmarks Alewife –NUMA architecture –Simulated Throughput: –average number of inc operations in 1 million cycle period. Latency: –average number of simulator cycles per inc operation.

Art of Multiprocessor Programming 106 Performance (work = 0). Latency: cycles per operation vs. number of processors, for Splock and Ctree[n]. Throughput: operations per million cycles vs. number of processors, for Splock and Ctree[n].

Art of Multiprocessor Programming 107 Performance (work = 0). Latency: cycles per operation vs. number of processors, for Splock and Ctree[n]. Throughput: operations per million cycles vs. number of processors, for Splock and Ctree[n].

Art of Multiprocessor Programming 108 The Combining Paradigm Implements any RMW operation When tree is loaded –Takes 2 log n steps –for n requests Very sensitive to load fluctuations: –if the arrival rates drop –the combining rates drop –overall performance deteriorates!

Art of Multiprocessor Programming 109 Combining Load Sensitivity (figure: throughput vs. number of processors; notice the effect of load fluctuations)

Art of Multiprocessor Programming 110 Combining Rate vs Work

Art of Multiprocessor Programming 111 Better to Wait Longer (figure: throughput vs. number of processors for short, medium, and indefinite combining waits)

Art of Multiprocessor Programming 112 Conclusions Combining Trees –Work well under high contention –Sensitive to load fluctuations –Can be used for getAndMumble() ops Next –Counting networks –A different approach …

Art of Multiprocessor Programming 113 A Balancer Input wires Output wires

Art of Multiprocessor Programming 114 Tokens Traverse Balancers Token i enters on any wire leaves on wire i mod (fan-out)

Art of Multiprocessor Programming 115 Tokens Traverse Balancers

Art of Multiprocessor Programming 116 Tokens Traverse Balancers

Art of Multiprocessor Programming 117 Tokens Traverse Balancers

Art of Multiprocessor Programming 118 Tokens Traverse Balancers

Art of Multiprocessor Programming 119 Tokens Traverse Balancers Arbitrary input distribution Balanced output distribution

Art of Multiprocessor Programming 120 Smoothing Network k-smooth property

Art of Multiprocessor Programming 121 Counting Network step property
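Stated as formulas rather than pictures (these are the standard definitions, not taken verbatim from the slides; y_i denotes the number of tokens that have exited on output wire i of a width-w network):

k-smooth property: \lvert y_i - y_j \rvert \le k \quad \text{for all } 0 \le i, j < w
step property: 0 \le y_i - y_j \le 1 \quad \text{for all } 0 \le i < j < w

The step property is what lets output wire i hand out the counter values i, i + w, i + 2w, and so on.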

Art of Multiprocessor Programming 122 Counting Networks Count! Wire 0: 0, 4, 8, … Wire 1: 1, 5, 9, … Wire 2: 2, 6, 10, … Wire 3: 3, 7, 11, …

Art of Multiprocessor Programming 123 Bitonic[4]

Art of Multiprocessor Programming 124 Bitonic[4]

Art of Multiprocessor Programming 125 Bitonic[4]

Art of Multiprocessor Programming 126 Bitonic[4]

Art of Multiprocessor Programming 127 Bitonic[4]

Art of Multiprocessor Programming 128 Bitonic[4]

Art of Multiprocessor Programming 129 Counting Networks Good for counting number of tokens low contention no sequential bottleneck high throughput practical networks of depth about log² n

Art of Multiprocessor Programming 130 Bitonic[k] is not Linearizable

Art of Multiprocessor Programming 131 Bitonic[k] is not Linearizable

Art of Multiprocessor Programming 132 Bitonic[k] is not Linearizable 2

Art of Multiprocessor Programming 133 Bitonic[k] is not Linearizable 2 0

Art of Multiprocessor Programming 134 Bitonic[k] is not Linearizable 2 0 Problem is: Red finished before Yellow started Red took 2 Yellow took 0

Art of Multiprocessor Programming 135 Shared Memory Implementation class balancer { boolean toggle; balancer[] next; synchronized boolean flip() { boolean oldValue = this.toggle; this.toggle = !this.toggle; return oldValue; }

Art of Multiprocessor Programming 136 Shared Memory Implementation class balancer { boolean toggle; balancer[] next; synchronized boolean flip() { boolean oldValue = this.toggle; this.toggle = !this.toggle; return oldValue; } state

Art of Multiprocessor Programming 137 Shared Memory Implementation class balancer { boolean toggle; balancer[] next; synchronized boolean flip() { boolean oldValue = this.toggle; this.toggle = !this.toggle; return oldValue; } Output connections to balancers

Art of Multiprocessor Programming 138 Shared Memory Implementation class balancer { boolean toggle; balancer[] next; synchronized boolean flip() { boolean oldValue = this.toggle; this.toggle = !this.toggle; return oldValue; } Get-and-complement

Art of Multiprocessor Programming 139 Shared Memory Implementation Balancer traverse(Balancer b) { while (!b.isLeaf()) { boolean toggle = b.flip(); if (toggle) b = b.next[0]; else b = b.next[1]; } return b; }

Art of Multiprocessor Programming 140 Shared Memory Implementation Balancer traverse (Balancer b) { while(!b.isLeaf()) { boolean toggle = b.flip(); if (toggle) b = b.next[0] else b = b.next[1] return b; } Stop when we get to the end

Art of Multiprocessor Programming 141 Shared Memory Implementation Balancer traverse (Balancer b) { while(!b.isLeaf()) { boolean toggle = b.flip(); if (toggle) b = b.next[0] else b = b.next[1] return b; } Flip state

Art of Multiprocessor Programming 142 Shared Memory Implementation Balancer traverse (Balancer b) { while(!b.isLeaf()) { boolean toggle = b.flip(); if (toggle) b = b.next[0] else b = b.next[1] return b; } Exit on wire
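Made compilable, the balancer and the token traversal fit together roughly as follows; this is a sketch, and the isLeaf test via a null next array and the initial toggle value are assumptions.

public class Balancer {
  private boolean toggle = true;   // balancer state (initial value assumed)
  Balancer[] next;                 // next[0], next[1]: output connections; null once we reach an exit wire

  synchronized boolean flip() {    // get-and-complement
    boolean oldValue = toggle;
    toggle = !toggle;
    return oldValue;
  }

  boolean isLeaf() { return next == null; }

  static Balancer traverse(Balancer b) {
    while (!b.isLeaf()) {
      b = b.flip() ? b.next[0] : b.next[1];  // exit on the wire the toggle pointed to before flipping
    }
    return b;                                // the exit wire this token came out on
  }
}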

Art of Multiprocessor Programming 143 Alternative Implementation: Message-Passing

Art of Multiprocessor Programming 144 Bitonic[2k] Schematic Bitonic[k] Merger[2k]

Art of Multiprocessor Programming 145 Bitonic[2k] Layout

Art of Multiprocessor Programming 146 Unfolded Bitonic Network

Art of Multiprocessor Programming 147 Unfolded Bitonic Network Merger[8]

Art of Multiprocessor Programming 148 Unfolded Bitonic Network

Art of Multiprocessor Programming 149 Unfolded Bitonic Network Merger[4]

Art of Multiprocessor Programming 150 Unfolded Bitonic Network

Art of Multiprocessor Programming 151 Unfolded Bitonic Network Merger[2]

Art of Multiprocessor Programming 152 Bitonic[k] Depth Width k Depth is (log₂ k)(log₂ k + 1)/2
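For example, Bitonic[8] (k = 8, log₂ k = 3) has depth 3 · 4 / 2 = 6 layers of balancers, and Bitonic[16] has depth 4 · 5 / 2 = 10.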

Art of Multiprocessor Programming 153 Merger[2k]

Art of Multiprocessor Programming 154 Merger[2k] Schematic Merger[k]

Art of Multiprocessor Programming 155 Merger[2k] Layout

Art of Multiprocessor Programming 156 Lemma If a sequence has the step property …

Art of Multiprocessor Programming 157 Lemma So does its even subsequence

Art of Multiprocessor Programming 158 Lemma And its odd subsequence

Art of Multiprocessor Programming 159 Merger[2k] Schematic Merger[k] Bitonic[k] even odd even

Art of Multiprocessor Programming 160 Proof Outline Outputs from Bitonic[k] Inputs to Merger[k] even odd even

Art of Multiprocessor Programming 161 Proof Outline Inputs to Merger[k] even odd even Outputs of Merger[k]

Art of Multiprocessor Programming 162 Proof Outline Outputs of Merger[k]Outputs of last layer

Art of Multiprocessor Programming 163 Periodic Network Block

Art of Multiprocessor Programming 164 Periodic Network Block

Art of Multiprocessor Programming 165 Periodic Network Block

Art of Multiprocessor Programming 166 Periodic Network Block

Art of Multiprocessor Programming 167 Block[2k] Schematic Block[k]

Art of Multiprocessor Programming 168 Block[2k] Layout

Art of Multiprocessor Programming 169 Periodic[8]

Art of Multiprocessor Programming 170 Network Depth Each Block[k] has depth log₂ k Need log₂ k blocks Grand total of (log₂ k)²
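For example, Periodic[8] uses log₂ 8 = 3 blocks of depth 3 each, for a total depth of 9, versus 6 for Bitonic[8].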

Art of Multiprocessor Programming 171 Lower Bound on Depth Theorem: The depth of any width-w counting network is at least Ω(log w). Theorem: there exists a counting network of Θ(log w) depth. Unfortunately, the proof is non-constructive and the constants are in the thousands.

Art of Multiprocessor Programming 172 Sequential Theorem If a balancing network counts –Sequentially, meaning that –Tokens traverse one at a time Then it counts –Even if tokens traverse concurrently

Art of Multiprocessor Programming 173 Red First, Blue Second (2)

Art of Multiprocessor Programming 174 Blue First, Red Second (2)

Art of Multiprocessor Programming 175 Either Way Same balancer states

Art of Multiprocessor Programming 176 Order Doesn’t Matter Same balancer states Same output distribution

Art of Multiprocessor Programming 177 Index Distribution Benchmark void indexBench(int iters, int work) { int i = 0; while (i < iters) { i = fetch&inc(); Thread.sleep(random() % work); } }

Art of Multiprocessor Programming 178 Performance (Simulated) * All graphs taken from Herlihy,Lim,Shavit, copyright ACM. MCS queue lock Spin lock Number processors Throughput Higher is better!

Art of Multiprocessor Programming 179 Performance (Simulated) * All graphs taken from Herlihy,Lim,Shavit, copyright ACM. MCS queue lock Spin lock Number processors Throughput 64-leaf combining tree 80-balancer counting network Higher is better!

Art of Multiprocessor Programming 180 Performance (Simulated) * All graphs taken from Herlihy,Lim,Shavit, copyright ACM. MCS queue lock Spin lock Number processors Throughput 64-leaf combining tree 80-balancer counting network Combining and counting are pretty close

Art of Multiprocessor Programming 181 Performance (Simulated) * All graphs taken from Herlihy,Lim,Shavit, copyright ACM. MCS queue lock Spin lock Number processors Throughput 64-leaf combining tree 80-balancer counting network But they beat the hell out of the competition!

Art of Multiprocessor Programming 182 Saturation and Performance Undersaturated P < w log w Saturated P = w log w Oversaturated P > w log w Optimal performance
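For example, taking log as log₂, a width w = 8 network saturates at about P = 8 · 3 = 24 concurrent tokens; fewer leaves it undersaturated, more leaves it oversaturated.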

Art of Multiprocessor Programming 183 Throughput vs. Size Bitonic[16] Bitonic[4] Bitonic[8] Number processors Throughput

Art of Multiprocessor Programming 184 Shared Pool Put counting network Remove counting network

Art of Multiprocessor Programming 185 Put/Remove Network Guarantees never: –Get waiting for item, while –Put has deposited item Otherwise OK to wait –Put delayed while pool slot is full –Get delayed while pool slot is empty

Art of Multiprocessor Programming 186 What About Decrements Adding arbitrary values Other operations –Multiplication –Vector addition –Horoscope casting …

Art of Multiprocessor Programming 187 First Step Can we decrement as well as increment? What goes up, must come down …

Art of Multiprocessor Programming 188 Anti-Tokens

Art of Multiprocessor Programming 189 Tokens & Anti-Tokens Cancel

Art of Multiprocessor Programming 190 Tokens & Anti-Tokens Cancel

Art of Multiprocessor Programming 191 Tokens & Anti-Tokens Cancel 

Art of Multiprocessor Programming 192 Tokens & Anti-Tokens Cancel As if nothing happened

Art of Multiprocessor Programming 193 Tokens vs Antitokens Tokens –read balancer –flip –proceed Antitokens –flip balancer –read –proceed
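As a sketch of what this means in code (antiTraverse and antiFlip are hypothetical names added to the Balancer sketch shown earlier, not API from the slides): a token reads the toggle and then flips it, whereas an antitoken flips the toggle first and then follows the new value, which cancels the effect of one token at each balancer it passes.

// Hypothetical methods added to the Balancer class sketched above.
synchronized boolean antiFlip() {    // complement the toggle, then return its new value
  toggle = !toggle;
  return toggle;
}

static Balancer antiTraverse(Balancer b) {
  while (!b.isLeaf()) {
    b = b.antiFlip() ? b.next[0] : b.next[1];   // flip first, then exit on the new toggle's wire
  }
  return b;
}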

Art of Multiprocessor Programming 194 Pumping Lemma Keep pumping tokens through one wire Eventually, after Ω tokens, network repeats a state

Art of Multiprocessor Programming 195 Anti-Token Effect token anti-token

Art of Multiprocessor Programming 196 Observation Each anti-token on wire i –Has same effect as Ω-1 tokens on wire i –So network still in legal state Moreover, network width w divides Ω –So Ω-1 tokens

Art of Multiprocessor Programming 197 Before Antitoken

Art of Multiprocessor Programming 198 Balancer states as if Ω-1 tokens had traversed: Ω-1 is one brick shy of a load

Art of Multiprocessor Programming 199 Post Antitoken Next token shows up here

Art of Multiprocessor Programming 200 Implication Counting networks with –Tokens (+1) –Anti-tokens (-1) Give –Highly concurrent –Low contention getAndIncrement + getAndDecrement methods QED

Art of Multiprocessor Programming 201 Adding Networks Combining trees implement –Fetch&add –Add any number, not just 1 What about counting networks?

Art of Multiprocessor Programming 202 Fetch-and-add Beyond getAndIncrement + getAndDecrement What about getAndAdd(x)? –Atomically returns prior value –And adds x to value? Not to mention –getAndMultiply –getAndFourierTransform?

Art of Multiprocessor Programming 203 Bad News If an adding network –Supports n concurrent tokens Then every token must traverse –At least n-1 balancers –In sequential executions

Art of Multiprocessor Programming 204 Uh-Oh Adding network size depends on n –Like combining trees –Unlike counting networks High latency –Depth linear in n –Not logarithmic in w

Art of Multiprocessor Programming 205 Generic Counting Network

Art of Multiprocessor Programming 206 First Token First token would visit green balancers if it runs solo

Art of Multiprocessor Programming 207 Claim Look at path of +1 token All other +2 tokens must visit some balancer on +1 token’s path

Art of Multiprocessor Programming 208 Second Token +1 Takes 0 +2

Art of Multiprocessor Programming 209 Second Token +1 Takes 0 +2 Takes 0 They can’t both take zero!

Art of Multiprocessor Programming 210 If Second avoids First’s Path Second token –Doesn’t observe first –First hasn’t run –Chooses 0 First token –Doesn’t observe second –Disjoint paths –Chooses 0

Art of Multiprocessor Programming 211 If Second avoids First’s Path Because +1 token chooses 0 –It must be ordered first –So +2 token ordered second –So +2 token should return 1 Something’s wrong!

Art of Multiprocessor Programming 212 Second Token Halt blue token before first green balancer +2

Art of Multiprocessor Programming 213 Third Token +1 Takes 0 or 2 +2

Art of Multiprocessor Programming 214 Third Token Takes 0 +2 Takes 0 or 2 They can’t both take zero, and they can’t take 0 and 2!

Art of Multiprocessor Programming 215 First,Second, & Third Tokens must be Ordered Third (+2) token –Did not observe +1 token –May have observed earlier +2 token –Takes an even number

Art of Multiprocessor Programming 216 First,Second, & Third Tokens must be Ordered Because +1 token’s path is disjoint –It chooses 0 –Ordered first –Rest take odd numbers But last token takes an even number Something’s wrong!

Art of Multiprocessor Programming 217 Third Token Halt blue token before first green balancer

Art of Multiprocessor Programming 218 Continuing in this way We can “park” a token –In front of a balancer –That token #1 will visit There are n-1 other tokens –Two wires per balancer –Path includes n-1 balancers!

Art of Multiprocessor Programming 219 Theorem In any adding network –In sequential executions –Tokens traverse at least n-1 balancers Same arguments apply to –Linearizable counting networks –Multiplying networks –And others

Art of Multiprocessor Programming 220 Clip Art

Art of Multiprocessor Programming 221 This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 License. You are free: –to Share — to copy, distribute and transmit the work –to Remix — to adapt the work Under the following conditions: –Attribution. You must attribute the work to “The Art of Multiprocessor Programming” (but not in any way that suggests that the authors endorse you or your use of the work). –Share Alike. If you alter, transform, or build upon this work, you may distribute the resulting work only under the same, similar or a compatible license. For any reuse or distribution, you must make clear to others the license terms of this work. The best way to do this is with a link to this web page. Any of the above conditions can be waived if you get permission from the copyright holder. Nothing in this license impairs or restricts the author's moral rights.