Shared Counters and Parallelism


1 Shared Counters and Parallelism
Companion slides for The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit

2 A Shared Pool Unordered set of objects Put Remove Inserts object
public interface Pool { public void put(Object x); public Object remove(); } Unordered set of objects Put Inserts object blocks if full Remove Removes & returns an object blocks if empty A pool is a set of objects. The pool interface provides Put() and Remove() methods that respectively insert and remove items from the pool. Familiar classes such as stacks and queues can be viewed as pools that provide additional ordering guarantees. Art of Multiprocessor Programming

3 Simple Locking Implementation
put One natural way to implement a shared pool is to have each method lock the pool, perhaps by making both Put() and Remove() synchronized methods. put Art of Multiprocessor Programming
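As a concrete point of reference, here is a minimal sketch of that heavy-handed approach (hypothetical class name LockedPool, not code from the slides), with both methods synchronized on the pool object and an unbounded backing collection so that only remove() blocks:

import java.util.ArrayDeque;

// Sketch only: every call serializes on the pool's intrinsic lock.
public class LockedPool implements Pool {
  private final ArrayDeque<Object> items = new ArrayDeque<>();

  public synchronized void put(Object x) {
    items.add(x);
    notifyAll();                              // wake threads blocked in remove()
  }

  public synchronized Object remove() {
    while (items.isEmpty()) {                 // block while empty
      try { wait(); } catch (InterruptedException ignore) {}
    }
    return items.remove();
  }
}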

4 Simple Locking Implementation
put Problem: hot-spot contention The problem, of course, is that such an approach is too heavy-handed, because the lock itself would become a hot-spot. We would prefer an implementation in which Pool methods could work in parallel. put Art of Multiprocessor Programming

5 Simple Locking Implementation
put Problem: hot-spot contention A second problem is whether it is even possible to do counting in parallel. put Problem: sequential bottleneck Art of Multiprocessor Programming

6 Simple Locking Implementation
Solution: Queue Lock put Problem: hot-spot contention The memory contention problem can be solved by using a scalable lock, such as one of the queue locks studied earlier. put Problem: sequential bottleneck Art of Multiprocessor Programming

7 Simple Locking Implementation
Solution: Queue Lock put Problem: hot-spot contention We are still faced with a much harder problem: can we parallelize counting, or is counting inherently sequential? put Problem: sequential bottleneck Solution??? Art of Multiprocessor Programming

8 Counting Implementation
put 19 20 21 19 20 21 remove Consider the following alternative. The pool itself is an array of items, where each array element either contains an item, or is null. We route threads through two GetAndIncrement() counters. Threads calling Put() increment one counter to choose an array index into which the new item should be placed. (If that slot is full, the thread waits until it becomes empty.) Similarly, threads calling Remove() increment another counter to choose an array index from which an item should be removed. (If that slot is empty, the thread waits until it becomes full.) Art of Multiprocessor Programming
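A rough sketch of that two-counter design (hypothetical class name CountingPool; the cyclic indexing and the busy-waiting on slots are assumptions, not code from the slides):

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Sketch only: two shared counters route put() and remove() calls to slots.
public class CountingPool implements Pool {
  private final AtomicReferenceArray<Object> slots;
  private final AtomicInteger putCounter = new AtomicInteger();
  private final AtomicInteger removeCounter = new AtomicInteger();

  public CountingPool(int capacity) {
    slots = new AtomicReferenceArray<>(capacity);
  }

  public void put(Object x) {
    int i = putCounter.getAndIncrement() % slots.length();
    while (!slots.compareAndSet(i, null, x)) {}         // wait until the slot is empty
  }

  public Object remove() {
    int i = removeCounter.getAndIncrement() % slots.length();
    Object x;
    while ((x = slots.getAndSet(i, null)) == null) {}   // wait until the slot is full
    return x;
  }
}

The two getAndIncrement() calls are exactly the shared counters whose parallelization the rest of this lecture addresses.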

9 Counting Implementation
put 19 20 21 19 20 21 remove This approach replaces one bottleneck, the lock, with two, the counters. Naturally, two bottlenecks are better than one (think about that claim for a second). But are shared counters necessarily bottlenecks? Can we implement highly-concurrent shared counters? Only the counters are sequential Art of Multiprocessor Programming

10 Art of Multiprocessor Programming
Shared Counter 3 2 1 1 2 3 The problem of building a concurrent pool reduces to the problem of building a concurrent counter. But what do we mean by a counter? Art of Multiprocessor Programming

11 Art of Multiprocessor Programming
Shared Counter No duplication 3 2 1 1 2 3 Well, first, we want to be sure that two threads never take the same value. Art of Multiprocessor Programming

12 Art of Multiprocessor Programming
Shared Counter No duplication No Omission 3 2 1 1 2 3 Second, we want to make sure that the threads do not collectively skip a value. Art of Multiprocessor Programming

13 Shared Counter No duplication No Omission Not necessarily linearizable
3 2 1 1 2 3 Finally, we will not insist that counters be linearizable. Later on, we will see that linearizable counters are much more expensive than non-linearizable counters. You get what you pay for, and you pay for what you get. Not necessarily linearizable Art of Multiprocessor Programming

14 Art of Multiprocessor Programming
Shared Counters Can we build a shared counter with Low memory contention, and Real parallelism? Locking Can use queue locks to reduce contention No help with parallelism issue … We face two challenges. Can we avoid memory contention, where too many threads try to access the same memory location, stressing the underlying communication network and cache coherence protocols? Can we achieve real parallelism? Is incrementing a counter an inherently sequential operation, or is it possible for n threads to increment a counter in time less than it takes for one thread to increment a counter n times? Art of Multiprocessor Programming

15 Software Combining Tree
4 Contention: All spinning local Parallelism: Potential n/log n speedup A combining tree is a tree of nodes, where each node contains bookkeeping information. The counter's value is stored at the root. Each thread is assigned a leaf, and at most two threads share a leaf. To increment the counter, a thread starts at its leaf, and works its way up the tree to the root. If two threads reach a node at approximately the same time, then they combine their increments by adding them together. One thread, the first, propagates their combined increments up the tree, while the other, the second, waits for the first to complete their combined work. Art of Multiprocessor Programming
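One way such a tree might be set up (a sketch, not the slides' code; the Node constructors used here are assumptions, consistent with the Node state sketched later):

// Sketch: a combining tree with `width` leaves, stored as a complete binary tree.
// Thread t starts at leaf t/2, so at most two threads share a leaf.
public class CombiningTree {
  Node[] nodes;
  Node[] leaves;

  public CombiningTree(int width) {
    nodes = new Node[2 * width - 1];
    nodes[0] = new Node();                          // root node
    for (int i = 1; i < nodes.length; i++)
      nodes[i] = new Node(nodes[(i - 1) / 2]);      // each node keeps a parent pointer
    leaves = new Node[width];
    for (int i = 0; i < width; i++)
      leaves[i] = nodes[nodes.length - width + i];  // the last `width` nodes are the leaves
  }

  Node leafFor(int threadId) {
    return leaves[threadId / 2];                    // two threads per leaf
  }
}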

16 Art of Multiprocessor Programming
Combining Trees Here, the counter is zero, and the blue and yellow processors are about to add to the counter. Art of Multiprocessor Programming

17 Art of Multiprocessor Programming
Combining Trees +3 Blue registers its request to add three at its leaf. Art of Multiprocessor Programming

18 Art of Multiprocessor Programming
Combining Trees +3 +2 At about the same time, yellow registers its request to add two at its leaf. Art of Multiprocessor Programming

19 Two threads meet, combine sums
Combining Trees Two threads meet, combine sums +3 +2 These two requests meet at the shared node, and are combined .. Art of Multiprocessor Programming

20 Two threads meet, combine sums
Combining Trees +5 Two threads meet, combine sums +3 +2 the blue three and the yellow two are combined into a green five Art of Multiprocessor Programming

21 Combined sum added to root
Combining Trees 5 Combined sum added to root +5 +3 +2 These combined requests are added to the root … Art of Multiprocessor Programming

22 Result returned to children
Combining Trees Result returned to children 5 +3 +2 Art of Multiprocessor Programming

23 Results returned to threads
Combining Trees 5 Results returned to threads 3 Each node remembers enough bookkeeping information to distribute correct results among the callers. Art of Multiprocessor Programming

24 Art of Multiprocessor Programming
Devil in the Details What if threads don’t arrive at the same time? Wait for a partner to show up? How long to wait? Waiting times add up … Instead Use multi-phase algorithm Try to wait in parallel … Art of Multiprocessor Programming

25 Art of Multiprocessor Programming
Combining Status enum CStatus{ IDLE, FIRST, SECOND, DONE, ROOT}; Art of Multiprocessor Programming

26 Art of Multiprocessor Programming
Combining Status enum CStatus{ IDLE, FIRST, SECOND, DONE, ROOT}; Nothing going on Art of Multiprocessor Programming

27 Art of Multiprocessor Programming
Combining Status enum CStatus{ IDLE, FIRST, SECOND, DONE, ROOT}; 1st thread, in search of a partner for combining, will return soon to check for a 2nd thread Art of Multiprocessor Programming

28 2nd thread arrived with value for combining
Combining Status enum CStatus{ IDLE, FIRST, SECOND, DONE, ROOT}; 2nd thread arrived with value for combining Art of Multiprocessor Programming

29 1st thread has completed operation & deposited result for 2nd thread
Combining Status enum CStatus{ IDLE, FIRST, SECOND, DONE, ROOT}; 1st thread has completed operation & deposited result for 2nd thread Art of Multiprocessor Programming

30 Special case: root node
Combining Status enum CStatus{ IDLE, FIRST, SECOND, DONE, ROOT}; Special case: root node Art of Multiprocessor Programming

31 Art of Multiprocessor Programming
Node Synchronization Short-term Synchronized methods Consistency during method call Long-term Boolean locked field Consistency across calls Art of Multiprocessor Programming
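Read together, these slides imply roughly the following per-node state (a sketch; field names follow the code fragments on the following slides, the constructors are assumptions):

// Sketch: the state each combining-tree node carries (CStatus is the enum above).
class Node {
  CStatus cStatus;       // combining state, driven by the protocol below
  boolean locked;        // long-term synchronization: consistency across method calls
  int firstValue;        // value deposited (possibly already combined) by the 1st thread
  int secondValue;       // value deposited by the 2nd thread
  int result;            // at the root: the counter itself; elsewhere: the result handed back down
  Node parent;           // null at the root

  Node()            { cStatus = CStatus.ROOT; }                        // root constructor
  Node(Node parent) { this.parent = parent; cStatus = CStatus.IDLE; }  // all other nodes
}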

32 Art of Multiprocessor Programming
Phases Precombining Set up combining rendez-vous Combining Collect and combine operations Operation Hand off to higher thread Distribution Distribute results to waiting threads Art of Multiprocessor Programming
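Assembled from the navigation fragments on the slides that follow, a whole increment looks roughly like this (a sketch; leafFor() is the helper from the CombiningTree sketch earlier, and ThreadID is an assumed per-thread id utility):

// Sketch: one getAndIncrement() call, phase by phase.
public int getAndIncrement() {
  java.util.ArrayDeque<Node> stack = new java.util.ArrayDeque<>();
  Node myLeaf = leafFor(ThreadID.get());

  // Phase 1: precombining - climb while nodes say to continue, remember where we stopped.
  Node node = myLeaf;
  while (node.precombine()) node = node.parent;
  Node stop = node;

  // Phase 2: combining - revisit the same path, accumulating combined values.
  int combined = 1;
  for (node = myLeaf; node != stop; node = node.parent) {
    combined = node.combine(combined);
    stack.push(node);
  }

  // Phase 3: operation - hand the combined increment to the stop node (possibly the root).
  int prior = stop.op(combined);

  // Phase 4: distribution - walk back down, delivering results to waiting partners.
  while (!stack.isEmpty()) stack.pop().distribute(prior);
  return prior;
}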

33 Art of Multiprocessor Programming
Precombining Phase IDLE Examine status Art of Multiprocessor Programming

34 If IDLE, promise to return to look for partner
Precombining Phase If IDLE, promise to return to look for partner FIRST Art of Multiprocessor Programming

35 Art of Multiprocessor Programming
Precombining Phase At ROOT, turn back FIRST Art of Multiprocessor Programming

36 Art of Multiprocessor Programming
Precombining Phase FIRST Art of Multiprocessor Programming

37 If FIRST, I’m willing to combine, but lock for now
Precombining Phase If FIRST, I’m willing to combine, but lock for now SECOND Art of Multiprocessor Programming

38 Art of Multiprocessor Programming
Code Tree class In charge of navigation Node class Combining state Synchronization state Bookkeeping Art of Multiprocessor Programming

39 Precombining Navigation
Node node = myLeaf; while (node.precombine()) { node = node.parent; } Node stop = node; Art of Multiprocessor Programming

40 Precombining Navigation
Node node = myLeaf; while (node.precombine()) { node = node.parent; } Node stop = node; Start at leaf Art of Multiprocessor Programming

41 Precombining Navigation
Node node = myLeaf; while (node.precombine()) { node = node.parent; } Node stop = node; Move up while instructed to do so Art of Multiprocessor Programming

42 Precombining Navigation
Node node = myLeaf; while (node.precombine()) { node = node.parent; } Node stop = node; Remember where we stopped Art of Multiprocessor Programming

43 Art of Multiprocessor Programming
Precombining Node synchronized boolean precombine() { while (locked) wait(); switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Art of Multiprocessor Programming

44 Short-term synchronization
Precombining Node synchronized boolean precombine() { while (locked) wait(); switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Short-term synchronization Art of Multiprocessor Programming

45 Wait while node is locked
Synchronization synchronized boolean precombine() { while (locked) wait(); switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Wait while node is locked Art of Multiprocessor Programming

46 Check combining status
Precombining Node synchronized boolean precombine() { while (locked) wait(); switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Check combining status Art of Multiprocessor Programming

47 I will return to look for combining value
Node was IDLE synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } I will return to look for combining value Art of Multiprocessor Programming

48 Art of Multiprocessor Programming
Precombining Node synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Continue up the tree Art of Multiprocessor Programming

49 Art of Multiprocessor Programming
I’m the 2nd Thread synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } If 1st thread has promised to return, lock node so it won’t leave without me Art of Multiprocessor Programming

50 Prepare to deposit 2nd value
Precombining Node synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Prepare to deposit 2nd value Art of Multiprocessor Programming

51 End of phase 1, don’t continue up tree
Precombining Node End of phase 1, don’t continue up tree synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Art of Multiprocessor Programming

52 If root, phase 1 ends, don’t continue up tree
Node is the Root If root, phase 1 ends, don’t continue up tree synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Art of Multiprocessor Programming

53 Always check for unexpected values!
Precombining Node synchronized boolean precombine() { while (locked) {wait();} switch (cStatus) { case IDLE: cStatus = CStatus.FIRST; return true; case FIRST: locked = true; cStatus = CStatus.SECOND; return false; case ROOT: return false; default: throw new PanicException() } Always check for unexpected values! Art of Multiprocessor Programming

54 1st thread locked out until 2nd provides value
Combining Phase 1st thread locked out until 2nd provides value SECOND +3 Art of Multiprocessor Programming

55 2nd thread deposits value to be combined, unlocks node, & waits …
Combining Phase 2nd thread deposits value to be combined, unlocks node, & waits … SECOND 2 +3 zzz Art of Multiprocessor Programming

56 1st thread moves up the tree with combined value …
Combining Phase +5 1st thread moves up the tree with combined value … SECOND 2 +3 +2 zzz Art of Multiprocessor Programming

57 2nd thread has not yet deposited value …
Combining (reloaded) FIRST 2nd thread has not yet deposited value … Art of Multiprocessor Programming

58 1st thread is alone, locks out late partner
Combining (reloaded) 1st thread is alone, locks out late partner FIRST +3 Art of Multiprocessor Programming

59 Art of Multiprocessor Programming
Combining (reloaded) Stop at root +3 FIRST +3 Art of Multiprocessor Programming

60 2nd thread’s phase 1 visit locked out
Combining (reloaded) +3 FIRST 2nd thread’s phase 1 visit locked out +3 Art of Multiprocessor Programming

61 Art of Multiprocessor Programming
Combining Navigation node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; } Art of Multiprocessor Programming

62 Art of Multiprocessor Programming
Combining Navigation node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; } Start at leaf Art of Multiprocessor Programming

63 Art of Multiprocessor Programming
Combining Navigation node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; } Add 1 Art of Multiprocessor Programming

64 Revisit nodes visited in phase 1
Combining Navigation node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; } Revisit nodes visited in phase 1 Art of Multiprocessor Programming

65 Accumulate combined values, if any
Combining Navigation node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; } Accumulate combined values, if any Art of Multiprocessor Programming

66 We will retraverse path in reverse order …
Combining Navigation node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; } We will retraverse path in reverse order … Art of Multiprocessor Programming

67 Art of Multiprocessor Programming
Combining Navigation node = myLeaf; int combined = 1; while (node != stop) { combined = node.combine(combined); stack.push(node); node = node.parent; } Move up the tree Art of Multiprocessor Programming

68 Art of Multiprocessor Programming
Combining Phase Node synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … } Art of Multiprocessor Programming

69 Wait until node is unlocked
Combining Phase Node synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … } Wait until node is unlocked Art of Multiprocessor Programming

70 Lock out late attempts to combine
Combining Phase Node synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … } Lock out late attempts to combine Art of Multiprocessor Programming

71 Remember our contribution
Combining Phase Node synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … } Remember our contribution Art of Multiprocessor Programming

72 Art of Multiprocessor Programming
Combining Phase Node synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … } Check status Art of Multiprocessor Programming

73 Art of Multiprocessor Programming
Combining Phase Node synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … } 1st thread is alone Art of Multiprocessor Programming

74 Art of Multiprocessor Programming
Combining Node synchronized int combine(int combined) { while (locked) wait(); locked = true; firstValue = combined; switch (cStatus) { case FIRST: return firstValue; case SECOND: return firstValue + secondValue; default: … } Combine with 2nd thread Art of Multiprocessor Programming

75 Add combined value to root, start back down (phase 4)
Operation Phase 5 +5 Add combined value to root, start back down (phase 4) +3 +2 zzz Art of Multiprocessor Programming

76 Operation Phase (reloaded)
5 Leave value to be combined … SECOND 2 Art of Multiprocessor Programming

77 Operation Phase (reloaded)
5 Unlock, and wait … SECOND 2 +2 zzz Art of Multiprocessor Programming

78 Operation Phase Navigation
prior = stop.op(combined); Art of Multiprocessor Programming

79 Operation Phase Navigation
prior = stop.op(combined); Get result of combining Art of Multiprocessor Programming

80 Art of Multiprocessor Programming
Operation Phase Node synchronized int op(int combined) { switch (cStatus) { case ROOT: int oldValue = result; result += combined; return oldValue; case SECOND: secondValue = combined; locked = false; notifyAll(); while (cStatus != CStatus.DONE) wait(); cStatus = CStatus.IDLE; return result; default: … Art of Multiprocessor Programming

81 Add sum to root, return prior value
At Root synchronized int op(int combined) { switch (cStatus) { case ROOT: int oldValue = result; result += combined; return oldValue; case SECOND: secondValue = combined; locked = false; notifyAll(); while (cStatus != CStatus.DONE) wait(); cStatus = CStatus.IDLE; return result; default: … Add sum to root, return prior value Art of Multiprocessor Programming

82 Deposit value for later combining …
Intermediate Node synchronized int op(int combined) { switch (cStatus) { case ROOT: int oldValue = result; result += combined; return oldValue; case SECOND: secondValue = combined; locked = false; notifyAll(); while (cStatus != CStatus.DONE) wait(); cStatus = CStatus.IDLE; return result; default: … Deposit value for later combining … Art of Multiprocessor Programming

83 Unlock node, notify 1st thread
Intermediate Node synchronized int op(int combined) { switch (cStatus) { case ROOT: int oldValue = result; result += combined; return oldValue; case SECOND: secondValue = combined; locked = false; notifyAll(); while (cStatus != CStatus.DONE) wait(); cStatus = CStatus.IDLE; return result; default: … Unlock node, notify 1st thread Art of Multiprocessor Programming

84 Wait for 1st thread to deliver results
Intermediate Node synchronized int op(int combined) { switch (cStatus) { case ROOT: int oldValue = result; result += combined; return oldValue; case SECOND: secondValue = combined; locked = false; notifyAll(); while (cStatus != CStatus.DONE) wait(); cStatus = CStatus.IDLE; return result; default: … Wait for 1st thread to deliver results Art of Multiprocessor Programming

85 Art of Multiprocessor Programming
Intermediate Node synchronized int op(int combined) { switch (cStatus) { case ROOT: int oldValue = result; result += combined; return oldValue; case SECOND: secondValue = combined; locked = false; notifyAll(); while (cStatus != CStatus.DONE) wait(); cStatus = CStatus.IDLE; return result; default: … Unlock node & return Art of Multiprocessor Programming

86 Art of Multiprocessor Programming
Distribution Phase 5 Move down with result SECOND zzz Art of Multiprocessor Programming

87 Leave result for 2nd thread & lock node
Distribution Phase 5 Leave result for 2nd thread & lock node SECOND 2 zzz Art of Multiprocessor Programming

88 Move result back down tree
Distribution Phase 5 Move result back down tree SECOND 2 zzz Art of Multiprocessor Programming

89 2nd thread awakens, unlocks, takes value
Distribution Phase 5 2nd thread awakens, unlocks, takes value IDLE 3 Art of Multiprocessor Programming

90 Distribution Phase Navigation
while (!stack.empty()) { node = stack.pop(); node.distribute(prior); } return prior; Art of Multiprocessor Programming

91 Distribution Phase Navigation
while (!stack.empty()) { node = stack.pop(); node.distribute(prior); } return prior; Traverse path in reverse order Art of Multiprocessor Programming

92 Distribution Phase Navigation
while (!stack.empty()) { node = stack.pop(); node.distribute(prior); } return prior; Distribute results to waiting 2nd threads Art of Multiprocessor Programming

93 Distribution Phase Navigation
while (!stack.empty()) { node = stack.pop(); node.distribute(prior); } return prior; Return result to caller Art of Multiprocessor Programming

94 Art of Multiprocessor Programming
Distribution Phase synchronized void distribute(int prior) { switch (cStatus) { case FIRST: cStatus = CStatus.IDLE; locked = false; notifyAll(); return; case SECOND: result = prior + firstValue; cStatus = CStatus.DONE; notifyAll(); default: … Art of Multiprocessor Programming

95 No combining, unlock node & reset
Distribution Phase synchronized void distribute(int prior) { switch (cStatus) { case FIRST: cStatus = CStatus.IDLE; locked = false; notifyAll(); return; case SECOND: result = prior + firstValue; cStatus = CStatus.DONE; notifyAll(); default: … No combining, unlock node & reset Art of Multiprocessor Programming

96 Notify 2nd thread that result is available
Distribution Phase synchronized void distribute(int prior) { switch (cStatus) { case FIRST: cStatus = CStatus.IDLE; locked = false; notifyAll(); return; case SECOND: result = prior + firstValue; cStatus = CStatus.DONE; notifyAll(); default: … Notify 2nd thread that result is available Art of Multiprocessor Programming

97 Art of Multiprocessor Programming
Bad News: High Latency +5 Log n +2 +3 Art of Multiprocessor Programming

98 Good News: Real Parallelism
1 thread +5 +2 +3 2 threads Art of Multiprocessor Programming

99 Art of Multiprocessor Programming
Throughput Puzzles Ideal circumstances All n threads move together, combine n increments in O(log n) time Worst circumstances All n threads slightly skewed, locked out n increments in O(n · log n) time Art of Multiprocessor Programming

100 Index Distribution Benchmark
void indexBench(int iters, int work) { int i = 0; while (i < iters) { i = r.getAndIncrement(); Thread.sleep(random() % work); } } Art of Multiprocessor Programming

101 Index Distribution Benchmark
void indexBench(int iters, int work) { int i = 0; while (i < iters) { i = r.getAndIncrement(); Thread.sleep(random() % work); } } How many iterations Art of Multiprocessor Programming

102 Index Distribution Benchmark
void indexBench(int iters, int work) { int i = 0; while (i < iters) { i = r.getAndIncrement(); Thread.sleep(random() % work); } } Expected time between incrementing counter Art of Multiprocessor Programming

103 Index Distribution Benchmark
void indexBench(int iters, int work) { int i = 0; while (i < iters) { i = r.getAndIncrement(); Thread.sleep(random() % work); } } Take a number Art of Multiprocessor Programming

104 Index Distribution Benchmark
void indexBench(int iters, int work) { int i = 0; while (i < iters) { i = r.getAndIncrement(); Thread.sleep(random() % work); } } Pretend to work (more work, less concurrency) Art of Multiprocessor Programming

105 Performance Benchmarks
Alewife NUMA architecture (simulated) Throughput: average number of inc operations in a 1-million-cycle period. Latency: average number of simulator cycles per inc operation. Art of Multiprocessor Programming

106 Art of Multiprocessor Programming
Performance plots (work = 0): latency (cycles per operation) and throughput (operations per million cycles) versus number of processors, comparing Splock with Ctree[n]. Art of Multiprocessor Programming


108 The Combining Paradigm
Implements any RMW operation When tree is loaded Takes 2 log n steps for n requests Very sensitive to load fluctuations: if the arrival rates drop the combining rates drop overall performance deteriorates! Art of Multiprocessor Programming

109 Combining Load Sensitivity
Plot: throughput versus number of processors; notice the load fluctuations. Art of Multiprocessor Programming

110 Art of Multiprocessor Programming
Combining Rate vs Work Art of Multiprocessor Programming

111 Art of Multiprocessor Programming
Better to Wait Longer (plot: throughput versus processors for short, medium, and indefinite waits) The figure compares combining tree performance when work is high under three waiting policies: wait 16 cycles, wait 256 cycles, and wait indefinitely. When the number of processors is larger than 64, indefinite waiting is by far the best policy. This follows because an un-combined token message blocks later-arriving token messages from progressing until it returns from traversing the root, so a large performance penalty is paid for each un-combined message. Because the chances of combining are good at higher arrival rates, we found that when work = 0, simulations using more than four processors justify indefinite waiting. Art of Multiprocessor Programming

112 Art of Multiprocessor Programming
Conclusions Combining Trees Work well under high contention Sensitive to load fluctuations Can be used for getAndMumble() ops Next Counting networks A different approach … Art of Multiprocessor Programming

113 Art of Multiprocessor Programming
A Balancer Input wires Output wires Art of Multiprocessor Programming

114 Tokens Traverse Balancers
Token i enters on any wire leaves on wire i mod (fan-out) Art of Multiprocessor Programming

115 Tokens Traverse Balancers
Art of Multiprocessor Programming

116 Tokens Traverse Balancers
Art of Multiprocessor Programming

117 Tokens Traverse Balancers
Art of Multiprocessor Programming

118 Tokens Traverse Balancers
Art of Multiprocessor Programming

119 Tokens Traverse Balancers
Arbitrary input distribution Balanced output distribution Art of Multiprocessor Programming

120 Art of Multiprocessor Programming
Smoothing Network k-smooth property Art of Multiprocessor Programming

121 Art of Multiprocessor Programming
Counting Network step property Art of Multiprocessor Programming
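Stated precisely (standard definitions, with y_i the number of tokens that have exited on output wire i in a quiescent state):

|y_i - y_j| \le k \quad \text{for all } i, j \qquad \text{($k$-smooth property, smoothing network)}

0 \le y_i - y_j \le 1 \quad \text{for all } i < j \qquad \text{(step property, counting network)}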

122 Counting Networks Count!
Output wire i hands out the values i, i + 4, i + 8, …: wire 0 takes 0, 4, 8, …; wire 1 takes 1, 5, 9, …; wire 2 takes 2, 6, 10, …; wire 3 takes 3, 7, 11, … Art of Multiprocessor Programming

123 Art of Multiprocessor Programming
Bitonic[4] Art of Multiprocessor Programming

124 Art of Multiprocessor Programming
Bitonic[4] Art of Multiprocessor Programming

125 Art of Multiprocessor Programming
Bitonic[4] Art of Multiprocessor Programming

126 Art of Multiprocessor Programming
Bitonic[4] Art of Multiprocessor Programming

127 Art of Multiprocessor Programming
Bitonic[4] Art of Multiprocessor Programming

128 Art of Multiprocessor Programming
Bitonic[4] Art of Multiprocessor Programming

129 Art of Multiprocessor Programming
Counting Networks Good for counting number of tokens low contention no sequential bottleneck high throughput practical networks of depth (log2 w)^2 Art of Multiprocessor Programming

130 Bitonic[k] is not Linearizable
Art of Multiprocessor Programming

131 Bitonic[k] is not Linearizable
Art of Multiprocessor Programming

132 Bitonic[k] is not Linearizable
2 Art of Multiprocessor Programming

133 Bitonic[k] is not Linearizable
2 Art of Multiprocessor Programming

134 Bitonic[k] is not Linearizable
Problem is: Red finished before Yellow started Red took 2 Yellow took 0 2 Art of Multiprocessor Programming

135 Shared Memory Implementation
class Balancer { boolean toggle; Balancer[] next; synchronized boolean flip() { boolean oldValue = this.toggle; this.toggle = !this.toggle; return oldValue; } } Art of Multiprocessor Programming

136 Shared Memory Implementation
class Balancer { boolean toggle; Balancer[] next; synchronized boolean flip() { boolean oldValue = this.toggle; this.toggle = !this.toggle; return oldValue; } } state Art of Multiprocessor Programming

137 Shared Memory Implementation
class Balancer { boolean toggle; Balancer[] next; synchronized boolean flip() { boolean oldValue = this.toggle; this.toggle = !this.toggle; return oldValue; } } Output connections to balancers Art of Multiprocessor Programming

138 Shared Memory Implementation
class Balancer { boolean toggle; Balancer[] next; synchronized boolean flip() { boolean oldValue = this.toggle; this.toggle = !this.toggle; return oldValue; } } Get-and-complement Art of Multiprocessor Programming

139 Shared Memory Implementation
Balancer traverse(Balancer b) { while (!b.isLeaf()) { boolean toggle = b.flip(); if (toggle) b = b.next[0]; else b = b.next[1]; } return b; } Art of Multiprocessor Programming

140 Shared Memory Implementation
Balancer traverse(Balancer b) { while (!b.isLeaf()) { boolean toggle = b.flip(); if (toggle) b = b.next[0]; else b = b.next[1]; } return b; } Stop when we get to the end Art of Multiprocessor Programming

141 Shared Memory Implementation
Balancer traverse(Balancer b) { while (!b.isLeaf()) { boolean toggle = b.flip(); if (toggle) b = b.next[0]; else b = b.next[1]; } return b; } Flip state Art of Multiprocessor Programming

142 Shared Memory Implementation
Balancer traverse(Balancer b) { while (!b.isLeaf()) { boolean toggle = b.flip(); if (toggle) b = b.next[0]; else b = b.next[1]; } return b; } Exit on wire Art of Multiprocessor Programming

143 Alternative Implementation: Message-Passing
Art of Multiprocessor Programming

144 Art of Multiprocessor Programming
Bitonic[2k] Schematic Bitonic[k] Merger[2k] Bitonic[k] Art of Multiprocessor Programming

145 Art of Multiprocessor Programming
Bitonic[2k] Layout Art of Multiprocessor Programming

146 Unfolded Bitonic Network
Art of Multiprocessor Programming

147 Unfolded Bitonic Network
Merger[8] Art of Multiprocessor Programming

148 Unfolded Bitonic Network
Art of Multiprocessor Programming

149 Unfolded Bitonic Network
Merger[4] Merger[4] Art of Multiprocessor Programming

150 Unfolded Bitonic Network
Art of Multiprocessor Programming

151 Unfolded Bitonic Network
Merger[2] Merger[2] Merger[2] Merger[2] Art of Multiprocessor Programming

152 Art of Multiprocessor Programming
Bitonic[k] Depth Width k Depth is (log2 k)(log2 k + 1)/2 Art of Multiprocessor Programming
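Working the formula for a couple of small widths (widths are powers of two):

\mathrm{depth}(\mathrm{Bitonic}[k]) = \frac{\log_2 k \, (\log_2 k + 1)}{2}, \qquad \mathrm{Bitonic}[4]: \frac{2 \cdot 3}{2} = 3, \qquad \mathrm{Bitonic}[8]: \frac{3 \cdot 4}{2} = 6.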

153 Art of Multiprocessor Programming
Merger[2k] Merger[2k] Art of Multiprocessor Programming

154 Art of Multiprocessor Programming
Merger[2k] Schematic Merger[k] Art of Multiprocessor Programming

155 Art of Multiprocessor Programming
Merger[2k] Layout Art of Multiprocessor Programming

156 Art of Multiprocessor Programming
Lemma If a sequence has the step property … Art of Multiprocessor Programming

157 Art of Multiprocessor Programming
Lemma So does its even subsequence Art of Multiprocessor Programming

158 Art of Multiprocessor Programming
Lemma And its odd subsequence Art of Multiprocessor Programming

159 Art of Multiprocessor Programming
Merger[2k] Schematic even Merger[k] Bitonic[k] odd Bitonic[k] odd Merger[k] even Art of Multiprocessor Programming

160 Proof Outline even odd odd even Outputs from Bitonic[k]
Inputs to Merger[k] Art of Multiprocessor Programming

161 Art of Multiprocessor Programming
Proof Outline even odd odd even Inputs to Merger[k] Outputs of Merger[k] Art of Multiprocessor Programming

162 Art of Multiprocessor Programming
Proof Outline Outputs of Merger[k] Outputs of last layer Art of Multiprocessor Programming

163 Periodic Network Block
Art of Multiprocessor Programming

164 Periodic Network Block
Art of Multiprocessor Programming

165 Periodic Network Block
Art of Multiprocessor Programming

166 Periodic Network Block
Art of Multiprocessor Programming

167 Art of Multiprocessor Programming
Block[2k] Schematic Block[k] Block[k] Art of Multiprocessor Programming

168 Art of Multiprocessor Programming
Block[2k] Layout Art of Multiprocessor Programming

169 Art of Multiprocessor Programming
Periodic[8] Art of Multiprocessor Programming

170 Art of Multiprocessor Programming
Network Depth Each block[k] has depth log2 k Need log2 k blocks Grand total of (log2 k)^2 Art of Multiprocessor Programming
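So, for the same width, the periodic construction is somewhat deeper than the bitonic one; for example:

\mathrm{depth}(\mathrm{Periodic}[k]) = (\log_2 k)^2, \qquad \mathrm{Periodic}[8]: 3^2 = 9 \quad \text{vs.} \quad \mathrm{Bitonic}[8]: \frac{3 \cdot 4}{2} = 6.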

171 Art of Multiprocessor Programming
Lower Bound on Depth Theorem: The depth of any width w counting network is at least Ω(log w). Theorem: there exists a counting network of Θ(log w) depth. Unfortunately, the proof is non-constructive and the constants are in the thousands. Art of Multiprocessor Programming

172 Art of Multiprocessor Programming
Sequential Theorem If a balancing network counts Sequentially, meaning that Tokens traverse one at a time Then it counts Even if tokens traverse concurrently Art of Multiprocessor Programming

173 Art of Multiprocessor Programming
Red First, Blue Second Art of Multiprocessor Programming (2)

174 Art of Multiprocessor Programming
Blue First, Red Second Art of Multiprocessor Programming (2)

175 Art of Multiprocessor Programming
Either Way Same balancer states Art of Multiprocessor Programming

176 Order Doesn’t Matter Same balancer states Same output distribution
Art of Multiprocessor Programming

177 Index Distribution Benchmark
void indexBench(int iters, int work) { int i = 0; while (i < iters) { i = fetchAndInc(); Thread.sleep(random() % work); } } Art of Multiprocessor Programming

178 Performance (Simulated)
Higher is better! Throughput Counting benchmark: Ignore startup and wind-down times Spin lock is best at low concurrency, drops off rapidly MCS queue lock Spin lock Number processors * All graphs taken from Herlihy, Lim, Shavit, copyright ACM. Art of Multiprocessor Programming

179 Performance (Simulated)
64-leaf combining tree 80-balancer counting network Throughput Higher is better! Counting benchmark: Ignore startup and wind-down times Spin lock is best at low concurrency, drops off rapidly MCS queue lock Spin lock Number processors * All graphs taken from Herlihy, Lim, Shavit, copyright ACM. Art of Multiprocessor Programming

180 Performance (Simulated)
64-leaf combining tree 80-balancer counting network Combining and counting are pretty close Throughput Counting benchmark: Ignore startup and wind-down times Spin lock is best at low concurrency, drops off rapidly MCS queue lock Spin lock Number processors * All graphs taken from Herlihy, Lim, Shavit, copyright ACM. Art of Multiprocessor Programming

181 Performance (Simulated)
64-leaf combining tree 80-balancer counting network But they beat the hell out of the competition! Throughput Counting benchmark: Ignore startup and wind-down times Spin lock is best at low concurrency, drops off rapidly MCS queue lock Spin lock Number processors * All graphs taken from Herlihy, Lim, Shavit, copyright ACM. Art of Multiprocessor Programming

182 Saturation and Performance
Undersaturated P < w log w Optimal performance Saturated P = w log w Oversaturated P > w log w Art of Multiprocessor Programming
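As a worked instance of these thresholds (my numbers, taking the logarithm base 2): for a width w = 16 network,

w \log_2 w = 16 \cdot 4 = 64, \qquad P < 64 \text{ (undersaturated)}, \quad P = 64 \text{ (saturated)}, \quad P > 64 \text{ (oversaturated)}.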

183 Art of Multiprocessor Programming
Throughput vs. Size Bitonic[16] Throughput Bitonic[8] Bitonic[4] The simulation was extended to 80 processors, one balancer per processor in the 16-wide network, and as can be seen the counting network scales… Number processors Art of Multiprocessor Programming

184 Art of Multiprocessor Programming
Shared Pool Put counting network Remove counting network Art of Multiprocessor Programming

185 Art of Multiprocessor Programming
Put/Remove Network Guarantees never: Get waiting for an item while a Put has already deposited one Otherwise OK to wait Put delayed while pool slot is full Get delayed while pool slot is empty Art of Multiprocessor Programming

186 Art of Multiprocessor Programming
What About Decrements Adding arbitrary values Other operations Multiplication Vector addition Horoscope casting … Art of Multiprocessor Programming

187 Art of Multiprocessor Programming
First Step Can we decrement as well as increment? What goes up, must come down … Art of Multiprocessor Programming

188 Art of Multiprocessor Programming
Anti-Tokens Art of Multiprocessor Programming

189 Tokens & Anti-Tokens Cancel
Art of Multiprocessor Programming

190 Tokens & Anti-Tokens Cancel
Art of Multiprocessor Programming

191 Tokens & Anti-Tokens Cancel
Art of Multiprocessor Programming

192 Tokens & Anti-Tokens Cancel
As if nothing happened Art of Multiprocessor Programming

193 Art of Multiprocessor Programming
Tokens vs Antitokens Tokens read balancer flip proceed Antitokens flip balancer read proceed Art of Multiprocessor Programming
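In code, the only difference is where the complement happens (a sketch reusing the Balancer.flip() from the shared-memory implementation above; traverseAntiToken is a hypothetical helper, not the book's code):

// Sketch: an antitoken flips the balancer and then follows the balancer's *new*
// state - the complement of the old value flip() returns - so it exits on the
// opposite wire from the one a token arriving at that moment would have taken.
Balancer traverseAntiToken(Balancer b) {
  while (!b.isLeaf()) {
    boolean old = b.flip();   // flip the toggle, get its old value
    if (old)
      b = b.next[1];          // a token would have gone to next[0]
    else
      b = b.next[0];          // a token would have gone to next[1]
  }
  return b;
}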

194 Pumping Lemma Eventually, after Ω tokens, network repeats a state
Keep pumping tokens through one wire Art of Multiprocessor Programming

195 Art of Multiprocessor Programming
Anti-Token Effect token anti-token Art of Multiprocessor Programming

196 Art of Multiprocessor Programming
Observation Each anti-token on wire i Has same effect as Ω-1 tokens on wire i So network still in legal state Moreover, network width w divides Ω So Ω-1 tokens Art of Multiprocessor Programming

197 Art of Multiprocessor Programming
Before Antitoken Art of Multiprocessor Programming

198 Balancer states as if … Ω-1 is one brick shy of a load Ω-1
Art of Multiprocessor Programming

199 Post Antitoken Next token shows up here
Art of Multiprocessor Programming

200 Art of Multiprocessor Programming
Implication Counting networks with Tokens (+1) Anti-tokens (-1) Give Highly concurrent Low contention getAndIncrement + getAndDecrement methods QED Art of Multiprocessor Programming

201 Art of Multiprocessor Programming
Adding Networks Combining trees implement Fetch&add Add any number, not just 1 What about counting networks? Art of Multiprocessor Programming

202 Art of Multiprocessor Programming
Fetch-and-add Beyond getAndIncrement + getAndDecrement What about getAndAdd(x)? Atomically returns prior value And adds x to value? Not to mention getAndMultiply getAndFourierTransform? Art of Multiprocessor Programming

203 Art of Multiprocessor Programming
Bad News If an adding network Supports n concurrent tokens Then every token must traverse At least n-1 balancers In sequential executions Art of Multiprocessor Programming

204 Art of Multiprocessor Programming
Uh-Oh Adding network size depends on n Like combining trees Unlike counting networks High latency Depth linear in n Not logarithmic in w Art of Multiprocessor Programming

205 Generic Counting Network
+1 +2 +2 2 +2 2 Art of Multiprocessor Programming

206 First Token First token would visit green balancers if it runs solo +1
+2 First token would visit green balancers if it runs solo +2 +2 Art of Multiprocessor Programming

207 Art of Multiprocessor Programming
Claim Look at path of +1 token All other +2 tokens must visit some balancer on +1 token’s path Art of Multiprocessor Programming

208 Art of Multiprocessor Programming
Second Token +1 +2 +2 Takes 0 +2 Art of Multiprocessor Programming

209 Second Token Takes 0 Takes 0 They can’t both take zero! +1 +2 +2 +2
Art of Multiprocessor Programming

210 If Second avoids First’s Path
Second token Doesn’t observe first First hasn’t run Chooses 0 First token Doesn’t observe second Disjoint paths Art of Multiprocessor Programming

211 If Second avoids First’s Path
Because +1 token chooses 0 It must be ordered first So +2 token ordered second So +2 token should return 1 Something’s wrong! Art of Multiprocessor Programming

212 Second Token Halt blue token before first green balancer +1 +2 +2 +2
Art of Multiprocessor Programming

213 Art of Multiprocessor Programming
Third Token +1 +2 +2 +2 Takes 0 or 2 Art of Multiprocessor Programming

214 Third Token +1 Takes 0 +2 They can’t both take zero, and they can’t take 0 and 2! +2 +2 Takes 0 or 2 Art of Multiprocessor Programming

215 First,Second, & Third Tokens must be Ordered
Did not observe +1 token May have observed earlier +2 token Takes an even number Art of Multiprocessor Programming

216 First,Second, & Third Tokens must be Ordered
Because +1 token’s path is disjoint It chooses 0 Ordered first Rest take odd numbers But last token takes an even number Something’s wrong! Art of Multiprocessor Programming

217 Third Token Halt blue token before first green balancer +1 +2 +2 +2
Art of Multiprocessor Programming

218 Art of Multiprocessor Programming
Continuing in this way We can “park” a token In front of a balancer That token #1 will visit There are n-1 other tokens Two wires per balancer Path includes n-1 balancers! Art of Multiprocessor Programming

219 Art of Multiprocessor Programming
Theorem In any adding network In sequential executions Tokens traverse at least n-1 balancers Same arguments apply to Linearizable counting networks Multiplying networks And others Art of Multiprocessor Programming

220 Art of Multiprocessor Programming
Clip Art Art of Multiprocessor Programming

221 Art of Multiprocessor Programming
          This work is licensed under a Creative Commons Attribution-ShareAlike 2.5 License. You are free: to Share — to copy, distribute and transmit the work to Remix — to adapt the work Under the following conditions: Attribution. You must attribute the work to “The Art of Multiprocessor Programming” (but not in any way that suggests that the authors endorse you or your use of the work). Share Alike. If you alter, transform, or build upon this work, you may distribute the resulting work only under the same, similar or a compatible license. For any reuse or distribution, you must make clear to others the license terms of this work. The best way to do this is with a link to Any of the above conditions can be waived if you get permission from the copyright holder. Nothing in this license impairs or restricts the author's moral rights. Art of Multiprocessor Programming

