
1 Shared Counters and Parallelism. Companion slides for The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit, modified by Rajeev Alur for CIS640, University of Pennsylvania.

2 A Shared Pool: an unordered set of objects. put inserts an object, blocking if the pool is full; remove removes and returns an object, blocking if the pool is empty.
public interface Pool {
  public void put(Object x);
  public Object remove();
}

3 Simple Locking Implementation (figure: several threads call put on a single locked pool)

4 Simple Locking Implementation: problem: hot-spot contention

5 Simple Locking Implementation: problem: sequential bottleneck

6 Simple Locking Implementation: solution to the contention: a queue lock

7 Simple Locking Implementation: solution to the sequential bottleneck: ??? (a sketch of the locked pool follows below)
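For concreteness, here is a minimal sketch of the single-lock pool these slides depict. It is not the book's code; the list-based representation and blocking discipline are assumptions. Every put and remove goes through one monitor, which is exactly the hot spot and sequential bottleneck named above.

class LockedPool implements Pool {
  private final java.util.List<Object> items = new java.util.LinkedList<>();
  private final int capacity;

  LockedPool(int capacity) { this.capacity = capacity; }

  public synchronized void put(Object x) {
    while (items.size() == capacity) {                     // blocks if full
      try { wait(); } catch (InterruptedException ignore) {}
    }
    items.add(x);
    notifyAll();
  }

  public synchronized Object remove() {
    while (items.isEmpty()) {                              // blocks if empty
      try { wait(); } catch (InterruptedException ignore) {}
    }
    Object x = items.remove(0);
    notifyAll();
    return x;
  }
}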

8 Counting Implementation (figure: a put counter and a remove counter hand out successive slot indices 19, 20, 21, …)

9 Counting Implementation: only the counters are sequential (a pool sketch follows below)
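A minimal sketch of the counting-based pool in the figure (not on the slides): two shared counters hand out slot indices into a cyclic array, so only the counters are sequential. The Counter interface and the single-monitor slot discipline are assumptions; a real implementation would coordinate per slot, and the counters could be the combining tree built next or a counting network.

interface Counter { int getAndIncrement(); }

class CountingPool implements Pool {
  private final Counter putCounter, removeCounter;
  private final Object[] slot;

  CountingPool(int capacity, Counter put, Counter remove) {
    slot = new Object[capacity];
    putCounter = put;
    removeCounter = remove;
  }

  public void put(Object x) {
    int i = putCounter.getAndIncrement() % slot.length;    // take a slot index
    synchronized (this) {                                  // simplification: one monitor for all slots
      while (slot[i] != null) {                            // wait until the slot is free
        try { wait(); } catch (InterruptedException ignore) {}
      }
      slot[i] = x;
      notifyAll();
    }
  }

  public Object remove() {
    int i = removeCounter.getAndIncrement() % slot.length; // take a slot index
    synchronized (this) {
      while (slot[i] == null) {                            // wait until the slot is filled
        try { wait(); } catch (InterruptedException ignore) {}
      }
      Object x = slot[i];
      slot[i] = null;
      notifyAll();
      return x;
    }
  }
}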

10 Shared Counter (figure: successive values 0, 1, 2, 3 handed out to threads)

11 Shared Counter: no duplication

12 Shared Counter: no duplication, no omission

13 Shared Counter: no duplication, no omission, but not necessarily linearizable

14 Shared Counters: can we build a shared counter with low memory contention and real parallelism? Locking: queue locks can reduce contention, but they are no help with the parallelism issue … (a baseline sketch follows below)
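For contrast, a minimal lock-based counter looks like the sketch below (not from the slides; ReentrantLock stands in for a queue lock). Even with a well-behaved lock, increments still execute one at a time, which is the parallelism problem the combining tree attacks.

import java.util.concurrent.locks.ReentrantLock;

class LockedCounter {
  private final ReentrantLock lock = new ReentrantLock();  // stand-in for a queue lock
  private int value = 0;

  public int getAndIncrement() {
    lock.lock();
    try {
      return value++;      // the critical section: inherently sequential
    } finally {
      lock.unlock();
    }
  }
}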

15 Software Combining Tree (figure: a binary tree of nodes with the counter value 4 at the root). Contention: all spinning is local. Parallelism: potential n / log n speedup.

16 Combining Trees (figure: a tree of nodes with 0 at the root)

17 Combining Trees: one thread arrives with +3

18 Combining Trees: a second thread arrives with +2

19 Combining Trees: the two threads meet and combine their sums

20 Combining Trees: the two threads meet and combine their sums into +5

21 Combining Trees: the combined sum is added to the root (now 5)

22 Combining Trees: the result (the prior value 0) is returned to the children

23 Combining Trees: results (0 and 3) are returned to the threads

24 Devil in the Details: what if threads don't arrive at the same time? Wait for a partner to show up? How long to wait? Waiting times add up … Instead, use a multi-phase algorithm and try to wait in parallel …

25 Combining Status
enum CStatus { IDLE, FIRST, SECOND, DONE, ROOT };

26 Combining Status: IDLE means nothing is going on

27 Combining Status: FIRST means a 1st thread is a partner for combining and will return soon to check for a 2nd thread

28 Combining Status: SECOND means a 2nd thread has arrived with a value for combining

29 Combining Status: DONE means the 1st thread has completed the operation and deposited a result for the 2nd thread

30 Combining Status: ROOT is the special case of the root node

31 Node Synchronization: short-term: synchronized methods, for consistency during a method call. Long-term: a boolean locked field, for consistency across calls.

32 Phases: Precombining: set up the combining rendezvous. Combining: collect and combine operations. Operation: hand off to the higher thread. Distribution: distribute results to waiting threads.

33 Precombining Phase: examine the status (IDLE)

34 Precombining Phase: if IDLE, promise to return to look for a partner (set FIRST)

35 Precombining Phase: at the ROOT, turn back

36 Precombining Phase (figure: the path of FIRST nodes so far)

37 Precombining Phase: if FIRST, I'm willing to combine, but lock the node for now (set SECOND)

38 Code: the Tree class is in charge of navigation; the Node class holds the combining state, synchronization state, and bookkeeping (a skeleton follows below).
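A plausible skeleton for the Node class, pieced together from the fields the later slides use (locked, cStatus, firstValue, secondValue, result, parent); the constructors and layout are assumptions, not the book's exact code.

class Node {
  Node parent;       // tree navigation; null at the root
  boolean locked;    // long-term synchronization, consistency across calls
  CStatus cStatus;   // combining state (the enum from slide 25)
  int firstValue;    // 1st thread's (possibly already combined) contribution
  int secondValue;   // 2nd thread's contribution
  int result;        // at the root: the counter itself; elsewhere: the value handed back down

  Node() { cStatus = CStatus.ROOT; }                                  // root node
  Node(Node parent) { this.parent = parent; cStatus = CStatus.IDLE; } // interior or leaf node
}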

39 Precombining Navigation
Node node = myLeaf;
while (node.precombine()) {
  node = node.parent;
}
Node stop = node;

40 Precombining Navigation: start at the leaf

41 Precombining Navigation: move up while instructed to do so

42 Precombining Navigation: remember where we stopped

43 Precombining Node
synchronized boolean precombine() {
  while (locked) wait();
  switch (cStatus) {
    case IDLE:
      cStatus = CStatus.FIRST;
      return true;
    case FIRST:
      locked = true;
      cStatus = CStatus.SECOND;
      return false;
    case ROOT:
      return false;
    default:
      throw new PanicException();
  }
}

44 Precombining Node: short-term synchronization: the method is synchronized

45 Precombining Node: wait while the node is locked

46 Precombining Node: check the combining status

47 Node was IDLE: I will return to look for a combining value

48 Precombining Node: continue up the tree

49 I'm the 2nd Thread: if the 1st thread has promised to return, lock the node so it won't leave without me

50 Precombining Node: prepare to deposit the 2nd value

51 Precombining Node: end of phase 1 for this thread, don't continue up the tree

52 Node is the Root: if at the root, phase 1 ends; don't continue up the tree

53 Precombining Node: always check for unexpected values!

54 Combining Phase: the 1st thread (with +3) is locked out until the 2nd provides its value; the node is marked SECOND

55 Combining Phase: the 2nd thread deposits its value (2) to be combined, unlocks the node, and waits …

56 Combining Phase: the 1st thread moves up the tree with the combined value (+5) …

57 Combining (reloaded): the 2nd thread has not yet deposited its value …; the node is still FIRST

58 Combining (reloaded): the 1st thread (with +3) is alone and locks out the late partner

59 Combining (reloaded): stop at the root

60 Combining (reloaded): the 2nd thread's phase 1 visit is locked out

61 Combining Navigation
node = myLeaf;
int combined = 1;
while (node != stop) {
  combined = node.combine(combined);
  stack.push(node);
  node = node.parent;
}

62 Combining Navigation: start at the leaf

63 Combining Navigation: add 1

64 Combining Navigation: revisit the nodes visited in phase 1

65 Combining Navigation: accumulate combined values, if any

66 Combining Navigation: we will retraverse the path in reverse order …

67 Combining Navigation: move up the tree

68 Combining Phase Node
synchronized int combine(int combined) {
  while (locked) wait();
  locked = true;
  firstValue = combined;
  switch (cStatus) {
    case FIRST:
      return firstValue;
    case SECOND:
      return firstValue + secondValue;
    default:
      …
  }
}

69 Combining Phase Node: wait until the node is unlocked

70 Combining Phase Node: lock out late attempts to combine

71 Combining Phase Node: remember our contribution

72 Combining Phase Node: check the status

73 Combining Phase Node: FIRST: the 1st thread is alone

74 Combining Node: SECOND: combine with the 2nd thread

75 Operation Phase: add the combined value (+5) to the root, and start back down (phase 4)

76 Operation Phase (reloaded): leave the value (2) to be combined …; the node is SECOND

77 Operation Phase (reloaded): unlock, and wait …

78 Operation Phase Navigation
prior = stop.op(combined);

79 Operation Phase Navigation: get the result of combining

80 Operation Phase Node
synchronized int op(int combined) {
  switch (cStatus) {
    case ROOT:
      int oldValue = result;
      result += combined;
      return oldValue;
    case SECOND:
      secondValue = combined;
      locked = false;
      notifyAll();
      while (cStatus != CStatus.DONE) wait();
      locked = false;
      notifyAll();
      cStatus = CStatus.IDLE;
      return result;
    default:
      …
  }
}

81 At Root: add the sum to the root, return the prior value

82 Intermediate Node: deposit the value for later combining …

83 Intermediate Node: unlock the node, notify the 1st thread

84 Intermediate Node: wait for the 1st thread to deliver the result

85 Intermediate Node: unlock the node and return

86 Distribution Phase: move down with the result; the node is SECOND

87 Distribution Phase: leave the result for the 2nd thread and lock the node

88 Distribution Phase: move the result back down the tree

89 Distribution Phase: the 2nd thread awakens, unlocks the node, and takes its value (3); the node returns to IDLE

90 Distribution Phase Navigation
while (!stack.empty()) {
  node = stack.pop();
  node.distribute(prior);
}
return prior;

91 Distribution Phase Navigation: traverse the path in reverse order

92 Distribution Phase Navigation: distribute results to waiting 2nd threads

93 Distribution Phase Navigation: return the result to the caller

94 Distribution Phase
synchronized void distribute(int prior) {
  switch (cStatus) {
    case FIRST:
      cStatus = CStatus.IDLE;
      locked = false;
      notifyAll();
      return;
    case SECOND:
      result = prior + firstValue;
      cStatus = CStatus.DONE;
      notifyAll();
      return;
    default:
      …
  }
}

95 Distribution Phase: FIRST: no combining took place, unlock the node and reset it

96 Distribution Phase: SECOND: notify the 2nd thread that its result is available
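Assembling the navigation fragments from slides 39, 61, 78, and 90 gives the whole four-phase traversal in one place. This is only a consolidation sketch: myLeaf, stop, the stack, and the Node methods are exactly those shown on the slides, and exception handling is elided as it is there.

public int getAndIncrement() {
  java.util.Stack<Node> stack = new java.util.Stack<>();
  // Phase 1: precombining. Climb while nodes tell us to continue.
  Node node = myLeaf;
  while (node.precombine()) {
    node = node.parent;
  }
  Node stop = node;
  // Phase 2: combining. Revisit the same path, accumulating combined values.
  node = myLeaf;
  int combined = 1;
  while (node != stop) {
    combined = node.combine(combined);
    stack.push(node);
    node = node.parent;
  }
  // Phase 3: operation. Hand the combined value to the stop node (possibly the root).
  int prior = stop.op(combined);
  // Phase 4: distribution. Walk back down, delivering results to waiting partners.
  while (!stack.empty()) {
    node = stack.pop();
    node.distribute(prior);
  }
  return prior;
}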

97 Bad News: High Latency: each operation traverses a path of length log n up the tree.

98 Good News: Real Parallelism: two threads do the combining work on the lower legs, and only one thread carries the combined +5 to the root.

99 Throughput Puzzles: ideal circumstances: all n threads move up together and combine; n increments take O(log n) time. Worst circumstances: all n threads are slightly skewed and locked out; n increments take O(n · log n) time.

100 Index Distribution Benchmark
void indexBench(int iters, int work) {
  int i = 0;
  while (i < iters) {
    i = r.getAndIncrement();
    Thread.sleep(random() % work);
  }
}

101 Index Distribution Benchmark: iters is how many iterations

102 Index Distribution Benchmark: work is the expected time between incrementing the counter

103 Index Distribution Benchmark: take a number

104 Index Distribution Benchmark: pretend to work (more work, less concurrency)

105 Performance Benchmarks: Alewife, a NUMA architecture (simulated). Throughput: the average number of inc operations in a 1-million-cycle period. Latency: the average number of simulator cycles per inc operation.

106 Performance (figure: latency in cycles per operation and throughput in operations per million cycles, versus number of processors, for Splock and Ctree[n]; work = 0)

107 Performance (same figure)

108 The Combining Paradigm: implements any RMW operation. When the tree is loaded, it takes 2 log n steps for n requests. Very sensitive to load fluctuations: if the arrival rates drop, the combining rates drop, and overall performance deteriorates!

109 Conclusions: combining trees work well under high contention, are sensitive to load fluctuations, and can be used for getAndMumble() ops. Next lecture: counting networks, a different approach …

110 640 Class Project: can be done in groups of 2. Presentation during the week of 4/27. Written report (5-10 pages). Topics: parallelization, verification, or some relevant topic of interest to you.

111 Parallelization: how to efficiently solve a problem in parallel, e.g., from the Lonestar benchmark suite (Delaunay triangulation, N-body simulation) or a diffusion problem. Implement a solution, evaluate the speed-up, and try optimizations using fine-grained synchronization.

112 Verification: debugging parallel programs is hard. Use MSR's Chess tool for state-space exploration of C/C++/C# programs, and produce "correct" C# implementations of some of the concurrent data structures we studied. Formal verification of concurrent data structures: can we prove correctness, in some theorem prover, e.g., Coq?

113 A Balancer (figure: a balancer with input wires on the left and output wires on the right)

114 Tokens Traverse Balancers: token i enters on any wire and leaves on wire i mod (fan-out).

115-118 Tokens Traverse Balancers (figure: animation of tokens passing through the balancer)

119 An arbitrary input distribution yields a balanced output distribution

120 Smoothing Network: the k-smooth property (figure)

121 Counting Network: the step property (figure)
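The figure for the step property did not survive the transcript. For reference, the step property says that in any quiescent state the output-wire counts y0, y1, …, y(w-1) satisfy 0 <= yi - yj <= 1 for every i < j: each wire has either the same count as, or one more token than, every wire below it. A small check of that condition, as a sketch:

// Sketch: does an array of output-wire token counts satisfy the step property?
static boolean hasStepProperty(int[] y) {
  for (int i = 0; i < y.length; i++) {
    for (int j = i + 1; j < y.length; j++) {
      int d = y[i] - y[j];
      if (d < 0 || d > 1) return false;
    }
  }
  return true;
}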

122 Counting Networks Count! The counters on the output wires hand out 0, 4, 8, …; 1, 5, 9, …; 2, 6, 10, …; 3, 7, …

123-128 Bitonic[4] (figure: tokens traversing the Bitonic[4] network, step by step)

129 Counting Networks: good for counting the number of tokens; low contention; no sequential bottleneck; high throughput; practical networks of small (polylogarithmic) depth.

130 Bitonic[k] is not Linearizable

131-133 (figure: a red token traverses and takes the value 2; a yellow token later takes the value 0)

134 The problem: Red finished before Yellow started, yet Red took 2 and Yellow took 0

135 Shared Memory Implementation
class Balancer {
  boolean toggle;
  Balancer[] next;

  synchronized boolean flip() {
    boolean oldValue = this.toggle;
    this.toggle = !this.toggle;
    return oldValue;
  }
}

136 Shared Memory Implementation: toggle is the balancer's state

137 Shared Memory Implementation: next holds the output connections to other balancers

138 Shared Memory Implementation: flip() is a get-and-complement

139 Shared Memory Implementation
Balancer traverse(Balancer b) {
  while (!b.isLeaf()) {
    boolean toggle = b.flip();
    if (toggle)
      b = b.next[0];
    else
      b = b.next[1];
  }
  return b;
}

140 Shared Memory Implementation: stop when we get to the end

141 Shared Memory Implementation: flip the state

142 Shared Memory Implementation: exit on a wire
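The traversal above only finds an exit wire; to count, each output wire i is usually given its own local counter that hands out i, i + w, i + 2w, … (the "0, 4, 8 … / 1, 5, 9 …" slide earlier). A hedged sketch of that glue, assuming a width-w network built from the Balancer class above; wireIndex() on a leaf balancer is a hypothetical helper, and Counter is the small interface from the pool sketch at slide 9.

import java.util.concurrent.atomic.AtomicIntegerArray;

class NetworkCounter implements Counter {
  private final int width;                      // network width w
  private final Balancer entry;                 // an entry balancer of the network
  private final AtomicIntegerArray wireCounter; // counter on wire i hands out i, i+w, i+2w, ...

  NetworkCounter(int width, Balancer entry) {
    this.width = width;
    this.entry = entry;
    this.wireCounter = new AtomicIntegerArray(width);
    for (int i = 0; i < width; i++) wireCounter.set(i, i);
  }

  public int getAndIncrement() {
    Balancer b = entry;
    while (!b.isLeaf()) {                       // traverse(), inlined from the slide above
      b = b.flip() ? b.next[0] : b.next[1];
    }
    int wire = b.wireIndex();                   // hypothetical: which output wire this leaf feeds
    return wireCounter.getAndAdd(wire, width);  // contention is spread across w wires
  }
}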

143 Bitonic[2k] Schematic (figure: two Bitonic[k] networks feeding a Merger[2k])

144 Bitonic[2k] Layout

145 Unfolded Bitonic Network

146 Merger[8]

147 Unfolded Bitonic Network

148 Merger[4]

149 Unfolded Bitonic Network

150 Merger[2]

151 Bitonic[k] Depth: width k, depth (log₂ k)(log₂ k + 1)/2; for example, Bitonic[8] has depth 3 · 4 / 2 = 6.

152 Merger[2k]

153 Merger[2k] Schematic (figure: built from two Merger[k] networks plus a final layer of balancers)

154 Merger[2k] Layout

155 Lemma If a sequence has the step property …

156 Lemma So does its even subsequence

157 Lemma And its odd subsequence

158 Merger[2k] Schematic (figure: even and odd subsequences of the two Bitonic[k] outputs routed into the Merger[k] halves)

159 Proof Outline: the outputs from Bitonic[k] are the inputs to Merger[k], split into even and odd subsequences

160 Proof Outline: from the inputs of Merger[k] to the outputs of Merger[k]

161 Proof Outline: from the outputs of Merger[k] to the outputs of the last layer

162-165 Periodic Network Block (figure: tokens traversing a block, step by step)

166 Block[2k] Schematic (figure: built from two Block[k] networks)

167 Block[2k] Layout

168 Periodic[8]

169 Network Depth: each Block[k] has depth log₂ k; we need log₂ k blocks; a grand total of (log₂ k)².

170 Lower Bound on Depth: Theorem: the depth of any width-w counting network is at least Ω(log w). Theorem: there exists a counting network of Θ(log w) depth; unfortunately, the proof is non-constructive and the constants are in the 1000s.

171 Sequential Theorem: if a balancing network counts sequentially, meaning that tokens traverse it one at a time, then it counts even when tokens traverse it concurrently.

172 Red First, Blue Second (2)

173 Blue First, Red Second (2)

174 Either Way: same balancer states

175 Order Doesn't Matter: same balancer states, same output distribution

176 Index Distribution Benchmark
void indexBench(int iters, int work) {
  int i = 0;
  while (i < iters) {
    i = fetch&inc();             // pseudocode: one fetch-and-increment on the shared counter
    Thread.sleep(random() % work);
  }
}

177 Performance (Simulated): throughput versus number of processors for an MCS queue lock and a spin lock; higher is better. (All graphs taken from Herlihy, Lim, Shavit, copyright ACM.)

178 Performance (Simulated): adding a 64-leaf combining tree and an 80-balancer counting network to the comparison.

179 Performance (Simulated): combining and counting are pretty close.

180 Performance (Simulated): but they beat the hell out of the competition!

181 Throughput vs. Size (figure: throughput versus number of processors for Bitonic[4], Bitonic[8], and Bitonic[16])

182 Shared Pool (figure: a put counting network and a remove counting network feed the pool's slots)

183 Put/Remove Network: guarantees that a remove is never left waiting for an item once a put has deposited one. Otherwise it is OK to wait: a put is delayed while its pool slot is still full, and a remove is delayed while its pool slot is still empty. (A wiring sketch follows below.)
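In code, the pool on these two slides is essentially the cyclic-array pool sketched after slide 9 with its two counters instantiated by counting networks. A hypothetical wiring, where buildBitonic() and the sizes are invented for illustration:

int width = 8, capacity = 1024;
Counter putIndex    = new NetworkCounter(width, buildBitonic(width));   // the put network
Counter removeIndex = new NetworkCounter(width, buildBitonic(width));   // the remove network
Pool pool = new CountingPool(capacity, putIndex, removeIndex);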

184 What About Decrements? Also: adding arbitrary values, and other operations such as multiplication and vector addition …

185 Anti-Tokens (figure)

186-189 Tokens & Anti-Tokens Cancel (figure: a token and an anti-token meet at a balancer; afterwards it is as if nothing happened)

190 Tokens vs. Antitokens: a token reads the balancer, flips it, and proceeds; an antitoken flips the balancer, reads it, and proceeds. (A sketch follows below.)
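Mirroring the traverse() code from slide 139, an anti-token traversal might look like this. antiFlip() is an assumed companion method on the Balancer class (flip the toggle first, then leave on the wire the restored value indicates), so a token followed by an anti-token exits on the same wire and leaves the balancer state unchanged.

// inside the Balancer class: flip first, then read (assumed companion to flip())
synchronized boolean antiFlip() {
  this.toggle = !this.toggle;
  return this.toggle;
}

Balancer antiTraverse(Balancer b) {
  while (!b.isLeaf()) {
    boolean toggle = b.antiFlip();
    if (toggle)
      b = b.next[0];
    else
      b = b.next[1];
  }
  return b;
}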

191 Pumping Lemma: keep pumping tokens through one wire; eventually, after Ω tokens, the network repeats a state.

192 Anti-Token Effect (figure: a token followed by an anti-token)

193 Observation: each anti-token on wire i has the same effect as Ω-1 tokens on wire i, so the network is still in a legal state. Moreover, the network width w divides Ω, so Ω-1 tokens …

194 Before Antitoken (figure)

195 Balancer states are as if Ω-1 tokens had passed: one brick shy of a load

196 Post Antitoken: the next token shows up here

197 Implication: counting networks with tokens (+1) and anti-tokens (-1) give highly concurrent, low-contention getAndIncrement and getAndDecrement methods. QED

