Pools, Counters, and Parallelism


1 Pools, Counters, and Parallelism
Multiprocessor Synchronization, Nir Shavit, Spring 2003

2 A Shared Pool
public interface Pool {
  public void put(Object x);
  public Object remove();
}
An unordered set of objects.
Put: inserts an object; blocks if the pool is full.
Remove: removes and returns an object; blocks if the pool is empty.

3 Simple Locking Implementation
Protect the pool with a single lock.
Problem: hot-spot contention. Solution: a queue lock.
Problem: sequential bottleneck. Solution???
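For concreteness, a minimal sketch of the single-lock pool this slide critiques, assuming a bounded array-backed pool; a fair ReentrantLock stands in for the queue lock mentioned on the slide, and all names other than the Pool interface are illustrative:

  import java.util.concurrent.locks.Condition;
  import java.util.concurrent.locks.ReentrantLock;

  // Every put/remove funnels through one lock: the hot spot and the
  // sequential bottleneck the slide is pointing at.
  class LockedPool implements Pool {
    private final Object[] items;
    private int size = 0;
    private final ReentrantLock lock = new ReentrantLock(true); // fair = FIFO, queue-lock-like
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    LockedPool(int capacity) { items = new Object[capacity]; }

    public void put(Object x) {
      lock.lock();
      try {
        while (size == items.length) notFull.awaitUninterruptibly(); // blocks if full
        items[size++] = x;
        notEmpty.signal();
      } finally { lock.unlock(); }
    }

    public Object remove() {
      lock.lock();
      try {
        while (size == 0) notEmpty.awaitUninterruptibly();           // blocks if empty
        Object x = items[--size];
        notFull.signal();
        return x;
      } finally { lock.unlock(); }
    }
  }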

4 Counting Implementation
put and remove each take an index from their own shared counter (the figure shows both counters handing out 19, 20, 21, ...).
Only the counters are sequential.
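A minimal sketch of the counting idea, assuming an array of cells large enough that the tickets never run past its end (no wrap-around or bounds handling); AtomicInteger and AtomicReferenceArray stand in for the shared counters and cells, and all names are illustrative:

  import java.util.concurrent.atomic.AtomicInteger;
  import java.util.concurrent.atomic.AtomicReferenceArray;

  // put and remove each take a ticket from "their" counter and use it as a
  // cell index, so only the two counters are sequential; distinct cells are
  // touched in parallel.
  class CountingPool implements Pool {
    private final AtomicInteger putTicket = new AtomicInteger(0);
    private final AtomicInteger removeTicket = new AtomicInteger(0);
    private final AtomicReferenceArray<Object> cells;

    CountingPool(int capacity) { cells = new AtomicReferenceArray<>(capacity); }

    public void put(Object x) {
      int i = putTicket.getAndIncrement();     // my cell (assumed < capacity)
      cells.set(i, x);
    }

    public Object remove() {
      int i = removeTicket.getAndIncrement();  // my cell (assumed < capacity)
      Object x;
      while ((x = cells.getAndSet(i, null)) == null) {} // spin until the matching put arrives
      return x;
    }
  }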

5 Shared Counter
No duplication, no omission: the values 1, 2, 3 are each handed out exactly once.
Not necessarily linearizable.

6 Shared Counters
Can we build a shared counter with low memory contention and real parallelism?
Locking: queue locks can reduce contention, but they are no help with the parallelism issue...

7 Software Combining Tree
Combine increment requests up the tree; wait for winners to propagate answers back down.
Contention: only local spinning.
Parallelism: a high combining rate implies an n / log n speed-up.
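A quick sanity check of the claimed speed-up, with numbers chosen for illustration: with full combining, n requests climb a tree of depth log n instead of serializing on a single counter, so

  time per batch drops from O(n) to O(log n), a speed-up of roughly n / log n;
  e.g. n = 64: 64 / log2(64) = 64 / 6 ≈ 10.7x.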

8 Combining Trees
Two threads meet at a node and combine their sums: +2 and +3 become +5.

9 Combining Trees
The combined sum (+5) is added to the root, which now holds 5.

10 Combining Trees
The new value (5) is returned to the children.

11 Combining Trees
Prior values are returned to the individual threads.

12 Tree Node Status
FREE: nothing going on
COMBINE: waiting to combine
RESULT: results ready
ROOT: stop here

13 Code
public class CTree {
  static final int ROOT    = 0; // stop here
  static final int COMBINE = 1; // waiting to combine
  static final int RESULT  = 2; // your results are ready
  static final int FREE    = 3; // nothing happening here
  private Lock lock;
  int status;
  boolean waitFlag;
  int firstIncr, secondIncr, result;

  public int fetchAdd() {
    int lastLevel = 0, savedResult = 0;
    CTree node = null;
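fetchAdd calls getNode(level, pid), which the slides never show. A hypothetical sketch, assuming the tree is stored heap-style in an array with the root at index 1, two processors sharing each leaf, and FIRST_LEVEL = 1 (the fields nodes and numLeaves are assumptions, not from the slides):

  // Hypothetical helper: the node on processor pid's path at a given level.
  static CTree[] nodes;       // assumed: heap-style array, parent of index i is i/2
  static int numLeaves;       // assumed: #processors / 2; leaves at indices numLeaves..2*numLeaves-1
  static final int FIRST_LEVEL = 1;

  static CTree getNode(int level, int pid) {
    int i = numLeaves + pid / 2;             // this processor's leaf
    for (int l = FIRST_LEVEL; l < level; l++)
      i = i / 2;                             // climb one level toward the root
    return nodes[i];
  }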

14 Phase One: find path up to first COMBINE or ROOT node
Lock the node, examine its status.

15 Phase One: found FREE node
Set it to COMBINE, unlock, move up.

16 Phase One: found RESULT node
Unlock and wait for the status to change.

17 Phase One: found COMBINE or ROOT node
Unlock, start Phase Two.

18 Phase One
// Part One, find path up to first COMBINE or ROOT node
int level = FIRST_LEVEL;
boolean going_up = true;
int pid = Thread.myIndex();
while (going_up) {
  node = getNode(level, pid);
  node.lock.acquire();
  switch (node.status) {
    case RESULT:
      node.lock.release();
      while (node.status == RESULT) {}
      break;
    case FREE:
      node.status = COMBINE;
      node.lock.release();       // unlock and move up (slide 15)
      level = level + 1;
      break;
    default:                     // COMBINE or ROOT node
      lastLevel = level;
      going_up = false;
  }
}

19 Phase Two: revisit path, combine if requested

20 Phase Two
// Part Two, lock path from Part One, combining values along the way
int total = 1;
level = FIRST_LEVEL;
while (level < lastLevel) {
  CTree visited = getNode(level, pid);
  visited.lock.acquire();
  visited.firstIncr = total;
  if (visited.waitFlag) {        // do combining
    total = total + visited.secondIncr;
  }
  level = level + 1;
}

21 Phase Three: stopped at COMBINE node
Lock the node, deposit my value, unlock, and wait for my value to be combined by the other thread.

22 Phase Three: stopped at ROOT
Add my value to the root and start back down.

23 Phase Three
// Part Three, operate on last visited node
if (node.status == COMBINE) {
  node.secondIncr = total;
  node.waitFlag = true;
  while (node.status == COMBINE) {
    node.lock.release();
    while (node.status == COMBINE) {}
    node.lock.acquire();
  }
  node.waitFlag = false;
  savedResult = node.result;
  node.status = FREE;
} else {                         // ROOT node
  savedResult = node.result;     // my result is the root's prior value
  node.result = node.result + total;
}

24 Phase Four: propagate values down

25 Phase Four
// Part Four, descend tree and distribute results
node.lock.release();
level = lastLevel - 1;
while (level >= FIRST_LEVEL) {
  CTree visited = getNode(level, pid);
  if (visited.waitFlag) {
    visited.status = RESULT;
    visited.result = savedResult + visited.firstIncr;
  } else {
    visited.status = FREE;
  }
  visited.lock.release();
  level = level - 1;
}
return savedResult;

26 Bad News: High Latency
Each request traverses a path of length log n.

27 Good News: Real Parallelism
Two threads' requests (+2 and +3) are served by a single (+5) trip to the root.

28 Index Distribution Benchmark
void indexBench(int iters, int work) {
  int i = 0;
  while (i < iters) {
    i = fetch&inc();                 // take a number
    Thread.sleep(random() % work);   // pretend to work (more work, less concurrency)
  }
}
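A runnable version of the benchmark, for concreteness; AtomicInteger.getAndIncrement stands in for whichever fetch&inc implementation (lock, combining tree, counting network) is being measured, and the class and field names are illustrative:

  import java.util.concurrent.ThreadLocalRandom;
  import java.util.concurrent.atomic.AtomicInteger;

  // Index distribution benchmark: threads repeatedly take a number from a
  // shared counter, then "work" for a random time. More work means fewer
  // threads hit the counter at once, i.e. lower concurrency on the counter.
  class IndexBench {
    static final AtomicInteger counter = new AtomicInteger(0); // the fetch&inc under test

    static void indexBench(int iters, int work) throws InterruptedException {
      int i = 0;
      while (i < iters) {
        i = counter.getAndIncrement();                          // take a number
        Thread.sleep(ThreadLocalRandom.current().nextInt(work)); // pretend to work
      }
    }

    public static void main(String[] args) throws InterruptedException {
      Thread[] workers = new Thread[4];
      for (int t = 0; t < workers.length; t++) {
        workers[t] = new Thread(() -> {
          try { indexBench(1_000, 10); } catch (InterruptedException e) { }
        });
        workers[t].start();
      }
      for (Thread w : workers) w.join();
      System.out.println("final counter value: " + counter.get());
    }
  }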

29 Performance Benchmarks
Simulated Alewife DSM architecture.
Throughput: average number of inc operations in a 1-million-cycle period.
Latency: average number of simulator cycles per inc operation.

30 The Alewife Topology
* Posters courtesy of the MIT Architecture Group

31 Performance, work = 0
[Two plots vs. number of processors (50-300), comparing Splock and Ctree[n]: latency in cycles per operation, and throughput in operations per million cycles.]

32 The Combining Paradigm
Implements any RMW operation.
When the tree is loaded, it takes 2 log n steps for n requests.
Very sensitive to load fluctuations: if the arrival rate drops, the combining rate drops, and overall performance deteriorates!

33 Combining Load Sensitivity
work = 500. Notice the load fluctuations.

34 Combining Load Sensitivity
Combining rate (%) by concurrency and work W (some cells were lost in the transcript and are left blank):
Concurrency   W=100   W=1000   W=5000
     1          0.0
     2         10.3      2.0      0.3
     4         26.1      8.5      2.2
     8         40.3     19.9     10.6
    16         50.4     31.7
    32         55.3     39.5     18.5
    48         54.2     40.0     15.6
    64         65.5     18.7
As work increases, concurrency decreases, and combining levels drop significantly...

35 Better to Wait Longer
Short wait / Medium wait / Indefinite wait
The figure compares combining-tree latency when work is high under three waiting policies: wait 16 cycles, wait 256 cycles, and wait indefinitely. When the number of processors is larger than 64, indefinite waiting is by far the best policy. This is because an un-combined token message blocks later-arriving token messages from progressing until it returns from traversing the root, so a large performance penalty is paid for each un-combined message. Because the chances of combining are good at higher arrival rates, we found that when work = 0, simulations using more than four processors justify indefinite waiting.

36 Can we do better?
Centralized: one slow process delays everybody!
Synchronized: wait for others to bring numbers.
Distributed: spread the work across the machine.
Coordinated: coordinate, but do not wait.

37 Counting Networks
How to coordinate access to the counters?
[Figure: processors P1..Pn feed a counting network whose output wires end in local counters handing out 0, 4, ...; 1, 5, ...; 2, 6, 10, ...; 3, ...]
No duplication, no omission.
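Reading the figure's counter values generally: for a width-w counting network, the local counter on output wire i hands out the values

  i, i + w, i + 2w, i + 3w, ...
  (here w = 4: wire 0 gives 0, 4, 8, ...; wire 2 gives 2, 6, 10, ...)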

38 A Balancer
Input wires, output wires.

39 Tokens Traverse Balancers
Token i enters on any wire and leaves on wire i mod (fan-out).
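A small worked example of the rule, with a two-output balancer and a token count chosen for illustration:

  5 tokens enter (on either input wire, in any order) and leave alternating
  top, bottom, top, bottom, top: ceil(5/2) = 3 exit on the top wire and
  floor(5/2) = 2 on the bottom wire.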

40 Tokens Traverse Balancers

41 Tokens Traverse Balancers

42 Tokens Traverse Balancers

43 Tokens Traverse Balancers

44 Tokens Traverse Balancers
Arbitrary input distribution, balanced output distribution.

45 Quiescent State
All tokens that entered have exited.

46 Smoothing Network
k-smooth property: in any quiescent state, the numbers of tokens that have exited on any two output wires differ by at most k.

47 Counting Network
step property
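The slide only names the step property; the standard formal statement, with y_i the number of tokens that have exited on output wire i of a width-w network, is that in any quiescent state

  0 <= y_i - y_j <= 1   for all 0 <= i < j < w;

equivalently, if m tokens have entered in total, then wire i has output y_i = ceil((m - i) / w) tokens.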

48 Counting Networks Count!
The output-wire counters hand out 0, 4, ...; 1, 5, ...; 2, 6, 10, ...; 3, ...

49 Bitonic[4]

50 Bitonic[4]

51 Bitonic[4]

52 Bitonic[4]

53 Bitonic[4]

54 Bitonic[4]

55 Counting Networks
Good for counting the number of tokens: low contention, no sequential bottleneck, high throughput.

56 Shared Memory Implementation
[Figure: a Bitonic[4] network of 0/1 toggle bits; the four output counters hand out 1+4k, 2+4k, 3+4k, 4+4k.]

57 Shared Memory Implementation
class Balancer {
  boolean toggle;
  Balancer[] next;

  synchronized boolean flip() {
    boolean oldValue = this.toggle;
    this.toggle = !this.toggle;
    return oldValue;
  }
}

58 Shared Memory Implementation
Balancer traverse(Balancer b) {
  while (!b.isLeaf()) {
    boolean toggle = b.flip();
    if (toggle)
      b = b.next[0];
    else
      b = b.next[1];
  }
  return b;
}
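Putting slides 57 and 58 together, a self-contained sketch of the smallest possible case, a width-2 "network" consisting of a single balancer whose two leaves hand out i, i + w, i + 2w, ...; the class layout, the leaf-local counters, and all names are illustrative assumptions, not code from the slides:

  // A token flips balancers until it reaches a leaf, then takes the next
  // value from that leaf's local counter.
  class TinyCountingNetwork {
    static final int WIDTH = 2;

    static class Balancer {
      boolean toggle = true;   // start "up" so the first token exits on wire 0
      Balancer[] next;         // null at a leaf
      int nextValue;           // leaf-local counter: hands out i, i+WIDTH, i+2*WIDTH, ...

      synchronized boolean flip() {
        boolean oldValue = toggle;
        toggle = !toggle;
        return oldValue;
      }
      boolean isLeaf() { return next == null; }
      synchronized int take() {
        int v = nextValue;
        nextValue += WIDTH;
        return v;
      }
    }

    private final Balancer root = new Balancer();

    TinyCountingNetwork() {
      Balancer leaf0 = new Balancer(); leaf0.nextValue = 0;  // output wire 0: 0, 2, 4, ...
      Balancer leaf1 = new Balancer(); leaf1.nextValue = 1;  // output wire 1: 1, 3, 5, ...
      root.next = new Balancer[] { leaf0, leaf1 };
    }

    int getAndIncrement() {                // the fetch&inc built from the network
      Balancer b = root;
      while (!b.isLeaf())
        b = b.flip() ? b.next[0] : b.next[1];
      return b.take();
    }
  }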

59 Message-Passing Implementation

60 The Bitonic Counting Network
Bitonic[2k] inductive construction: two Bitonic[k] networks feed a Merger[2k].
Merger[2k] receives two width-k sequences that have the step property and outputs a width-2k sequence that has the step property.
We need only show how to construct Merger[2k].

61 Merger[2k]
[Figure: the even- and odd-indexed wires of the two input sequences feed two Merger[k] networks, producing outputs y0 ... y2k-1.]

62 Merger[4]
[Figure: the Merger[4] case, built from Bitonic[2] blocks; the even wires of one input and the odd wires of the other are cross-routed into the two Merger[k] sub-networks.]

63 Merger[8]
[Figure: Merger[8] assembled from Bitonic[k] and Merger[k] blocks with the same even/odd cross-routing, outputs z0 ... z2k-1.]
Theorem: in any quiescent state, the outputs of Bitonic[w] have the step property.

64 Depth of the Bitonic Network
Every layer adds a row of balancers, so the depth is
log 2 + log 4 + ... + log(w/2) + log w = (log w)(log w + 1)/2 = O(log² w).
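The same sum, read as the recurrence implied by the construction above (each Merger[2k] contributes log 2k layers), with the w = 16 case worked out:

  depth(Bitonic[2]) = 1
  depth(Bitonic[2k]) = depth(Bitonic[k]) + log(2k)

  w = 16: depth = log 2 + log 4 + log 8 + log 16 = 1 + 2 + 3 + 4 = 10 = (4)(4 + 1)/2.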

65 A Lower Bound on Depth
A Sorting Network is a synchronous network of comparators.
[Figure: the input 7 3 1 4 passes through stages 3 7 1 4 and 3 1 7 4 and exits sorted as 1 3 4 7.]

66 Counting Implies Sorting
Theorem: if a balancing network counts, then its isomorphic comparison network sorts, but not vice versa.
Proof (of the "not vice versa" direction): the balancing network isomorphic to the odd-even sorting network does not have the step property.

67 Counting Implies Sorting
Proof (if it counts, it sorts). Lemma [0-1 Principle]: a comparison network that sorts all sequences of 0's and 1's is a sorting network.

68 Lower Bound on Depth
Theorem: the depth of any width-w sorting network is at least Ω(log w).
Corollary: the depth of any width-w counting network is at least Ω(log w).
Theorem: there exists a counting network of Θ(log w) depth. Unfortunately, the proof is non-constructive and the constants are in the thousands.

69 Merger[2k]

70 Merger[2k] Schematic
[Figure: built from Merger[k] blocks.]

71 Merger[2k] Layout

72 Merger[2k] Schematic
[Figure: two Bitonic[k] networks feed two Merger[k] networks via the even/odd cross-routing.]

73 Proof Outline
The outputs of the Bitonic[k] networks are split into their even and odd subsequences, which become the inputs to the Merger[k] networks; within each pair, one subsequence has at most one more token than the other.

74 Proof Outline
[Figure: from the even/odd cross-routed inputs to Merger[k] to the outputs of Merger[k].]

75 Proof Outline
[Figure: from the outputs of Merger[k] to the outputs of the last layer.]

76 Unfolded Bitonic Network
Merger[8]

77 Unfolded Bitonic Network
Merger[4] Merger[4]

78 Unfolded Bitonic Network
Merger[2] Merger[2] Merger[2] Merger[2]

79 Bitonic[k] Depth
Width k; depth is (log₂ k)(log₂ k + 1)/2.

80 The Periodic Network

81 Bitonic[k] is not Linearizable

82 Bitonic[k] is not Linearizable

83 Bitonic[k] is not Linearizable

84 Bitonic[k] is not Linearizable

85 Bitonic[k] is not Linearizable
The problem: Red finished before Yellow started, yet Red took 2 and Yellow took 0.

86 Network Depth
Each Block[k] has depth log₂ k, and we need log₂ k blocks, for a grand total of (log₂ k)².

87 Lower Bound on Depth
Theorem: the depth of any width-w counting network is at least Ω(log w).
Theorem: there exists a counting network of Θ(log w) depth. Unfortunately, the proof is non-constructive and the constants are in the thousands.

88 Sequential Theorem
If a balancing network counts sequentially, meaning that tokens traverse it one at a time, then it counts even when tokens traverse it concurrently.

89 Red First, Blue Second

90 Blue First, Red Second

91 Either Way
Same balancer states.

92 Order Doesn't Matter
Same output distribution, same balancer states.

93 Index Distribution Benchmark
void indexBench(int iters, int work) {
  int i = 0;
  while (i < iters) {
    i = fetch&inc();
    Thread.sleep(random() % work);
  }
}

94 Performance (Simulated)
[Plot: throughput vs. number of processors; higher is better. Curves: MCS queue lock and spin lock.]
Counting benchmark: ignore startup and wind-down times. The spin lock is best at low concurrency, then drops off rapidly.
* All graphs taken from Herlihy, Lim, Shavit, copyright ACM.

95 Performance (Simulated)
[Plot adds: 64-leaf combining tree and 80-balancer counting network alongside the MCS queue lock and spin lock.]

96 Performance (Simulated)
Combining and counting are pretty close.

97 Performance (Simulated)
But they beat the hell out of the competition!

98 Saturation and Performance
Undersaturated: P < w log w
Saturated: P = w log w (optimal performance)
Oversaturated: P > w log w
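A concrete instance of the slide's formula, using the Bitonic[16] network from the next slide (the arithmetic is ours, not from the slides):

  w = 16, log w = 4, so saturation occurs at P = w log w = 16 x 4 = 64 processors;
  runs on more than 64 processors are oversaturated.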

99 Throughput vs. Size
[Plot: throughput vs. number of processors for Bitonic[16].]
The simulation was extended to 80 processors, one balancer per processor in the 16-wide network, and as can be seen, the counting network scales...

100 Adding Networks
Combining trees generalize to fetch&add: add any number, not just 1.
What about counting networks?

101 Bad News
If an adding network supports n concurrent tokens, then in sequential executions every token must traverse at least n-1 balancers.

102 Uh-Oh
Adding network size depends on n (like combining trees, unlike counting networks).
High latency: depth linear in n, not logarithmic in w.

103 Generic Counting Network
[Figure: a +1 token and three +2 tokens entering a generic counting network.]

104 First token would visit green balancers if it runs solo
The +1 token's solo path is highlighted in green.

105 Claim
Look at the path of the +1 token: all other +2 tokens must visit some balancer on the +1 token's path.

106 Second Token
The second (+2) token takes 0.

107 Second Token
Both the +1 token and the +2 token take 0, but they can't both take zero!

108 If Second Avoids First's Path
Second token: doesn't observe the first (the first hasn't run), so it chooses 0.
First token: doesn't observe the second (their paths are disjoint).

109 If Second Avoids First's Path
Because the +1 token chooses 0, it must be ordered first, so the +2 token is ordered second and should return 1. Something's wrong!

110 Second Token
Halt the blue (+2) token before the first green balancer on the +1 token's path.

111 Third Token
Takes 0 or 2.

112 Third Token
The +1 token takes 0 and the third token takes 0 or 2, but they can't both take zero, and they can't take 0 and 2!

113 First, Second, and Third Tokens Must Be Ordered
The third token did not observe the +1 token, may have observed the earlier +2 token, and so takes an even number.

114 First, Second, and Third Tokens Must Be Ordered
Because the +1 token's path is disjoint, it chooses 0 and is ordered first, so the rest should take odd numbers. But the last token takes an even number. Something's wrong!

115 Third Token
Halt the blue token before the first green balancer on the +1 token's path.

116 Continuing in this way
We can "park" a token in front of a balancer that token #1 will visit.
There are n-1 other tokens, two wires per balancer.
Token #1's path includes n-1 balancers!

117 Theorem
In any adding network, in sequential executions tokens traverse at least n-1 balancers.
The same argument applies to linearizable counting networks, multiplying networks, and others.

118 Implication
Counting networks with tokens (+1) and anti-tokens (-1) give highly concurrent, low-contention fetch&inc + fetch&dec registers. QED

