
1 Art of Multiprocessor Programming1 Futures, Scheduling, and Work Distribution Companion slides for The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit Modified by Rajeev Alur for CIS 640 University of Pennsylvania

2 Art of Multiprocessor Programming22 How to write Parallel Apps? How to –split a program into parallel parts –in an effective way –manage the threads

3 Art of Multiprocessor Programming33 Matrix Multiplication

4 Art of Multiprocessor Programming44 Matrix Multiplication c_ij = Σ_{k=0..N-1} a_ik * b_kj

5 Art of Multiprocessor Programming5 Matrix Multiplication class Worker extends Thread { int row, col; Worker(int row, int col) { this.row = row; this.col = col; } public void run() { double dotProduct = 0.0; for (int i = 0; i < n; i++) dotProduct += a[row][i] * b[i][col]; c[row][col] = dotProduct; }}

6 Art of Multiprocessor Programming6 Matrix Multiplication class Worker extends Thread { int row, col; Worker(int row, int col) { this.row = row; this.col = col; } public void run() { double dotProduct = 0.0; for (int i = 0; i < n; i++) dotProduct += a[row][i] * b[i][col]; c[row][col] = dotProduct; }} a thread

7 Art of Multiprocessor Programming7 Matrix Multiplication class Worker extends Thread { int row, col; Worker(int row, int col) { this.row = row; this.col = col; } public void run() { double dotProduct = 0.0; for (int i = 0; i < n; i++) dotProduct += a[row][i] * b[i][col]; c[row][col] = dotProduct; }} Which matrix entry to compute

8 Art of Multiprocessor Programming8 Matrix Multiplication class Worker extends Thread { int row, col; Worker(int row, int col) { this.row = row; this.col = col; } public void run() { double dotProduct = 0.0; for (int i = 0; i < n; i++) dotProduct += a[row][i] * b[i][col]; c[row][col] = dotProduct; }} Actual computation

9 Art of Multiprocessor Programming9 Matrix Multiplication void multiply() { Worker[][] worker = new Worker[n][n]; for (int row …) for (int col …) worker[row][col] = new Worker(row,col); for (int row …) for (int col …) worker[row][col].start(); for (int row …) for (int col …) worker[row][col].join(); }

10 Art of Multiprocessor Programming10 Matrix Multiplication void multiply() { Worker[][] worker = new Worker[n][n]; for (int row …) for (int col …) worker[row][col] = new Worker(row,col); for (int row …) for (int col …) worker[row][col].start(); for (int row …) for (int col …) worker[row][col].join(); } Create n x n threads

11 Art of Multiprocessor Programming11 Matrix Multiplication void multiply() { Worker[][] worker = new Worker[n][n]; for (int row …) for (int col …) worker[row][col] = new Worker(row,col); for (int row …) for (int col …) worker[row][col].start(); for (int row …) for (int col …) worker[row][col].join(); } Start them

12 Art of Multiprocessor Programming12 Matrix Multiplication void multiply() { Worker[][] worker = new Worker[n][n]; for (int row …) for (int col …) worker[row][col] = new Worker(row,col); for (int row …) for (int col …) worker[row][col].start(); for (int row …) for (int col …) worker[row][col].join(); } Wait for them to finish Start them

13 Art of Multiprocessor Programming13 Matrix Multiplication void multiply() { Worker[][] worker = new Worker[n][n]; for (int row …) for (int col …) worker[row][col] = new Worker(row,col); for (int row …) for (int col …) worker[row][col].start(); for (int row …) for (int col …) worker[row][col].join(); } Wait for them to finish Start them What’s wrong with this picture?
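
A minimal runnable sketch of what these slides assemble, with the elided loop bounds filled in. The enclosing MatrixMul class and its fields n, a, b, c are assumptions added for illustration; the slides leave them implicit.

class MatrixMul {
  final int n;
  final double[][] a, b, c;   // compute c = a * b

  MatrixMul(double[][] a, double[][] b) {
    this.n = a.length;
    this.a = a;
    this.b = b;
    this.c = new double[n][n];
  }

  class Worker extends Thread {
    int row, col;
    Worker(int row, int col) { this.row = row; this.col = col; }
    public void run() {
      double dotProduct = 0.0;
      for (int i = 0; i < n; i++)
        dotProduct += a[row][i] * b[i][col];
      c[row][col] = dotProduct;
    }
  }

  void multiply() throws InterruptedException {
    Worker[][] worker = new Worker[n][n];
    for (int row = 0; row < n; row++)        // create n x n threads
      for (int col = 0; col < n; col++)
        worker[row][col] = new Worker(row, col);
    for (int row = 0; row < n; row++)        // start them
      for (int col = 0; col < n; col++)
        worker[row][col].start();
    for (int row = 0; row < n; row++)        // wait for them to finish
      for (int col = 0; col < n; col++)
        worker[row][col].join();
  }
}

One thread per matrix entry is exactly what the next slide criticizes: the sketch works, but for large n it creates n² short-lived threads.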

14 Art of Multiprocessor Programming14 Thread Overhead Threads Require resources –Memory for stacks –Setup, teardown Scheduler overhead Worse for short-lived threads

15 Art of Multiprocessor Programming15 Thread Pools More sensible to keep a pool of long-lived threads Threads assigned short-lived tasks –Run the task –Rejoin pool –Wait for next assignment

16 Art of Multiprocessor Programming16 Thread Pool = Abstraction Insulate programmer from platform –Big machine, big pool –And vice-versa Portable code –Runs well on any platform –No need to mix algorithm/platform concerns

17 Art of Multiprocessor Programming17 ExecutorService Interface In java.util.concurrent –Task = Runnable object If no result value expected Calls run() method. –Task = Callable<T> object If result value of type T expected Calls T call() method.

18 Art of Multiprocessor Programming18 Future Callable<T> task = …; … Future<T> future = executor.submit(task); … T value = future.get();

19 Art of Multiprocessor Programming19 Future Callable<T> task = …; … Future<T> future = executor.submit(task); … T value = future.get(); Submitting a Callable<T> task returns a Future<T> object

20 Art of Multiprocessor Programming20 Future Callable<T> task = …; … Future<T> future = executor.submit(task); … T value = future.get(); The Future’s get() method blocks until the value is available

21 Art of Multiprocessor Programming21 Future Runnable task = …; … Future<?> future = executor.submit(task); … future.get();

22 Art of Multiprocessor Programming22 Future Runnable task = …; … Future<?> future = executor.submit(task); … future.get(); Submitting a Runnable task returns a Future<?> object

23 Art of Multiprocessor Programming23 Future Runnable task = …; … Future<?> future = executor.submit(task); … future.get(); The Future’s get() method blocks until the computation is complete
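
A small compilable example tying these pieces together; the task bodies and the choice of newCachedThreadPool are illustrative assumptions, not from the slides.

import java.util.concurrent.*;

public class FutureDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService executor = Executors.newCachedThreadPool();

    // Callable<T> task: submit() returns a Future<T>; get() returns the computed value
    Callable<Integer> task = () -> 6 * 7;
    Future<Integer> future = executor.submit(task);
    int value = future.get();            // blocks until the value is available
    System.out.println(value);           // prints 42

    // Runnable task: submit() returns a Future<?>; get() just waits for completion
    Future<?> done = executor.submit(() -> System.out.println("side effect"));
    done.get();

    executor.shutdown();
  }
}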

24 Art of Multiprocessor Programming24 Matrix Addition

25 Art of Multiprocessor Programming25 Matrix Addition 4 parallel additions

26 Art of Multiprocessor Programming26 Matrix Addition Task class AddTask implements Runnable { Matrix a, b; // add this! public void run() { if (a.dim == 1) { c[0][0] = a[0][0] + b[0][0]; // base case } else { (partition a, b into half-size matrices aij and bij) Future<?> f00 = exec.submit(add(a00,b00)); … Future<?> f11 = exec.submit(add(a11,b11)); f00.get(); …; f11.get(); … }}}

27 Art of Multiprocessor Programming27 Matrix Addition Task class AddTask implements Runnable { Matrix a, b; // add this! public void run() { if (a.dim == 1) { c[0][0] = a[0][0] + b[0][0]; // base case } else { (partition a, b into half-size matrices aij and bij) Future<?> f00 = exec.submit(add(a00,b00)); … Future<?> f11 = exec.submit(add(a11,b11)); f00.get(); …; f11.get(); … }}} Base case: add directly

28 Art of Multiprocessor Programming28 Matrix Addition Task class AddTask implements Runnable { Matrix a, b; // add this! public void run() { if (a.dim == 1) { c[0][0] = a[0][0] + b[0][0]; // base case } else { (partition a, b into half-size matrices aij and bij) Future<?> f00 = exec.submit(add(a00,b00)); … Future<?> f11 = exec.submit(add(a11,b11)); f00.get(); …; f11.get(); … }}} Constant-time operation

29 Art of Multiprocessor Programming29 Matrix Addition Task class AddTask implements Runnable { Matrix a, b; // add this! public void run() { if (a.dim == 1) { c[0][0] = a[0][0] + b[0][0]; // base case } else { (partition a, b into half-size matrices aij and bij) Future<?> f00 = exec.submit(add(a00,b00)); … Future<?> f11 = exec.submit(add(a11,b11)); f00.get(); …; f11.get(); … }}} Submit 4 tasks

30 Art of Multiprocessor Programming30 Matrix Addition Task class AddTask implements Runnable { Matrix a, b; // add this! public void run() { if (a.dim == 1) { c[0][0] = a[0][0] + b[0][0]; // base case } else { (partition a, b into half-size matrices aij and bij) Future<?> f00 = exec.submit(add(a00,b00)); … Future<?> f11 = exec.submit(add(a11,b11)); f00.get(); …; f11.get(); … }}} Let them finish
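
A fuller sketch of the recursive addition task. The Matrix class below (with dim, get/set, and a constant-time split(i,j) quadrant view) and the shared exec pool are assumptions invented to make the slide's pseudocode self-contained; the book's own helper classes differ in detail.

import java.util.concurrent.*;

class Matrix {
  // Assumed helper: a view over a shared array, so splitting costs Θ(1).
  final double[][] data;
  final int rowOff, colOff, dim;
  Matrix(int dim) { this(new double[dim][dim], 0, 0, dim); }   // dim assumed a power of 2
  private Matrix(double[][] data, int rowOff, int colOff, int dim) {
    this.data = data; this.rowOff = rowOff; this.colOff = colOff; this.dim = dim;
  }
  double get(int i, int j) { return data[rowOff + i][colOff + j]; }
  void set(int i, int j, double v) { data[rowOff + i][colOff + j] = v; }
  Matrix split(int i, int j) {               // quadrant (i,j), each of size dim/2
    int half = dim / 2;
    return new Matrix(data, rowOff + i * half, colOff + j * half, half);
  }
}

class AddTask implements Runnable {
  static ExecutorService exec = Executors.newCachedThreadPool();
  Matrix a, b, c;                            // compute c = a + b
  AddTask(Matrix a, Matrix b, Matrix c) { this.a = a; this.b = b; this.c = c; }

  public void run() {
    try {
      if (a.dim == 1) {
        c.set(0, 0, a.get(0, 0) + b.get(0, 0));          // base case: add directly
      } else {
        Future<?>[][] f = new Future<?>[2][2];
        for (int i = 0; i < 2; i++)                      // submit 4 half-size tasks
          for (int j = 0; j < 2; j++)
            f[i][j] = exec.submit(
                new AddTask(a.split(i, j), b.split(i, j), c.split(i, j)));
        for (int i = 0; i < 2; i++)                      // let them finish
          for (int j = 0; j < 2; j++)
            f[i][j].get();
      }
    } catch (InterruptedException | ExecutionException e) {
      throw new RuntimeException(e);
    }
  }
}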

31 Art of Multiprocessor Programming31 Dependencies Matrix example is not typical Tasks are independent –Don’t need results of one task … –To complete another Often tasks are not independent

32 Art of Multiprocessor Programming32 Fibonacci F(n) = 1 if n = 0 or 1; F(n) = F(n-1) + F(n-2) otherwise Note –potential parallelism –Dependencies

33 Art of Multiprocessor Programming33 Disclaimer This Fibonacci implementation is –Egregiously inefficient So don’t deploy it! –But illustrates our point How to deal with dependencies Exercise: –Make this implementation efficient!

34 Art of Multiprocessor Programming34 class FibTask implements Callable<Integer> { static ExecutorService exec = Executors.newCachedThreadPool(); int arg; public FibTask(int n) { arg = n; } public Integer call() throws Exception { if (arg > 2) { Future<Integer> left = exec.submit(new FibTask(arg-1)); Future<Integer> right = exec.submit(new FibTask(arg-2)); return left.get() + right.get(); } else { return 1; }}} Multithreaded Fibonacci

35 Art of Multiprocessor Programming35 class FibTask implements Callable<Integer> { static ExecutorService exec = Executors.newCachedThreadPool(); int arg; public FibTask(int n) { arg = n; } public Integer call() throws Exception { if (arg > 2) { Future<Integer> left = exec.submit(new FibTask(arg-1)); Future<Integer> right = exec.submit(new FibTask(arg-2)); return left.get() + right.get(); } else { return 1; }}} Multithreaded Fibonacci Parallel calls

36 Art of Multiprocessor Programming36 class FibTask implements Callable<Integer> { static ExecutorService exec = Executors.newCachedThreadPool(); int arg; public FibTask(int n) { arg = n; } public Integer call() throws Exception { if (arg > 2) { Future<Integer> left = exec.submit(new FibTask(arg-1)); Future<Integer> right = exec.submit(new FibTask(arg-2)); return left.get() + right.get(); } else { return 1; }}} Multithreaded Fibonacci Pick up & combine results
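
A small driver for the task above, assuming FibTask (as on the previous slides) is in the same package; the FibDemo class name and the shutdown() call are additions for illustration.

import java.util.concurrent.Future;

public class FibDemo {
  public static void main(String[] args) throws Exception {
    // Submit the top-level task and block on its Future for the answer.
    Future<Integer> result = FibTask.exec.submit(new FibTask(10));
    System.out.println("fib(10) = " + result.get());   // 55, since the code treats F(1) = F(2) = 1
    FibTask.exec.shutdown();
  }
}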

37 Art of Multiprocessor Programming37 Dynamic Behavior Multithreaded program is –A directed acyclic graph (DAG) –That unfolds dynamically Each node is –A single unit of work

38 Art of Multiprocessor Programming38 Fib DAG (figure: fib(4) submits fib(3) and fib(2); fib(3) submits fib(2) and fib(1); submit and get edges link each call to its subtasks)

39 Art of Multiprocessor Programming39 Arrows Reflect Dependencies (figure: the same fib(4) DAG, with arrows for the submit and get dependencies)

40 Art of Multiprocessor Programming40 How Parallel is That? Define work: –Total time on one processor Define critical-path length: –Longest dependency path –Can’t beat that!

41 Art of Multiprocessor Programming41 Fib Work (figure: the fib(4) DAG again, with nodes fib(4), fib(3), fib(2), fib(1))

42 Art of Multiprocessor Programming42 Fib Work Work is 17 (figure: the DAG's 17 nodes numbered 1 through 17)

43 Art of Multiprocessor Programming43 Fib Critical Path fib(4)

44 Art of Multiprocessor Programming44 Fib Critical Path Critical path length is 8 (figure: the 8 nodes on the longest path numbered 1 through 8)

45 Art of Multiprocessor Programming45 Notation Watch T_P = time on P processors T_1 = work (time on 1 processor) T_∞ = critical path length (time on ∞ processors)

46 Art of Multiprocessor Programming46 Simple Bounds T_P ≥ T_1/P –In one step, can’t do more than P work T_P ≥ T_∞ –Can’t beat infinite resources

47 Art of Multiprocessor Programming47 More Notation Watch Speedup on P processors –Ratio T_1/T_P –How much faster with P processors Linear speedup –T_1/T_P = Θ(P) Max speedup (average parallelism) –T_1/T_∞
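
A quick worked example with the fib(4) DAG from the earlier slides: T_1 = 17 and T_∞ = 8, so the maximum speedup (average parallelism) is T_1/T_∞ = 17/8 ≈ 2.1, and on P = 2 processors the simple bounds give T_2 ≥ max(T_1/2, T_∞) = max(8.5, 8) = 8.5.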

48 Art of Multiprocessor Programming48 Matrix Addition

49 Art of Multiprocessor Programming49 Matrix Addition 4 parallel additions

50 Art of Multiprocessor Programming50 Addition Let A_P(n) be running time –For n x n matrix –on P processors For example –A_1(n) is work –A_∞(n) is critical path length

51 Art of Multiprocessor Programming51 Addition Work is A_1(n) = 4 A_1(n/2) + Θ(1) –4 A_1(n/2): the 4 spawned additions –Θ(1): partition, synch, etc.

52 Art of Multiprocessor Programming52 Addition Work is A_1(n) = 4 A_1(n/2) + Θ(1) = Θ(n²) Same as double-loop summation

53 Art of Multiprocessor Programming53 Addition Critical path length is A_∞(n) = A_∞(n/2) + Θ(1) –A_∞(n/2): the spawned additions run in parallel –Θ(1): partition, synch, etc.

54 Art of Multiprocessor Programming54 Addition Critical path length is A_∞(n) = A_∞(n/2) + Θ(1) = Θ(log n)
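
Unrolling the two recurrences (a quick check, not on the slides): the critical-path recurrence A_∞(n) = A_∞(n/2) + Θ(1) adds one Θ(1) term per halving, and an n x n matrix is halved log₂ n times before reaching the 1 x 1 base case, so A_∞(n) = Θ(log n). The work recurrence A_1(n) = 4 A_1(n/2) + Θ(1) instead multiplies the number of subproblems by 4 at each level, giving 4^(log₂ n) = n² base cases and hence Θ(n²) work.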

55 Art of Multiprocessor Programming55 Matrix Multiplication Redux

56 Art of Multiprocessor Programming56 Matrix Multiplication Redux

57 Art of Multiprocessor Programming57 First Phase … 8 multiplications

58 Art of Multiprocessor Programming58 Second Phase … 4 additions

59 Art of Multiprocessor Programming59 Multiplication Work is M_1(n) = 8 M_1(n/2) + A_1(n) –8 M_1(n/2): the 8 parallel multiplications –A_1(n): the final addition

60 Art of Multiprocessor Programming60 Multiplication Work is M_1(n) = 8 M_1(n/2) + Θ(n²) = Θ(n³) Same as serial triple-nested loop

61 Art of Multiprocessor Programming61 Multiplication Critical path length is M_∞(n) = M_∞(n/2) + A_∞(n) –M_∞(n/2): the half-size parallel multiplications –A_∞(n): the final addition

62 Art of Multiprocessor Programming62 Multiplication Critical path length is M_∞(n) = M_∞(n/2) + A_∞(n) = M_∞(n/2) + Θ(log n) = Θ(log² n)
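
Unrolling this recurrence too (not on the slides): M_∞(n) expands to A_∞(n) + A_∞(n/2) + … + A_∞(1); the i-th term is Θ(log(n/2^i)) = Θ(log n − i), and summing about log₂ n such terms, each at most Θ(log n), gives Θ(log² n).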

63 Art of Multiprocessor Programming63 Parallelism M_1(n)/M_∞(n) = Θ(n³/log² n) To multiply two 1000 x 1000 matrices –1000³/10² = 10⁷ (using log₂ 1000 ≈ 10) Much more than number of processors on any real machine

64 Art of Multiprocessor Programming64 Work Distribution zzz…

65 Art of Multiprocessor Programming65 Work Dealing Yes!

66 Art of Multiprocessor Programming66 The Problem with Work Dealing D’oh!

67 Art of Multiprocessor Programming67 Work Stealing No work… zzz Yes!

68 Art of Multiprocessor Programming68 Lock-Free Work Stealing Each thread has a pool of ready work Remove work without synchronizing If you run out of work, steal someone else’s Choose victim at random
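
A sketch of the per-thread loop this describes, loosely modeled on the book's work-stealing thread; the DEQueue interface, the shared queue[] array, the me index, and the isEmpty() check are assumptions for illustration.

import java.util.concurrent.ThreadLocalRandom;

interface DEQueue {
  void pushBottom(Runnable r);
  Runnable popBottom();
  Runnable popTop();
  boolean isEmpty();
}

class WorkStealingThread extends Thread {
  final DEQueue[] queue;   // one double-ended queue of ready work per thread
  final int me;            // index of my own queue

  WorkStealingThread(DEQueue[] queue, int me) { this.queue = queue; this.me = me; }

  public void run() {
    Runnable task = queue[me].popBottom();
    while (true) {
      // Drain my own pool first (the common case needs no CAS; see popBottom later).
      while (task != null) {
        task.run();
        task = queue[me].popBottom();
      }
      // Out of work: pick a victim at random and try to steal from the top of its queue.
      while (task == null) {
        Thread.yield();
        int victim = ThreadLocalRandom.current().nextInt(queue.length);
        if (!queue[victim].isEmpty()) {
          task = queue[victim].popTop();   // may return null if the steal conflicts
        }
      }
    }
  }
}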

69 Art of Multiprocessor Programming69 Local Work Pools Each work pool is a Double-Ended Queue

70 Art of Multiprocessor Programming70 Work DEQueue¹ (figure: a column of work items with pushBottom and popBottom operating at the bottom end) 1. Double-Ended Queue

71 Art of Multiprocessor Programming71 Obtain Work Obtain work with popBottom Run the task until it blocks or terminates

72 Art of Multiprocessor Programming72 New Work New work comes from an unblocked node or a newly spawned node – add it with pushBottom

73 Art of Multiprocessor Programming73 Whatcha Gonna do When the Well Runs Dry? @&%$!! empty

74 Art of Multiprocessor Programming74 Steal Work from Others Pick a random thread’s DEQueue

75 Art of Multiprocessor Programming75 Steal this Task! popTop

76 Art of Multiprocessor Programming76 Task DEQueue Methods –pushBottom –popBottom –popTop The first two are called only by the owner, so they never happen concurrently with each other

77 Art of Multiprocessor Programming77 Task DEQueue Methods –pushBottom –popBottom –popTop The first two are the most common – make them fast (minimize use of CAS)

78 Art of Multiprocessor Programming78 Ideal Wait-Free Linearizable Constant time

79 Art of Multiprocessor Programming79 Compromise Method popTop may fail if –a concurrent popTop succeeds, or –a concurrent popBottom takes the last task Blame the victim!

80 Art of Multiprocessor Programming80 Dreaded ABA Problem (slides 80–87 are figure-only: a sequence of DEQueue snapshots in which a thief reads top for a CAS, other threads remove and re-add tasks so top returns to its old index, and the thief’s stale CAS then succeeds – “CAS … Uh-Oh … Yes!”)

88 Art of Multiprocessor Programming88 Fix for Dreaded ABA Pair top with a stamp (figure: DEQueue labeled with stamp, top, and bottom)

89 Art of Multiprocessor Programming89 Bounded DEQueue public class BDEQueue { AtomicStampedReference<Integer> top; volatile int bottom; Runnable[] tasks; … }

90 Art of Multiprocessor Programming90 Bounded DEQueue public class BDEQueue { AtomicStampedReference<Integer> top; volatile int bottom; Runnable[] tasks; … } Index & Stamp (synchronized)

91 Art of Multiprocessor Programming91 Bounded DEQueue public class BDEQueue { AtomicStampedReference<Integer> top; volatile int bottom; Runnable[] tasks; … } Index of bottom task – no need to synchronize, but do need a memory barrier

92 Art of Multiprocessor Programming92 Bounded DEQueue public class BDEQueue { AtomicStampedReference<Integer> top; volatile int bottom; Runnable[] tasks; … } Array holding tasks
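
A plausible constructor and emptiness check for this class; the capacity parameter, the initial stamp of 0, and the isEmpty() helper are assumptions, shown only to make the methods on the following slides self-contained.

import java.util.concurrent.atomic.AtomicStampedReference;

public class BDEQueue {
  AtomicStampedReference<Integer> top;   // index a thief steals from, plus an ABA stamp
  volatile int bottom;                   // index of the next free slot at the bottom
  Runnable[] tasks;

  public BDEQueue(int capacity) {
    top = new AtomicStampedReference<Integer>(0, 0);
    bottom = 0;
    tasks = new Runnable[capacity];
  }

  public boolean isEmpty() {
    int[] stamp = new int[1];
    int oldTop = top.get(stamp);
    return bottom <= oldTop;             // nothing left between top and bottom
  }

  // pushBottom(), popTop(), and popBottom() follow on the next slides.
}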

93 Art of Multiprocessor Programming93 pushBottom() public class BDEQueue { … void pushBottom(Runnable r){ tasks[bottom] = r; bottom++; } … }

94 Art of Multiprocessor Programming94 pushBottom() public class BDEQueue { … void pushBottom(Runnable r){ tasks[bottom] = r; bottom++; } … } Bottom is the index to store the new task in the array

95 Art of Multiprocessor Programming95 pushBottom() public class BDEQueue { … void pushBottom(Runnable r){ tasks[bottom] = r; bottom++; } … } Adjust the bottom index stamp top bottom

96 Art of Multiprocessor Programming96 Steal Work public Runnable popTop() { int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = oldTop + 1; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom <= oldTop) return null; Runnable r = tasks[oldTop]; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; return null; }

97 Art of Multiprocessor Programming97 Steal Work public Runnable popTop() { int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = oldTop + 1; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom <= oldTop) return null; Runnable r = tasks[oldTop]; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; return null; } Read top (value & stamp)

98 Art of Multiprocessor Programming98 Steal Work public Runnable popTop() { int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = oldTop + 1; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom <= oldTop) return null; Runnable r = tasks[oldTop]; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; return null; } Compute new value & stamp

99 Art of Multiprocessor Programming99 Steal Work public Runnable popTop() { int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = oldTop + 1; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom <= oldTop) return null; Runnable r = tasks[oldTop]; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; return null; } Quit if queue is empty stamp top bottom

100 Art of Multiprocessor Programming100 Steal Work public Runnable popTop() { int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = oldTop + 1; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom <= oldTop) return null; Runnable r = tasks[oldTop]; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; return null; } Try to steal the task stamp top CAS bottom

101 Art of Multiprocessor Programming101 Steal Work public Runnable popTop() { int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = oldTop + 1; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom <= oldTop) return null; Runnable r = tasks[oldTop]; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; return null; } Give up if conflict occurs

102 Art of Multiprocessor Programming102 Runnable popBottom() { if (bottom == 0) return null; bottom--; Runnable r = tasks[bottom]; int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = 0; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom > oldTop) return r; if (bottom == oldTop){ bottom = 0; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; } top.set(newTop,newStamp); return null; } Take Work

103 Art of Multiprocessor Programming103 Runnable popBottom() { if (bottom == 0) return null; bottom--; Runnable r = tasks[bottom]; int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = 0; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom > oldTop) return r; if (bottom == oldTop){ bottom = 0; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; } top.set(newTop,newStamp); return null; } 103 Take Work Make sure queue is non-empty

104 Art of Multiprocessor Programming104 Runnable popBottom() { if (bottom == 0) return null; bottom--; Runnable r = tasks[bottom]; int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = 0; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom > oldTop) return r; if (bottom == oldTop){ bottom = 0; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; } top.set(newTop,newStamp); return null; } Take Work Prepare to grab bottom task

105 Art of Multiprocessor Programming105 Runnable popBottom() { if (bottom == 0) return null; bottom--; Runnable r = tasks[bottom]; int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = 0; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom > oldTop) return r; if (bottom == oldTop){ bottom = 0; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; } top.set(newTop,newStamp); return null; } Take Work Read top, & prepare new values

106 Art of Multiprocessor Programming106 Runnable popBottom() { if (bottom == 0) return null; bottom--; Runnable r = tasks[bottom]; int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = 0; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom > oldTop) return r; if (bottom == oldTop){ bottom = 0; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; } top.set(newTop,newStamp); return null; } 106 Take Work If top & bottom 1 or more apart, no conflict stamp top bottom

107 Art of Multiprocessor Programming107 Runnable popBottom() { if (bottom == 0) return null; bottom--; Runnable r = tasks[bottom]; int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = 0; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom > oldTop) return r; if (bottom == oldTop){ bottom = 0; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; } top.set(newTop,newStamp); return null; } 107 Take Work At most one item left stamp top bottom

108 Art of Multiprocessor Programming108 Runnable popBottom() { if (bottom == 0) return null; bottom--; Runnable r = tasks[bottom]; int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = 0; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom > oldTop) return r; if (bottom == oldTop){ bottom = 0; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; } top.set(newTop,newStamp); return null; } 108 Take Work Try to steal last task. Always reset bottom because the DEQueue will be empty even if unsuccessful (why?)

109 Art of Multiprocessor Programming109 Runnable popBottom() { if (bottom == 0) return null; bottom--; Runnable r = tasks[bottom]; int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = 0; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom > oldTop) return r; if (bottom == oldTop){ bottom = 0; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; } top.set(newTop,newStamp); return null; } Take Work stamp top CAS bottom I win CAS

110 Art of Multiprocessor Programming110 Runnable popBottom() { if (bottom == 0) return null; bottom--; Runnable r = tasks[bottom]; int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = 0; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom > oldTop) return r; if (bottom == oldTop){ bottom = 0; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; } top.set(newTop,newStamp); return null; } Take Work stamp top CAS bottom If I lose CAS Thief must have won…

111 Art of Multiprocessor Programming111 Runnable popBottom() { if (bottom == 0) return null; bottom--; Runnable r = tasks[bottom]; int[] stamp = new int[1]; int oldTop = top.get(stamp), newTop = 0; int oldStamp = stamp[0], newStamp = oldStamp + 1; if (bottom > oldTop) return r; if (bottom == oldTop){ bottom = 0; if (top.CAS(oldTop, newTop, oldStamp, newStamp)) return r; } top.set(newTop,newStamp); return null; } Take Work failed to get last task Must still reset top

112 Art of Multiprocessor Programming112 Variations Stealing is expensive –Pay CAS –Only one task taken What if –Randomly balance loads?

113 Art of Multiprocessor Programming113 Work Stealing & Balancing Clean separation between app & scheduling layer Works well when number of processors fluctuates. Works on “black-box” operating systems

