1 CSC 580 - Multiprocessor Programming, Spring, 2011 Outline for Chapter 5 – Building Blocks – Library Classes, Dr. Dale E. Parson, week 5

2 Synchronized collections
- Hashtable and Vector in java.util; the Collections.synchronized* adapters.
- The entire collection is locked, but the locks do not cover compound operations:
  - iterators
  - navigation over elements
  - conditional mutation
- Synchronized collections support client-side locking because they use the intrinsic lock on the collection object itself. See Listings 5.2 to 5.4 and the sketch below.
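A minimal put-if-absent sketch in the spirit of Listings 5.2 to 5.4, not the textbook code verbatim; the class name PutIfAbsentDemo is illustrative. It shows why a compound operation must hold the synchronized wrapper's own intrinsic lock.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PutIfAbsentDemo {
    // A synchronized wrapper: individual calls are thread-safe,
    // but check-then-act sequences are not.
    private final List<String> list =
            Collections.synchronizedList(new ArrayList<String>());

    // Client-side locking: hold the wrapper's own intrinsic lock
    // so contains() and add() execute as one atomic unit.
    public boolean putIfAbsent(String x) {
        synchronized (list) {
            boolean absent = !list.contains(x);
            if (absent) {
                list.add(x);
            }
            return absent;
        }
    }
}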

3 Iterator limitations
- ConcurrentModificationException if mutation occurs during iterator usage.
- Locking during iteration can lead to performance bottlenecks or deadlock.
- Cloning the collection is an (expensive) alternative.
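A hedged illustration of the two usual workarounds; the class and method names are invented. Either lock the collection for the whole traversal, or iterate over a private snapshot copied under the lock.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class IterationChoices {
    private final List<Integer> list =
            Collections.synchronizedList(new ArrayList<Integer>());

    // Option 1: lock the collection for the entire traversal.
    // Safe from ConcurrentModificationException, but other threads
    // are blocked for the duration (bottleneck / deadlock risk).
    public int sumLocked() {
        int sum = 0;
        synchronized (list) {
            for (int i : list) {
                sum += i;
            }
        }
        return sum;
    }

    // Option 2: iterate over a private snapshot. The copy itself
    // must still be made under the lock, and copying is expensive.
    public int sumOverClone() {
        List<Integer> snapshot;
        synchronized (list) {
            snapshot = new ArrayList<Integer>(list);
        }
        int sum = 0;
        for (int i : snapshot) {
            sum += i;
        }
        return sum;
    }
}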

4 Concurrent collections
- java.util.concurrent
- Unlike synchronized collections, which lock an entire collection, the concurrent collections use data structures designed for concurrent access by multiple threads.
- Scalability improves, and the design may simplify.
- Compare Queue.remove() on an empty queue with LinkedBlockingQueue.take().
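A small sketch of the comparison in the last bullet; the consumer methods are hypothetical. remove() forces the caller to invent its own empty-queue policy, while take() simply blocks until a producer supplies an element.

import java.util.NoSuchElementException;
import java.util.Queue;
import java.util.concurrent.LinkedBlockingQueue;

public class EmptyQueueComparison {
    // Non-blocking queue: remove() throws when the queue is empty,
    // so the consumer must code its own wait-and-retry policy.
    static String consumeNonBlocking(Queue<String> q) {
        try {
            return q.remove();
        } catch (NoSuchElementException empty) {
            return null;  // caller decides what "empty" means
        }
    }

    // Blocking queue: take() parks the consumer until a producer
    // offers an element, which often simplifies the design.
    static String consumeBlocking(LinkedBlockingQueue<String> q)
            throws InterruptedException {
        return q.take();
    }
}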

5 ConcurrentHashMap
- ConcurrentHashMap – see Graph 7 in Summer of 2010.
- Uses lock striping to allow concurrent access to disjoint sets of buckets in the hash table. Also uses non-blocking retrieval.
- Uses weakly consistent iterators that may return “before” or “after” sequences of elements during concurrent mutation instead of an exception.
- No direct means for locking an entire map.
- Lock i locks a bucket’s linked list when (bucketNumber % lockCount) == i.
- How might a table such as this maintain table-wide data fields such as the number of elements?
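An illustrative fragment (the hit-counter class is hypothetical) showing two consequences of this design: putIfAbsent() is an atomic compound operation supplied by the map itself, and the weakly consistent iterator tolerates concurrent mutation without throwing.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class StripedMapDemo {
    private final ConcurrentMap<String, Integer> hits =
            new ConcurrentHashMap<String, Integer>();

    // putIfAbsent() is an atomic check-then-act supplied by the map,
    // so no client-side locking is needed for this compound operation.
    public void recordFirstVisit(String page) {
        hits.putIfAbsent(page, 0);
    }

    // The weakly consistent iterator never throws
    // ConcurrentModificationException; it may or may not reflect
    // insertions made while the traversal is in progress.
    public void dump() {
        for (Map.Entry<String, Integer> e : hits.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}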

6 Copy-on-write – immutable collections
- CopyOnWriteArrayList, CopyOnWriteArraySet
- Every mutation clones a fresh copy of an underlying array.
- Iterators see a snapshot not subject to mutation.
- Useful when traversals and lookups greatly outnumber mutations.
- Best to construct these objects from other initialized collection objects if possible, to avoid array copy costs on every initial element insertion.
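A sketch of a read-mostly listener registry (class, field, and listener names are invented) that follows the last bullet: the list is seeded from an already-populated collection, so initialization costs one array copy rather than one per element.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ListenerRegistry {
    // Constructing from an already-populated collection performs one
    // array copy, instead of one copy per add() during initialization.
    private final List<String> listeners =
            new CopyOnWriteArrayList<String>(
                    Arrays.asList("logger", "auditor", "console"));

    // Rare mutation: each add() clones the underlying array.
    public void register(String name) {
        listeners.add(name);
    }

    // Frequent traversal: the iterator sees an immutable snapshot,
    // so no locking and no ConcurrentModificationException.
    public void broadcast(String event) {
        for (String l : listeners) {
            System.out.println(l + " <- " + event);
        }
    }
}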

7 Blocking queues – serial thread confinement for object ownership
- ArrayBlockingQueue (always bounded) and LinkedBlockingQueue (bounded or effectively unbounded capacity options).
- offer() is a non-blocking add() that returns a boolean; take() blocks for a value unless interrupted.
- Producer-consumer / pipelined / dataflow designs: packet streams, audio / video streams, telephony voice channels in digital signal processors, etc. Process -> buffer -> process -> buffer; mix and distribute data flows (see the sketch below).
- PriorityBlockingQueue for ordered elements.
- SynchronousQueue has zero internal buffering.
- LinkedBlockingDeque supports work stealing from the back.
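A minimal two-stage producer-consumer pipeline over a bounded LinkedBlockingQueue; the class name and the -1 end-of-stream marker are assumptions for the sketch, not part of the library.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PipelineDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded capacity provides back-pressure on the producer.
        final BlockingQueue<Integer> buffer =
                new LinkedBlockingQueue<Integer>(16);

        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 0; i < 100; i++) {
                        buffer.put(i);       // blocks when the buffer is full
                    }
                    buffer.put(-1);          // hypothetical end-of-stream marker
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        int packet = buffer.take();  // blocks when empty
                        if (packet < 0) {
                            break;                   // end-of-stream
                        }
                        System.out.println("processed " + packet);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}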

8 Other concurrent collections
- ConcurrentSkipListMap and ConcurrentSkipListSet are O(log(n))-per-operation alternatives to balanced trees that do not require global locking. See Graph 6. Access is distributed over hierarchical lists.
- ConcurrentHashMap is ideally O(1) per operation.
- ConcurrentLinkedQueue has unbounded capacity and does not block; poll() returns null when the queue is empty.
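A short demonstration (invented class name and keys) of the two collections above: sorted navigation on a ConcurrentSkipListMap, and ConcurrentLinkedQueue.poll() returning null rather than blocking or throwing.

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class OtherCollectionsDemo {
    public static void main(String[] args) {
        // Sorted map with O(log n) operations and no global lock.
        ConcurrentNavigableMap<Integer, String> scores =
                new ConcurrentSkipListMap<Integer, String>();
        scores.put(70, "pass");
        scores.put(90, "distinction");
        System.out.println(scores.firstKey());        // 70 (sorted order)
        System.out.println(scores.floorKey(85));      // 70

        // Unbounded, non-blocking queue: poll() returns null when empty
        // instead of blocking or throwing.
        ConcurrentLinkedQueue<String> work =
                new ConcurrentLinkedQueue<String>();
        work.offer("task-1");
        System.out.println(work.poll());              // "task-1"
        System.out.println(work.poll());              // null: queue is empty
    }
}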

9 Interruption
- See the interrupt* methods in java.lang.Thread.
- Java interruption is cooperative and occurs only at well-defined synchronization points. It is not asynchronous like a hardware interrupt.
- If used, there must be an interruption policy that is part of the application framework architecture: when are interrupts sent, what do they mean, and how must they be handled?

10 Guidelines on handling interrupts
- Blocking waits, I/O, and blocking container classes can throw InterruptedException.
- Use according to the application architecture; cancellation policy is covered later.
- Options (first two sketched below): propagate to the caller; restore the interrupt so it can be re-raised elsewhere in the execution sequence; or ignore it if the application does not use interruption.
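A hedged sketch of the first two guidelines; takeOrPropagate and takeOrNull are hypothetical helper names.

import java.util.concurrent.BlockingQueue;

public class InterruptHandling {
    // Strategy 1: propagate to the caller by declaring the exception.
    public static <T> T takeOrPropagate(BlockingQueue<T> q)
            throws InterruptedException {
        return q.take();
    }

    // Strategy 2: restore the interrupt status so code further up
    // the call stack can observe and act on it.
    public static <T> T takeOrNull(BlockingQueue<T> q) {
        try {
            return q.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // re-raise for later handlers
            return null;                         // hypothetical "no value" result
        }
    }
}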

11 Latches and Futures
- CountDownLatch blocks waiting threads until its counter reaches zero. It cannot be reused. Example: block until all resources are initialized.
- FutureTask wraps a Callable to run later and implements the Future interface.
- get() re-synchronizes with the result: the plain form blocks, and the timed form limits the wait. A Future may be cancelled, polled, or interrupted (with an exception).
- A client may start a set of Futures and wait for their completion. Futures are useful for waiting on results from distributed computations.
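A compact sketch combining both classes; the class name and the 6 * 7 placeholder computation are invented. A CountDownLatch gates main() until all workers have initialized, then a FutureTask computes a result that get() retrieves.

import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class LatchAndFutureDemo {
    public static void main(String[] args)
            throws InterruptedException, ExecutionException {
        final int workers = 3;
        // Latch blocks main() until every worker has counted down.
        final CountDownLatch ready = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            new Thread(new Runnable() {
                public void run() {
                    // ... initialize some resource here ...
                    ready.countDown();
                }
            }).start();
        }
        ready.await();   // all resources initialized; latch cannot be reused

        // FutureTask wraps a Callable whose result is fetched later.
        FutureTask<Integer> answer = new FutureTask<Integer>(
                new Callable<Integer>() {
                    public Integer call() {
                        return 6 * 7;   // placeholder for an expensive computation
                    }
                });
        new Thread(answer).start();
        System.out.println("result = " + answer.get());  // blocks until done
    }
}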

12 Semaphores and Barriers
- A Semaphore is roughly a nonreentrant counting lock. It is useful for metering out bounded resources: callers block when no permit is available. The underlying resource pool may reside in a non-blocking container.
- CyclicBarrier await()s until all threads in a pool have await()ed at the barrier.
- Barriers are useful for “two-stroke” or “multi-stroke” architectures, where all worker threads must complete an aggregate task at time Ti before any initiates work at time Ti+1. Examples: cellular automata, simulations, genetic algorithms.
- Two alternating sets of state variables allow one set to be read and the other to be written without additional locking.
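An illustrative sketch (class name, permit count, and step counts are invented) of a semaphore metering a two-permit resource and a CyclicBarrier keeping a pool of workers in lock-step across steps.

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.Semaphore;

public class BarrierAndSemaphoreDemo {
    public static void main(String[] args) {
        final int workers = 4;

        // Semaphore meters out a bounded resource: at most 2 threads
        // may hold a "connection" permit at a time.
        final Semaphore connections = new Semaphore(2);

        // Barrier: no worker starts step i+1 until all have finished step i.
        final CyclicBarrier stepDone = new CyclicBarrier(workers,
                new Runnable() {
                    public void run() {
                        System.out.println("all workers finished this step");
                    }
                });

        for (int w = 0; w < workers; w++) {
            final int id = w;
            new Thread(new Runnable() {
                public void run() {
                    try {
                        for (int step = 0; step < 3; step++) {
                            connections.acquire();       // block if no permit free
                            System.out.println("worker " + id + " step " + step);
                            connections.release();
                            stepDone.await();            // wait for the whole pool
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } catch (BrokenBarrierException e) {
                        // another worker was interrupted; give up on the barrier
                    }
                }
            }).start();
        }
    }
}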

13 Memoization
- Uses FutureTask to accomplish lazy evaluation: expensive computations are delayed until needed.
- Caches the functional result of the computation – the result is always the same for a given set of parameter values. See Listings 5.19 and 5.20 and the sketch below.
- From the client’s perspective, the cache maps an input parameter value to an output value.
- The Memoizer uses a ConcurrentMap to map an input parameter to a Future. The first call to get() waits for the result; thereafter the result is cached in the completed FutureTask.
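A condensed memoizer sketch in the spirit of Listings 5.19 and 5.20, not the textbook code verbatim; the Computable interface mirrors the book's convention, but this body is a simplified reconstruction that runs the FutureTask in the caller's thread.

import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Maps each argument to a Future for its result, so only the first
// caller computes and later callers wait for or reuse the cached value.
public class Memoizer<A, V> {
    public interface Computable<A, V> {
        V compute(A arg) throws InterruptedException;
    }

    private final ConcurrentMap<A, Future<V>> cache =
            new ConcurrentHashMap<A, Future<V>>();
    private final Computable<A, V> function;

    public Memoizer(Computable<A, V> function) {
        this.function = function;
    }

    public V compute(final A arg)
            throws InterruptedException, ExecutionException {
        Future<V> f = cache.get(arg);
        if (f == null) {
            FutureTask<V> task = new FutureTask<V>(new Callable<V>() {
                public V call() throws InterruptedException {
                    return function.compute(arg);
                }
            });
            // putIfAbsent() ensures only one FutureTask wins the race.
            f = cache.putIfAbsent(arg, task);
            if (f == null) {
                f = task;
                task.run();   // lazy evaluation, in the caller's thread
            }
        }
        return f.get();       // first call blocks; later calls return the cached value
    }
}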

14 Summary of Part I See textbook page 110.

