CSC 580 - Multiprocessor Programming, Spring, 2011 Outline for Chapter 5 – Building Blocks – Library Classes, Dr. Dale E. Parson, week 5.

Synchronized collections
- Hashtable and Vector in java.util; Collections.synchronized* adapters.
- The entire collection is locked, but the locks do not cover compound operations:
  - iterators (navigation over elements)
  - conditional mutation
- Synchronized collections support client-side locking because they use the intrinsic lock on the collection object itself. See Listings 5.2 to 5.4.
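A minimal sketch of client-side locking on a synchronized wrapper, in the spirit of the listings cited above but not taken from them; the class name ClientSideLocking and the addIfAbsent method are invented for illustration.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ClientSideLocking {
    private static final List<String> list =
            Collections.synchronizedList(new ArrayList<String>());

    // addIfAbsent is a compound operation; the synchronized wrapper alone
    // does not make it atomic, so we lock the wrapper object itself.
    public static boolean addIfAbsent(String s) {
        synchronized (list) {
            boolean absent = !list.contains(s);
            if (absent) {
                list.add(s);
            }
            return absent;
        }
    }

    public static void main(String[] args) {
        addIfAbsent("hello");
        addIfAbsent("hello");
        synchronized (list) {            // iteration also needs the collection lock
            for (String s : list) {
                System.out.println(s);   // prints "hello" once
            }
        }
    }
}
```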

Iterator limitations
- ConcurrentModificationException if mutation occurs during iterator usage.
- Locking during iteration can lead to performance bottlenecks or deadlock.
- Cloning the collection is an (expensive) alternative.
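A small sketch of the cloning alternative, assuming the copy is made while briefly holding the collection lock so the snapshot is consistent; the class and method names are invented for the example.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SnapshotIteration {
    private static final List<Integer> shared =
            Collections.synchronizedList(new ArrayList<Integer>());

    public static void printSnapshot() {
        List<Integer> snapshot;
        synchronized (shared) {                      // brief lock just to copy
            snapshot = new ArrayList<Integer>(shared);
        }
        for (Integer i : snapshot) {                 // iterate without holding the lock;
            System.out.println(i);                   // other threads may mutate 'shared' now
        }
    }

    public static void main(String[] args) {
        shared.add(1);
        shared.add(2);
        printSnapshot();
    }
}
```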

Concurrent collections
- java.util.concurrent
- Unlike synchronized collections, which lock an entire collection, the concurrent collections use data structures designed for concurrent access by multiple threads.
- Scalability improves. Design may simplify.
- Compare Queue.remove() on an empty queue with LinkedBlockingQueue.take().
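The contrast in the last bullet can be sketched as follows: remove() on an empty queue throws immediately, while take() simply blocks until an element arrives. The 100 ms producer delay and the class name are arbitrary choices for the example.

```java
import java.util.LinkedList;
import java.util.NoSuchElementException;
import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class RemoveVersusTake {
    public static void main(String[] args) throws InterruptedException {
        Queue<String> plain = new LinkedList<String>();
        try {
            plain.remove();                           // empty: throws immediately
        } catch (NoSuchElementException e) {
            System.out.println("remove() threw on empty queue");
        }

        final BlockingQueue<String> blocking = new LinkedBlockingQueue<String>();
        new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(100);                // simulate a slow producer
                    blocking.put("payload");
                } catch (InterruptedException ignored) { }
            }
        }).start();
        System.out.println("take() returned: " + blocking.take()); // blocks until put()
    }
}
```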

ConcurrentHashMap
- ConcurrentHashMap – See Graph 7 in Summer of
- Uses lock striping to allow concurrent access to disjoint sets of buckets in the hash table. Also uses non-blocking retrieval.
- Uses weakly consistent iterators that may return “before” or “after” sequences of elements during concurrent mutation instead of throwing an exception.
- No direct means for locking an entire map.
- Lock i locks a bucket’s linked list when (bucketNumber % lockCount) == i.
- How might a table such as this maintain table-wide data fields such as the number of elements?
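A hedged sketch of how ConcurrentHashMap's atomic compound operations (putIfAbsent, replace) and weakly consistent iteration look in client code; the hit-counter scenario is invented for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> hits = new ConcurrentHashMap<String, Integer>();

        hits.putIfAbsent("page", 0);                     // atomic check-then-act
        // Atomic read-modify-write without locking the whole map:
        for (;;) {
            Integer old = hits.get("page");
            if (hits.replace("page", old, old + 1)) {    // succeeds only if unchanged
                break;
            }
        }

        // Weakly consistent iteration: tolerates concurrent mutation, never throws
        // ConcurrentModificationException, may or may not reflect recent updates.
        for (Map.Entry<String, Integer> e : hits.entrySet()) {
            System.out.println(e.getKey() + " = " + e.getValue());
        }
    }
}
```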

Copy-on-write – immutable collections
- CopyOnWriteArrayList, CopyOnWriteArraySet
- Every mutation clones a fresh copy of an underlying array. Iterators see a snapshot not subject to mutation.
- Useful when traversal / lookup exceed mutation.
- Best to construct these objects from other initialized collection objects if possible, to avoid array copy costs on every initial element insertion.
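A short sketch of the bulk-construction advice and of snapshot iteration; the listener-list scenario is an assumption made for the example.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CopyOnWriteDemo {
    public static void main(String[] args) {
        List<String> initial = Arrays.asList("a", "b", "c");
        CopyOnWriteArrayList<String> listeners =
                new CopyOnWriteArrayList<String>(initial);    // single copy at construction

        Iterator<String> it = listeners.iterator();           // snapshot of {a, b, c}
        listeners.add("d");                                   // safe during iteration
        while (it.hasNext()) {
            System.out.println(it.next());                    // prints a, b, c only
        }
        System.out.println("final size: " + listeners.size()); // 4
    }
}
```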

Blocking queues – serial thread confinement for object ownership
- ArrayBlockingQueue, LinkedBlockingQueue; unbounded or bounded capacity options.
- offer() is a non-blocking add() that returns a boolean; take() blocks for a value unless interrupted.
- Producer-consumer / pipelined / dataflow designs: packet streams, audio / video streams, telephony voice channels in digital signal processors, etc.
- Process -> buffer -> process -> buffer; mix, distribute data flows.
- PriorityBlockingQueue for ordered elements.
- SynchronousQueue has zero internal buffering.
- LinkedBlockingDeque supports work stealing from the back.
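A minimal producer-consumer sketch over a bounded ArrayBlockingQueue; the poison-value shutdown convention is an assumption of this example, not part of the BlockingQueue API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    private static final String POISON = "STOP";    // sentinel chosen for this example
    private static final BlockingQueue<String> buffer =
            new ArrayBlockingQueue<String>(4);      // bounded capacity

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    for (String item = buffer.take(); item != POISON; item = buffer.take()) {
                        System.out.println("consumed " + item);   // take() blocks when empty
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.start();

        for (int i = 0; i < 10; ++i) {
            buffer.put("packet-" + i);              // put() blocks when the buffer is full
        }
        buffer.put(POISON);
        consumer.join();
    }
}
```

Because each item is handed off through the queue and never touched by the producer again, the queue serially confines ownership of the objects it carries.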

Other concurrent collections
- ConcurrentSkipListMap and ConcurrentSkipListSet are O(log(n))-per-operation alternatives to balanced trees that do not require global locking. See Graph 6. Access is distributed over hierarchical lists.
- ConcurrentHashMap is ideally O(1) per operation.
- ConcurrentLinkedQueue has unbounded capacity, does not block, and poll() returns null when the queue is empty.
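A brief sketch contrasting the skip-list map's sorted, navigable view with ConcurrentLinkedQueue's non-blocking, null-returning poll(); the keys and values are arbitrary.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class OtherConcurrentCollections {
    public static void main(String[] args) {
        ConcurrentNavigableMap<Integer, String> map =
                new ConcurrentSkipListMap<Integer, String>();
        map.put(30, "thirty");
        map.put(10, "ten");
        map.put(20, "twenty");
        System.out.println(map.firstKey());            // 10: keys kept in sorted order
        System.out.println(map.headMap(25));           // {10=ten, 20=twenty}

        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<String>();
        System.out.println(queue.poll());              // null: empty queue, no blocking
        queue.offer("x");
        System.out.println(queue.poll());              // "x"
    }
}
```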

Interruption
- See the interrupt* methods in java.lang.Thread.
- Java interruption is cooperative and occurs only at well-defined synchronization points. It is not asynchronous like a hardware interrupt.
- If used, there must be an interruption policy that is part of the application framework architecture: when are interrupts sent, what do they mean, and how must they be handled?
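A small sketch of cooperative interruption: the worker observes the interrupt only at the blocking sleep() call, a well-defined interruption point. The sleep durations are arbitrary.

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        Thread.sleep(1000);            // blocking point where the interrupt lands
                    }
                } catch (InterruptedException e) {
                    System.out.println("interrupted while sleeping; shutting down");
                }
            }
        });
        worker.start();
        Thread.sleep(100);
        worker.interrupt();                            // a request, not a command
        worker.join();
    }
}
```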

Guidelines on handling interrupts
- Blocking waits, I/O, and blocking container classes can throw InterruptedException.
- Use interruption according to the application architecture; cancellation policy is covered later.
- Propagate the exception to the caller, or
- restore the interrupt so it can be re-raised elsewhere in the execution sequence, or
- ignore it if the application does not use interruption.
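A sketch of the first two options (propagate, or restore), assuming a task whose run() signature cannot declare InterruptedException; the class name RestoreInterrupt is invented for the example.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class RestoreInterrupt implements Runnable {
    private final BlockingQueue<String> queue;

    public RestoreInterrupt(BlockingQueue<String> queue) {
        this.queue = queue;
    }

    // Option 1: propagate to the caller when the signature allows it.
    public String nextItem() throws InterruptedException {
        return queue.take();
    }

    // Option 2: restore the interrupt status so code higher on the call
    // stack can still observe and act on it.
    public void run() {
        try {
            System.out.println("got " + queue.take());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();   // re-raise for later checks
        }
    }

    public static void main(String[] args) {
        BlockingQueue<String> q = new LinkedBlockingQueue<String>();
        q.offer("hello");
        new RestoreInterrupt(q).run();            // prints "got hello"
    }
}
```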

Latches and Futures
- CountDownLatch blocks waiting threads until its counter reaches zero. It cannot be reused.
  - Example: block until all resources are initialized.
- FutureTask wraps a Callable to run later and implements interface Future.
  - A blocking or non-blocking get() re-synchronizes with the result.
  - May be cancelled, polled, or interrupted (with an exception).
  - A client may start a set of Futures and wait for their completion.
  - Futures are useful in waiting for results from distributed computations.
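An illustrative sketch combining a CountDownLatch gate with a FutureTask result; the worker count and the computed value are arbitrary choices for the example.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class LatchAndFuture {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        final CountDownLatch ready = new CountDownLatch(2);   // wait for 2 workers

        FutureTask<Integer> expensive = new FutureTask<Integer>(new Callable<Integer>() {
            public Integer call() {
                return 6 * 7;                                 // stand-in for real work
            }
        });

        for (int i = 0; i < 2; ++i) {
            new Thread(new Runnable() {
                public void run() {
                    ready.countDown();                        // signal initialization done
                }
            }).start();
        }
        new Thread(expensive).start();                        // FutureTask is also a Runnable

        ready.await();                                        // block until the count reaches 0
        System.out.println("latch released, result = " + expensive.get()); // blocks for result
    }
}
```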

Semaphores and Barriers
- Semaphore is roughly a non-reentrant counting lock.
  - Useful for metering out blocking, bounded resources. The underlying resource pool may reside in a non-blocking container.
- CyclicBarrier await()s until all threads in a pool have await()ed at the barrier.
  - Useful for “two-stroke” or “multi-stroke” architectures, where all worker threads must complete an aggregate task at time Ti before any initiates work at time Ti+1. Cellular automata, simulations, genetic algorithms.
  - Two alternating sets of state variables allow one set to be read and the other to be written without additional locking.
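A hedged sketch of a Semaphore metering a bounded resource and a CyclicBarrier enforcing the "all workers finish step i before any starts step i+1" rule; the thread, permit, and step counts are arbitrary.

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.Semaphore;

public class SemaphoreAndBarrier {
    private static final Semaphore permits = new Semaphore(2);   // at most 2 concurrent users
    private static final CyclicBarrier barrier = new CyclicBarrier(3, new Runnable() {
        public void run() {
            System.out.println("--- all workers reached the barrier ---");
        }
    });

    public static void main(String[] args) {
        for (int t = 0; t < 3; ++t) {
            final int id = t;
            new Thread(new Runnable() {
                public void run() {
                    try {
                        for (int step = 0; step < 2; ++step) {
                            permits.acquire();                   // meter the shared resource
                            System.out.println("worker " + id + " working on step " + step);
                            permits.release();
                            barrier.await();                     // wait for the other workers
                        }
                    } catch (Exception e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }).start();
        }
    }
}
```

Because the barrier is cyclic, the same barrier object gates every step, which is what makes the multi-stroke pattern work without re-creating synchronization objects.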

Memoization
- Uses FutureTask to accomplish lazy evaluation: delays running expensive computations until needed.
- Caches the functional result of the computation – the result is always the same for a given set of parameter values. See Listings 5.19 and
- From the client’s perspective, the cache maps an input parameter value to an output value.
- The Memoizer uses a ConcurrentMap to map an input parameter to a Future. The first time get() is called, it waits for the result. Thereafter the result is buffered in the completed FutureTask.
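A simplified sketch in the spirit of the textbook's Memoizer (see the listings cited above), not a copy of it: the map holds Futures rather than values, so a second caller with the same argument waits on the first caller's in-flight computation instead of repeating it. The Computable interface shown here is an assumption mirroring the book's usage.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

interface Computable<A, V> {
    V compute(A arg) throws InterruptedException;
}

public class Memoizer<A, V> {
    private final ConcurrentMap<A, FutureTask<V>> cache =
            new ConcurrentHashMap<A, FutureTask<V>>();
    private final Computable<A, V> delegate;

    public Memoizer(Computable<A, V> delegate) {
        this.delegate = delegate;
    }

    public V compute(final A arg) throws InterruptedException, ExecutionException {
        FutureTask<V> f = cache.get(arg);
        if (f == null) {
            FutureTask<V> task = new FutureTask<V>(new Callable<V>() {
                public V call() throws InterruptedException {
                    return delegate.compute(arg);   // expensive, pure function of arg
                }
            });
            f = cache.putIfAbsent(arg, task);       // atomic check-then-act
            if (f == null) {                        // we won the race: run the task once
                f = task;
                task.run();
            }
        }
        return f.get();                             // blocks only while the value is being computed
    }
}
```

Later calls with the same argument find a completed FutureTask in the map and return its cached value immediately.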

Summary of Part I
- See textbook page 110.