Monitor Object Pattern
E81 CSE 532S: Advanced Multi-Paradigm Software Development
Concurrency Patterns and the Monitor Object Pattern
Chris Gill, Venkita Subramonian, Olcan Sercinoglu, Thomas Shepherd, Jim Luo
Department of Computer Science and Engineering
Washington University, St. Louis
cdgill@cse.wustl.edu
Concurrency Patterns
Key issue: sharing resources across threads
- Thread-Specific Storage Pattern: separates resource access per thread to avoid contention among threads
- Monitor Object Pattern: one thread at a time can access the object's resources
- Active Object Pattern: one worker thread owns the object's resources
- Half-Sync/Half-Async (HSHA) Pattern: a thread collects asynchronous requests and works on them synchronously (similar to Active Object)
- Leader/Followers Pattern: optimizes HSHA for independent messages/threads
Design Forces
- Need to prevent race conditions but allow incremental progress of threads within the object
  - Methods define (intuitive) synchronization boundaries
  - But a method may need to check state, etc., before proceeding
  - A refinement: only one method at a time is active within the object
- Synchronization must be transparent to the caller
  - The client must not be concerned with synchronization
- The executing method must be able to give up control
  - Other clients can then access the object
  - Allows "controlled" concurrency
- The object must be left "stable" during control transitions
Notes: a race condition is an uncontrolled concurrent change; "controlled" concurrency is concurrency in which internals are protected and deadlock is prevented; "stable" means object-specific invariants must hold.
Desired Internal Behavior
[Figure: a thread that finds the queue empty waits; it is notified once the queue becomes non-empty]
Solution: (Passive) Monitor Object
- Make the object a "Monitor Object"
  - Methods run in callers' threads
  - Condition variables arbitrate use of a common shared lock
    - E.g., using a std::mutex, a std::unique_lock (must be able to unlock and re-lock it), and a std::condition_variable
- Ensures incremental progress while avoiding race conditions
  - A thread waits on a condition
    - The condition variable atomically releases the lock and sleeps, then re-acquires the lock when the thread wakes
  - The thread is released when it can proceed
    - E.g., when the queue isn't empty/full
- Blocks the caller until the request can be handled; coordinates callers
[Figure: a client invokes add() and lookup() on a List monitor object, which encapsulates its own Lock and Condition]
Note: the monitor is transparent in the sense that the caller does not need to deal with external synchronization objects; easy implementation is a big plus.
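A minimal sketch of such a monitor object in C++, modeled on the queue behavior above. The class and member names (MonitorQueue, add, remove, not_empty_) are illustrative, not taken from the slides:

#include <condition_variable>
#include <deque>
#include <mutex>

// A monitor object: a thread-safe queue whose methods run in the callers'
// threads. All synchronization is internal, so callers never touch the
// lock or the condition variable directly.
template <typename T>
class MonitorQueue {
public:
    // Runs in the producer's thread: acquire the lock, update state,
    // then notify one waiting consumer that the queue is non-empty.
    void add(const T& item) {
        std::lock_guard<std::mutex> guard(lock_);
        items_.push_back(item);
        not_empty_.notify_one();
    }

    // Runs in the consumer's thread: wait (releasing the lock) until the
    // queue has an item, then remove and return it with the lock re-held.
    T remove() {
        std::unique_lock<std::mutex> guard(lock_);
        not_empty_.wait(guard, [this] { return !items_.empty(); });
        T item = items_.front();
        items_.pop_front();
        return item;
    }

private:
    std::mutex lock_;                   // the common shared lock
    std::condition_variable not_empty_; // condition: queue is not empty
    std::deque<T> items_;               // the protected state
};

Both methods run entirely in the calling thread; because remove() releases the lock while it waits, an add() from another thread can make progress and wake it, which is exactly the incremental progress the pattern is after.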
A Few Variations
- Can notify_one() or notify_all()
  - If it doesn't matter which thread runs next, just wake one up
  - If all waiters need to see whether it's their turn, wake all of them
- Can limit waiting time
  - Use wait_for() to return after a specified interval
  - Use wait_until() to return at a specified time point
- Can pass a predicate to wait() (or the wait_* methods)
  - Won't return until the predicate is satisfied (or the call times out)
  - Helps avoid spurious wakeup cases
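The sketches below illustrate these variations on a hypothetical ready flag (the flag and function names are made up for the example): a predicate wait that absorbs spurious wakeups, a timed wait_for(), and a notify_all() that wakes every waiter so each can re-check whether it may proceed.

#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool ready = false;  // hypothetical shared state guarded by m

void waiter_with_predicate() {
    std::unique_lock<std::mutex> lk(m);
    // The predicate form loops internally, so spurious wakeups are absorbed:
    // wait() returns only once the predicate is true.
    cv.wait(lk, [] { return ready; });
}

bool waiter_with_timeout() {
    std::unique_lock<std::mutex> lk(m);
    // wait_for() returns false if the interval elapses with the predicate
    // still false; wait_until() behaves the same way with an absolute deadline.
    return cv.wait_for(lk, std::chrono::milliseconds(100), [] { return ready; });
}

void signal_all() {
    {
        std::lock_guard<std::mutex> lk(m);
        ready = true;
    }
    // notify_one() would wake a single waiter; notify_all() wakes every
    // waiter so each can re-check whether it is its turn to proceed.
    cv.notify_all();
}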