Transactional Memory
- Semaphores, monitors, and conditional critical regions all suffer from limitations based on lock semantics
  - Naïve synchronization may be safe, but won't scale
  - Better-performing approaches are necessarily more complex and risk hazards/deadlocks, whose absence is hard to prove
  - Want to compose atomicity (locked code doesn't compose)
- Transactional memory attempts to address this issue
  - Similar to discrete event simulation: speculative execution
  - Detect conflicts at run time, arbitrate so one tx proceeds
  - Other tx must abort and roll back to an earlier checkpoint
- Hopefully, most transactions can complete without rollback
  - However, the extent to which that is true affects performance
  - A key issue is that side effects (I/O) may be hard to undo
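As a concrete illustration of the speculate/detect/roll-back cycle, here is a minimal C++11 sketch of optimistic updates to a single versioned cell. This is not a real transactional memory system (real HTM/STM tracks read and write sets across many locations and arbitrates whole transactions); the VersionedCell type and atomic_update helper are illustrative assumptions:

```cpp
#include <atomic>
#include <thread>
#include <iostream>

// One "transactional" cell: an even version means stable; a writer
// bumps it to odd while committing, then to the next even value.
struct VersionedCell {
    std::atomic<unsigned> version{0};
    std::atomic<int>      value{0};
};

// Speculatively compute f(old) off to the side, then try to commit.
// If another update committed in the meantime, abort and retry.
template <typename F>
int atomic_update(VersionedCell& cell, F f) {
    for (;;) {
        unsigned v0 = cell.version.load();
        if (v0 & 1) continue;                  // a commit is in flight; retry
        int oldval = cell.value.load();        // checkpoint: speculative read
        int result = f(oldval);                // compute privately
        unsigned expected = v0;
        // Commit succeeds only if no conflicting commit intervened.
        if (cell.version.compare_exchange_strong(expected, v0 + 1)) {
            cell.value.store(result);
            cell.version.store(v0 + 2);        // commit complete
            return result;
        }
        // Conflict detected: roll back (discard result) and retry.
    }
}

int main() {
    VersionedCell cell;
    auto work = [&cell] {
        for (int i = 0; i < 1000; ++i)
            atomic_update(cell, [](int v) { return v + 1; });
    };
    std::thread t1(work), t2(work);
    t1.join(); t2.join();
    std::cout << cell.value.load() << "\n";    // 2000: no lost updates
    return 0;
}
```

Note that f may run several times before one attempt commits, which is exactly why irreversible side effects such as I/O inside the speculative region are problematic.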
Implicit Synchronization
- Implicit synchronization is done when (part of) one program construct waits until (part of) another is done
  - E.g., parallel for loops in various languages run "multi-phase"
  - E.g., perform a set of reads for all expressions' right-hand sides, and then perform a set of writes to assign the left-hand sides …
- Futures (and promises and tasks, etc.) offer explicit language constructs for implicit synchronization
  - In C++11, can use std::promise and std::future for asynchronous delivery of a result from one thread to another
  - The program doesn't say how to wait (that is encapsulated by the promise/future mechanism), only that the result will appear later
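For instance, a minimal C++11 sketch of that promise/future hand-off (the thread structure and values here are illustrative):

```cpp
#include <future>
#include <thread>
#include <iostream>

int main() {
    std::promise<int> p;
    std::future<int> f = p.get_future();

    // The producer thread delivers its result asynchronously.
    std::thread producer([&p] { p.set_value(6 * 7); });

    // The consumer doesn't say how to wait: get() simply blocks
    // until the value appears, encapsulating the synchronization.
    std::cout << f.get() << "\n";   // prints 42

    producer.join();
    return 0;
}
```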
And-Parallelism and Or-Parallelism
- And-parallelism requires all results to be produced
  - The parent thread (or process) waits for its children to complete
  - Often, it uses their results to compute another result
- Or-parallelism requires only one result to be produced
  - May take the first, or select from the ones available at a deadline
  - Raises important scheduling issues, e.g., liveness and fairness
- And-parallelism and or-parallelism are often important in non-imperative (e.g., logic, functional) languages
  - E.g., in Lisp, and-parallel evaluation of sub-expressions
  - E.g., in Prolog, or-parallelism and backtracking in resolution
- Other approaches are used in non-imperative languages
  - E.g., Erlang's message passing between client and server code
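A C++11 sketch of both patterns: and-parallelism waits on every future, while or-parallelism lets the first finisher win. The slow_square function and the atomic-flag guard are illustrative assumptions (C++11 has no built-in "wait for any"):

```cpp
#include <future>
#include <thread>
#include <atomic>
#include <iostream>

int slow_square(int x) { return x * x; }  // stand-in for real work

int main() {
    // And-parallelism: the parent needs *all* child results.
    auto a = std::async(std::launch::async, slow_square, 3);
    auto b = std::async(std::launch::async, slow_square, 4);
    std::cout << a.get() + b.get() << "\n";          // waits for both: 25

    // Or-parallelism: the parent needs only the *first* result.
    std::promise<int> first;
    std::atomic_flag claimed = ATOMIC_FLAG_INIT;
    auto racer = [&](int x) {
        int r = slow_square(x);
        if (!claimed.test_and_set())                 // only one thread wins
            first.set_value(r);
    };
    std::thread t1(racer, 5), t2(racer, 6);
    std::cout << first.get_future().get() << "\n";   // whichever finished first
    t1.join(); t2.join();
    return 0;
}
```

The atomic flag matters because std::promise::set_value throws if called twice; the guard ensures exactly one racer delivers the or-parallel result.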
Message Passing
- Implemented via two operations, send and receive
- Gives another higher-level construct for concurrency control, this time based on semantics similar to those of distributed processes
  - Also often implemented as objects (Active Object pattern)
- Synchronization of sender and receiver may vary
  - The sender may wait for the receiver, or drop off the message (buffered)
  - The receiver may block, or continue if no messages are available
  - May also involve an intermediary (mailbox, event channel) and push/push, push/pull, pull/push, or pull/pull operations
- Ada task rendezvous is an example within a language
  - An accept statement matches entries with code to run
  - Separate caller (sender) and called (receiver) threads wait for each other (rendezvous) before continuing
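To make the buffered variant concrete, here is a minimal C++11 mailbox sketch; the Mailbox class is an illustrative assumption, not a standard type. In a rendezvous, by contrast, send() would also block until the receiver accepts:

```cpp
#include <queue>
#include <mutex>
#include <condition_variable>
#include <thread>
#include <string>
#include <iostream>

// An unbounded mailbox: send() drops the message off without waiting
// (buffered, asynchronous); receive() blocks until one is available.
template <typename T>
class Mailbox {
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable nonempty_;
public:
    void send(T msg) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(msg));
        }
        nonempty_.notify_one();
    }
    T receive() {
        std::unique_lock<std::mutex> lock(mutex_);
        nonempty_.wait(lock, [this] { return !queue_.empty(); });
        T msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }
};

int main() {
    Mailbox<std::string> mbox;
    std::thread receiver([&mbox] { std::cout << mbox.receive() << "\n"; });
    mbox.send("hello");   // the sender does not wait for the receiver
    receiver.join();
    return 0;
}
```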
Today’s Studio Exercises
- We'll code up ideas from Scott, Chapter 12 through Section 12.6
  - Again via C++11 concurrency/synchronization types and features
  - Looking at additional concurrency/synchronization ideas
- Today's exercises are again in C++
  - Please take advantage of the on-line tutorial and reference manual pages that are linked on the course web site
  - The provided Makefile may also be helpful
- As always, please ask us for help as needed
- When done, send an e-mail with your answers to the account, with subject line "Concurrency Studio IV"