
1 Concurrency
CS 510: Programming Languages
David Walker

2 Concurrent PL
- Many modern applications are structured as a collection of concurrent, cooperating components
- Different parts of the concurrent program may be run in parallel, resulting in a performance improvement
- Performance is only one reason for writing concurrent programs
- Concurrency is also a useful structuring device for programs
  - A thread encapsulates a unit of state and control

3 Concurrent Applications
- Interactive systems
  - user interfaces, window managers, Microsoft PowerPoint, etc.
- Reactive systems
  - the program responds to the environment
  - signal processors, dataflow networks, node programs
- Characteristics
  - multiple input streams
  - low latency is crucial
  - multiple distinct tasks operate simultaneously

4 Concurrent PL
Concurrent PLs incorporate three kinds of mechanisms:
- A means to introduce new threads
  - a thread = a unit of state + control
  - either static or dynamic thread creation
- Synchronization primitives
  - coordinate independent threads
  - reduce nondeterminism in the order of computations
- A communication mechanism
  - shared memory or message passing

5 Threads
- A thread = state + control
  - control = an instruction stream (program counter)
  - state = local variables, stack, possibly a local heap
- Static thread creation
  - p1 | p2 | ... | pn creates n threads at runtime
- Dynamic thread creation
  - spawn/fork can create arbitrarily many threads (a CML sketch follows below)
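
A hedged sketch of dynamic thread creation in Concurrent ML (the language this deck returns to on slide 19); the worker function and thread count are illustrative, and this and the later sketches assume the CML scheduler is running (via RunCML.doit, shown in the final example):

    (* Dynamic thread creation in CML.
       CML.spawn : (unit -> unit) -> CML.thread_id *)
    fun makeWorker (i : int) : CML.thread_id =
        CML.spawn (fn () =>
            TextIO.print ("worker " ^ Int.toString i ^ " running\n"))

    (* the number of threads is decided at runtime *)
    val workers = List.tabulate (10, makeWorker)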

6 Interference & Synchronization
- Multiple threads normally share some information
- As soon as there is any sharing, the order of execution of the threads can alter the meaning of the program
- Synchronization is required to avoid interference, as in this example (| is the static parallel composition from slide 5; depending on the interleaving, it may print 1, 2, or 3):

    let val x = ref 0
    in
      (x := !x + 1) | (x := !x + 2);
      print (Int.toString (!x))
    end
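
The | notation is not executable SML; below is a rendering of the same race in CML, where write-once iVars from the SyncVar structure stand in for the implicit join, an assumption of this sketch:

    (* Two threads race on a shared ref cell; the iVars are used
       only to wait for both threads before printing. *)
    fun raceDemo () =
        let
          val x  = ref 0
          val d1 = SyncVar.iVar ()   (* write-once "done" flags *)
          val d2 = SyncVar.iVar ()
          fun add n =
              let val v = !x          (* read ... *)
              in  x := v + n          (* ... then write: not atomic *)
              end
        in
          CML.spawn (fn () => (add 1; SyncVar.iPut (d1, ())));
          CML.spawn (fn () => (add 2; SyncVar.iPut (d2, ())));
          SyncVar.iGet d1;
          SyncVar.iGet d2;
          TextIO.print (Int.toString (!x) ^ "\n")   (* 1, 2, or 3 *)
        end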

7 Interference & Synchronization
- To prove sequential programs correct we need to worry about:
  - whether they terminate
  - what output they produce
- To prove concurrent programs correct we need to worry about:
  - proving the sequential parts correct
  - whether the concurrent parts are properly synchronized so that they make progress
    - no deadlock, no livelock, no starvation

8 Interference & Synchronization
A program is
- deadlocked if every thread requires some resource (state) held by another thread, so no thread can make progress
  - threads are too greedy and hoard resources
- livelocked if every thread consistently gives up its resources, so none makes progress
  - "no, you first; no, you first; NO! You first; ..."
A thread is starved if it never acquires the resources it needs. (A deadlock sketch in CML follows below.)
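
A classic deadlock, sketched with CML channels: each thread waits to receive on one channel before sending on the other, so both block forever (the channel names are illustrative):

    fun deadlock () =
        let
          val a : int CML.chan = CML.channel ()
          val b : int CML.chan = CML.channel ()
        in
          CML.spawn (fn () =>
              let val v = CML.recv a   (* blocks: nobody sends on a *)
              in CML.send (b, v) end);
          CML.spawn (fn () =>
              let val v = CML.recv b   (* blocks: nobody sends on b *)
              in CML.send (a, v) end);
          ()
        end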

9 Interference & Synchronization
Characterizing program properties:
- Safety: the program never enters a bad state
  - type safety properties
  - absence of deadlock & mutual exclusion
- Liveness: eventually, the program enters a good state
  - termination properties
  - fairness properties (absence of starvation)
Proper synchronization is necessary for both safety and liveness.

10 Communication Mechanisms
Shared-memory languages: threads interact through shared state.

interface:

    type 'a buffer
    val buffer : unit -> 'a buffer
    val insert : ('a * 'a buffer) -> unit
    val remove : 'a buffer -> 'a

[diagram: producer -> buffer -> consumer]
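
One way to realize this interface in CML is on top of the library's Mailbox structure (an unbounded buffered channel); using a mailbox as the entire buffer is an assumption of this sketch:

    structure Buffer :> sig
      type 'a buffer
      val buffer : unit -> 'a buffer
      val insert : 'a * 'a buffer -> unit
      val remove : 'a buffer -> 'a
    end = struct
      type 'a buffer = 'a Mailbox.mbox
      fun buffer () = Mailbox.mailbox ()
      fun insert (x, b) = Mailbox.send (b, x)   (* never blocks *)
      fun remove b = Mailbox.recv b             (* blocks while empty *)
    end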

11 Communication Mechanisms
Synchronization and shared memory:
- Mutual-exclusion locks
  - before accessing shared data, the lock for that data must be acquired
  - when the access is complete, the lock is released
  - Modula-3, Java, ...
  - a semaphore is basically a fancy lock
- Monitors
  - a module that encapsulates shared state
  - only one thread can be active inside the monitor at a time
  - Pascal (?), Turing
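
CML itself has no lock primitive, but a mutual-exclusion lock can be sketched with an mVar (a take/put cell) from CML's SyncVar structure; the Mutex name and operations are ours, not part of the library:

    (* A full mvar means the lock is free; taking its value acquires it. *)
    structure Mutex = struct
      type mutex = unit SyncVar.mvar
      fun mutex () = SyncVar.mVarInit ()      (* starts free *)
      fun acquire m = SyncVar.mTake m         (* blocks if held *)
      fun release m = SyncVar.mPut (m, ())
      fun withLock (m, f) =
          (acquire m; f () before release m)  (* no exception handling: a sketch *)
    end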

12 Communication Mechanisms
Message-passing: threads interact by sending and receiving messages.
- Causality principle: a send occurs before the corresponding receive
  - result: synchronization occurs through message passing
[diagram: thread 1 performs a send; the message flows to thread 2, which performs a receive]
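
In CML this is direct; the channel name and integer payload below are illustrative:

    (* CML.send and CML.recv synchronize: the send happens before
       (and together with) the matching receive. *)
    val c : int CML.chan = CML.channel ()

    val sender   = CML.spawn (fn () => CML.send (c, 42))
    val receiver = CML.spawn (fn () =>
        TextIO.print ("got " ^ Int.toString (CML.recv c) ^ "\n"))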

13 Communication Mechanisms
Communication channels:
- The sender must know where to send the message
- The receiver must know where to listen for the message
- A channel encapsulates the source and destination of messages
  - one-to-one (unicast) channels
  - one-to-many (broadcast) channels
  - typed channels
  - send-only channels (for output devices), read-only channels (for input devices)
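
CML's channels are one-to-one and typed; a one-to-many broadcast can be sketched on top of them by sending on a list of unicast channels (the broadcast function is ours):

    (* Broadcast as a send on every subscriber's channel. *)
    fun broadcast (subscribers : 'a CML.chan list, msg : 'a) : unit =
        List.app (fn c => CML.send (c, msg)) subscribers

Because CML.send is synchronous, this delivers to subscribers one at a time; CML's library also provides a multicast abstraction (the Multicast structure) that decouples them.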

14 Communication Mechanisms
Synchronous (blocking) operations:
- the sending thread waits until the receiver receives the message
[diagram: thread 1 blocks at its send; once thread 2 receives the message, thread 1 resumes execution]

15 Communication Mechanisms
Synchronous (blocking) operations:
- the receiver may also block until it receives the message
[diagram: whichever thread reaches the rendezvous first blocks; after the message is transferred, both threads resume execution]
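
CML's send and recv behave exactly this way; in this sketch the receiver deliberately starts late (timeOutEvt is used here as a sleep) so the sender's blocking is visible:

    val c : string CML.chan = CML.channel ()

    val sender = CML.spawn (fn () =>
        (TextIO.print "sending...\n";
         CML.send (c, "hello");               (* blocks until the recv below *)
         TextIO.print "send completed\n"))

    val receiver = CML.spawn (fn () =>
        (CML.sync (CML.timeOutEvt (Time.fromSeconds 1));   (* sleep ~1s *)
         TextIO.print ("received " ^ CML.recv c ^ "\n")))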

16 Communication Mechanisms
Asynchronous (nonblocking) operations:
- asynchronous send: the sender continues execution immediately after sending the message
[diagram: thread 1 resumes execution immediately after its send; thread 2 receives the message later]

17 Communication Mechanisms
Asynchronous (nonblocking) operations:
- One disadvantage of asynchronous send is that we need a message buffer to hold outgoing messages that have not yet been received
  - when the buffer fills, the send becomes blocking
- Receive is (almost) never asynchronous
  - normally, you need the data you are receiving to proceed with the computation
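
CML offers asynchronous sends through its Mailbox structure; its buffer is unbounded, so in CML the send never reaches the blocking case described above:

    val mb : int Mailbox.mbox = Mailbox.mailbox ()

    val producer = CML.spawn (fn () =>
        (Mailbox.send (mb, 1);     (* returns immediately *)
         Mailbox.send (mb, 2);
         TextIO.print "producer done\n"))

    val consumer = CML.spawn (fn () =>
        (* the receive still blocks: we need the data to proceed *)
        TextIO.print (Int.toString (Mailbox.recv mb + Mailbox.recv mb) ^ "\n"))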

18 Communication Mechanisms
Remote Procedure Call
- client/server model: an RPC involves two message sends, one to request the service and one to return the result
[diagram: client invokes rpc(f,x,server); the server computes f(x) over its data and sends the result back]
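
A common CML rendering of RPC pairs a request channel with a reply channel; every name below is illustrative:

    (* RPC as two message sends: a request carrying the argument
       and a reply carrying the result. *)
    fun makeServer (f : int -> int) =
        let
          val reqCh : int CML.chan = CML.channel ()
          val repCh : int CML.chan = CML.channel ()
          fun loop () = (CML.send (repCh, f (CML.recv reqCh)); loop ())
        in
          CML.spawn loop;
          (* the client-side stub looks like an ordinary call *)
          fn x => (CML.send (reqCh, x); CML.recv repCh)
        end

    val double = makeServer (fn x => 2 * x)
    val y = double 21    (* 42, computed by the server thread *)

This sketch assumes one client at a time; a more robust version sends a fresh reply channel along with each request so that concurrent clients cannot steal each other's replies.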

19 Concurrent ML
- Extension of ML with concurrency primitives
- Thread creation
  - dynamic thread creation through spawn
- Synchronization mechanisms
  - a variety of different sorts of "events"
- Communication mechanisms
  - asynchronous and synchronous operations
  - shared mutable state
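
Tying the pieces together, a minimal complete CML program; recvEvt, choose, wrap, and sync are the first-class events the slide refers to, RunCML.doit starts the scheduler, and the two-channel setup is illustrative:

    fun main () =
        let
          val a : int CML.chan = CML.channel ()
          val b : int CML.chan = CML.channel ()
          (* an event that waits on whichever channel is ready first *)
          val either =
              CML.choose [CML.wrap (CML.recvEvt a, fn v => ("a", v)),
                          CML.wrap (CML.recvEvt b, fn v => ("b", v))]
        in
          CML.spawn (fn () => CML.send (a, 1));
          CML.spawn (fn () => CML.send (b, 2));
          let val (who, v) = CML.sync either
          in TextIO.print (who ^ " sent " ^ Int.toString v ^ "\n")
          end
        end

    (* start the CML scheduler and run main *)
    val _ = RunCML.doit (main, NONE)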

