Concurrency
CS 510: Programming Languages
David Walker

Concurrent PL
- Many modern applications are structured as a collection of concurrent, cooperating components.
- Different parts of a concurrent program may run in parallel, resulting in a performance improvement.
- Performance is only one reason for writing concurrent programs: concurrency is also a useful structuring device, since a thread encapsulates a unit of state and control.

Concurrent Applications
- Interactive systems: user interfaces, window managers, Microsoft PowerPoint, etc.
- Reactive systems: the program responds to its environment; e.g., signal processors, dataflow networks, node programs.
- Common characteristics: multiple input streams, low latency is crucial, multiple distinct tasks operate simultaneously.

Concurrent PL
Concurrent programming languages incorporate three kinds of mechanisms:
- A means to introduce new threads
  - a thread = a unit of state + control
  - either static or dynamic thread creation
- Synchronization primitives
  - coordinate independent threads
  - reduce nondeterminism in the order of computations
- A communication mechanism
  - shared memory or message passing

Threads
- A thread = state + control
  - control = an instruction stream (program counter)
  - state = local variables, stack, possibly a local heap
- Static thread creation: p1 | p2 | ... | pn creates n threads at runtime
- Dynamic thread creation: spawn/fork can create arbitrarily many threads (see the sketch below)
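A minimal sketch of dynamic thread creation in Concurrent ML (the CML library distributed with SML/NJ): CML.spawn takes a unit -> unit function and starts it as a new thread. The helper name startWorkers is made up for illustration, and the code assumes it runs under the CML scheduler (RunCML.doit).

  (* Sketch: dynamic thread creation with CML.spawn.  Each call starts a new
     thread and returns its thread id.  Assumes the CML library. *)
  fun startWorkers n =
    List.tabulate (n, fn i =>
      CML.spawn (fn () => print ("worker " ^ Int.toString i ^ "\n")))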

Interference & Synchronization
- Multiple threads normally share some information.
- As soon as there is any sharing, the order of execution of the threads can alter the meaning of a program.
- Synchronization is required to avoid interference:

  let val x = ref 0
  in
    (x := !x + 1) | (x := !x + 2);
    print (Int.toString (!x))
  end
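The | above is slide notation for parallel composition, not legal SML. A hedged, runnable approximation using CML's dynamic thread creation (CML.spawn, CML.joinEvt, CML.sync; raceDemo is a made-up name) shows the same interference:

  (* Sketch: the slide's parallel composition rendered with explicit CML
     threads.  The two updates to x are unsynchronized, so their reads and
     writes may interleave and the program may print 1, 2, or 3. *)
  fun raceDemo () =
    let
      val x  = ref 0
      val t1 = CML.spawn (fn () => x := !x + 1)
      val t2 = CML.spawn (fn () => x := !x + 2)
    in
      CML.sync (CML.joinEvt t1);   (* wait for both threads to terminate, *)
      CML.sync (CML.joinEvt t2);   (* but this does not make the updates atomic *)
      print (Int.toString (!x) ^ "\n")
    end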

Interference & Synchronization
- To prove sequential programs correct, we need to worry about:
  - whether they terminate
  - what output they produce
- To prove concurrent programs correct, we need to worry about:
  - proving the sequential parts correct
  - whether the concurrent parts are properly synchronized so that they make progress
    - no deadlock, no livelock, no starvation

Interference & Synchronization
A program is:
- deadlocked if every thread requires some resource (state) held by another thread, so no thread can make progress
  - threads are too greedy and hoard resources
- livelocked if every thread consistently gives up its resources, so none make progress
  - "no, you first; no, you first; NO! You first; ..."
A thread is starved if it never acquires the resources it needs.
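A minimal sketch of a deadlock over synchronous channels (assuming CML's channel, recv, and send): each thread insists on receiving before it sends, so both block forever. deadlockDemo is a made-up name.

  (* Sketch: a two-thread deadlock.  Each thread waits to receive on one
     channel before sending on the other, so neither can ever proceed.
     Assumes the CML library. *)
  fun deadlockDemo () =
    let
      val a = CML.channel () : int CML.chan
      val b = CML.channel () : int CML.chan
      val _ = CML.spawn (fn () => (ignore (CML.recv a); CML.send (b, 1)))
      val _ = CML.spawn (fn () => (ignore (CML.recv b); CML.send (a, 2)))
    in
      ()   (* both spawned threads are now permanently blocked *)
    end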

Interference & Synchronization
Characterizing program properties:
- Safety: a program never enters a bad state
  - type safety properties
  - absence of deadlock; mutual exclusion
- Liveness: eventually, a program enters a good state
  - termination properties
  - fairness properties (absence of starvation)
Proper synchronization is necessary for both safety and liveness.

Communication Mechanisms
Shared-memory languages: threads interact through shared state.
Interface for a buffer shared between a producer and a consumer:

  type 'a buffer
  val buffer : unit -> 'a buffer
  val insert : ('a * 'a buffer) -> unit
  val remove : 'a buffer -> 'a
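One way to realize this interface, sketched here with CML's Mailbox structure (an unbounded, buffered message queue) rather than directly with shared state and locks; the structure name Buffer mirrors the interface above.

  (* Sketch: implementing the producer/consumer buffer interface on top of
     CML's Mailbox structure.  Assumes the CML library. *)
  structure Buffer :
    sig
      type 'a buffer
      val buffer : unit -> 'a buffer
      val insert : ('a * 'a buffer) -> unit
      val remove : 'a buffer -> 'a
    end =
  struct
    type 'a buffer = 'a Mailbox.mbox
    fun buffer () = Mailbox.mailbox ()
    fun insert (x, b) = Mailbox.send (b, x)   (* never blocks: the queue is unbounded *)
    fun remove b = Mailbox.recv b             (* blocks until an item is available *)
  end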

Communication Mechanisms
Synchronization and shared memory:
- Mutual-exclusion locks (sketched below)
  - before accessing shared data, the lock for that data must be acquired
  - when access is complete, the lock is released
  - Modula-3, Java, ...
  - a semaphore is basically a fancy lock
- Monitors
  - a module that encapsulates shared state
  - only one thread can be active inside the monitor at a time
  - Pascal (?), Turing
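A mutual-exclusion lock can be sketched on top of CML's SyncVar.mvar (an "M-variable": a cell that is either full or empty, where taking from an empty cell blocks); full means unlocked, empty means held. The names newLock, acquire, release, and withLock are made up for illustration.

  (* Sketch: a lock built from an M-variable.  mTake empties the cell
     (blocking if it is already empty); mPut refills it.  Assumes the CML
     library's SyncVar structure. *)
  type lock = unit SyncVar.mvar

  fun newLock () : lock  = SyncVar.mVarInit ()
  fun acquire (l : lock) = SyncVar.mTake l     (* blocks while another thread holds the lock *)
  fun release (l : lock) = SyncVar.mPut (l, ())

  (* Typical use: bracket the critical section with acquire/release,
     releasing even if the body raises an exception. *)
  fun withLock l f =
    (acquire l;
     (f () handle e => (release l; raise e)) before release l)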

Communication Mechanisms
Message passing: threads interact by sending and receiving messages.
- Causality principle: a send occurs before the corresponding receive.
- Result: synchronization occurs through message passing itself.
(Diagram: thread 1 performs the send; thread 2 performs the matching receive.)

Communication Mechanisms
Communication channels:
- The sender must know where to send the message; the receiver must know where to listen for it.
- A channel encapsulates the source and destination of messages.
  - one-to-one (unicast) channels
  - one-to-many (broadcast) channels
  - typed channels
  - send-only channels (for output devices), read-only channels (for input devices)

Communication Mechanisms
Synchronous (blocking) operations:
- The sending thread waits until the receiver has received the message.
(Diagram: thread 1's send blocks until thread 2's receive; then thread 1 resumes execution.)
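In CML this is the default: CML.send and CML.recv are rendezvous operations, so the sender is blocked until a receiver takes the message. A small sketch (rendezvousDemo is a made-up name):

  (* Sketch: synchronous (rendezvous) communication.  CML.send blocks until
     a matching CML.recv is ready, so the sender resumes only after the
     receiver has the message.  Assumes the CML library. *)
  fun rendezvousDemo () =
    let
      val ch = CML.channel () : string CML.chan
      val _  = CML.spawn (fn () =>
                 (CML.send (ch, "hello");              (* blocks here ...        *)
                  print "sender: message was received\n"))
    in
      print ("receiver got: " ^ CML.recv ch ^ "\n")    (* ... until this recv runs *)
    end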

Communication Mechanisms
Synchronous (blocking) operations:
- The receiver may also block until it receives the message.
(Diagram: both threads block at the communication point, then both resume execution once the message is exchanged.)

Communication Mechanisms
Asynchronous (nonblocking) operations:
- Asynchronous send: the sender continues execution immediately after sending the message.
(Diagram: thread 1 resumes execution right after its send; thread 2 receives the message later.)
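CML's Mailbox structure provides this buffered, asynchronous flavor of send: Mailbox.send returns immediately, while Mailbox.recv still blocks. A small sketch (asyncDemo is a made-up name):

  (* Sketch: asynchronous send via CML's Mailbox (an unbounded buffered
     queue).  Sends never block; receives block until a message arrives.
     Assumes the CML library. *)
  fun asyncDemo () =
    let
      val mb = Mailbox.mailbox () : string Mailbox.mbox
    in
      Mailbox.send (mb, "first");    (* returns immediately; no receiver yet *)
      Mailbox.send (mb, "second");
      ignore (CML.spawn (fn () =>
        (print (Mailbox.recv mb ^ "\n");
         print (Mailbox.recv mb ^ "\n"))))
    end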

Communication Mechanisms
Asynchronous (nonblocking) operations:
- One disadvantage of asynchronous send is that we need a message buffer to hold outgoing messages that have not yet been received.
  - When the buffer fills, the send becomes blocking.
- Receive is (almost) never asynchronous: normally, you need the data you are receiving in order to proceed with the computation.

Communication Mechanisms
Remote procedure call (RPC), client/server model:
- RPC involves two message sends: one to request the service, one to return the result.
(Diagram: the client issues rpc(f, x, server); the server computes f(x) over its data and returns the result.)
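In a message-passing setting the RPC pattern is: the client sends its argument together with a fresh, private reply channel, then blocks receiving on that channel; the server computes and replies. A sketch with core CML operations (startServer and rpc are made-up names):

  (* Sketch: RPC as two message sends.  The request carries the argument and
     a reply channel; the server sends f(x) back on that channel.  Assumes
     the CML library. *)
  fun startServer (f : int -> int) =
    let
      val reqCh = CML.channel () : (int * int CML.chan) CML.chan
      fun loop () =
        let val (x, replyCh) = CML.recv reqCh
        in CML.send (replyCh, f x); loop () end
      val _ = CML.spawn loop
    in
      reqCh
    end

  (* rpc (server, x): the first send requests the service; the second send
     (performed by the server) returns the result. *)
  fun rpc (server, x) =
    let val replyCh = CML.channel ()
    in CML.send (server, (x, replyCh)); CML.recv replyCh end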

Concurrent ML
An extension of ML with concurrency primitives:
- Thread creation
  - dynamic thread creation through spawn
- Synchronization mechanisms
  - a variety of different sorts of "events" (sketched below)
- Communication mechanisms
  - asynchronous and synchronous operations
  - shared mutable state
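The "events" referred to here are first-class synchronization values: CML.recvEvt and CML.sendEvt describe a communication without performing it, CML.wrap attaches a post-processing function, CML.choose combines alternatives, and CML.sync commits to one of them. A sketch, assuming the CML library distributed with SML/NJ (serveEither is a made-up name):

  (* Sketch: first-class events.  serveEither waits on two channels at once
     and takes whichever communication becomes ready first. *)
  fun serveEither (intCh : int CML.chan, strCh : string CML.chan) =
    CML.sync (CML.choose [
      CML.wrap (CML.recvEvt intCh, fn n => "got int "    ^ Int.toString n),
      CML.wrap (CML.recvEvt strCh, fn s => "got string " ^ s)
    ])

  (* A CML program is started with RunCML.doit, which runs the given function
     as the initial thread under the CML scheduler. *)
  val _ = RunCML.doit (fn () => print "CML is running\n", NONE)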