
Lecture 22: Abstractions for Concurrency
David Evans, CS655: Programming Languages, University of Virginia Computer Science, 17 April 2001

"When you have a world-wide tuple space, you'll be able to tune it in from any computer anywhere – or from any quasi-computer: any cell phone, any TV, any toaster."
– David Gelernter's introduction to JavaSpaces Principles, Patterns, and Practice

Menu
Form going around:
– Sign up for project presentations
– Vote for next lecture (no cheating!)
Abstractions for concurrency:
– Algol 68
– Monitors
– Linda and JavaSpaces

Last Time
– Concurrent programming is programming with a partial ordering on time.
– A concurrent programming language gives programmers mechanisms for expressing that partial order.
– We can express many partial orders using the thread control primitives fork and join and the locking primitives protect, acquire, and release.
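As a refresher, here is a rough Java analogue of those primitives (a sketch only; fork, join, protect, acquire, and release are the lecture's abstract vocabulary, not Java's). fork and join correspond to Thread.start and Thread.join, while protect/acquire/release correspond to creating and using a ReentrantLock.

import java.util.concurrent.locks.ReentrantLock;

public class ForkJoinLockSketch {
    private static final ReentrantLock lock = new ReentrantLock(); // "protect"
    private static int shared = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {          // "fork"
            lock.lock();                       // "acquire"
            try { shared += 1; }
            finally { lock.unlock(); }         // "release"
        });
        t.start();
        lock.lock();                           // "acquire" in the original thread
        try { shared += 1; }
        finally { lock.unlock(); }             // "release"
        t.join();                              // "join"
        System.out.println(shared);            // always prints 2
    }
}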

Abstractions
Programming at that low level would be a pain – are there better abstractions? Hundreds of attempts... we'll see a few today.
Issues (and the low-level primitives that address them):
– Thread creation (fork)
– Thread synchronization (join)
– Resource contention (protect, acquire, release)

Algol 68: Collateral Clauses
stmt0; (stmt1, stmt2); stmt3;
This defines a partial order: stmt0 runs first; stmt1 and stmt2 are unordered with respect to each other (and may run concurrently); stmt3 runs only after both have completed.
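The same partial order can be expressed with low-level thread primitives; a minimal Java sketch, with stmt0 through stmt3 standing in for arbitrary statements:

public class CollateralSketch {
    public static void main(String[] args) throws InterruptedException {
        System.out.println("stmt0");                          // stmt0 runs first
        Thread a = new Thread(() -> System.out.println("stmt1"));
        Thread b = new Thread(() -> System.out.println("stmt2"));
        a.start(); b.start();                                 // (stmt1, stmt2): either order, possibly parallel
        a.join(); b.join();                                   // both must finish
        System.out.println("stmt3");                          // stmt3 runs last
    }
}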

Algol 68: Semaphores
From Dijkstra, "Cooperating Sequential Processes". Algol 68 provides the type sema with two operations:
– up – increments the semaphore
– down – decrements the semaphore (its value must be > 0 first; otherwise the process waits)

Semaphore Example
begin
  sema mutex := level 1;
  proc producer
    while not finished do
      down mutex
      ... insert item
      up mutex
    od;

Semaphore Example, cont.
  proc consumer
    while not finished do
      down mutex
      ... remove item
      up mutex
    od;
  par (producer, consumer)  // start them in parallel
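For comparison, the same mutual-exclusion pattern written with Java's library semaphore (java.util.concurrent.Semaphore); this is a sketch, with the buffer, the item count, and the stopping condition invented for illustration. acquire plays the role of down and release plays the role of up.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

public class SemaphoreSketch {
    private static final Semaphore mutex = new Semaphore(1);       // binary semaphore, like "level 1"
    private static final Deque<Integer> buffer = new ArrayDeque<>();

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 100; i++) {
                mutex.acquireUninterruptibly();    // down mutex
                buffer.addLast(i);                 // insert item
                mutex.release();                   // up mutex
            }
        });
        Thread consumer = new Thread(() -> {
            int taken = 0;
            while (taken < 100) {                  // busy-waits when empty, mirroring the semaphore-only style
                mutex.acquireUninterruptibly();    // down mutex
                if (!buffer.isEmpty()) { buffer.removeFirst(); taken++; }  // remove item
                mutex.release();                   // up mutex
            }
        });
        producer.start(); consumer.start();        // par (producer, consumer)
        producer.join(); consumer.join();
    }
}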

What can go wrong?
– The programmer neglects to up the semaphore.
– The programmer neglects to down the semaphore before accessing the shared resource.
– The programmer spends all her time worrying about up and down instead of the algorithm.

Monitors
Concurrent Pascal [Hansen 74], Modula [Wirth 77], Mesa [Lampson 80]
– Integrated data abstraction and resource synchronization.
– Routines that use a shared resource are grouped in a monitor; access is allowed only through exported procedures.
– Conditions control when threads may execute exported procedures.

Monitor Example
monitor boundedbuffer
  buffer: array 0..N-1 of int;
  count: 0..N;
  nonempty, nonfull: condition;

  procedure append (x: int)
    if count = N then nonfull.wait;
    buffer[count] := x;
    count := count + 1;
    nonempty.signal
  end append
(Example adapted from [Hansen93])

Monitor Example, cont.
  procedure remove () returns portion
    if count = 0 then nonempty.wait;
    x := ...
    nonfull.signal
  end remove;
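A rough Java rendering of this monitor (a sketch, not Hansen's code): synchronized methods give the monitor's mutual exclusion, and wait/notifyAll stand in for the nonempty and nonfull conditions. Java objects have only one implicit condition queue, so both conditions share it and each method re-checks its guard in a loop.

public class BoundedBuffer {
    private final int[] buffer;
    private int count = 0;

    public BoundedBuffer(int n) { buffer = new int[n]; }

    public synchronized void append(int x) throws InterruptedException {
        while (count == buffer.length) wait();   // "if count = N then nonfull.wait"
        buffer[count] = x;
        count = count + 1;
        notifyAll();                             // "nonempty.signal"
    }

    public synchronized int remove() throws InterruptedException {
        while (count == 0) wait();               // "if count = 0 then nonempty.wait"
        count = count - 1;
        int x = buffer[count];
        notifyAll();                             // "nonfull.signal"
        return x;
    }
}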

Java Synchronization
The synchronized method qualifier:
– Once a synchronized method begins execution, it will complete before any other thread enters a synchronized method of the same object.
– The run-time must associate a lock with every object.
Is this enough to implement a semaphore? Is this better or worse than monitors?

Synchronized Example
class ProducerConsumer {
  private int x = 1;
  synchronized void produce () { x = x + 1; }
  synchronized void consume () { x = x - 1; }
}
How could we require that x stay positive?
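One possible answer, as a sketch: guard consume with wait and have produce signal, so x never drops below 1. The loop around wait protects against spurious wakeups and competing consumers; consume must now declare InterruptedException.

class ProducerConsumer {
    private int x = 1;

    synchronized void produce() {
        x = x + 1;
        notifyAll();                  // wake consumers waiting for x to grow
    }

    synchronized void consume() throws InterruptedException {
        while (x <= 1) wait();        // block until decrementing keeps x positive
        x = x - 1;
    }
}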

Linda
Program concurrency by using uncoupled processes with a shared data space.
Add concurrency to a sequential language by adding:
– Simple operators
– A runtime kernel (language-independent)
– A preprocessor (or compiler)

Design by Taking Away
Backus: the von Neumann bottleneck results from having a store.
– Remove the store → functional languages.
Gelernter: distributed programming is hard because of inter-process scheduling and communication, which depend on the order of mutation.
– We don't have to remove the store, just mutation.
– Remove mutation → a read-and-remove-only store → tuple spaces.

Basic Idea
– Have a shared space (a "tuple space"): processes can add, read, and take away values from this space.
– A bag of processes, each of which looks for work it can do by matching values in the tuple space.
– Get load balancing, synchronization, messaging, etc. for free!

Tuples
                Conventional memory       Linda/JavaSpaces
Unit            Bit                       Logical tuple, e.g. (23, "test", false)
Access using    Address (variable)        Selection of values
Operations      read, write               read, add, remove (JavaSpaces: read, write, take)
Tuples are immutable.

Tuple Space Operations
– out (t) – add tuple t to the tuple space.
– take (s) → t – returns and removes a tuple t matching template s.
– read (s) → t – same as take, except it doesn't remove t.
Operations are atomic (even if the space is distributed).
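To make these semantics concrete, here is a minimal in-memory tuple space in Java. This is only a single-machine sketch, not the real JavaSpaces API (which also handles distribution and leases). A template field is either an actual value (must be equal), a Class standing in for a typed formal, or null as a wildcard; take and read block until a matching tuple appears.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TupleSpace {
    private final List<Object[]> tuples = new ArrayList<>();

    // out (t): add tuple t to the space and wake any blocked take/read.
    public synchronized void out(Object... tuple) {
        tuples.add(tuple);
        notifyAll();
    }

    // take (s) -> t: block until some tuple matches the template, then remove and return it.
    public synchronized Object[] take(Object... template) throws InterruptedException {
        while (true) {
            Object[] t = findMatch(template);
            if (t != null) { tuples.remove(t); return t; }
            wait();
        }
    }

    // read (s) -> t: like take, but leaves the matching tuple in the space.
    public synchronized Object[] read(Object... template) throws InterruptedException {
        while (true) {
            Object[] t = findMatch(template);
            if (t != null) return t;
            wait();
        }
    }

    // A template matches a tuple of the same length; each template field is either
    // an actual value (equality), a Class (typed formal: any value of that type), or
    // null (matches anything).
    private Object[] findMatch(Object[] template) {
        for (Object[] t : tuples) {
            if (t.length != template.length) continue;
            boolean ok = true;
            for (int i = 0; i < t.length && ok; i++) {
                Object f = template[i];
                if (f == null) continue;
                if (f instanceof Class<?>) ok = ((Class<?>) f).isInstance(t[i]);
                else ok = f.equals(t[i]);
            }
            if (ok) return t;
        }
        return null;
    }

    public static void main(String[] args) throws InterruptedException {
        TupleSpace space = new TupleSpace();
        space.out("f", 23);
        space.out("f", 17);
        // take ("f", formal int n): removes the first matching tuple and returns it.
        System.out.println(Arrays.toString(space.take("f", Integer.class)));
    }
}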

Meaning of take
Suppose the tuple space contains: ("f", 23), ("f", 17), ("t", 25), ("t", true), ("t", false).
– take ("f", int n) – matches ("f", 23) or ("f", 17); one of them is removed and n is bound to its value.
– take ("f", 23) – matches only ("f", 23).
– take ("t", bool b, int n) – matches nothing (there is no three-field tuple), so the caller blocks.
– take (string s, int n) – matches ("f", 23), ("f", 17), or ("t", 25).
– take ("cookie") – matches nothing, so the caller blocks.

Operational Semantics
– Extend configurations with a tuple space (just a bag of tuples).
– Transition rule for out: just add an entry to the tuple space.
– Transition rule for take: if there is a match (ignoring binding), remove it from the tuple space and advance the thread. Similar to join last time – the thread just waits if there is no match.

Shared Assignment
Loc := Expression becomes:
  take ("Loc", formal loc_value);
  out ("Loc", Expression);
e.g., x := x + 1 becomes:
  take ("x", formal x_value);
  out ("x", x_value + 1);
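Written against the hypothetical TupleSpace sketch above (the helper name increment is made up for illustration), the x := x + 1 translation looks like this:

// Assumes the TupleSpace class sketched earlier; not part of any real Linda or JavaSpaces API.
static void increment(TupleSpace space) throws InterruptedException {
    Object[] t = space.take("x", Integer.class);   // take ("x", formal x_value)
    int xValue = (Integer) t[1];
    space.out("x", xValue + 1);                    // out ("x", x_value + 1)
}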

Semaphore
Create (int n, String resource):
  for (i = 0; i < n; i++)
    out (resource);
Down (String resource):
  take (resource);
Up (String resource):
  out (resource);
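Against the same hypothetical TupleSpace sketch, the tuple-space semaphore is just a matter of tokens in the space (helper names invented for illustration):

// Assumes the TupleSpace class sketched earlier.
static void createSemaphore(TupleSpace space, String resource, int n) {
    for (int i = 0; i < n; i++) space.out(resource);    // a count of n = n tokens in the space
}

static void down(TupleSpace space, String resource) throws InterruptedException {
    space.take(resource);                               // blocks when no token remains
}

static void up(TupleSpace space, String resource) {
    space.out(resource);                                // return a token
}

Down blocks whenever all n tokens have been taken, so the counting-semaphore behaviour falls out of take's blocking semantics for free.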

Distributed Ebay
Offer Item (String item, int minbid, int time):
  out (item, minbid, "owner");
  sleep (time);
  take (item, formal bid, formal bidder);
  if (bidder ≠ "owner") SOLD!
Bid (String bidder, String item, int bid):
  take (item, formal highbid, formal highbidder);
  if (bid > highbid) out (item, bid, bidder)
  else out (item, highbid, highbidder)
How could a bidder cheat?

Factorial
Setup:
  for (int i = 1; i <= n; i++) out (i);
  start FactTask (replicated n-1 times)
FactTask:
  take (int i);
  take (int j);
  out (i * j);
Eventually, the tuple space contains one entry, which is the answer.
Is there a better way to order Setup? What if the last two elements are taken concurrently?

Finishing Factorial
Setup:
  for (int i = 1; i <= n; i++) out (i);
  out ("workleft", n - 1);
  take ("workleft", 0);
  take (result);
FactTask:
  take ("workleft", formal w);
  if (w > 0)
    take (int i);
    take (int j);
    out (i * j);
    out ("workleft", w - 1);
  endif;
Oops – we've sequentialized it!

Concurrent Finishing Factorial
Setup:
  start FactWorker (replicated n-1 times)
  out ("done", 0);
  for (int i = 1; i <= n; i++) {
    out (i);
    if (i > 1) out ("work");
  }
  take ("done", n-1);
  take (result);
FactWorker:
  take ("work");
  take (formal int i);
  take (formal int j);
  out (i * j);
  take ("done", formal int n);
  out ("done", n + 1);
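One worker from this version, written against the hypothetical TupleSpace sketch (typed formals become Class templates; Setup is assumed to have already put the numbers 1..n, the n-1 "work" tokens, and ("done", 0) into the space):

// Assumes the TupleSpace class sketched earlier. Each worker loops, claiming one
// multiplication at a time; workers block forever once the work runs out, so in a
// real program they would be daemon threads or get an explicit shutdown signal.
static void factWorker(TupleSpace space) throws InterruptedException {
    while (true) {
        space.take("work");                                   // claim one multiplication
        int i = (Integer) space.take(Integer.class)[0];       // take (formal int i)
        int j = (Integer) space.take(Integer.class)[0];       // take (formal int j)
        space.out(i * j);                                     // out (i * j)
        int done = (Integer) space.take("done", Integer.class)[1];
        space.out("done", done + 1);                          // count completed multiplications
    }
}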

Sorting in Linda
Problem: sort an array of n integers.
Initial tuple state: ("A", [A[0], ..., A[n-1]])
Final tuple state: ("A", [A'[0], ..., A'[n-1]]) such that A' has a corresponding element for every element in A, and for all 0 <= j < k <= n-1, A'[j] <= A'[k].
Task: devise a Linda sorting program and analyze its performance (can you match MergeSort?).

Summary
– Linda/JavaSpaces provides a simple but powerful model for distributed computing.
– JavaSpaces extends Linda with leases (tuples that expire after a time limit).
– Implementing an efficient, scalable tuple space (one that provides the correct global semantics) is hard; people have designed custom hardware to do this.

Charge
– You can download a JavaSpaces implementation from:
– Project presentations are next week – advice for them Thursday.
– Projects are due two weeks from today.