Slide 1: Concurrency — Organization of Programming Languages, Cheng (Fall 2004)

- A PROCESS or THREAD is a potentially active execution context. The classic von Neumann (stored-program) model of computing has a single thread of control; parallel programs have more than one.
- A process can be thought of as an abstraction of a physical PROCESSOR.
- Processes/threads can come from:
  - multiple CPUs
  - kernel-level multiplexing of a single physical machine
  - language- or library-level multiplexing of the kernel-level abstraction

Slide 2: Concurrent Processes

- Concurrent processes can run:
  - in true parallel
  - unpredictably interleaved
  - run-until-block
- Most work focuses on the first two cases, which are equally difficult to deal with.
- Common scenario: the operating system multiplexes one or more processes on top of one or more physical processors, and a library package or language run-time system multiplexes one or more threads on top of one or more OS processes.

Slide 3: Classes of Programming Notation

- Two main classes of programming notation:
  1. synchronized access to shared memory
  2. message passing between processes that don't share memory
- Each approach can be implemented on hardware designed for the other, although shared memory emulated on message-passing hardware tends to be slow.
- We'll focus here on shared memory; the book covers both.

Slide 4: Process Creation Syntax

- static set
- co-begin: Algol 68, Occam, SR
- parallel loops:
  - iterations are independent: SR, Occam, others
  - iterations run (as if) in lock step: F95
- launch-on-elaboration: Ada, SR
- fork (join?): Ada, Modula-3, Java (cf. the separated new() and start() calls in Java)
- implicit receipt: DP, Lynx, RPC systems
- early reply: SR, Lynx
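The fork/join style is the one most mainstream thread libraries expose. As a sketch (mine, not from the slides), here is the pattern in Python's threading module, where constructing a Thread and calling start() correspond to the separated new()/start() steps mentioned above, and join() waits for a forked thread to finish:

```python
import threading

results = []

def worker(n):
    # Each thread computes its own value; list.append is a single
    # bytecode operation in CPython, so no extra lock is needed here.
    results.append(n * n)

# "fork": create and start one thread per task
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()

# "join": wait for all forked threads to complete
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 4, 9]
```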

Slide 5: Race Conditions

- A race condition occurs when:
  1. actions in two processes are not synchronized, and
  2. program behavior depends on the order in which the actions happen.
- Races lead to unexpected/unwanted behavior.
- Example: two processes can access a shared variable in electronically controlled steering (e.g., torque sensor readings), and the amount of assisted steering depends on the value of that variable. What might happen?
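The classic symptom of a race is a lost update. The sketch below (mine, not from the slides) forces the unlucky interleaving by hand, performing the read-modify-write steps of two "threads" explicitly so the race is visible on every run:

```python
# Simulate two threads each executing "counter = counter + 1",
# which is really three steps: read, add, write.
counter = 0

a_read = counter          # thread A reads 0
b_read = counter          # thread B reads 0 (interleaved before A writes)
counter = a_read + 1      # A writes 1
counter = b_read + 1      # B writes 1, clobbering A's update

print(counter)  # 1, not the expected 2: one increment was lost
```

With real threads the same three-step sequence can interleave this way nondeterministically, which is exactly why the outcome depends on scheduling order.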

Slide 6: Synchronization

- SYNCHRONIZATION: the act of ensuring that events in different processes happen in a desired order.
- Synchronization can be used to eliminate race conditions.
- Most synchronization can be regarded as either:
  - MUTUAL EXCLUSION: ensuring that only one process is executing a CRITICAL SECTION (e.g., touching a shared variable) at a time.
  - CONDITION SYNCHRONIZATION: ensuring that a given process does not proceed until some condition holds (e.g., that a variable contains a given value).
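Both kinds of synchronization map directly onto primitives in most thread libraries. As a sketch in Python (the variable names are mine): a Lock provides mutual exclusion around the critical section, and a Condition provides condition synchronization via wait_for/notify:

```python
import threading

lock = threading.Lock()
ready = threading.Condition()
shared = {"value": 0, "done": False}

def writer():
    # Mutual exclusion: only one thread touches shared["value"] at a time.
    with lock:
        shared["value"] += 42
    # Condition synchronization: announce that the value is ready.
    with ready:
        shared["done"] = True
        ready.notify()

def reader(out):
    # Do not proceed until the condition "done" holds.
    with ready:
        ready.wait_for(lambda: shared["done"])
    with lock:
        out.append(shared["value"])

result = []
t1 = threading.Thread(target=reader, args=(result,))
t2 = threading.Thread(target=writer)
t1.start(); t2.start()
t1.join(); t2.join()
print(result)  # [42]
```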

Slide 7: Example — Bounded Buffer

    index: 1..SIZE
    buf: array [index] of data
    nextfree, nextfull : index
    procedure insert(d : data)
        % put something into the buffer; wait if it's full
    procedure remove : data
        % take something out of the buffer; wait if it's empty

A solution requires:
1. Only one process manipulates the buffer (or at least any given slot of the buffer) at a time — a mutual exclusion requirement.
2. Processes wait for non-full or non-empty conditions as appropriate — a condition synchronization requirement.
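The pseudocode above can be fleshed out with one lock and two condition queues, monitor-style. A minimal Python sketch (my implementation, using a fixed-capacity list in place of the circular array):

```python
import threading

class BoundedBuffer:
    def __init__(self, size):
        self.buf = []
        self.size = size
        self.lock = threading.Lock()
        # Two condition queues sharing one lock, as in a monitor.
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def insert(self, d):
        with self.lock:                      # mutual exclusion
            while len(self.buf) == self.size:
                self.not_full.wait()         # wait if it's full
            self.buf.append(d)
            self.not_empty.notify()

    def remove(self):
        with self.lock:
            while not self.buf:
                self.not_empty.wait()        # wait if it's empty
            d = self.buf.pop(0)
            self.not_full.notify()
            return d

# Demo: capacity 2 forces the producer to block until slots free up.
b = BoundedBuffer(2)
out = []
consumer = threading.Thread(target=lambda: out.extend(b.remove() for _ in range(5)))
consumer.start()
for i in range(5):
    b.insert(i)
consumer.join()
print(out)  # [0, 1, 2, 3, 4]
```

The `while` loops (rather than `if`) around each wait are deliberate: a woken thread must re-check its condition before proceeding.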

Slide 8: Common Concurrency Concepts

- Atomic action: an action that happens all at once.
  - Needed to implement synchronization.
  - Examples: reads and writes of individual memory locations.
- Spinning (busy-waiting): repeatedly reading a shared location until it reaches a certain value.
- Spin lock: a busy-wait mutual exclusion mechanism.
  - A spin lock is better than putting the processor to sleep if the expected spin time is less than the rescheduling overhead.
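A spin lock is usually built on an atomic test-and-set instruction. Python has no such instruction, so the sketch below (mine) simulates the atomic step with a tiny internal lock; on real hardware that step would be a single instruction such as x86 `xchg`:

```python
import threading

class SpinLock:
    def __init__(self):
        self._flag = False
        # Stand-in for hardware atomicity: a real spin lock uses an
        # atomic test-and-set/exchange instruction, not another lock.
        self._atomic = threading.Lock()

    def _test_and_set(self):
        # Atomically: old = flag; flag = True; return old
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        # Spin (busy-wait) until the old value was False,
        # i.e., until we were the one to flip the flag.
        while self._test_and_set():
            pass

    def release(self):
        self._flag = False

counter = 0
lock = SpinLock()

def add():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1              # critical section
        lock.release()

threads = [threading.Thread(target=add) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000
```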

Slide 9: Schedulers

- Schedulers give us the ability to "put a thread/process to sleep" and run something else on its process/processor.
- Typical construction:
  - start with coroutines
  - make a uniprocessor run-until-block threads package
  - add preemption
  - add multiple processors

Slide 10: Scheduler

- Coroutine: multiple execution contexts, only one of which is active at a time.
- Preemption: use timer interrupts (in an OS) or signals (in a library package) to trigger involuntary yields.
- See the book for examples of how to implement a scheduler.
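Coroutines transfer control explicitly, which makes them a natural base for a toy scheduler. Python generators give the same "multiple contexts, one active" shape; the round-robin loop below (my sketch, not the book's implementation) plays the role of a run-until-block scheduler, where each yield is a voluntary yield point:

```python
from collections import deque

def task(name, steps, log):
    # A coroutine: each yield voluntarily gives up the processor.
    for i in range(steps):
        log.append(f"{name}{i}")
        yield

def run(tasks):
    # Round-robin scheduler: resume each coroutine until its next yield.
    ready = deque(tasks)
    while ready:
        t = ready.popleft()
        try:
            next(t)              # run until the coroutine yields
            ready.append(t)      # still runnable: back of the ready queue
        except StopIteration:
            pass                 # coroutine finished; drop it

log = []
run([task("A", 2, log), task("B", 2, log)])
print(log)  # ['A0', 'B0', 'A1', 'B1']
```

Adding preemption would mean interrupting a task between yields, which is exactly what timer interrupts or signals provide.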

Slide 11: Alternative to Shared Memory

- Distributed memory.
- Use message passing to communicate changes to memory.
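Between threads, the message-passing style can be sketched with queues as channels: the processes share nothing except the channels themselves. The request/reply sketch below is mine and illustrates the idea only; it is not a distributed implementation:

```python
import threading
import queue

channel = queue.Queue()        # the only shared object: a message channel
replies = []

def server():
    # Receive requests and send back results; no shared variables touched.
    while True:
        msg = channel.get()
        if msg is None:              # sentinel message: shut down
            break
        reply_to, payload = msg
        reply_to.put(payload * 2)    # reply on the sender's own channel

reply_channel = queue.Queue()
t = threading.Thread(target=server)
t.start()
for n in (1, 2, 3):
    channel.put((reply_channel, n))      # send a request message
    replies.append(reply_channel.get())  # block until the reply arrives
channel.put(None)
t.join()
print(replies)  # [2, 4, 6]
```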

Slide 12: Architectures for Concurrency

- Multiprocessors (HPC):
  - shared memory model
  - efficient operations
  - less flexible
- Clusters of workstations (NOWs):
  - distributed memory
  - overhead for communication
  - flexibility in configuration