Concurrency
CS-502 Operating Systems, Fall 2007
(Slides include materials from Operating System Concepts, 7th ed., by Silberschatz, Galvin, & Gagne and from Modern Operating Systems, 2nd ed., by Tanenbaum)

Concurrency
Since the beginning of computing, management of concurrent activity has been the central issue:
Concurrency between computation and input or output
Concurrency between computation and user
Concurrency between essentially independent computations that take place at the same time
Concurrency between parts of large computations that are divided up to improve performance
…

Example – 1960s
Programmers tried to write programs that would read from input devices and write to output devices in parallel with computing
Card readers, paper tape, line printers, etc.
Challenges:
Keeping the buffers straight
Synchronizing between read activity and computing
Computing getting ahead of I/O activity
I/O activity getting ahead of computing

Shared Computing Services
Multiple simultaneous, independent users of large computing facilities
E.g., time-sharing systems of university computing centers
Data centers of large enterprises
Multiple accounting, administrative, and data processing activities over common databases
…

Modern PCs and Workstations
Multiple windows on a personal computer doing completely independent things
Word, Excel, Photoshop, etc.
Multiple activities within one application – e.g., in Microsoft Word:
Reading and interpreting keystrokes
Formatting line and page breaks
Displaying what you typed
Spell checking
Hyphenation
…

Modern Game Implementations
Multiple characters in the game
Concurrently & independently active
Multiple constraints, challenges, and interactions among characters
Multiple players

Technological Pressure
From the early 1950s to the early 2000s, single-processor computers increased in speed by 2× every 18 months or so
Moore’s Law
Multiprocessing was somewhat of a niche problem
Specialized computing centers, techniques

Technological Pressure (continued)
No longer! Modern microprocessor clock speeds are no longer increasing with Moore’s Law
Microprocessor density on chips still is!
⇒ multi-core processors are now the de facto standard
Even on low-end PCs!

Traditional Challenge for OS
A useful set of abstractions that help to
–Manage concurrency
–Manage synchronization among concurrent activities
–Communicate information in a useful way among concurrent activities
–Do it all efficiently

Modern Challenge
Methods and abstractions to help software engineers and application designers
–Take advantage of inherent concurrency in modern systems
–Do so with relative ease

Fundamental Abstraction
Process
…aka Task
…aka Thread
…aka [other terms]

Process (a generic term)
Definition: a particular execution of a program
Requires time, space, and (perhaps) other resources
Separate from all other executions of the same program
Even those at the same time!
Separate from executions of other programs

Process (a generic term – continued)
… Can be
Interrupted
Suspended
Blocked
Unblocked
Started or continued
The fundamental abstraction of all modern operating systems
Also known as a thread (of control), task, job, etc.
Note: a Unix/Windows “Process” is a heavyweight concept with more implications than this simple definition

Process (a generic term – continued)
The concept emerged in the 1960s
Intended to make sense out of the mish-mash of difficult programming techniques that bedeviled software engineers
Analogous to a police or taxi dispatcher!

Background – Interrupts
A mechanism in (nearly) all computers by which a running program can be suspended in order to cause the processor to do something else
Two kinds:
–Traps – synchronous, caused by the running program
  Deliberate: e.g., a system call
  Error: divide by zero
–Interrupts – asynchronous, spawned by some other concurrent activity or device
Essential to the usefulness of computing systems
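
For illustration, a minimal sketch of the "deliberate trap" case, assuming a POSIX system: a system call such as write() enters the operating system through a synchronous trap.

    /* Illustration (not from the slides): write() is a deliberate trap into the OS. */
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hello from a system call\n";
        write(1, msg, sizeof msg - 1);   /* control transfers synchronously to the kernel */
        return 0;
    }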

Hardware Interrupt Mechanism
Upon receipt of an electronic signal, the processor
–Saves the current PSW to a fixed location
–Loads a new PSW from another fixed location
PSW – Program Status Word
–Program counter
–Condition code bits (comparison results)
–Interrupt enable/disable bits
–Other control and mode information – e.g., privilege level, access to special instructions, etc.
Occurs between machine instructions
An abstraction in modern processors (see Silberschatz, §1.2.1 and §13.2.2)

Interrupt Handler
/* Enter with interrupts disabled */
Save the registers of the interrupted computation
Load the registers needed by the handler
Examine the cause of the interrupt
Take appropriate action (brief)
Reload the registers of the interrupted computation
Reload the interrupted PSW and re-enable interrupts
or
Load the registers of another computation
Load its PSW and re-enable interrupts
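
A user-level analogue of this handler discipline, sketched with a POSIX signal (an assumption for illustration, not the kernel-mode mechanism on the slide): the handler does only brief work, and the interrupted computation takes the real action afterward.

    /* Sketch (not from the slides): a signal handler as a user-level analogue
     * of an interrupt handler -- record the event briefly, resume later. */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_event = 0;

    static void on_signal(int signo)     /* plays the role of the handler above */
    {
        (void)signo;
        got_event = 1;                   /* brief action only: record the event */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_signal;
        sigaction(SIGINT, &sa, NULL);    /* install the handler for the asynchronous event */

        while (!got_event)               /* the "interrupted computation" */
            pause();                     /* suspend until a signal arrives */

        printf("asynchronous event handled\n");
        return 0;
    }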

Requirements of interrupt handlers
Fast
Avoid possibilities of interminable waits
Must not count on the correctness of the interrupted computation
Must not get confused by multiple interrupts in close succession
…
More challenging on multiprocessor systems

Result
Interrupts make it possible to have concurrent activities at the same time
They don’t help in establishing some kind of orderly way of thinking
Need something more

Hence, the emergence of the generic concept of process
–(or whatever it is called in a particular operating system and environment)

Information the system needs to maintain about a typical process
PSW (program status word)
–Program counter
–Condition codes
–Control information – e.g., privilege level, priority, etc.
Registers, stack pointer, etc.
Whatever hardware resources are needed to compute
Administrative information for the OS
–Owner, restrictions, resources, etc.
Other stuff …

Process Control Block (PCB)
(example data structure in an OS)
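
A minimal PCB sketch in C, assuming only the fields listed on the previous slide; real kernels (e.g., Linux’s task_struct) carry far more state.

    /* Sketch (not from the slides): one plausible PCB layout. */
    struct pcb {
        /* hardware state needed to resume the process */
        unsigned long psw;          /* program counter, condition codes, mode bits */
        unsigned long regs[16];     /* general-purpose registers */
        unsigned long stack_ptr;

        /* administrative information for the OS */
        int pid;
        int owner_uid;
        int priority;
        int state;                  /* e.g., READY, RUNNING, BLOCKED */

        struct pcb *next;           /* link for ready or semaphore queues */
    };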

Switching from process to process

Result
A very clean way of thinking about separate computations
Processes can appear to be executing in parallel
–Even on a single-processor machine
Processes really can execute in parallel
–Multiprocessor or multi-core machine
…

Process States

The Fundamental Abstraction
Each process has its own “virtual” CPU
Each process can be thought of as an independent computation
On a fast enough CPU, processes can look like they are really running concurrently

Questions?

Challenge
How can we help processes synchronize with each other?
E.g., how does one process “know” that another has completed a particular action?
E.g., how do separate processes “keep out of each other’s way” with respect to some shared resource?

Digression (thought experiment)

static int y = 0;

int main(int argc, char **argv)
{
    extern int y;
    y = y + 1;
    return y;
}

Digression (thought experiment)

static int y = 0;

int main(int argc, char **argv)
{
    extern int y;
    y = y + 1;
    return y;
}

Upon completion of main, y == 1

Thought experiment (continued)

static int y = 0;

Process 1
int main(int argc, char **argv)
{
    extern int y;
    y = y + 1;
    return y;
}

Process 2
int main2(int argc, char **argv)
{
    extern int y;
    y = y - 1;
    return y;
}

Assuming processes run “at the same time,” what are the possible values of y after both terminate?
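
A runnable version of this experiment, with POSIX threads standing in for the two generic “processes” and extra iterations added so the race is actually observable (both are assumptions, not part of the original slides):

    /* Sketch (not from the slides). Build with: cc race.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    enum { N = 1000000 };
    static int y = 0;                 /* shared, unprotected */

    static void *inc(void *arg)
    {
        (void)arg;
        for (int i = 0; i < N; i++)
            y = y + 1;
        return NULL;
    }

    static void *dec(void *arg)
    {
        (void)arg;
        for (int i = 0; i < N; i++)
            y = y - 1;
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, inc, NULL);
        pthread_create(&t2, NULL, dec, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("y = %d\n", y);        /* frequently nonzero: the updates race */
        return 0;
    }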

Another Abstraction – Atomic Operation
An operation that either happens entirely or not at all
–No partial result is visible or apparent
–Non-interruptible
If two atomic operations happen “at the same time”
–The effect is as if one is first and the other is second
–(Usually) don’t know which is which
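
As one concrete example (assuming a C11 compiler, not anything on the slide), the language’s atomic operations have exactly this all-or-nothing property:

    /* Sketch (not from the slides): C11 atomic read-modify-write operations. */
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int y = 0;

    int main(void)
    {
        atomic_fetch_add(&y, 1);          /* all or nothing: no partial result visible */
        atomic_fetch_sub(&y, 1);          /* concurrent calls serialize in some order */
        printf("y = %d\n", atomic_load(&y));
        return 0;
    }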

Hardware Atomic Operations
On (nearly) all computers, reading and writing operations on machine words can be considered atomic
–Non-interruptible
–It either happens or it doesn’t
–Not in conflict with any other operation
When two attempts are made to read or write the same data, one is first and the other is second
–Don’t know which is which!
No other guarantees (unless we take extraordinary measures)

Definitions
Definition: race condition
–When two or more concurrent activities are trying to do something with the same variable, and different interleavings produce different values
–Effectively a random outcome
Critical Region (aka critical section)
–One or more fragments of code that operate on the same data, such that at most one activity at a time may be permitted to execute anywhere in that set of fragments

Synchronization – Critical Regions

Class Discussion
How do we keep multiple computations from being in a critical region at the same time?
–Especially when the number of computations is > 2
–Remembering that read and write operations are atomic

Possible ways to protect a critical section
Without OS assistance — locking variables & busy waiting
–Peterson’s solution
–Atomic read-modify-write – e.g., Test & Set
With OS assistance — abstract synchronization operations
–Single processor
–Multiple processors

Requirements – Controlling Access to a Critical Section
–Only one computation in the critical section at a time
–Symmetrical among n computations
–No assumption about relative speeds
–A stoppage outside the critical section does not lead to potential blocking of others
–No starvation — i.e., no combination of timings could cause a computation to wait forever to enter its critical section

Non-solution

static int turn = 0;

Process 1
while (TRUE) {
    while (turn != 0)
        /* loop */ ;
    critical_region();
    turn = 1;
    noncritical_region1();
}

Process 2
while (TRUE) {
    while (turn != 1)
        /* loop */ ;
    critical_region();
    turn = 0;
    noncritical_region2();
}

Non-solution

static int turn = 0;

Process 1
while (TRUE) {
    while (turn != 0)
        /* loop */ ;
    critical_region();
    turn = 1;
    noncritical_region1();
}

Process 2
while (TRUE) {
    while (turn != 1)
        /* loop */ ;
    critical_region();
    turn = 0;
    noncritical_region2();
}

What is wrong with this approach?

Peterson’s solution (2 processes)

static int turn = 0;
static int interested[2];

void enter_region(int process)
{
    int other = 1 - process;
    interested[process] = TRUE;
    turn = process;
    while (turn == process && interested[other] == TRUE)
        /* loop */ ;
}

void leave_region(int process)
{
    interested[process] = FALSE;
}
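
A sketch of the same algorithm using C11 atomics (an addition, not from the slides): on modern multiprocessors plain int loads and stores may be reordered, and Peterson’s algorithm then relies on sequentially consistent accesses, which the default atomic operations provide.

    /* Sketch (not from the slides): Peterson's algorithm with C11 atomics. */
    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_int turn;
    static atomic_bool interested[2];

    void enter_region(int process)          /* process is 0 or 1 */
    {
        int other = 1 - process;
        atomic_store(&interested[process], true);
        atomic_store(&turn, process);
        while (atomic_load(&turn) == process && atomic_load(&interested[other]))
            ;                               /* busy wait */
    }

    void leave_region(int process)
    {
        atomic_store(&interested[process], false);
    }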

Another approach: Test & Set Instruction
(built into CPU hardware)

static int lock = 0;

extern int TestAndSet(int &i);
/* Sets the value of i to 1 and returns the previous value of i. */

void enter_region(int &lock)
{
    while (TestAndSet(lock) == 1)
        /* loop */ ;
}

void leave_region(int &lock)
{
    lock = 0;
}

Test & Set Instruction
(built into CPU hardware)

static int lock = 0;

extern int TestAndSet(int &i);
/* Sets the value of i to 1 and returns the previous value of i. */

void enter_region(int &lock)
{
    while (TestAndSet(lock) == 1)
        /* loop */ ;
}

void leave_region(int &lock)
{
    lock = 0;
}

What about this solution?
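
A working equivalent, assuming C11: atomic_flag_test_and_set gives the same test-and-set behavior without the hypothetical TestAndSet wrapper used above.

    /* Sketch (not from the slides): spin lock built on C11's atomic_flag. */
    #include <stdatomic.h>

    static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

    void enter_region(void)
    {
        while (atomic_flag_test_and_set(&lock_flag))  /* returns the previous value */
            ;                                         /* busy wait (spin) */
    }

    void leave_region(void)
    {
        atomic_flag_clear(&lock_flag);
    }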

Variations
Compare & Swap(a, b):
    temp = b
    b = a
    a = temp
    return (a == b)
…
A whole mathematical theory about the efficacy of these operations
All require extraordinary circuitry in the processor, memory, and bus to implement atomically
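
For comparison (not from the slides), compare-and-swap as exposed by C11: atomic_compare_exchange_strong stores a new value only if the word still holds the expected one, which is enough to build the same enter/leave pair.

    /* Sketch (not from the slides): spin lock built on compare-and-swap. */
    #include <stdatomic.h>

    static atomic_int lock_word = 0;

    void enter_region(void)
    {
        int expected = 0;
        /* atomically: if lock_word == 0 then set it to 1, otherwise retry */
        while (!atomic_compare_exchange_strong(&lock_word, &expected, 1))
            expected = 0;   /* a failed call wrote the current value into expected */
    }

    void leave_region(void)
    {
        atomic_store(&lock_word, 0);
    }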

With OS assistance
Implement an abstraction:
–A data type called semaphore, taking non-negative integer values
–An operation wait_s(semaphore &s) such that
  if s > 0, atomically decrement s and proceed;
  if s = 0, block the computation until some other computation executes post_s(s)
–An operation post_s(semaphore &s):
  Atomically increment s; if one or more computations are blocked on s, allow precisely one of them to unblock and proceed

Critical Section control with Semaphore

static semaphore mutex = 1;

Process 1
while (TRUE) {
    wait_s(mutex);
    critical_region();
    post_s(mutex);
    noncritical_region1();
}

Process 2
while (TRUE) {
    wait_s(mutex);
    critical_region();
    post_s(mutex);
    noncritical_region2();
}

Critical Section control with Semaphore

static semaphore mutex = 1;

Process 1
while (TRUE) {
    wait_s(mutex);
    critical_region();
    post_s(mutex);
    noncritical_region1();
}

Process 2
while (TRUE) {
    wait_s(mutex);
    critical_region();
    post_s(mutex);
    noncritical_region2();
}

Does this meet the requirements for controlling access to critical sections?
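
A runnable sketch of this pattern, assuming POSIX semaphores and threads (sem_wait/sem_post standing in for wait_s/post_s); it also resolves the earlier y = y + 1 / y = y - 1 race.

    /* Sketch (not from the slides). Build with: cc sem.c -pthread */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    enum { N = 1000000 };
    static sem_t mutex;
    static int y = 0;

    static void *inc(void *arg)
    {
        (void)arg;
        for (int i = 0; i < N; i++) {
            sem_wait(&mutex);      /* wait_s(mutex) */
            y = y + 1;             /* critical region */
            sem_post(&mutex);      /* post_s(mutex) */
        }
        return NULL;
    }

    static void *dec(void *arg)
    {
        (void)arg;
        for (int i = 0; i < N; i++) {
            sem_wait(&mutex);
            y = y - 1;
            sem_post(&mutex);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);    /* initial value 1, as on the slide */
        pthread_create(&t1, NULL, inc, NULL);
        pthread_create(&t2, NULL, dec, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("y = %d\n", y);     /* now reliably 0 */
        return 0;
    }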

Semaphores – History
Introduced by E. Dijkstra in the 1960s
wait_s() was called P()
–Initial letter of a Dutch word meaning “test”
post_s() was called V()
–Initial letter of a Dutch word meaning “increase”

Abstractions
The semaphore is an example of a new and powerful abstraction defined by the OS
–I.e., a data type and some operations that add a capability that was not in the underlying hardware or system
Any program can use this abstraction to control critical sections and to create more powerful forms of synchronization among computations

Process and semaphore data structures

class State {
    long int PSW;
    long int regs[R];
    /* other stuff */
};

class PCB {
    PCB *next, *prev, *queue;
    State s;
    PCB(…);                     /* constructor */
    ~PCB();                     /* destructor */
};

class Semaphore {
    int count;
    PCB *queue;
    friend void wait_s(Semaphore &s);
    friend void post_s(Semaphore &s);
    Semaphore(int initial);     /* constructor */
    ~Semaphore();               /* destructor */
};

Implementation
[Figure: ready queue of PCBs; Semaphore A (count = 0) and Semaphore B (count = 2), each with its own queue of PCBs]

Implementation
Action – dispatch a process to the CPU:
–Remove the first PCB from the ready queue
–Load its registers and PSW
–Return from interrupt or trap
Action – interrupt a process:
–Save PSW and registers in its PCB
–If not blocked, insert the PCB into the ReadyQueue (in some order)
–Take appropriate action
–Dispatch the same or another process from the ReadyQueue

Implementation – Semaphore actions
Action – wait_s(Semaphore &s)
Implemented as a trap (with interrupts disabled)
if (s.count == 0)
  –Save registers and PSW in the PCB
  –Queue the PCB on s.queue
  –Dispatch the next process on the ReadyQueue
else
  –s.count = s.count – 1;
  –Re-dispatch the current process

Implementation – Semaphore actions
Action – post_s(Semaphore &s)
Implemented as a trap (with interrupts disabled)
if (s.queue != null)
  –Return the current process to the ReadyQueue
  –Move the first process on s.queue to the ReadyQueue
  –Dispatch some process from the ReadyQueue
else
  –s.count = s.count + 1;
  –Re-dispatch the current process
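
A user-level model of these two actions, sketched with a pthread mutex and condition variable (an analogy, not the kernel implementation above, which moves PCBs between queues): block while the count is zero, wake exactly one waiter on post.

    /* Sketch (not from the slides): wait_s/post_s modeled in user space. */
    #include <pthread.h>

    typedef struct {
        int count;
        pthread_mutex_t lock;
        pthread_cond_t  nonzero;
    } semaphore;

    void sem_init_s(semaphore *s, int initial)
    {
        s->count = initial;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->nonzero, NULL);
    }

    void wait_s(semaphore *s)
    {
        pthread_mutex_lock(&s->lock);
        while (s->count == 0)                    /* "block the computation" */
            pthread_cond_wait(&s->nonzero, &s->lock);
        s->count = s->count - 1;
        pthread_mutex_unlock(&s->lock);
    }

    void post_s(semaphore *s)
    {
        pthread_mutex_lock(&s->lock);
        s->count = s->count + 1;
        pthread_cond_signal(&s->nonzero);        /* unblock precisely one waiter */
        pthread_mutex_unlock(&s->lock);
    }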

Interrupt Handling
(Quickly) analyze the reason for the interrupt
Execute the equivalent of post_s on the appropriate semaphore, as necessary
Implemented in device-specific routines
Critical sections: buffers and variables shared with device managers
More about interrupt handling later in the course

Timer interrupts
Can be used to enforce “fair sharing”
The current process goes back to the ReadyQueue, after other processes of equal or higher priority
Simulates concurrent execution of multiple processes on the same processor

Definition – Context Switch
The act of switching from one process to another
E.g., upon an interrupt or some kind of wait for an event
Not a big deal in simple systems and processors
A very big deal in large systems such as Linux and Windows on a Pentium 4 (many microseconds!)

Complications for Multiple Processors
Disabling interrupts is not sufficient for atomic operations
Semaphore operations must themselves be implemented in critical sections
Queuing and dequeuing PCBs must also be implemented in critical sections
Other control operations need protection
These problems all have solutions but need deeper thought!

Summary
Interrupts are transparent to processes
wait_s() and post_s() behave as if they are atomic
Device handlers and all other OS services can be embedded in processes
All processes behave as if they are executing concurrently
The fundamental underpinning of all modern operating systems

Questions?