1 Concurrency
CS-502 Operating Systems, Fall 2007
(Slides include materials from Operating System Concepts, 7th ed., by Silberschatz, Galvin, & Gagne, and from Modern Operating Systems, 2nd ed., by Tanenbaum)

2 Concurrency
Since the beginning of computing, management of concurrent activity has been the central issue
– Concurrency between computation and input or output
– Concurrency between computation and user
– Concurrency between essentially independent computations that take place at the same time
– Concurrency between parts of large computations that are divided up to improve performance
– …

3 Example – 1960s
Programmers tried to write programs that would read from input devices and write to output devices in parallel with computing
– Card readers, paper tape, line printers, etc.
Challenges
– Keeping the buffers straight
– Synchronizing between read activity and computing
– Computing getting ahead of I/O activity
– I/O activity getting ahead of computing

4 Shared Computing Services
Multiple simultaneous, independent users of large computing facilities
– E.g., time-sharing systems of university computing centers
Data centers of large enterprises
– Multiple accounting, administrative, and data processing activities over common databases
– …

5 Modern PCs and Workstations
Multiple windows in a personal computer doing completely independent things
– Word, Excel, Photoshop, e-mail, etc.
Multiple activities within one application – e.g., in Microsoft Word
– Reading and interpreting keystrokes
– Formatting line and page breaks
– Displaying what you typed
– Spell checking
– Hyphenation
– …

6 Modern Game Implementations
Multiple characters in a game
– Concurrently & independently active
Multiple constraints, challenges, and interactions among characters
Multiple players

7 Technological Pressure
From the early 1950s to the early 2000s, single-processor computers increased in speed by 2× every 18 months or so
– Moore's Law
Multiprocessing was somewhat of a niche problem
– Specialized computing centers, techniques

8 Technological Pressure (continued)
No longer! Modern microprocessor clock speeds are no longer increasing with Moore's Law
– Microprocessor density on chips still is!
⇒ Multi-core processors are now de facto standard
– Even on low-end PCs!

9 Traditional Challenge for OS
Useful set of abstractions that help to
– Manage concurrency
– Manage synchronization among concurrent activities
– Communicate information in useful ways among concurrent activities
– Do it all efficiently

10 Modern Challenge
Methods and abstractions to help software engineers and application designers
– Take advantage of inherent concurrency in modern systems
– Do so with relative ease

11 Fundamental Abstraction
Process
– … aka task
– … aka thread
– … aka [other terms]

12 Process (a generic term)
Definition: A particular execution of a program
– Requires time, space, and (perhaps) other resources
– Separate from all other executions of the same program
    – Even those at the same time!
– Separate from executions of other programs

13 Process (a generic term – continued)
… can be
– Interrupted
– Suspended
– Blocked
– Unblocked
– Started or continued
Fundamental abstraction of all modern operating systems
Also known as thread (of control), task, job, etc.
Note: a Unix/Windows "Process" is a heavyweight concept with more implications than this simple definition

14 Process (a generic term – continued)
Concept emerged in the 1960s
– Intended to make sense out of the mish-mash of difficult programming techniques that bedeviled software engineers
Analogous to a police or taxi dispatcher!

15 Background – Interrupts
A mechanism in (nearly) all computers by which a running program can be suspended in order to cause the processor to do something else
Two kinds:
– Traps – synchronous, caused by the running program
    – Deliberate: e.g., system call
    – Error: divide by zero
– Interrupts – asynchronous, spawned by some other concurrent activity or device
Essential to the usefulness of computing systems

16 Hardware Interrupt Mechanism
Upon receipt of an electronic signal, the processor
– Saves the current PSW to a fixed location
– Loads a new PSW from another fixed location
PSW – Program Status Word
– Program counter
– Condition code bits (comparison results)
– Interrupt enable/disable bits
– Other control and mode information
    – E.g., privilege level, access to special instructions, etc.
Occurs between machine instructions
An abstraction in modern processors (see Silberschatz, §1.2.1 and §13.2.2)

17 Interrupt Handler
/* Enter with interrupts disabled */
Save registers of the interrupted computation
Load registers needed by the handler
Examine the cause of the interrupt
Take appropriate action (brief)
Then either
– Reload the registers of the interrupted computation
– Reload the interrupted PSW and re-enable interrupts
or
– Load the registers of another computation
– Load its PSW and re-enable interrupts
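Rendered as schematic C (an illustration only, not from the slides; save_state, choose_next, and the other helpers are hypothetical names for machine-dependent code), the handler looks roughly like this:

    struct PCB;                                /* per-process save area */

    extern struct PCB *current;                /* the interrupted computation */
    extern void save_state(struct PCB *p);     /* save its registers and PSW */
    extern void restore_state(struct PCB *p);  /* reload them, re-enable interrupts, resume */
    extern int  interrupt_cause(void);
    extern void service(int cause);            /* brief, device-specific action */
    extern struct PCB *choose_next(void);      /* same PCB, or another runnable one */

    void interrupt_handler(void)               /* entered with interrupts disabled */
    {
        save_state(current);
        service(interrupt_cause());
        current = choose_next();               /* resume the interrupted computation,
                                                  or switch to another one */
        restore_state(current);
    }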

18 Requirements of interrupt handlers
– Fast
– Avoid possibilities of interminable waits
– Must not count on correctness of the interrupted computation
– Must not get confused by multiple interrupts in close succession
– …
More challenging on multiprocessor systems

19 Result
Interrupts make it possible to have concurrent activities at the same time
– But they don't help in establishing an orderly way of thinking about those activities
– We need something more

20 Hence, emergence of the generic concept of process
– (or whatever it is called in a particular operating system and environment)

21 Information the system needs to maintain about a typical process
PSW (program status word)
– Program counter
– Condition codes
– Control information – e.g., privilege level, priority, etc.
Registers, stack pointer, etc.
– Whatever hardware resources needed to compute
Administrative information for the OS
– Owner, restrictions, resources, etc.
Other stuff …

22 Process Control Block (PCB)
(example data structure in an OS)

23 Switching from process to process

24 Result
A very clean way of thinking about separate computations
– Processes can appear to be executing in parallel
    – Even on a single-processor machine
– Processes really can execute in parallel
    – On a multiprocessor or multi-core machine
– …

25 Process States

26 The Fundamental Abstraction
Each process has its own "virtual" CPU
Each process can be thought of as an independent computation
On a fast enough CPU, processes can look like they are really running concurrently

27 Questions?

28 Challenge
How can we help processes synchronize with each other?
– E.g., how does one process "know" that another has completed a particular action?
– E.g., how do separate processes "keep out of each other's way" with respect to some shared resource?

29 Digression (thought experiment)

    static int y = 0;

    int main(int argc, char **argv)
    {
        extern int y;
        y = y + 1;
        return y;
    }

30 Digression (thought experiment)

    static int y = 0;

    int main(int argc, char **argv)
    {
        extern int y;
        y = y + 1;
        return y;
    }

Upon completion of main, y == 1

31 Thought experiment (continued)

    static int y = 0;

Process 1:
    int main(int argc, char **argv)
    {
        extern int y;
        y = y + 1;
        return y;
    }

Process 2:
    int main2(int argc, char **argv)
    {
        extern int y;
        y = y - 1;
        return y;
    }

Assuming the processes run "at the same time," what are the possible values of y after both terminate?
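The range of outcomes is easiest to see by actually running the two computations as threads of one program. Below is a minimal sketch (not part of the original slides) using POSIX threads; the function names inc and dec and the single-program framing are illustrative assumptions.

    /* Sketch: the two computations of the thought experiment as POSIX threads. */
    #include <pthread.h>
    #include <stdio.h>

    static int y = 0;

    static void *inc(void *arg) { y = y + 1; return NULL; }   /* "Process 1" */
    static void *dec(void *arg) { y = y - 1; return NULL; }   /* "Process 2" */

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, inc, NULL);
        pthread_create(&t2, NULL, dec, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* y may be -1, 0, or +1: each of y = y + 1 and y = y - 1 is really a
           load, an add/subtract, and a store, and those steps can interleave. */
        printf("y = %d\n", y);
        return 0;
    }

Compile with -pthread. In practice a single run will usually print 0, but all three outcomes are permitted by the possible interleavings.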

32 Another Abstraction – Atomic Operation
An operation that either happens entirely or not at all
– No partial result is visible or apparent
– Non-interruptible
If two atomic operations happen "at the same time"
– The effect is as if one is first and the other is second
– (Usually) don't know which is which

33 Hardware Atomic Operations
On (nearly) all computers, reading and writing operations on machine words can be considered atomic
– Non-interruptible; each either happens or it doesn't
– Not in conflict with any other operation
When two attempts to read or write the same data occur, one is first and the other is second
– Don't know which is which!
No other guarantees (unless we take extraordinary measures)

34 Definitions
Race condition: when two or more concurrent activities manipulate the same variable and the result depends on the order in which they execute
– Effectively a random outcome
Critical region (aka critical section): one or more fragments of code that operate on the same data, such that at most one activity at a time may be permitted to execute anywhere in that set of fragments

35 Synchronization – Critical Regions

36 Class Discussion
How do we keep multiple computations from being in a critical region at the same time?
– Especially when the number of computations is > 2
– Remembering that read and write operations are atomic

37 Possible ways to protect a critical section
Without OS assistance — locking variables & busy waiting
– Peterson's solution (pp. 195–197)
– Atomic read-modify-write – e.g., Test & Set
With OS assistance — abstract synchronization operations
– Single processor
– Multiple processors

38 Requirements – Controlling Access to a Critical Section
– Only one computation in the critical section at a time
– Symmetrical among n computations
– No assumptions about relative speeds
– A stoppage outside the critical section does not lead to potential blocking of others
– No starvation — i.e., no combination of timings could cause a computation to wait forever to enter its critical section

39 Non-solution

    static int turn = 0;

Process 1:
    while (TRUE) {
        while (turn != 0) /* loop */ ;
        critical_region();
        turn = 1;
        noncritical_region1();
    }

Process 2:
    while (TRUE) {
        while (turn != 1) /* loop */ ;
        critical_region();
        turn = 0;
        noncritical_region2();
    }

40 Non-solution

    static int turn = 0;

Process 1:
    while (TRUE) {
        while (turn != 0) /* loop */ ;
        critical_region();
        turn = 1;
        noncritical_region1();
    }

Process 2:
    while (TRUE) {
        while (turn != 1) /* loop */ ;
        critical_region();
        turn = 0;
        noncritical_region2();
    }

What is wrong with this approach?

41 Peterson's solution (2 processes)

    static int turn = 0;
    static int interested[2];

    void enter_region(int process)
    {
        int other = 1 - process;
        interested[process] = TRUE;
        turn = process;
        while (turn == process && interested[other] == TRUE)
            /* loop */ ;
    }

    void leave_region(int process)
    {
        interested[process] = FALSE;
    }
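A hypothetical usage pattern, assuming the definitions above; critical_region() and noncritical_region() stand in for whatever the application actually does:

    extern void critical_region(void);
    extern void noncritical_region(void);

    void process(int id)          /* one computation calls process(0), the other process(1) */
    {
        while (1) {
            enter_region(id);     /* spins until it is safe to enter */
            critical_region();    /* at most one computation is in here at a time */
            leave_region(id);
            noncritical_region();
        }
    }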

42 Another approach: Test & Set Instruction (built into CPU hardware)

    static int lock = 0;

    extern int TestAndSet(int &i);
    /* Sets the value of i to 1 and returns the previous value of i. */

    void enter_region(int &lock)
    {
        while (TestAndSet(lock) == 1)
            /* loop */ ;
    }

    void leave_region(int &lock)
    {
        lock = 0;
    }

43 Test & Set Instruction (built into CPU hardware)

    static int lock = 0;

    extern int TestAndSet(int &i);
    /* Sets the value of i to 1 and returns the previous value of i. */

    void enter_region(int &lock)
    {
        while (TestAndSet(lock) == 1)
            /* loop */ ;
    }

    void leave_region(int &lock)
    {
        lock = 0;
    }

What about this solution?
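For comparison, the same spin-lock idea can be written portably with C11 atomics, where atomic_flag_test_and_set is the standardized form of the hardware instruction. This is a sketch assuming a C11 toolchain, not the course's required interface:

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void enter_region(void)
    {
        /* Sets the flag and returns its previous value, so we spin while
           another computation already holds the lock. */
        while (atomic_flag_test_and_set(&lock))
            /* busy-wait */ ;
    }

    void leave_region(void)
    {
        atomic_flag_clear(&lock);
    }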

44 Variations
Compare & Swap(a, b):
    temp = b
    b = a
    a = temp
    return (a == b)
…
A whole mathematical theory exists about the efficacy of these operations
All require extraordinary circuitry in the processor, memory, and bus to implement atomically
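As a rough illustration of how such an instruction is exposed today, C11 provides atomic_compare_exchange_strong; the lock below is a sketch built on it (the variable names are illustrative), not the slide's exact primitive:

    #include <stdatomic.h>

    static atomic_int lock = 0;

    void enter_region(void)
    {
        int expected = 0;
        /* Atomically: if lock == 0, store 1 and return true; otherwise copy
           the current value of lock into expected and return false. */
        while (!atomic_compare_exchange_strong(&lock, &expected, 1))
            expected = 0;   /* reset and retry */
    }

    void leave_region(void)
    {
        atomic_store(&lock, 0);
    }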

45 With OS assistance
Implement an abstraction:
– A data type called semaphore, with non-negative integer values
– An operation wait_s(semaphore &s) such that
    – if s > 0, atomically decrement s and proceed
    – if s == 0, block the computation until some other computation executes post_s(s)
– An operation post_s(semaphore &s):
    – atomically increment s; if one or more computations are blocked on s, allow precisely one of them to unblock and proceed
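This abstraction is available to ordinary user programs today; POSIX semaphores, for example, map directly onto it, with sem_wait playing the role of wait_s and sem_post the role of post_s. A minimal sketch, assuming a POSIX system (the wrapper names setup/enter/leave are illustrative):

    #include <semaphore.h>

    static sem_t mutex;

    void setup(void) { sem_init(&mutex, 0, 1); }  /* shared among threads, initial value 1 */
    void enter(void) { sem_wait(&mutex); }        /* decrement, or block if the value is 0 */
    void leave(void) { sem_post(&mutex); }        /* increment, unblocking one waiter if any */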

46 Critical Section control with Semaphore

    static semaphore mutex = 1;

Process 1:
    while (TRUE) {
        wait_s(mutex);
        critical_region();
        post_s(mutex);
        noncritical_region1();
    }

Process 2:
    while (TRUE) {
        wait_s(mutex);
        critical_region();
        post_s(mutex);
        noncritical_region2();
    }

47 Critical Section control with Semaphore

    static semaphore mutex = 1;

Process 1:
    while (TRUE) {
        wait_s(mutex);
        critical_region();
        post_s(mutex);
        noncritical_region1();
    }

Process 2:
    while (TRUE) {
        wait_s(mutex);
        critical_region();
        post_s(mutex);
        noncritical_region2();
    }

Does this meet the requirements for controlling access to critical sections?

48 Semaphores – History
Introduced by E. Dijkstra in 1965
– wait_s() was called P()
    – Initial letter of a Dutch word meaning "test"
– post_s() was called V()
    – Initial letter of a Dutch word meaning "increase"

49 Abstractions
The semaphore is an example of a new and powerful abstraction defined by the OS
– I.e., a data type and some operations that add a capability that was not in the underlying hardware or system
– Any program can use this abstraction to control critical sections and to create more powerful forms of synchronization among computations

50 Process and semaphore data structures

    class State {
        long int PSW;
        long int regs[R];
        /* other stuff */
    };

    class PCB {
        PCB *next, *prev, *queue;
        State s;
        PCB(…);          /* constructor */
        ~PCB();          /* destructor  */
    };

    class Semaphore {
        int count;
        PCB *queue;
        friend void wait_s(Semaphore &s);
        friend void post_s(Semaphore &s);
        Semaphore(int initial);   /* constructor */
        ~Semaphore();             /* destructor  */
    };

51 Implementation
[Figure: the ready queue of PCBs; Semaphore A (count = 0) with a queue of blocked PCBs; Semaphore B (count = 2)]

52 Implementation
Action – dispatch a process to the CPU
– Remove the first PCB from the ready queue
– Load registers and PSW
– Return from interrupt or trap
Action – interrupt a process
– Save PSW and registers in its PCB
– If not blocked, insert the PCB into the ReadyQueue (in some order)
– Take appropriate action
– Dispatch the same or another process from the ReadyQueue

53 Implementation – Semaphore actions
Action – wait_s(Semaphore &s)
– Implemented as a trap (with interrupts disabled)
– if (s.count == 0)
    – Save registers and PSW in the PCB
    – Queue the PCB on s.queue
    – Dispatch the next process on the ReadyQueue
– else
    – s.count = s.count - 1
    – Re-dispatch the current process

54 Implementation – Semaphore actions
Action – post_s(Semaphore &s)
– Implemented as a trap (with interrupts disabled)
– if (s.queue != null)
    – Save the current process in the ReadyQueue
    – Move the first process on s.queue to the ReadyQueue
    – Dispatch some process from the ReadyQueue
– else
    – s.count = s.count + 1
    – Re-dispatch the current process
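Putting the two slides together, the trap handlers might be sketched in C as follows. This is schematic only: current, ready_queue, enqueue, dequeue, and dispatch are hypothetical stand-ins for the real PCB queues and context-switch machinery, not part of the slides.

    struct PCB;                                     /* saved registers, PSW, links, ... */
    struct Semaphore { int count; struct PCB *queue; };

    extern struct PCB *current;                     /* PCB of the running process */
    extern struct PCB *ready_queue;
    extern void enqueue(struct PCB **q, struct PCB *p);
    extern struct PCB *dequeue(struct PCB **q);
    extern void dispatch(struct PCB *p);            /* load its registers/PSW and run it */

    void wait_s(struct Semaphore *s)                /* trap handler, interrupts disabled */
    {
        if (s->count == 0) {
            enqueue(&s->queue, current);            /* block the caller on s */
            dispatch(dequeue(&ready_queue));        /* run the next ready process */
        } else {
            s->count = s->count - 1;                /* "resource" available: take it */
            dispatch(current);                      /* re-dispatch the caller */
        }
    }

    void post_s(struct Semaphore *s)                /* trap handler, interrupts disabled */
    {
        if (s->queue != 0) {
            enqueue(&ready_queue, current);         /* caller stays runnable */
            enqueue(&ready_queue, dequeue(&s->queue)); /* unblock exactly one waiter */
            dispatch(dequeue(&ready_queue));        /* run some ready process */
        } else {
            s->count = s->count + 1;
            dispatch(current);
        }
    }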

55 Interrupt Handling
(Quickly) analyze the reason for the interrupt
Execute the equivalent of post_s on the appropriate semaphore, as necessary
Implemented in device-specific routines
Critical sections
– Buffers and variables shared with device managers
More about interrupt handling later in the course

56 Timer interrupts
Can be used to enforce "fair sharing"
– The current process goes back to the ReadyQueue
– After other processes of equal or higher priority
Simulates concurrent execution of multiple processes on the same processor

57 Definition – Context Switch
The act of switching from one process to another
– E.g., upon an interrupt or some kind of wait for an event
Not a big deal in simple systems and processors
A very big deal in large systems such as Linux and Windows on a Pentium 4
– Many microseconds!

58 Complications for Multiple Processors
Disabling interrupts is not sufficient for atomic operations
– Semaphore operations must themselves be implemented in critical sections
– Queuing and dequeuing PCBs must also be implemented in critical sections
– Other control operations need protection
These problems all have solutions but need deeper thought!
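One common remedy, sketched below under the assumption of a test-and-set style instruction (here C11's atomic_flag), is to give each semaphore a short-term spin-lock: wait_s/post_s disable interrupts on the local CPU and then spin for the per-semaphore lock before touching count or queue. This is an illustration, not the course's prescribed design; disable_interrupts and enable_interrupts are hypothetical per-CPU operations.

    #include <stdatomic.h>

    struct PCB;
    struct Semaphore {
        atomic_flag guard;    /* short-term spin-lock protecting this semaphore */
        int count;
        struct PCB *queue;
    };

    extern void disable_interrupts(void);   /* hypothetical per-CPU operations */
    extern void enable_interrupts(void);

    static void sem_lock(struct Semaphore *s)
    {
        disable_interrupts();                        /* keep out local interrupt handlers */
        while (atomic_flag_test_and_set(&s->guard))  /* keep out the other processors */
            /* brief busy-wait */ ;
    }

    static void sem_unlock(struct Semaphore *s)
    {
        atomic_flag_clear(&s->guard);
        enable_interrupts();
    }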

59 Summary
Interrupts are transparent to processes
wait_s() and post_s() behave as if they are atomic
Device handlers and all other OS services can be embedded in processes
All processes behave as if they are executing concurrently
This is the fundamental underpinning of all modern operating systems

60 Questions?

