1 Friday, June 16, 2006 "In order to maintain secrecy, this posting will self-destruct in five seconds. Memorize it, then eat your computer." - Anonymous

2 §Use g++ -R/opt/sfw/lib somefile.c to add /opt/sfw/lib to the runtime shared-library lookup path
§Alternatively, edit your .bash_profile and add /opt/sfw/lib to LD_LIBRARY_PATH

3 Scheduling in Unix - other versions also possible §Designed to provide good response to interactive processes §Uses multiple queues §Each queue is associated with a range of non-overlapping priority values

4 Scheduling in Unix - other versions also possible §Processes executing in user mode have positive values §Processes executing in kernel mode (doing system calls) have negative values §Negative values have higher priority; large positive values have the lowest

5 Scheduling in Unix §Only processes that are in memory and ready to run are kept on the queues §The scheduler searches the queues starting at the highest priority §The first process on that queue is chosen and started. It runs for one time quantum (say 100 ms) or until it blocks §If the process uses up its quantum, it is placed back on its queue §Processes within the same priority range share the CPU in round-robin (RR) fashion
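The queue-search-plus-round-robin behaviour described above can be sketched as follows; the queue contents and priority values are made-up examples, not taken from the slides:

```python
from collections import deque

# Multi-level queues: smaller number = higher priority
# (negative = kernel mode, positive = user mode).
queues = {-1: deque(), 0: deque(["A", "B"]), 1: deque(["C"])}

def pick_next():
    """Search queues from highest priority down; take the first ready process."""
    for prio in sorted(queues):
        if queues[prio]:
            return prio, queues[prio].popleft()
    return None, None

order = []
for _ in range(4):                # run four quanta
    prio, proc = pick_next()
    order.append(proc)
    queues[prio].append(proc)     # quantum used up: back on the same queue (RR)

print(order)                      # ['A', 'B', 'A', 'B']
```

Note how A and B share the CPU round-robin within their priority range, while C never runs as long as a higher-priority queue is non-empty.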

6 Scheduling in Unix §Every second each process's priority is recalculated (usually based on CPU usage) and the process is attached to the appropriate queue §Accumulated CPU usage is decayed, so that recent CPU usage counts more than old usage
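The slide does not give the exact formula, so here is one classical recipe (a 4.3BSD-style recalculation, shown purely as an assumption for illustration):

```python
def recalculate(cpu_usage, nice, base=0):
    """Once per second: decay accumulated CPU usage, then derive a new priority.
    A higher numeric value means a lower scheduling priority."""
    cpu_usage = cpu_usage // 2                 # decay: forget half of past usage
    priority = base + cpu_usage // 2 + nice    # recent CPU hogs sink in priority
    return cpu_usage, priority

usage = 40
for second in range(3):
    usage, prio = recalculate(usage, nice=0)
    print(second, usage, prio)
```

A process that stops using the CPU sees its usage halve every second, so its priority gradually recovers.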

7 Scheduling in Unix §A process might have to block before a system call is complete. While waiting, it is put on a queue with a negative priority (determined by the event it is waiting for). Reason:
– Allow the process to run immediately after each request completes, so that it can make the next one quickly
– If it is waiting for terminal input, it is an interactive process
§CPU-bound processes get service only when all I/O-bound and interactive processes are blocked

8 The Unix scheduler is based on a multi-level queue structure

9 §top provides an ongoing look at processor activity in real time §nice default value is zero in UNIX
– Allowed range: -20 to 20

10 Windows §Priority-based preemptive scheduling §The selected thread runs until:
– It is preempted by a higher-priority thread
– Its time quantum ends
– It calls a blocking system call
– It terminates

11 Win32 API §SetPriorityClass sets the priority class of all threads in the caller's process
– Real-time, high, above normal, normal, below normal, and idle
§SetThreadPriority sets the priority of a thread relative to other threads in its process
– Time critical, highest, above normal, normal, below normal, lowest, and idle

12 Win32 API §How many combinations? §System has 32 priorities
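A quick enumeration answers the question above; the class and thread-priority names follow the common Win32 lists (an assumption about naming, not code from the slides):

```python
priority_classes = ["real-time", "high", "above normal",
                    "normal", "below normal", "idle"]
thread_priorities = ["time critical", "highest", "above normal", "normal",
                     "below normal", "lowest", "idle"]

combinations = [(c, t) for c in priority_classes for t in thread_priorities]
print(len(combinations))   # 42 combinations, mapped onto only 32 system priorities
```

Since the system has only 32 priority levels, several class/thread-priority combinations necessarily map to the same base priority.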

13 RR for multiple threads at same priority

14 §Threads are selected irrespective of the process they belong to §The highest priorities are called real time, but they are not true real-time guarantees §The real-time priorities are reserved for the system itself

15 §Ordinary users are not allowed those priorities. Why? §Users run at priorities 1-15

16 Windows §Priority lowering depends on a thread's time quantum §Priority boosting:
– When a thread is released from a wait operation (e.g., a thread that was waiting for keyboard input, or for a disk operation)
– Gives good response time to interactive threads

17 Windows
– The currently active window gets a boost to improve its response time
– The scheduler also keeps track of when a ready thread last ran
– Priority boosting never raises a thread above priority 15

18 Example: Multiprogramming §5 jobs, each with 80% I/O wait §If one of these jobs enters the system and is the only process there, it uses 12 seconds of CPU time per minute §CPU busy: 20% of the time §If that job needs 4 minutes of CPU time, it will require at least 20 minutes in all to get done

19 Example: Multiprogramming §Too simplistic a model:
– If the average job computes for only 20% of the time, then with five such processes the CPU should be busy all the time
– BUT…

20 Example: Multiprogramming §We cannot assume the five jobs will never all be waiting for I/O at the same time §CPU utilization = 1 - p^n, where
– n = number of processes
– p = probability that a single process is waiting for I/O
– p^n = probability that all n processes are waiting for I/O at the same time
§Approximation only: there may be dependencies between processes
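Plugging the numbers from the example above into the formula:

```python
def cpu_utilization(p, n):
    """Utilization = 1 - p**n: the chance that not all n processes wait for I/O."""
    return 1 - p ** n

# One job with 80% I/O wait keeps the CPU busy about 20% of the time...
print(cpu_utilization(0.8, 1))   # ~0.20
# ...but five such jobs together keep it busy about 67% of the time.
print(cpu_utilization(0.8, 5))   # ~0.67
```

So multiprogramming raises utilization substantially, but well short of the naive 5 × 20% = 100% estimate.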

21 CPU utilization as a function of number of processes in memory

22 Producer-Consumer Problem §Paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process
– unbounded buffer: places no practical limit on the size of the buffer
– bounded buffer: assumes a fixed buffer size

23 Bounded-Buffer – Shared-Memory Solution §Shared data
    var n;
    type item = … ;
    var buffer: array [0..n-1] of item;
        in, out: 0..n-1;

24 Bounded-Buffer – Shared-Memory Solution §Producer process
    repeat
        …
        produce an item in nextp
        …
        while (in+1) mod n = out do no-op;
        buffer[in] := nextp;
        in := (in+1) mod n;
    until false;

25 Bounded-Buffer (Cont.) §Consumer process
    repeat
        while in = out do no-op;
        nextc := buffer[out];
        out := (out+1) mod n;
        …
        consume the item in nextc
        …
    until false;

26 Bounded-Buffer (Cont.) §Solution is correct, but …? (Hint: when in = out the buffer must be treated as empty, so at most n-1 of the n slots can ever be full.)

27 Bounded-Buffer §Shared data
    #define BUFFER_SIZE 10
    typedef struct {
        ...
    } item;
    item buffer[BUFFER_SIZE];
    int in = 0;
    int out = 0;
    int counter = 0;

28 Bounded-Buffer §Producer process
    item nextProduced;
    while (1) {
        while (counter == BUFFER_SIZE)
            ; /* do nothing */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

29 Bounded-Buffer §Consumer process
    item nextConsumed;
    while (1) {
        while (counter == 0)
            ; /* do nothing */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
    }

30 Bounded Buffer §The statements counter++; counter--; must be performed atomically. §Atomic operation means an operation that completes in its entirety without interruption.

31 Bounded Buffer §The statement “counter++” may be implemented in machine language as:
    register1 = counter
    register1 = register1 + 1
    counter = register1
§The statement “counter--” may be implemented as:
    register2 = counter
    register2 = register2 - 1
    counter = register2
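Why atomicity matters: the following deterministic replay of one unlucky interleaving (counter starts at 5, a producer increment racing a consumer decrement) loses an update. The starting value 5 is an illustrative assumption.

```python
counter = 5

# Producer and consumer both load counter before either stores it back.
register1 = counter        # producer:  register1 = 5
register2 = counter        # consumer:  register2 = 5
register1 = register1 + 1  # producer:  register1 = 6
register2 = register2 - 1  # consumer:  register2 = 4
counter = register1        # producer stores 6
counter = register2        # consumer stores 4 -- the increment is lost

print(counter)             # 4, although one ++ and one -- should leave 5
```

Other interleavings leave counter at 5 or 6, which is exactly the point: the result depends on scheduling.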

32 Process Synchronization §Cooperating processes executing in a system can affect each other §Concurrent access to shared data may result in data inconsistency §Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes

33 Race Condition §Problem in the previous example: one process started using the shared variable before another process was finished with it §Race condition: the situation where several processes access and manipulate shared data concurrently; the final value of the shared data depends on which process finishes last §To prevent race conditions, concurrent processes must be synchronized

34 Two processes want to access shared memory at the same time
Debugging is very difficult...

35 Example...

36 Solution #1

37 Solution #2

38 Solution #3

39 The Critical-Section Problem §n processes all competing to use some shared data §Each process has a code segment, called the critical section, in which the shared data is accessed §Problem: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section

40

41 §No two processes may be simultaneously inside their critical sections §No assumptions may be made about speeds or the number of CPUs §No process running outside its critical region may block other processes §No process waits forever to enter its critical section

42 §Only 2 processes, P0 and P1 §General structure of process Pi (other process Pj):
    do {
        entry section
            critical section
        exit section
            remainder section
    } while (1);
§Processes may share some common variables to synchronize their actions

43 Solution to Critical-Section Problem 1. Mutual Exclusion. If process Pi is executing in its critical section, then no other process can be executing in its critical section.

44 Solution to Critical-Section Problem 2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision on which will enter its critical section next, and this selection cannot be postponed indefinitely.

45 Solution to Critical-Section Problem 3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
§Assume that each process executes at a nonzero speed
§No assumption is made concerning the relative speed of the n processes

46 §Assume that basic machine-language instructions such as load and store are executed atomically.

47 Algorithm 1 §Shared variable turn, initialized to 0 (or 1) §Pi may enter its critical section only when turn = i
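A runnable sketch of Algorithm 1 (strict alternation); the threading setup and iteration count are illustrative assumptions, not from the slides. It satisfies mutual exclusion, but violates the progress requirement: the two processes must take strict turns, so one cannot enter twice in a row even if the other has no interest in its critical section.

```python
import threading

turn = 0     # shared: whose turn it is to enter the critical section
count = 0    # shared data protected by the algorithm
N = 50

def process(i):
    global turn, count
    for _ in range(N):
        while turn != i:        # entry section: busy-wait until it is our turn
            pass
        count += 1              # critical section
        turn = 1 - i            # exit section: hand the turn to the other process

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)                    # 100: strict alternation loses no updates
```

Because only the thread whose turn it is may execute the critical section, and the turn is handed over only after the update is stored, no increment is ever lost, unlike the race shown on slide 31.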