
1 Process Management

2 A process can be thought of as a program in execution.
A process will need certain resources—such as CPU time, memory, files, and I/O devices—to accomplish its task. These resources are allocated to the process by the OS either when it is created or while it is executing. Although traditionally a process contained only a single thread of control as it ran, most modern operating systems now support processes that have multiple threads.

3 The operating system is responsible for the following activities in connection with process and thread management:
the creation and deletion of both user and system processes
the scheduling of processes
the provision of mechanisms for process synchronization and communication
deadlock handling for processes
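As a concrete illustration (a minimal sketch, not part of the original slides), on a POSIX system a user process asks the operating system to create a new process with fork() and waits for its termination with wait():

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();               /* the OS creates a new (child) process */
        if (pid < 0) {
            perror("fork");               /* creation failed */
            return 1;
        }
        if (pid == 0) {                   /* child: a separate process from here on */
            printf("child running, pid=%d\n", (int)getpid());
            return 0;                     /* child terminates */
        }
        wait(NULL);                       /* parent blocks until the child terminates */
        printf("parent: the child has finished\n");
        return 0;
    }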

4 A process is more than the program code, which is sometimes known as the text section.
It also includes the current activity, as represented by the value of the program counter and the contents of the processor’s registers. A process generally also includes the process stack, which contains temporary data. A process may also include a heap, which is memory that is dynamically allocated during process run time.

5 [Figure: layout of a process in memory]

6 Process State: As a process executes, it changes state.
The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:
New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
Ready: The process is waiting to be assigned to a processor.
Terminated: The process has finished execution.
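These states could be represented in code roughly as follows (an illustrative sketch; the enum and its names are hypothetical and not taken from any real kernel):

    /* Hypothetical encoding of the five process states. */
    enum proc_state {
        PROC_NEW,         /* the process is being created */
        PROC_RUNNING,     /* instructions are being executed */
        PROC_WAITING,     /* waiting for an event (I/O completion, signal) */
        PROC_READY,       /* waiting to be assigned to a processor */
        PROC_TERMINATED   /* the process has finished execution */
    };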

7 [Figure: diagram of process state transitions]

8 Process Control Block:
Each process is represented in the operating system by a process control block (PCB)—also called a task control block. It contains many pieces of information associated with a specific process, including these:

9 CPU-scheduling information.
Memory-management information.
Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
I/O status information. This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
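Putting the pieces together, a PCB can be sketched as a C structure such as the one below (the field names and sizes are hypothetical; a real operating system stores far more, and each field here stands in for a whole category of information):

    /* Hypothetical process control block. */
    struct pcb {
        int            pid;               /* process number */
        int            state;             /* new, ready, running, waiting, terminated */
        unsigned long  program_counter;   /* saved program counter */
        unsigned long  registers[16];     /* saved contents of the CPU registers */
        int            priority;          /* CPU-scheduling information */
        void          *page_table;        /* memory-management information */
        unsigned long  cpu_time_used;     /* accounting information */
        int            open_files[16];    /* I/O status information */
        struct pcb    *next;              /* link used by the scheduling queues */
    };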

10 Process Scheduling The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. To meet this objective, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on the CPU. For a single-processor system, there will never be more than one running process. If there are more processes, the rest will have to wait until the CPU is free and can be rescheduled.

11 [Figure: the ready queue and various device queues]

12 Schedulers The operating system must select, for scheduling purposes, processes from these queues in some fashion. The selection process is carried out by the appropriate scheduler. The long-term scheduler, or job scheduler, selects processes from the pool of jobs spooled to disk and loads them into memory for execution. The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them.

13 Context Switch Interrupts cause the operating system to change a CPU from its current task and to run a kernel routine. When an interrupt occurs, the system needs to save the current context of the process running on the CPU, so that it can restore that context when its processing is done, essentially suspending the process and then resuming it. The context is represented in the PCB of the process. Generically, we perform a state-save of the current state of the CPU, be it in kernel or user mode, and then a state-restore to resume operations.

14 Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. Context-switch time is pure overhead, because the system does no useful work while switching.
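Conceptually, the switch looks something like the sketch below. This is only an outline: save_cpu_state() and restore_cpu_state() are made-up placeholders here, and real kernels perform these steps in architecture-specific assembly.

    /* Hypothetical outline of a context switch. */
    enum { READY, RUNNING, WAITING };

    struct pcb { int pid; int state; unsigned long saved_regs[16]; };

    static void save_cpu_state(struct pcb *p)    { (void)p; /* copy registers and PC into p */ }
    static void restore_cpu_state(struct pcb *p) { (void)p; /* reload registers and PC from p */ }

    void context_switch(struct pcb *prev, struct pcb *next) {
        save_cpu_state(prev);     /* state-save of the current CPU context into prev's PCB */
        prev->state = READY;      /* or WAITING, depending on why the switch happened */
        next->state = RUNNING;
        restore_cpu_state(next);  /* state-restore: the CPU now continues running next */
    }                             /* everything between save and restore is pure overhead */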

15 Inter-process Communication
Processes executing concurrently in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent. A process is cooperating if it can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process.

16 Why should processes co-operate?
There are several reasons for providing an environment that allows process cooperation: Information sharing. Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes. Convenience.

17 Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information. There are two fundamental models of interprocess communication:
shared memory
message passing
In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.

18 In the message passing model, communication takes place by means of messages exchanged between the cooperating processes. Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided. Shared memory allows maximum speed and convenience of communication.

19 [Figure: the two communication models—message passing and shared memory]

20 Shared-Memory Systems
Interprocess communication using shared memory requires communicating processes to establish a region of shared memory. Typically, a shared-memory region resides in the address space of the process creating the shared-memory segment. Other processes that wish to communicate using this shared-memory segment must attach it to their address space. They can then exchange information by reading and writing data in the shared areas. The processes are also responsible for ensuring that they are not writing to the same location simultaneously.
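On a POSIX system, such a region can be created with shm_open() and attached with mmap(). The sketch below is illustrative only: the object name "/demo_shm" and the 4096-byte size are arbitrary choices, error handling is omitted, and on Linux the program is linked with -lrt.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const size_t SIZE = 4096;

        /* create (or open) a named shared-memory object and set its size */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
        ftruncate(fd, SIZE);

        /* attach the shared segment to this process's address space */
        char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        /* write into the shared region; a cooperating process that opens and
         * maps "/demo_shm" the same way will see this string */
        snprintf(ptr, SIZE, "hello from process %d", (int)getpid());
        return 0;
    }

The cooperating processes must still coordinate their accesses (for example with semaphores) so that they do not write to the same location simultaneously.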

21 A producer process produces information that is consumed by a consumer process.
To allow producer and consumer processes to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer. This buffer will reside in a region of memory that is shared by the producer and consumer processes. A producer can produce one item while the consumer is consuming another item. The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.

22 An unbounded buffer places no practical limit on the size of the buffer.
The consumer may have to wait for new items, but the producer can always produce new items. The bounded buffer assumes a fixed buffer size. In this case, the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
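The usual way to code the bounded buffer is as a circular array with two indices, along the lines of the sketch below. It is shown with plain global variables and busy-waiting; in a real shared-memory solution the buffer and indices would live in the shared segment, and proper synchronization would replace the empty loops.

    #define BUFFER_SIZE 10

    typedef struct { int value; } item;

    item buffer[BUFFER_SIZE];   /* circular buffer shared by producer and consumer */
    int in  = 0;                /* index of the next free slot (advanced by the producer) */
    int out = 0;                /* index of the next full slot (advanced by the consumer) */

    /* producer: waits while the buffer is full (it can hold BUFFER_SIZE - 1 items) */
    void produce(item next_produced) {
        while (((in + 1) % BUFFER_SIZE) == out)
            ;                                   /* buffer full: do nothing */
        buffer[in] = next_produced;
        in = (in + 1) % BUFFER_SIZE;
    }

    /* consumer: waits while the buffer is empty */
    item consume(void) {
        while (in == out)
            ;                                   /* buffer empty: do nothing */
        item next_consumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        return next_consumed;
    }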

23 Message-Passing Systems
Message passing is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. A message-passing facility provides at least two operations: send(message) and receive(message). Messages sent by a process can be of either fixed or variable size. For two processes to communicate, a communication link must exist between them. Such a link can be implemented with:
Direct or indirect communication
Synchronous or asynchronous communication
Automatic or explicit buffering

24 With direct communication, each process that wants to communicate is supposed to have a name and must explicitly name the recipient or sender.
Ex: send(P, message)—Send a message to process P.
receive(Q, message)—Receive a message from process Q.
With indirect communication, the messages are sent to and received from mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed.
send(A, message)—Send a message to mailbox A.
receive(A, message)—Receive a message from mailbox A.
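POSIX message queues behave very much like such mailboxes. The sketch below sends and then receives one message through a queue; the name "/demo_mb" and the attribute values are arbitrary illustration choices, error handling is omitted, and on Linux the program is linked with -lrt.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* create (or open) the mailbox with room for 10 messages of up to 128 bytes */
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
        mqd_t mbox = mq_open("/demo_mb", O_CREAT | O_RDWR, 0644, &attr);

        /* send(A, message) */
        const char *msg = "hello";
        mq_send(mbox, msg, strlen(msg) + 1, 0);

        /* receive(A, message): the buffer must be at least mq_msgsize bytes */
        char buf[128];
        mq_receive(mbox, buf, sizeof buf, NULL);
        printf("received: %s\n", buf);

        mq_close(mbox);
        return 0;
    }

In practice the sender and receiver would be two different processes that each open "/demo_mb" by name.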

25 A mailbox may be owned either by a process or by the operating system.
A process that creates a mailbox to receive the messages is the owner. The process which puts messages into the mailbox is the user.

26 CPU Scheduling CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive. CPU-scheduling decisions may take place under the following four circumstances:

27 1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state.
3. When a process switches from the waiting state to the ready state.
4. When a process terminates.
When scheduling takes place only under circumstances 1 and 4, we say that the scheduling scheme is nonpreemptive or cooperative; otherwise, it is preemptive.

28 Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state. Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.

29 Scheduling Criteria
CPU utilization: In a real system, CPU utilization should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
Throughput: One measure of work is the number of processes that are completed per time unit, called throughput.
Turnaround time: The interval from the time of submission of a process to the time of completion is the turnaround time. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.

30 Waiting time: The CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
Response time: Another measure is the time from the submission of a request until the first response is produced.
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time.
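For a single process that does no I/O, these criteria boil down to simple differences, as the small helper below illustrates (the struct and field names are invented for this sketch; all times are in milliseconds):

    struct proc_times {
        double arrival;     /* time the process was submitted */
        double first_run;   /* time it first got the CPU */
        double completion;  /* time it finished */
        double burst;       /* total CPU time it used */
    };

    /* submission to completion */
    double turnaround(struct proc_times p) { return p.completion - p.arrival; }
    /* time spent in the ready queue (valid when the process does no I/O) */
    double waiting(struct proc_times p)    { return turnaround(p) - p.burst; }
    /* submission until the first response is produced */
    double response(struct proc_times p)   { return p.first_run - p.arrival; }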

31 Scheduling Algorithms
First-Come, First-Served Scheduling
Shortest-Job-First Scheduling
Priority Scheduling
Round-Robin Scheduling

32 First-Come, First-Served Scheduling
With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:
Process  Burst time
P1  24
P2  3
P3  3

33 If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart:
P1 P2 P3 24msec 3msec 3msec

34 The waiting time for P1 is 0 msec, for P2 it is 24 msec, and for P3 it is 27 msec.
The average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
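A few lines of C reproduce this calculation from the burst times above (a sketch, not part of the slides):

    #include <stdio.h>

    int main(void) {
        int burst[] = { 24, 3, 3 };          /* CPU bursts of P1, P2, P3 in ms, all arriving at time 0 */
        int n = 3, wait = 0, total = 0;

        for (int i = 0; i < n; i++) {        /* FCFS: serve in arrival order P1, P2, P3 */
            printf("P%d waits %d ms\n", i + 1, wait);
            total += wait;
            wait += burst[i];                /* the next process waits for every earlier burst */
        }
        printf("average waiting time = %.1f ms\n", (double)total / n);   /* prints 17.0 */
        return 0;
    }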

35 Shortest-Job-First Scheduling
This algorithm associates with each process the length of the process’s next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. With the same set of processes as above, the Gantt chart is as below:
P2 P3 P1 3msec 3msec 24msec

36 The waiting time for P1 is 6 msec, for P2 it is 0 msec, and for P3 it is 3 msec.
The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds.
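Sorting the same bursts by length and repeating the computation gives the SJF average (again an illustrative sketch):

    #include <stdio.h>
    #include <stdlib.h>

    static int shorter_first(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;        /* smallest next CPU burst first */
    }

    int main(void) {
        int burst[] = { 24, 3, 3 };                      /* P1, P2, P3 in ms, all arriving at time 0 */
        qsort(burst, 3, sizeof burst[0], shorter_first); /* SJF = FCFS over the sorted bursts */

        int wait = 0, total = 0;
        for (int i = 0; i < 3; i++) {                    /* waits become 0, 3, 6 */
            total += wait;
            wait += burst[i];
        }
        printf("average waiting time = %.1f ms\n", (double)total / 3);   /* prints 3.0 */
        return 0;
    }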

37 Priority Scheduling A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order.

38 The Gantt chart is as follows:
P2 P5 P1 P3 P4 1msec 5msec 10msec 2msec 1msec

39 The waiting time for P1 is 6 msec.
The average waiting time is (0 + 1 + 6 + 16 + 18)/5 = 8.2 milliseconds.

40 Priority scheduling can be either preemptive or nonpreemptive.
When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.

41 Pre-emptive Round-Robin
A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum. If the CPU burst time is less than 1 time quantum, the process voluntarily releases the CPU and the next process is scheduled.

42 If the CPU burst time is greater than 1 time quantum, the process is preempted and the next process is scheduled.
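A round-robin schedule with quantum 1 can be simulated with a short loop, as in the sketch below (not part of the slides; the bursts 10, 1, 2, 1 and 5 ms match the processes used in the exercise on slides 48–51):

    #include <stdio.h>

    int main(void) {
        int remaining[] = { 10, 1, 2, 1, 5 };   /* remaining burst time of P1..P5 in ms */
        int n = 5, quantum = 1, clock = 0, finished = 0;

        while (finished < n) {                  /* treat the ready queue as circular */
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0) continue;                   /* already terminated */
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                printf("t=%2d ms: P%d runs for %d ms\n", clock, i + 1, slice);
                clock += slice;                                    /* preempt after one quantum */
                remaining[i] -= slice;
                if (remaining[i] == 0) finished++;
            }
        }
        printf("all processes complete at t=%d ms\n", clock);      /* 19 ms in total */
        return 0;
    }

With quantum 1 every ready process gets the CPU once per pass over the queue, so short processes finish quickly at the cost of many context switches.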

43 The following processes arrive in the ready queue. Use non-preemptive scheduling to calculate the average waiting time and turn-around time for all the processes with the FCFS and SJF scheduling algorithms. Also draw the Gantt chart. Assume that the CPU is idle for the initial 1.0 msec.
Process  Arrival time  Burst time
P1  0.0 msec  8 msec
P2  0.4 msec  4 msec
P3  1.0 msec  1 msec

44 FCFS
The waiting time for P1 = 0 msec
The waiting time for P2 = 8 msec
The waiting time for P3 = 12 msec
Average waiting time = (0 + 8 + 12)/3 = 6.67 msec
P1 P2 P3 8msec 4msec 1msec

45 The turn-around time for P1 = (8 msec - 0 msec) = 8 msec
The turn-around time for P2 = (12 msec - 0.4 msec) = 11.6 msec
The turn-around time for P3 = (13 msec - 1.0 msec) = 12 msec
Average turn-around time = (8 + 11.6 + 12)/3 = 10.53 msec
P1 P2 P3 8msec 4msec 1msec

46 SJF
The waiting time for P1 = 5 msec
The waiting time for P2 = 1 msec
The waiting time for P3 = 0 msec
Average waiting time = (5 + 1 + 0)/3 = 2 msec
P3 P2 P1 1msec 4msec 8msec

47 The turn-around time for P1 = (13 msec - 0 msec) = 13 msec
The turn-around time for P2 = (5 msec - 0.4 msec) = 4.6 msec
The turn-around time for P3 = (2 msec - 1.0 msec) = 1 msec
Average turn-around time = (13 + 4.6 + 1)/3 = 6.2 msec
P3 P2 P1 1msec 4msec 8msec

48 Consider the following set of processes, with the length of the CPU burst given in milliseconds:
The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0. Draw four Gantt charts that illustrate the execution of these processes using the following scheduling algorithms: FCFS, SJF, non-preemptive priority (a smaller priority number implies a higher priority), and RR (quantum = 1).

49 FCFS P1 P2 P3 P4 P5 10msec 1msec 2msec 1msec 5msec

50 SJF P2 P4 P3 P5 P1 1msec 1msec 2msec 5msec 10msec

51 Non-preemptive priority
P2 P5 P1 P3 P4 1msec 5msec 10msec 2msec 1msec

