
1 Chapter 3: Processes Book: Operating System Principles, 9th Edition, Abraham Silberschatz, Peter Baer Galvin, Greg Gagne

2 Process Concept An operating system executes a variety of programs:
Batch system – jobs
Time-shared systems – user programs or tasks
The book uses the terms job and process almost interchangeably
Process – a program in execution; process execution must progress in sequential fashion
A process includes: program counter, stack, data section, heap
There may be multiple processes associated with a single program; their data, heap, and stack sections vary (e.g. multiple copies of a web browser running)

3 Process in Memory

4 Two-State Process Model
A process may be in one of two states: Running or Not Running

5 Not-Running Process in a Queue

6 Process State As a process executes, it changes state
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to occur
ready: The process is waiting to be assigned to a processor
terminated: The process has finished execution

7 Diagram of Process State

8 Five-State Process Model

9 Using Two Queues

10

11 Suspended Processes Processor is faster than I/O so all processes could be waiting for I/O Swap these processes to disk to free up more memory Blocked state becomes suspend state when swapped to disk Two new states Blocked, suspend Ready, suspend

12 One Suspend State

13 Two Suspend States

14 Reasons for Process Suspension

15 Process Control Block (PCB)
Information associated with each process:
Process state
Program counter
CPU registers
CPU-scheduling information
Memory-management information
Accounting information
I/O status information
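The fields listed above can be pictured as members of a single structure. Below is a minimal, hypothetical C sketch of a PCB; the field names and types are assumptions made for illustration and do not reflect the layout of any real kernel (a real PCB, such as Linux's task_struct, holds far more).

    /* Illustrative sketch only -- field choices are assumptions */
    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    typedef struct pcb {
        int            pid;              /* unique process identifier             */
        proc_state_t   state;            /* current process state                 */
        unsigned long  program_counter;  /* address of the next instruction       */
        unsigned long  registers[16];    /* saved general-purpose CPU registers   */
        int            priority;         /* CPU-scheduling information            */
        void          *page_table;       /* memory-management information         */
        unsigned long  cpu_time_used;    /* accounting information                */
        int            open_files[16];   /* I/O status: open file descriptors     */
        struct pcb    *next;             /* link field for whatever queue it is on */
    } pcb_t;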

16 Process Control Block (PCB)

17 CPU Switch From Process to Process

18 Process Scheduling The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization, while the objective of time sharing is to switch the CPU among processes so frequently that each user can interact with each program while it is running The process scheduler selects an available process for execution on the CPU

19 Process Scheduling Queues
Job queue – set of all processes in the system
Ready queue – set of all processes residing in main memory, ready and waiting to execute; generally stored as a linked list (see the sketch below)
Device queues – set of processes waiting for an I/O device; each device has its own device queue
Processes migrate among the various queues
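As a rough illustration of storing the ready queue as a linked list, the sketch below builds a FIFO queue from the hypothetical pcb_t structure sketched earlier; the helper names are assumptions made for this example, not an actual OS interface.

    /* Hypothetical FIFO ready queue of PCBs, linked through pcb->next */
    typedef struct {
        pcb_t *head;   /* process the dispatcher will pick next */
        pcb_t *tail;   /* where newly ready processes are added */
    } ready_queue_t;

    /* Append a process that has just become ready */
    void enqueue_ready(ready_queue_t *q, pcb_t *p) {
        p->next  = NULL;
        p->state = READY;
        if (q->tail) q->tail->next = p;
        else         q->head = p;
        q->tail = p;
    }

    /* Remove the process at the head so it can be dispatched */
    pcb_t *dequeue_ready(ready_queue_t *q) {
        pcb_t *p = q->head;
        if (p) {
            q->head = p->next;
            if (q->head == NULL) q->tail = NULL;
            p->next = NULL;
        }
        return p;
    }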

20 Ready Queue And Various I/O Device Queues

21 Process Scheduling Queues
A new process is initially placed in the ready queue
It waits there until it is selected for execution
Once the CPU is allocated and the process is executing, one of the following events can occur:
The process could issue an I/O request and then be placed in an I/O queue
The process could create a new subprocess and wait for its termination
The process could be removed forcibly from the CPU, as a result of an interrupt, and put back in the ready queue
A process switches between the waiting and ready queues until it terminates
When a process terminates, it is removed from all queues, and its PCB and resources are deallocated

22 Representation of Process Scheduling

23 Schedulers Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue
Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU
The long-term scheduler executes much less frequently than the short-term scheduler

24 Schedulers Short-term scheduler is invoked very frequently (milliseconds), so it must be fast
Long-term scheduler is invoked very infrequently (seconds, minutes), so it may be slow
The long-term scheduler controls the degree of multiprogramming
Processes can be described as either:
I/O-bound process – spends more time doing I/O than computations; many short CPU bursts
CPU-bound process – spends more time doing computations; few very long CPU bursts
The system with the best performance has a combination of I/O-bound and CPU-bound processes

25 Schedulers Some operating systems may introduce an intermediate level of scheduling, called the medium-term scheduler
It can be advantageous to remove processes from memory and thus reduce the degree of multiprogramming
Later, the process can be reintroduced into memory and its execution continued where it left off
This scheme is called swapping; a process is swapped out and later swapped in

26 Addition of Medium Term Scheduling

27 Context Switch When the CPU switches to another process, the system must save the state of the old process in its PCB and load the saved state of the new process; this is called a context switch
Context-switch time is overhead; the system does no useful work while switching
Its speed varies from machine to machine (memory speed, number of registers that must be copied, existence of special instructions)
The time is dependent on hardware support (if the processor provides multiple sets of registers, each process's context is kept in its own register set, and a context switch simply changes the pointer to the current register set)

28 Process Creation A parent process creates child processes using a create system call; these, in turn, may create other processes, forming a tree of processes
Each process is identified by a unique process identifier
Resource sharing options:
Parent and children share all resources
Children share a subset of the parent's resources (prevents any process from overloading the system by creating too many child processes)
Parent and child share no resources
Execution options:
Parent and children execute concurrently
Parent waits until the children terminate

29 Process Creation Initialization data (input) may be passed from the parent process to the child process, e.g. the name of a file to open
Address space options:
Child is a duplicate of the parent
Child has a new program loaded into it
UNIX examples:
fork system call creates a new process
exec system call is used after a fork to replace the process's memory space with a new program

30 Process Creation

31 C Program Forking Separate Process
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid;

    /* fork another process */
    pid = fork();
    if (pid < 0) {              /* error occurred */
        fprintf(stderr, "Fork Failed\n");
        exit(-1);
    }
    else if (pid == 0) {        /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else {                      /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete\n");
        exit(0);
    }
    return 0;
}

32 A tree of processes on a typical Solaris

33 Process Termination A process executes its last statement and asks the operating system to delete it by using the exit system call
Output data may be returned from the child to the parent (retrieved via wait)
The process's resources are deallocated by the operating system
A parent may terminate the execution of its child processes (abort), for example when:
The child has exceeded its allocated resources (the parent inspects the state of its children)
The task assigned to the child is no longer required
The parent itself is exiting
Some operating systems do not allow a child to continue if its parent terminates; all children are terminated – cascading termination, initiated by the operating system
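As a small, hedged illustration of exit and wait (not code from the book), the sketch below shows a parent collecting a child's exit status; once wait returns, the operating system can deallocate the child's PCB.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        else if (pid == 0) {
            /* child: finish and hand a status code back to the parent */
            exit(7);
        }
        else {
            int status;
            wait(&status);          /* parent blocks until the child terminates */
            if (WIFEXITED(status))
                printf("Child exited with status %d\n", WEXITSTATUS(status));
        }
        return 0;
    }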

34 Interprocess Communication
Processes executing concurrently in the operating system may be either independent processes or cooperating processes
Independent process – a process that cannot affect or be affected by the other processes executing in the system
Cooperating process – a process that can affect or be affected by the other processes executing in the system, e.g. a process sharing data with other processes

35 Interprocess Communication
Advantages of process cooperation:
Information sharing
Computation speedup (achieved through multiple CPUs or I/O devices)
Modularity
Convenience
Cooperating processes require an interprocess communication (IPC) mechanism that allows them to share data and information

36 Interprocess Communication
Two fundamental models of interprocess communication:
Shared Memory Model
Message Passing Model

37 Shared Memory Model Communicating processes establish a region of shared memory, which typically resides in the address space of the process creating the shared-memory segment
Other processes that wish to communicate must attach the segment to their own address space
Normally the operating system prevents one process from accessing another's memory; two or more processes agree to remove this restriction so they can exchange information by reading and writing data in the shared area
The form and location of the data are determined by the processes, not by the operating system
The processes are also responsible for ensuring that they are not writing to the same location simultaneously
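One concrete way to establish such a region on a POSIX system is the shm_open/mmap interface; the sketch below is a hedged, minimal example of a process creating and writing a shared segment (the segment name /demo_shm is made up, and error checking is omitted). A cooperating process would call shm_open and mmap with the same name to read what was written.

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char  *name = "/demo_shm";   /* made-up segment name */
        const size_t size = 4096;

        /* create (or open) the shared-memory object and set its size */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
        ftruncate(fd, size);

        /* map the segment into this process's address space */
        char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        /* any process that maps "/demo_shm" sees this data */
        strcpy(ptr, "hello from the producer");

        munmap(ptr, size);
        close(fd);
        /* the creator would call shm_unlink(name) when the segment is no longer needed */
        return 0;
    }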

38 Producer Consumer Problem
Paradigm for cooperating processes
A producer process produces information that is consumed by a consumer process, e.g. a compiler produces assembly code that is consumed by an assembler
Useful metaphor for the client-server paradigm
Producer and consumer use shared memory and run concurrently
There must be a buffer of items that is filled by the producer and emptied by the consumer
The producer and consumer must be synchronized (the consumer must not try to consume an item that has not yet been produced)

39 Contd… Two types of buffer can be used:
Unbounded buffer: no limit on the size of the buffer; the producer can always produce new items, but the consumer may have to wait for a new item
Bounded buffer: a fixed-size buffer; the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full

40 Bounded-Buffer – Shared-Memory Solution
Shared data:

#define BUFFER_SIZE 10

typedef struct {
    . . .
} item;

item buffer[BUFFER_SIZE];
int in = 0;    /* next free position  */
int out = 0;   /* first full position */

Solution is correct, but can only use BUFFER_SIZE-1 elements

41 Bounded-Buffer – Insert() Method
item next_produced;

while (true) {
    /* produce an item in next_produced */

    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* do nothing -- no free buffers */

    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

42 Bounded Buffer – Remove() Method
item next_consumed;

while (true) {
    while (in == out)
        ;   /* do nothing -- nothing to consume */

    /* remove an item from the buffer */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;

    /* consume the item in next_consumed */
}

43 Message Passing System
Provides a mechanism for cooperating processes to communicate and synchronize their actions
Particularly useful in a distributed environment
Provides at least two operations:
send(message) – messages can be of fixed or variable size
receive(message)
If P and Q wish to communicate, they need to:
establish a communication link between them
exchange messages via send/receive
Implementation of the communication link:
physical (e.g. shared memory, hardware bus)
logical (e.g. logical properties)
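As an informal illustration (not the book's notation), an ordinary POSIX pipe between a parent and a child already behaves like a simple one-directional send/receive link; the sketch below is a hedged example with minimal error handling.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                    /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {           /* child acts as the receiver */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof buf - 1);   /* receive(message) */
            if (n > 0) {
                buf[n] = '\0';
                printf("received: %s\n", buf);
            }
            close(fd[0]);
        } else {                     /* parent acts as the sender */
            const char *msg = "hello";
            close(fd[0]);
            write(fd[1], msg, strlen(msg));                 /* send(message) */
            close(fd[1]);
            wait(NULL);
        }
        return 0;
    }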

44 Communications Models

45 Direct Communication Processes must name each other explicitly:
send(P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
Properties of the communication link:
Links are established automatically
A link is associated with exactly one pair of communicating processes
Between each pair there exists exactly one link
The link may be unidirectional, but is usually bi-directional
This is the symmetric scheme of addressing

46 Contd… In asymmetric addressing, only the sender names the recipient; the recipient is not required to name the sender
send(P, message) – send a message to process P
receive(id, message) – receive a message from any process; the variable id is set to the name of the process with which the communication takes place
In both schemes, hard-coding process identifiers is less desirable than techniques using indirection

47 Indirect Communication
Messages are directed to and received from mailboxes (also referred to as ports)
A mailbox can be thought of as an object into which messages can be placed by processes and from which messages can be removed
Each mailbox has a unique id
Processes can communicate only if they share a mailbox
Properties of the communication link:
A link is established only if the processes share a common mailbox
A link may be associated with many processes
Each pair of processes may share several communication links
A link may be unidirectional or bi-directional

48 Contd… A mailbox may be owned either by a process or by the operating system
Operations:
create a new mailbox
send and receive messages through the mailbox
destroy a mailbox
Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
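POSIX message queues are one concrete realization of such a mailbox; the hedged sketch below creates a mailbox, sends to it, and receives from it within one process for brevity (the name /mailbox_A is made up, and error handling is omitted). Any other process that opened the same name could be the receiver instead.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* create a new mailbox (made-up name); on Linux, link with -lrt */
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
        mqd_t mq = mq_open("/mailbox_A", O_CREAT | O_RDWR, 0666, &attr);

        /* send(A, message) */
        const char *msg = "hello";
        mq_send(mq, msg, strlen(msg) + 1, 0);

        /* receive(A, message) */
        char buf[64];
        mq_receive(mq, buf, sizeof buf, NULL);
        printf("got: %s\n", buf);

        /* destroy the mailbox */
        mq_close(mq);
        mq_unlink("/mailbox_A");
        return 0;
    }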

49 Indirect Communication
Mailbox sharing: P1, P2, and P3 share mailbox A
P1 sends; P2 and P3 receive
Who gets the message?
Solutions:
Allow a link to be associated with at most two processes
Allow only one process at a time to execute a receive operation
Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was

50 Synchronization Message passing may be either blocking or non-blocking, also known as synchronous or asynchronous
Blocking is considered synchronous:
A blocking send has the sender block until the message is received
A blocking receive has the receiver block until a message is available
Non-blocking is considered asynchronous:
A non-blocking send has the sender send the message and continue
A non-blocking receive has the receiver retrieve either a valid message or null
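A hedged sketch of the difference, using an ordinary pipe: with O_NONBLOCK set on the read end, a receive that would otherwise block returns immediately with no data (the "null" case above).

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);

        /* switch the read end to non-blocking mode */
        fcntl(fd[0], F_SETFL, O_NONBLOCK);

        char buf[32];
        ssize_t n = read(fd[0], buf, sizeof buf);   /* nothing has been written yet */

        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            printf("non-blocking receive: no message available, returned at once\n");

        /* without O_NONBLOCK, the same read() would block until data arrived */
        close(fd[0]);
        close(fd[1]);
        return 0;
    }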

51 Buffering Queues of messages can be implemented in three ways:
Zero capacity: maximum length of zero; the link cannot have any messages waiting in it, so the sender must block until the recipient receives the message; also referred to as a message system with no buffering
Bounded capacity: finite length n; if the link is full, the sender must block until space is available in the queue
Unbounded capacity: length is potentially infinite, so any number of messages can wait in it and the sender never blocks
Bounded and unbounded capacity are referred to as automatic buffering

52 Client Server Communication
Communication in a client-server system can be done through:
Sockets
Remote Procedure Calls (RPC)
Remote Method Invocation (RMI)

53 Client Server System Communication in a client-server system can be done through:
Sockets
Remote Procedure Calls (RPC)
Remote Method Invocation (RMI)

54 Sockets A socket is defined as an endpoint for communication
Identified by the concatenation of an IP address and a port number
A socket address such as host:1625 refers to port 1625 on that host
Communication takes place between a pair of sockets, one for each process
The IP address 127.0.0.1 is a special address known as the loopback address; when a computer refers to it, it is referring to itself
Sockets allow only an unstructured stream of bytes to be exchanged between the communicating threads
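A minimal, hedged sketch of a client-side socket in C: it connects to the loopback address on port 1625 (just the example port from the slide; nothing is assumed to be listening there) and exchanges a few raw bytes.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* endpoint = IP address + port: here 127.0.0.1:1625 (example values) */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(1625);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");      /* fails unless a server is listening on that port */
            return 1;
        }

        /* the connected pair of sockets carries an unstructured byte stream */
        const char *req = "hello\n";
        write(fd, req, strlen(req));

        char buf[128];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }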

55 Socket Communication

56 Remote Procedure Calls
Remote procedure call (RPC) abstracts procedure calls between processes on networked systems
The messages exchanged in the RPC mechanism are well structured
Each message is addressed to an RPC daemon listening on a port on the remote system
Each contains an identifier of the function to execute and the parameters to pass to that function
The function is then executed, and any output is sent back to the requester in a separate message

57 Contd… A system can have many ports on one network address in order to differentiate the network services it supports
Stubs – client-side proxy for the actual procedure on the server
The client-side stub locates the server and marshals the parameters
Parameter marshalling involves packaging the parameters into a form that can be transmitted over the network
The server-side stub receives this message, unpacks the marshalled parameters, and performs the procedure on the server

58 Contd… One issue concerns differences in data representation on the client and server machines (e.g. big-endian vs. little-endian)
Solution: many RPC systems define a machine-independent representation of data; one such representation is External Data Representation (XDR)
Another issue involves the semantics of a call: an RPC can fail, or be duplicated and executed more than once, because of network errors
Solution: the operating system ensures that messages are acted on exactly once
For "at most once" semantics, attach a timestamp to each message so repeated messages are ignored
For "exactly once" semantics, the server must implement the "at most once" protocol and must also acknowledge to the client that the call was received and executed
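As a tiny, hedged illustration of a machine-independent representation, the sketch below manually marshals two integer parameters into network byte order (big-endian), roughly what an XDR-style client stub does before transmission; the helper names are made up for this example.

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical client-stub helper: pack two int32 parameters in
       network byte order so big- and little-endian machines agree on the wire */
    static size_t marshal_add_args(int32_t a, int32_t b, unsigned char *buf)
    {
        uint32_t na = htonl((uint32_t)a);
        uint32_t nb = htonl((uint32_t)b);
        memcpy(buf,     &na, sizeof na);
        memcpy(buf + 4, &nb, sizeof nb);
        return 8;                        /* bytes written */
    }

    /* Hypothetical server-stub helper: unpack the same parameters */
    static void unmarshal_add_args(const unsigned char *buf, int32_t *a, int32_t *b)
    {
        uint32_t na, nb;
        memcpy(&na, buf,     sizeof na);
        memcpy(&nb, buf + 4, sizeof nb);
        *a = (int32_t)ntohl(na);
        *b = (int32_t)ntohl(nb);
    }

    int main(void)
    {
        unsigned char wire[8];
        marshal_add_args(2, 40, wire);

        int32_t x, y;
        unmarshal_add_args(wire, &x, &y);
        printf("unmarshalled: %d + %d = %d\n", x, y, x + y);
        return 0;
    }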

59 Contd… Another important issue concerns the binding between client and server: how does a client know the port number of the desired service on the server?
Solution: the binding information may be predetermined, in the form of fixed port addresses; once a program is compiled, the server cannot change the port number of the requested service
Alternatively, binding can be done dynamically by a rendezvous mechanism: the client first sends a message asking for the port number of the desired service; this requires extra overhead but is more flexible

60 Execution of RPC

61 Remote Method Invocation
Remote Method Invocation (RMI) is a Java mechanism similar to RPC. RMI allows a Java program on one machine to invoke a method on a remote object. The remote object may live in a different Java virtual machine on the same computer or on a different system.

62 Marshalling Parameters

63 Behavior of Parameter Passing
If the marshalled parameters are local (or nonremote) objects, they are passed by copy using a technique known as object serialization. However, if the parameters are also remote objects, they are passed by reference. Object serialization allows the state of an object to be written to a byte stream.

64 Difference between RPC and RMI
RPC supports procedural programming (only remote procedures or functions can be called), whereas RMI is object-based (it supports invocation of methods on remote objects)
The parameters passed to a remote procedure are ordinary data structures, whereas in RMI it is possible to pass objects as parameters to remote methods

65 ?

