Chapter 3: Processes

Chapter 3: Processes Process Concept Process Scheduling Operations on Processes Cooperating Processes Interprocess Communication Communication in Client-Server Systems

Process Concept An operating system executes a variety of programs: batch systems run jobs; time-shared systems run user programs or tasks. The textbook uses the terms job and process almost interchangeably. Process – a program in execution; process execution must progress in sequential fashion. A process includes: the program counter, which represents the current activity; the stack, which holds temporary data such as function parameters, local variables, and return addresses; the data section, which contains global variables; the heap, which contains memory that is dynamically allocated during process run time; and the text section, which contains the application's code and may be shared by a number of processes. Together these form the process's address space.
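As an illustration (not part of the original slides), the following minimal C program shows where the different kinds of data live in a process's address space; the variable and function names are arbitrary:

    #include <stdio.h>
    #include <stdlib.h>

    int global_counter = 0;                 /* data section: global variable */

    void show_layout(void)                  /* the function's code sits in the text section */
    {
        int local = 42;                               /* stack: local variable */
        int *dynamic = malloc(sizeof *dynamic);       /* heap: run-time allocation */
        *dynamic = 7;

        printf("text:  %p\n", (void *)show_layout);
        printf("data:  %p\n", (void *)&global_counter);
        printf("heap:  %p\n", (void *)dynamic);
        printf("stack: %p\n", (void *)&local);

        free(dynamic);
    }

    int main(void)
    {
        show_layout();
        return 0;
    }

Printing the addresses makes the separate regions visible; the exact values differ from run to run on systems with address-space layout randomization.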

Process in Memory

Process State As a process executes, it changes state: new – the process is being created; running – instructions are being executed; waiting – the process is waiting for some event to occur; ready – the process is waiting to be assigned to a processor; terminated – the process has finished execution.
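As a small sketch only (the names below are illustrative, not from the slides), the five states can be captured in a C enumeration with a helper for printing them:

    /* The five process states from the state diagram. */
    enum proc_state { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED };

    /* Return a printable name for a state. */
    const char *state_name(enum proc_state s)
    {
        switch (s) {
        case P_NEW:        return "new";
        case P_READY:      return "ready";
        case P_RUNNING:    return "running";
        case P_WAITING:    return "waiting";
        case P_TERMINATED: return "terminated";
        default:           return "unknown";
        }
    }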

Diagram of Process State

Process Control Block (PCB) Information associated with each process: process state; program counter, which indicates the address of the next instruction; CPU registers; CPU-scheduling information, which includes the process priority and pointers to scheduling queues; memory-management information, which includes the values of the base and limit registers; accounting information, which includes the amount of CPU time used, time limits, etc.; I/O status information, which includes the list of I/O devices allocated to the process and the list of open files.
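A hypothetical C declaration of a PCB might look like the sketch below; real kernels keep far more fields (Linux's task_struct, for instance), and the names here are invented purely for illustration:

    #include <sys/types.h>

    #define MAX_OPEN_FILES 16

    struct pcb {
        pid_t pid;                        /* process identifier */
        int   state;                      /* new, ready, running, waiting, terminated */
        void *program_counter;            /* address of the next instruction */
        unsigned long registers[16];      /* saved CPU register contents */
        int   priority;                   /* CPU-scheduling information */
        struct pcb *next;                 /* link to the next PCB in a scheduling queue */
        unsigned long base, limit;        /* memory-management: base/limit register values */
        unsigned long cpu_time_used;      /* accounting information */
        int   open_files[MAX_OPEN_FILES]; /* I/O status: descriptors of open files */
    };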

Process Control Block (PCB)

OS Data Structures [Figure: the process address space in RAM (text, data, heap, stack segments), the CPU's registers (general-purpose registers, IR, PC, SP, PSW), and the PCB kept by the kernel holding the saved register contents together with state, memory, files, accounting, and priority information.]

CPU Switch From Process to Process

Process Scheduling Queues Job queue – set of all processes in the system. Ready queue – set of all processes residing in main memory, ready and waiting to execute; it is stored as a linked list whose header contains pointers to the first and last PCBs, and each PCB includes a pointer field that points to the next PCB in the ready queue. Device queues – set of processes waiting for an I/O device. Processes migrate among the various queues.
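A minimal sketch (with invented names) of the ready queue as a linked list of PCBs, enqueuing at the tail and dispatching from the head:

    #include <stddef.h>

    struct pcb {                            /* minimal PCB for this sketch */
        int pid;
        struct pcb *next;                   /* next PCB in the ready queue */
    };

    static struct pcb *ready_head = NULL;   /* first PCB in the ready queue */
    static struct pcb *ready_tail = NULL;   /* last PCB in the ready queue */

    /* Add a process to the back of the ready queue. */
    void ready_enqueue(struct pcb *p)
    {
        p->next = NULL;
        if (ready_tail == NULL)
            ready_head = ready_tail = p;
        else {
            ready_tail->next = p;
            ready_tail = p;
        }
    }

    /* Remove and return the process at the front, or NULL if the queue is empty. */
    struct pcb *ready_dequeue(void)
    {
        struct pcb *p = ready_head;
        if (p != NULL) {
            ready_head = p->next;
            if (ready_head == NULL)
                ready_tail = NULL;
        }
        return p;
    }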

Ready Queue And Various I/O Device Queues

Representation of Process Scheduling

Schedulers Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue. Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU. Medium-term scheduler (swapper) – temporarily removes (swaps out) processes from memory and later reintroduces them, reducing the degree of multiprogramming when needed.

Addition of Medium Term Scheduling

Schedulers (Cont.) The short-term scheduler is invoked very frequently (milliseconds) and must therefore be fast. The long-term scheduler is invoked very infrequently (seconds, minutes) and may be slow; it controls the degree of multiprogramming. Processes can be described as either: I/O-bound – spends more time doing I/O than computations; many short CPU bursts. CPU-bound – spends more time doing computations; few very long CPU bursts. If all processes are I/O bound, the ready queue will almost always be empty and the short-term scheduler will have little to do; if all processes are CPU bound, the I/O waiting queue will almost always be empty, devices will go unused, and the system will again be unbalanced.

Context Switch When the CPU switches to another process, the system must save the state of the old process and load the saved state for the new process. Context-switch time is pure overhead; the system does no useful work while switching. The time depends on hardware support. The part of the OS responsible for switching the processor among processes is called the dispatcher.
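Conceptually, the dispatcher's register save/restore can be sketched as below. In a real kernel the two copies are architecture-specific assembly routines; here they are simulated with memcpy against a stand-in register file so the sketch stays self-contained, and all names are illustrative:

    #include <string.h>

    struct context { unsigned long regs[16], pc, sp; };

    static struct context cpu;              /* stand-in for the real CPU register file */

    enum { READY = 0, RUNNING = 1 };

    struct task {
        int state;
        struct context ctx;                 /* saved CPU state kept in the PCB */
    };

    /* Switch the CPU from the current task to the next one. */
    void context_switch(struct task *current, struct task *next)
    {
        memcpy(&current->ctx, &cpu, sizeof cpu);   /* save state of the old process */
        current->state = READY;

        next->state = RUNNING;
        memcpy(&cpu, &next->ctx, sizeof cpu);      /* reload saved state of the new process */
    }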

Two-state process model [Figure: processes Enter a queue, are Dispatched to the processor, and either Exit or Pause back into the queue.] The dispatcher is now redefined: it moves processes to the waiting queue, removes completed or aborted processes, and selects the next process to run.

How process state changes (slides 1–9 of an animated sequence) [Figure sequence: three processes are shown as short program traces of assignments and file reads, with an arrow marking each process's next instruction.] The states of the three processes and the triggering events across the sequence are:
1. Ready, Ready, Ready.
2. Running, Ready, Ready – a timeout occurs.
3. Ready, Running, Ready – the running process issues an I/O request (read a file).
4. Ready, Blocked, Running – a timeout occurs.
5. Running, Blocked, Ready – the running process issues an I/O request.
6. Blocked, Blocked, Running – the next process to run cannot simply be selected from the front of the queue.
7. Blocked, Blocked, Running – suppose the Green process finishes its I/O.
8. Blocked, Ready, Running – a timeout occurs.
9. Blocked, Running, Ready – a timeout occurs.

Process Creation Reasons to create a process: submission of a new batch job / start of a program; a user logs on to the system; the OS creates a process on behalf of a user (e.g., for printing); a process is spawned by an existing process. A parent process creates child processes, which in turn create other processes, forming a tree of processes. Resource sharing possibilities: parent and children share all resources; children share a subset of the parent's resources; parent and child share no resources. Execution possibilities: parent and children execute concurrently; parent waits until children terminate.

Process Creation (Cont.) Address space possibilities: the child is a duplicate of the parent, or the child has a new program loaded into it. UNIX examples: the fork system call creates a new process; the exec system call is used after a fork to replace the process's memory space with a new program.

Process Creation

C Program Forking Separate Process

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid;

        /* fork another process */
        pid = fork();
        if (pid < 0) {                /* error occurred */
            fprintf(stderr, "Fork Failed\n");
            exit(-1);
        }
        else if (pid == 0) {          /* child process */
            execlp("/bin/ls", "ls", NULL);
        }
        else {                        /* parent process */
            /* parent will wait for the child to complete */
            wait(NULL);
            printf("Child Complete\n");
            exit(0);
        }
        return 0;
    }

The fork() system call At the end of the system call there is a new process waiting to run once the scheduler chooses it, and a new data structure (its PCB) has been allocated. The new process is called the child process; the existing process is called the parent process. The parent gets the child's pid returned to it; the child gets 0 returned to it. Both parent and child execute at the same point after fork() returns.

Unix Process Control The fork syscall returns zero to the child and the child's process ID to the parent:

    int pid;
    int status = 0;

    if ((pid = fork()) != 0) {
        /* parent */
        /* …… */
        pid = wait(&status);
    } else {
        /* child */
        exit(status);
    }

The parent uses wait to sleep until the child exits; wait returns the child's pid and status. Wait variants allow waiting on a specific child, or notification of stops and other signals. fork creates an exact copy of the parent process. The child process passes status back to the parent on exit, to report success or failure.

Process Termination Reasons a process terminates: a batch job issues a halt instruction; the user logs off; the process executes a service request to terminate; the parent terminates, so its child processes terminate; operating-system intervention, such as when a deadlock occurs; error and fault conditions, e.g., memory unavailable, protection error, arithmetic error, I/O failure, invalid instruction.

Process Termination The process executes its last statement and asks the operating system to delete it (exit): output data is passed from child to parent (via wait), and the process's resources are deallocated by the operating system. A parent may terminate execution of its children (abort) when: the child has exceeded its allocated resources; the task assigned to the child is no longer required; or the parent itself is exiting – some operating systems do not allow a child to continue if its parent terminates, in which case all children are terminated (cascading termination).
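A small example of the "output data from child to parent (via wait)" point (the status value 7 is arbitrary): the child hands back a status code through exit(), and the parent retrieves it with wait() and the standard WIFEXITED/WEXITSTATUS macros:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {
            exit(7);                       /* child: report a result code and terminate */
        } else {
            int status;
            wait(&status);                 /* parent sleeps until the child exits */
            if (WIFEXITED(status))
                printf("child %d exited with status %d\n",
                       (int)pid, WEXITSTATUS(status));
        }
        return 0;
    }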

Interprocess Communication Processes executing concurrently in the operating system may be either independent processes or cooperating processes. An independent process cannot affect or be affected by the execution of another process; a cooperating process can affect or be affected by the execution of another process.

Advantages of Process Cooperation Information sharing: allows concurrent access to the same piece of information that several users may be interested in. Computation speed-up: a task is broken into subtasks, each of which executes in parallel with the others (this requires multiple processing elements, such as CPUs). Modularity: the system functions are divided into separate processes or threads. Convenience: an individual user may work on many tasks at the same time.

MODELS OF IPC Cooperating processes require an interprocess communication mechanism that allows them to exchange data and information. The two fundamental models are shared memory and message passing.

SHARED MEMORY A region of memory that is shared by the cooperating processes is established. The processes can then exchange information by reading and writing data to the shared region.
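As a sketch of the shared-memory model using the POSIX shared-memory API (the object name "/demo_shm" and the message text are made up for illustration; on older Linux systems, link with -lrt):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define SHM_NAME "/demo_shm"
    #define SHM_SIZE 4096

    int main(void)
    {
        /* create (or open) the shared-memory object and set its size */
        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, SHM_SIZE) < 0) { perror("ftruncate"); return 1; }

        /* map the region into this process's address space */
        char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (region == MAP_FAILED) { perror("mmap"); return 1; }

        /* writing to the region is an ordinary memory access - no system call */
        strcpy(region, "hello from the producer");

        /* a second process would shm_open the same name (without O_CREAT),
           mmap it, and simply read the string from the region */
        munmap(region, SHM_SIZE);
        close(fd);
        /* shm_unlink(SHM_NAME) removes the object once it is no longer needed */
        return 0;
    }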

MESSAGE PASSING Communication takes place by means of messages exchanged between the cooperating processes.

Communications Models

Advantages & Disadvantages of Message Passing & Shared Memory Message passing is useful for exchanging smaller amounts of data, and it is easier to implement than shared memory, particularly for communication between computers. Shared memory allows maximum speed and convenience, since communication can be done at memory speeds within a computer; it is faster than message passing, because message passing is typically implemented using system calls.

Contd. Message passing therefore requires the more time-consuming task of kernel intervention. With shared memory, system calls are required only to establish the shared-memory region; once it is established, no assistance from the kernel is required.

Producer-Consumer Problem A paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process. For example, a compiler may produce assembly code, which is consumed by an assembler; the assembler, in turn, may produce object modules, which are consumed by the loader. A buffer of items that can be filled by the producer and emptied by the consumer must be available, so that the producer can produce one item while the consumer is consuming another. The producer and consumer must be synchronized so that the consumer does not try to consume an item that has not yet been produced; the consumer must wait until an item is produced.

Producer-Consumer Problem unbounded-buffer places no practical limit on the size of the buffer the consumer may have to wait for new items, but the producer can always produce new items bounded-buffer assumes that there is a fixed buffer size the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.

Bounded-Buffer – Shared-Memory Solution Shared data:

    #define BUFFER_SIZE 10

    typedef struct {
        . . .
    } item;

    item buffer[BUFFER_SIZE];
    int in = 0;     /* in: the next free position in the buffer */
    int out = 0;    /* out: the first full position in the buffer */

The buffer is empty when in == out; the buffer is full when ((in + 1) % BUFFER_SIZE) == out. The solution is correct, but it can use only BUFFER_SIZE - 1 elements.

Bounded-Buffer – insert() Method The producer process has a local variable nextProduced in which the new item to be produced is stored:

    while (true) {
        /* produce an item in nextProduced */
        while (((in + 1) % BUFFER_SIZE) == out)
            ;   /* do nothing -- no free buffers */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
    }

Bounded-Buffer – remove() Method The consumer process has a local variable nextConsumed in which the item to be consumed is stored:

    while (true) {
        while (in == out)
            ;   /* do nothing -- nothing to consume */
        /* remove an item from the buffer */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        /* consume the item in nextConsumed */
    }

Message Passing A mechanism for processes to communicate and to synchronize their actions. Message system – processes communicate with each other without resorting to shared variables. The IPC facility provides two operations: send(message) – message size may be fixed or variable; receive(message). If P and Q wish to communicate, they need to: establish a communication link between them; exchange messages via send/receive. Implementation of the communication link: physical (e.g., shared memory, hardware bus) or logical (defined by its logical properties and realized through the send() and receive() operations).
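As an illustration of the message-passing model (not a slide from the original deck), an ordinary POSIX pipe already provides a one-directional link in which send and receive are realized by the write and read system calls:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];                          /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) < 0) { perror("pipe"); exit(1); }

        pid_t pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }

        if (pid == 0) {                     /* child: the receiver */
            char buf[64];
            close(fd[1]);                   /* will not write */
            ssize_t n = read(fd[0], buf, sizeof buf - 1);     /* "receive" */
            if (n > 0) { buf[n] = '\0'; printf("received: %s\n", buf); }
            close(fd[0]);
        } else {                            /* parent: the sender */
            const char *msg = "hello via message passing";
            close(fd[0]);                   /* will not read */
            write(fd[1], msg, strlen(msg)); /* "send" */
            close(fd[1]);
            wait(NULL);
        }
        return 0;
    }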

Implementation Questions How are links established? Can a link be associated with more than two processes? How many links can there be between every pair of communicating processes? What is the capacity of a link? Is the size of a message that the link can accommodate fixed or variable? Is a link unidirectional or bi-directional?

Methods for logical implementation of link Direct or Indirect Communication Synchronous and Asynchronous Communication Automatic or explicit Buffering

Direct Communication Processes must name each other explicitly: send(P, message) – send a message to process P; receive(Q, message) – receive a message from process Q. Properties of the communication link: links are established automatically, because the processes need to know only each other's identities to communicate; a link is associated with exactly one pair of communicating processes; between each pair there exists exactly one link; the link may be unidirectional, but is usually bi-directional.

Indirect Communication Messages are sent to and received from mailboxes (also referred to as ports). Each mailbox has a unique id, and processes can communicate only if they share a mailbox. Properties of the communication link: a link is established only if the processes share a common mailbox; a link may be associated with many processes; each pair of processes may share several communication links; a link may be unidirectional or bi-directional.

Indirect Communication Operations create a new mailbox send and receive messages through mailbox destroy a mailbox Primitives are defined as: send(A, message) – send a message to mailbox A receive(A, message) – receive a message from mailbox A
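A sketch of these mailbox primitives using POSIX message queues (the queue name "/demo_mailbox" is illustrative; on older Linux systems, link with -lrt): mq_open creates the mailbox, mq_send and mq_receive play the roles of send(A, message) and receive(A, message), and mq_unlink destroys it.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    #define MAILBOX "/demo_mailbox"

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };

        /* create a new mailbox */
        mqd_t mq = mq_open(MAILBOX, O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        /* send(A, message) */
        const char *msg = "hello mailbox";
        if (mq_send(mq, msg, strlen(msg) + 1, 0) < 0) perror("mq_send");

        /* receive(A, message) -- the buffer must be at least mq_msgsize bytes */
        char buf[128];
        if (mq_receive(mq, buf, sizeof buf, NULL) >= 0)
            printf("received: %s\n", buf);

        mq_close(mq);
        mq_unlink(MAILBOX);                 /* destroy the mailbox */
        return 0;
    }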

Indirect Communication Mailbox sharing: P1, P2, and P3 share mailbox A; P1 sends, while P2 and P3 receive. Who gets the message? Solutions: allow a link to be associated with at most two processes; allow only one process at a time to execute a receive operation; or allow the system to select the receiver arbitrarily and notify the sender who the receiver was.

Indirect Communication If the mailbox is owned by a process (that is, the mailbox is part of the address space of the process), then we distinguish between the owner (who can only receive messages through this mailbox) and the user (who can only send messages to the mailbox). Since each mailbox has a unique owner, there can be no confusion about who should receive a message sent to this mailbox. When a process that owns a mailbox terminates, the mailbox disappears. Any process that subsequently sends a message to this mailbox must be notified that the mailbox no longer exists.

Indirect Communication A mailbox owned by the operating system, by contrast, is independent and is not attached to any particular process. The operating system must then provide a mechanism that allows a process to: create a new mailbox; send and receive messages through the mailbox; and delete a mailbox. Note: the ownership and the receive privilege may be passed to other processes through appropriate system calls.

Synchronization Message passing may be either blocking or non-blocking Blocking is considered synchronous Blocking send has the sender block until the message is received Blocking receive has the receiver block until a message is available Non-blocking is considered asynchronous Non-blocking send has the sender send the message and continue Non-blocking receive has the receiver receive a valid message or null
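As one concrete illustration of a non-blocking receive (an addition to the slides, not part of them): a pipe's read end can be marked O_NONBLOCK with fcntl, after which read returns immediately with EAGAIN instead of blocking when no message is available:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) < 0) { perror("pipe"); return 1; }

        /* make the read end non-blocking */
        int flags = fcntl(fd[0], F_GETFL, 0);
        fcntl(fd[0], F_SETFL, flags | O_NONBLOCK);

        char buf[32];
        ssize_t n = read(fd[0], buf, sizeof buf);   /* nothing has been written yet */
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            printf("no message available, receiver continues without waiting\n");

        close(fd[0]);
        close(fd[1]);
        return 0;
    }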

Buffering Queue of messages attached to the link; implemented in one of three ways 1. Zero capacity – 0 messages Sender must wait for receiver (rendezvous) 2. Bounded capacity – finite length of n messages Sender must wait if link full 3. Unbounded capacity – infinite length Sender never waits

Client-Server Communication Sockets Remote Procedure Calls Remote Method Invocation (Java)

Sockets A socket is defined as an endpoint for communication, identified by the concatenation of an IP address and a port number. The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8. Communication takes place between a pair of sockets.
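A minimal sketch of the client side of socket communication in C; the address 161.25.19.8 and port 1625 are simply the values used on the slide, so treat this as an illustration rather than something to run against a real server:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* create an endpoint for communication */
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        /* build the server address: IP address + port */
        struct sockaddr_in server;
        memset(&server, 0, sizeof server);
        server.sin_family = AF_INET;
        server.sin_port   = htons(1625);
        inet_pton(AF_INET, "161.25.19.8", &server.sin_addr);

        /* connecting pairs this socket with one on the server */
        if (connect(sock, (struct sockaddr *)&server, sizeof server) < 0) {
            perror("connect");
            close(sock);
            return 1;
        }

        const char *request = "hello\n";
        send(sock, request, strlen(request), 0);    /* communication between the pair */

        char reply[256];
        ssize_t n = recv(sock, reply, sizeof reply - 1, 0);
        if (n > 0) { reply[n] = '\0'; printf("reply: %s", reply); }

        close(sock);
        return 0;
    }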

Socket Communication

Remote Procedure Calls Remote procedure call (RPC) abstracts procedure calls between processes on networked systems. Stubs – client-side proxy for the actual procedure on the server. The client-side stub locates the server and marshals the parameters; the server-side stub receives this message, unpacks the marshalled parameters, and performs the procedure on the server. A stub is a small program routine that substitutes for a longer program, possibly to be loaded later or located remotely. For example, a program that uses remote procedure calls (RPC) is compiled with stubs that substitute for the program providing the requested procedure. The stub accepts the request and then forwards it (through another program) to the remote procedure; when that procedure has completed its service, it returns the results or other status to the stub, which passes them back to the program that made the request.
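Purely as an illustration of the marshalling step performed by a client-side stub (everything here – the remote add(a, b) procedure, the opcode, and the buffer layout – is invented, not part of any real RPC library): the stub packs the procedure identifier and parameters into a flat message that could then be sent to the server over a socket.

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>                /* htonl: convert to network byte order */

    #define OP_ADD 1                      /* hypothetical opcode for a remote add(a, b) */

    /* Marshal a call to add(a, b) into buf; returns the number of bytes written.
       Layout (all 32-bit values in network byte order): [opcode][a][b]. */
    size_t marshal_add(uint8_t *buf, int32_t a, int32_t b)
    {
        uint32_t words[3] = { htonl(OP_ADD), htonl((uint32_t)a), htonl((uint32_t)b) };
        memcpy(buf, words, sizeof words);
        return sizeof words;
        /* the client stub would now send buf to the server; the server-side stub
           unpacks it with ntohl and invokes the local add() procedure */
    }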

Execution of RPC

End of Chapter 3