
1 Processes Management
1. Processes
2. Threads
3. IPC problems
4. Process Scheduling
5. Deadlock

2 Processes
Multi-programming system:
– Several programs are loaded in main memory, and the CPU switches from program to program.
– This gives pseudo-parallelism, in contrast with the true hardware parallelism of multiprocessor systems.
– Multiprocessor: true hardware parallelism.
[Figure: two users' C source programs taking turns on the CPU]

3 The Process Model
– (a) Multiprogramming of four programs
– (b) Conceptual model of 4 independent, sequential processes (abstract)
– (c) Only one program active at any instant

4 Processes and programs
Process: a program being executed by the CPU on some data set.
– A key idea here is that a process is an activity of some kind. It has a program, input, output, and a state.
Distinguish between a process and a program:
– Dynamic vs. static (most important)
– Temporary vs. permanent
[Figure: analogy: program corresponds to the Modern Operating Systems text, CPU to the teacher, data to the students in Class A and Class B, and each class taught is a separate process]

5 Process Creation
Principal events that cause process creation:
– System initialization
– Execution of a process-creation system call by a running process
– A user request to create a new process
– Initiation of a batch job (mainframes)
Technically, in all these cases, a new process is created by having an existing process execute a process-creation system call:
– UNIX: fork / Win32: CreateProcess (see the sketch below)
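A minimal sketch of process creation on UNIX with fork() (the Win32 counterpart is CreateProcess); the child's behavior here, printing a line and exiting, is purely illustrative and not part of the slide:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* create a new process                */

        if (pid < 0) {                      /* fork failed                          */
            perror("fork");
            exit(1);
        } else if (pid == 0) {              /* child process                        */
            printf("child: pid=%d\n", (int)getpid());
            exit(0);
        } else {                            /* parent process                       */
            waitpid(pid, NULL, 0);          /* wait for the child to terminate      */
            printf("parent: child %d has exited\n", (int)pid);
        }
        return 0;
    }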

6 Process Termination
Conditions that terminate processes:
1. Normal exit (voluntary): the process has done its work
2. Error exit (voluntary): for example, a file it needs does not exist
3. Fatal error (involuntary): for example, dividing by zero
4. Killed by another process (involuntary): some other process executes a system call telling the OS to kill it

7 Process States [ ※※※※※ ]
Possible process states:
– running (actually using the CPU at that instant)
– blocked (unable to run until some external event happens)
– ready (runnable; temporarily stopped to let another process run)

8 The scheduler
About the scheduler:
– It is the lowest layer of a process-structured operating system, handling scheduling and interrupts.
– Above that layer are sequential processes.
– All the interrupt handling and the details of actually starting and stopping processes are hidden away in what is here called the scheduler.

9 Implementation of processes
To implement the process model, the operating system maintains a table (an array of structures), called the process table, with one entry per process. (Some authors call these entries process control blocks (PCBs).) [ ※※※※ ]
Interrupt vector:
– Associated with each I/O device class is a location (often near the bottom of memory) called its interrupt vector.
– It contains the address of the interrupt service procedure.

10 Process table entry
Some of the fields of a typical process table entry
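The table from the slide is not reproduced in this transcript. As a rough illustration only, with hypothetical field names grouped the way such tables usually are, a process table entry might be declared like this:

    struct proc_entry {
        /* process management */
        int   pid;                  /* process identifier                */
        int   state;                /* running, ready, or blocked        */
        int   priority;             /* scheduling priority               */
        long  pc;                   /* saved program counter             */
        long  registers[16];        /* saved CPU registers               */
        long  stack_pointer;        /* saved stack pointer               */
        long  cpu_time_used;        /* accounting information            */

        /* memory management */
        long  text_segment;         /* base of program text              */
        long  data_segment;         /* base of data segment              */
        long  stack_segment;        /* base of stack segment             */

        /* file management */
        int   open_files[20];       /* descriptors of open files         */
        int   root_dir, working_dir;
        int   uid, gid;             /* user and group identity           */
    };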

11 Skeleton of what the lowest level of the OS does when an interrupt occurs
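The figure itself is missing from this transcript; roughly, following the usual textbook description, the sequence is:
1. Hardware stacks the program counter, etc.
2. Hardware loads a new program counter from the interrupt vector.
3. An assembly-language procedure saves the registers.
4. The assembly-language procedure sets up a new stack.
5. The C interrupt service routine runs (typically reading and buffering input).
6. The scheduler decides which process is to run next.
7. The C procedure returns to the assembly code.
8. The assembly-language procedure starts up the new current process.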

12 Threads
The thread model:
The other key concept in a process is its thread of execution, usually shortened to just thread.
Processes are used to group resources together; threads are the entities scheduled for execution on the CPU. [ ※※※※※ ]
– What threads add to the process model is to allow multiple executions to take place in the same process environment, to a large degree independently of one another.
– The threads share an address space, open files, and other resources.

13 Threads
About threads:
– The term multithreading is also used to describe the situation of allowing multiple threads in the same process.
– When a multithreaded process runs on a single-CPU system, the threads take turns running.
– Different threads in a process are not quite as independent as different processes.
– A thread can be in any one of several states: running, blocked, ready, or terminated.

14 Per-process items shared by all threads in a process
Multithreading is used when two or more threads are actually part of the same job and are actively and closely cooperating with each other.
Per-process items shared by all threads in a process:
– Address space
– Global variables
– Open files
– Child processes
– Pending alarms
– Signals and signal handlers
– Accounting information
– Data structures

15 Threads
Per-thread items private to each thread:
– Program counter
– Registers
– Stack
– State
It is important to realize that each thread has its own stack. Each thread will generally call different procedures and thus have a different execution history; this is why each thread needs its own stack.
The transitions between thread states are the same as the transitions between process states.
Library procedures for threads:
– thread_create, thread_exit, thread_yield (give up the CPU); see the sketch below
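As a concrete sketch, the POSIX threads library provides the corresponding calls pthread_create, pthread_exit, and sched_yield; this small example (not from the slides) starts two threads, each of which yields the CPU once and then exits:

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        printf("thread %ld running\n", (long)arg);
        sched_yield();                          /* thread_yield: give up the CPU  */
        pthread_exit(NULL);                     /* thread_exit                    */
    }

    int main(void)
    {
        pthread_t tid[2];
        for (long i = 0; i < 2; i++)
            pthread_create(&tid[i], NULL, worker, (void *)i);  /* thread_create   */
        for (int i = 0; i < 2; i++)
            pthread_join(tid[i], NULL);                        /* wait for both   */
        return 0;
    }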

16 Threads
The reasons for having threads:
– The main reason is that in many applications, multiple activities are going on at once.
– Second, because threads do not have any resources attached to them, they are easier to create and destroy than processes.
– Third, they can improve the performance of the system.
– Finally, threads are useful on systems with multiple CPUs, where real parallelism is possible.

17 Examples of using threads
Example: a multithreaded web server:
– Dispatcher thread: dispatches incoming requests to the workers
– Worker threads: handle the requests (read the requested pages and return them)
– Cache: web servers improve performance by maintaining a collection of heavily used pages in main memory, eliminating the need to go to disk to get them. Such a collection is called a cache and is used in many other contexts as well.
Another example: a word processing program
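A rough outline of the dispatcher/worker structure; the helper names here (get_next_request, hand_off_work, look_in_cache, and so on) are hypothetical placeholders, not a real API:

    /* Dispatcher thread: accepts incoming requests and hands each one to a worker. */
    void dispatcher(void)
    {
        struct request buf;
        while (1) {
            get_next_request(&buf);         /* wait for a client request              */
            hand_off_work(&buf);            /* wake up an idle worker to handle it    */
        }
    }

    /* Worker thread: serves one request, using the in-memory page cache. */
    void worker(void)
    {
        struct request buf;
        struct page    pg;
        while (1) {
            wait_for_work(&buf);            /* block until handed a request           */
            if (!look_in_cache(&buf, &pg))  /* cache hit?                             */
                read_page_from_disk(&buf, &pg);  /* no: fetch the page from disk      */
            return_page(&pg);               /* send the page back to the client       */
        }
    }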

18 Implementing threads in user space
There are two main ways to implement a threads package:
– In user space: the kernel knows nothing about the threads; as far as the kernel is concerned, it is managing ordinary single-threaded processes.
– In the kernel: the kernel knows about and manages the threads.
When threads are managed in user space, each process needs its own private thread table to keep track of the threads in that process.
When the threads package is managed by the kernel, there is one thread table in the kernel.

19 Advantages of user-level threads
Advantages of user-level threads:
– The first, and most obvious, advantage is that a user-level threads package can be implemented on an OS that does not support threads.
– Thread switching does not involve the kernel, so thread scheduling is very fast.
– Each process can have its own customized scheduling algorithm.
– User-level threads scale better, since kernel threads invariably require some table space and stack space in the kernel, which can be a problem if there are a very large number of threads.

20 Major problems of user-level threads
Major problems of user-level threads:
– How blocking system calls are implemented: if one thread blocks, the whole process it belongs to is blocked.
– Once a thread starts running, no other thread in that process will ever run unless the first thread voluntarily gives up the CPU, so on a system with multiple CPUs user-level multithreading cannot exploit the extra processors.
– Programmers generally want threads precisely in applications where the threads block often; with user-level threads it is difficult to switch to another thread when one blocks.

21 Implementing threads in the kernel
Advantages:
– Can take advantage of multiple processors.
– When a thread blocks, the kernel can run another thread from the same process.
– The kernel itself can use multithreading in its own implementation.
Disadvantages:
– All calls that might block a thread are implemented as system calls, at considerable cost.
– Switching threads within the same process requires a trap into the kernel, so much more overhead is incurred.

22 Inter-process communication
Processes frequently need to communicate with each other.
There are three issues here:
– The first is how one process can pass information to another.
– The second has to do with making sure two or more processes do not get into each other's way when engaging in critical activities.
– The third concerns proper sequencing when dependencies are present.

23 Race condition
Two processes want to access shared memory at the same time

24 Critical Regions and Race Conditions
– Race conditions: situations where two or more processes are reading or writing some shared data and the final result depends on who runs precisely when [ ※※※※※ ]
– Spooler directory
– Printer daemon
– Mutual exclusion [ ※※※ ]
– Critical region: the part of the program where the shared memory is accessed; it is also called a critical section. [ ※※※※※ ]

25 Four conditions to provide mutual exclusion
Four conditions to provide mutual exclusion:
– No two processes may be simultaneously inside their critical regions.
– No assumptions may be made about speeds or the number of CPUs.
– No process running outside its critical region may block another process.
– No process should have to wait forever to enter its critical region.

26 Mutual exclusion using critical regions

27 Semaphores
A semaphore can have the value 0, indicating that no wakeups were saved, or some positive value if one or more wakeups are pending.
A semaphore S can be accessed only through two standard atomic operations: down (P) and up (V).
– A semaphore may be initialized to a non-negative value.
– The P operation decrements the semaphore value. If the value becomes negative, the process executing the P is blocked.
– The V operation increments the semaphore value. If the resulting value is not positive, a process blocked by a P operation is unblocked.
– Down and up are both atomic actions.

28 Semaphores
If the value of the semaphore is allowed to be negative: [ ※※※ ]
down(s):
  { s--; if (s < 0) { place this process in s.queue; block this process; } }
up(s):
  { s++; if (s <= 0) { remove a process from s.queue; place that process on the ready list; } }

29 Semaphores
Using a semaphore for a critical section:
semaphore mutex = 1;
void example(void)
{
  while (TRUE) {
    down(mutex);
    critical_section();
    up(mutex);
    noncritical_section();
  }
}
The semaphore's meaning, when s is a semaphore that can go negative:
– When s > 0, s is the number of available resources.
– When s <= 0, |s| is the number of processes waiting for s in the waiting queue.
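For comparison, here is a minimal sketch of the same pattern using real POSIX semaphores, where sem_wait plays the role of down (P) and sem_post the role of up (V); this example is an illustration, not code from the slides:

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t mutex;                     /* binary semaphore protecting the region */
    static long  counter;                   /* shared data                            */

    static void *task(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);               /* down(mutex): enter critical region     */
            counter++;                      /* critical section                       */
            sem_post(&mutex);               /* up(mutex): leave critical region       */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        sem_init(&mutex, 0, 1);             /* initial value 1: region is free        */
        pthread_create(&a, NULL, task, NULL);
        pthread_create(&b, NULL, task, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter); /* always 200000 thanks to the semaphore  */
        return 0;
    }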

30 The producer-consumer problem using semaphores [ ※※※※※ ]
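The figure referenced above is not included in this transcript. A sketch of the classic structure of the solution, using a mutex semaphore for the critical region plus the counting semaphores empty and full; produce_item, insert_item, remove_item, and consume_item are placeholders for the actual buffer code:

    #define N 100                       /* number of slots in the buffer        */

    semaphore mutex = 1;                /* controls access to the buffer        */
    semaphore empty = N;                /* counts empty buffer slots            */
    semaphore full  = 0;                /* counts full buffer slots             */

    void producer(void)
    {
        int item;
        while (TRUE) {
            item = produce_item();      /* generate something to put in buffer  */
            down(empty);                /* wait for an empty slot               */
            down(mutex);                /* enter critical region                */
            insert_item(item);          /* put the item in the buffer           */
            up(mutex);                  /* leave critical region                */
            up(full);                   /* one more full slot                   */
        }
    }

    void consumer(void)
    {
        int item;
        while (TRUE) {
            down(full);                 /* wait for a full slot                 */
            down(mutex);                /* enter critical region                */
            item = remove_item();       /* take an item from the buffer         */
            up(mutex);                  /* leave critical region                */
            up(empty);                  /* one more empty slot                  */
            consume_item(item);         /* use the item                         */
        }
    }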

31 The readers/writers problem
There is a data area shared among a number of processes:
– Any number of readers may simultaneously read the data.
– Only one writer at a time may write to the data.
– If a writer is writing to the data, no reader may read it.
This models access to a database. In this solution, the first reader to get access to the database does a down on the semaphore db. Subsequent readers merely increment a counter, rc. As readers leave, they decrement the counter, and the last one out does an up on the semaphore, allowing a blocked writer, if there is one, to get in.

32 The Readers and Writers Problem
A solution to the readers and writers problem [ ※※※※※ ]
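The referenced solution is not reproduced in this transcript; a sketch of its usual structure follows (read_data_base, write_data_base, use_data_read, and think_up_data are placeholders for the real work):

    semaphore mutex = 1;                /* protects the reader count rc          */
    semaphore db    = 1;                /* controls access to the database       */
    int rc = 0;                         /* number of readers currently reading   */

    void reader(void)
    {
        while (TRUE) {
            down(mutex);                /* get exclusive access to rc            */
            rc = rc + 1;                /* one more reader                       */
            if (rc == 1) down(db);      /* first reader locks out writers        */
            up(mutex);
            read_data_base();           /* access the shared data                */
            down(mutex);
            rc = rc - 1;                /* one reader fewer                      */
            if (rc == 0) up(db);        /* last reader out lets writers in       */
            up(mutex);
            use_data_read();            /* non-critical work                     */
        }
    }

    void writer(void)
    {
        while (TRUE) {
            think_up_data();            /* non-critical work                     */
            down(db);                   /* get exclusive access                  */
            write_data_base();          /* update the shared data                */
            up(db);
        }
    }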

33 Scheduling
Introduction to scheduling:
When a computer is multiprogrammed, it frequently has multiple processes competing for the CPU at the same time:
– The part of the operating system that makes the choice is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
– In addition to picking the right process to run, the scheduler also has to worry about making efficient use of the CPU, because process switching is expensive.
Process behavior: [ ※※※ ]
– I/O-bound processes: short CPU bursts
– CPU-bound processes: long CPU bursts

34 When to schedule
When to make scheduling decisions: [ ※※※※ ]
– First, when a new process is created, a decision needs to be made whether to run the parent process or the child process.
– Second, a scheduling decision must be made when a process exits.
– Third, when a process blocks on I/O, on a semaphore, or for some other reason, another process has to be selected to run.
– Fourth, when an I/O interrupt occurs, a scheduling decision may be made.

35 Categories of scheduling algorithms
Two kinds of scheduling: [ ※※※※ ]
– A non-preemptive scheduling algorithm picks a process to run and then just lets it run until it blocks or until it voluntarily releases the CPU.
– A preemptive scheduling algorithm picks a process and lets it run for at most some fixed maximum time.
Categories of scheduling algorithms:
– Batch
– Interactive
– Real time

36 Scheduling algorithm goals
All systems:
– Fairness: giving each process a fair share of the CPU
– Policy enforcement: seeing that stated policy is carried out
– Balance: keeping all parts of the system busy
Batch systems: [ ※※※※※ ]
– Throughput: maximize jobs per hour
– Turnaround time: minimize time between submission and termination
– CPU utilization: keep the CPU busy all the time
Interactive systems: [ ※※※※ ]
– Response time: respond to requests quickly
– Proportionality: meet users' expectations
Real-time systems:
– Meeting deadlines: avoid losing data
– Predictability: avoid quality degradation in multimedia systems

37 Scheduling in batch systems [ ※※※※※ ]
First-come first-served [ ※※※※ ]
– With this algorithm, processes are assigned the CPU in the order they request it.
Shortest job first [ ※※※※※ ]
– When several equally important jobs are sitting in the input queue waiting to be started, the scheduler picks the shortest job first.
– Example: four jobs A, B, C, and D, with run times of 8, 4, 4, and 4 minutes. Running them in that order, the turnaround time for A is 8 minutes, for B 12 minutes, for C 16 minutes, and for D 20 minutes, for an average of 14 minutes ((8+12+16+20)/4).

38 Scheduling in batch systems
– Now consider running these four jobs using shortest job first: the turnaround times become 4, 8, 12, and 20 minutes, for an average of 11 minutes.
Shortest remaining time next [ ※※※※ ]
– The scheduler always chooses the process whose remaining run time is the shortest.
[Figure: the jobs A(8), B(4), C(4), D(4) run in arrival order versus shortest-job-first order]
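As a small illustration (not part of the slides), the two averages above can be recomputed mechanically:

    #include <stdio.h>

    /* Average turnaround time for jobs run back to back in the given order. */
    static double avg_turnaround(const int run[], int n)
    {
        int finish = 0, total = 0;
        for (int i = 0; i < n; i++) {
            finish += run[i];           /* completion time of job i   */
            total  += finish;           /* its turnaround time        */
        }
        return (double)total / n;
    }

    int main(void)
    {
        int fcfs[] = {8, 4, 4, 4};      /* A, B, C, D in arrival order       */
        int sjf[]  = {4, 4, 4, 8};      /* B, C, D, A: shortest job first    */
        printf("FCFS: %.0f min, SJF: %.0f min\n",
               avg_turnaround(fcfs, 4), avg_turnaround(sjf, 4));  /* 14 and 11 */
        return 0;
    }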

39 Scheduling in interactive systems [ ※※※※※ ]
– Round-robin scheduling: each process is assigned a time interval, called its quantum, during which it is allowed to run. [ ※※※※※ ]
– Priority scheduling: each process is assigned a priority, and the runnable process with the highest priority is allowed to run. [ ※※※※※ ]
– To prevent high-priority processes from running indefinitely, the scheduler may decrease the priority of the currently running process at each clock tick.
[Figure: round-robin ready queue B, F, D, G, A; after B uses up its quantum it moves to the tail, giving F, D, G, A, B]
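A minimal sketch (not from the slides) of the round-robin bookkeeping for that queue: when the running process exhausts its quantum, it is moved to the tail and the next process in line gets the CPU.

    #include <stdio.h>
    #include <string.h>

    /* Ready queue from the figure: B runs next, then F, D, G, A. */
    static char queue[] = "BFDGA";

    /* Called when the running process uses up its quantum. */
    static void quantum_expired(void)
    {
        size_t n = strlen(queue);
        char p = queue[0];                  /* process that just ran           */
        memmove(queue, queue + 1, n - 1);   /* shift the rest forward          */
        queue[n - 1] = p;                   /* running process goes to the tail */
    }

    int main(void)
    {
        printf("before: %s\n", queue);      /* BFDGA */
        quantum_expired();
        printf("after:  %s\n", queue);      /* FDGAB */
        return 0;
    }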

40 Scheduling in real-time systems
Some concepts to remember:
– Hard real time / soft real time / periodic / non-periodic
– Schedulable: if there are m periodic events, and event i occurs with period Pi and requires Ci seconds of CPU time per occurrence, the load can only be handled if
  C1/P1 + C2/P2 + … + Cm/Pm <= 1
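A quick worked example (hypothetical numbers): three periodic events with periods of 100, 200, and 500 ms that need 50, 30, and 100 ms of CPU time per occurrence give 50/100 + 30/200 + 100/500 = 0.50 + 0.15 + 0.20 = 0.85 <= 1, so the system is schedulable; a fourth event could be added only if its Ci/Pi ratio were at most 0.15.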

41 Scheduling summary [ ※※※※※ ]

Scheduling algorithm            Property
First-come first-served         Non-preemptive
Shortest job/process first      Non-preemptive
Shortest remaining time next    Preemptive
Round-robin                     Preemptive
Priority scheduling             Non-preemptive or preemptive

