Processes
Operating Systems, Spring 2004
What is a process?
- An instance of an application execution
- The process is the most basic abstraction provided by the OS
- An isolated computation context for each application
- Computation context = CPU state + address space + environment
CPU state = register contents
- Program Status Word (PSW): execution mode, last operation outcome, interrupt level
- Instruction Register (IR): the current instruction being executed
- Program Counter (PC)
- Stack Pointer (SP)
- General-purpose registers
Address space
- Text: program code
- Data: predefined data (known at compile time)
- Heap: dynamically allocated data
- Stack: supports function calls
Environment
- External entities:
  - Terminal
  - Open files
  - Communication channels: local, and with other machines
Process control block (PCB)
- Per-process kernel data structure holding: state, memory, files, accounting, priority, user, CPU registers, storage
[Figure: the PCB links the process's CPU state (PSW, IR, PC, SP, general-purpose registers) with its address space (text, data, heap, stack), spanning the user/kernel boundary]
Process states
[Figure: state diagram — created → ready; ready → running (schedule); running → ready (preempt); running → blocked (wait for event); blocked → ready (event done); running → terminated]
UNIX process states
[Figure: UNIX state diagram — states: running user, running kernel, ready user, ready kernel, blocked, zombie. Transitions: created → ready; schedule/preempt between ready and running; sys. call or interrupt moves running user → running kernel, return moves it back; wait for event → blocked; event done → ready; terminated → zombie]
Multiprocessing mechanisms
- Context switch
- Process creation and dispatch (in Unix: fork()/exec())
- Process termination (in Unix: exit() and wait())
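As a minimal sketch (assuming a Unix-like system), the fork/exit/wait mechanisms above can be exercised through Python's `os` module, which wraps the corresponding system calls; the exit code 7 is arbitrary:

```python
import os

# fork() creates a child process; it returns 0 in the child and the
# child's PID in the parent (Unix only).
pid = os.fork()

if pid == 0:
    # Child: a real program would typically call os.execvp() here to
    # replace its image; we just terminate with a status code.
    os._exit(7)
else:
    # Parent: waitpid() blocks until the child terminates.
    _, status = os.waitpid(pid, 0)
    print("child exit code:", os.WEXITSTATUS(status))
```

The parent prints `child exit code: 7`, demonstrating the create/terminate/reap cycle.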
Threads
- Thread: an execution within a process
- A multithreaded process consists of many co-existing executions
- Separate per thread: CPU state, stack
- Shared: everything else — text, data, heap, environment
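A small Python sketch of the separate-versus-shared split: each thread has its own stack and locals, while the heap and globals are shared, so both threads update the same list object:

```python
import threading

shared = []           # heap/global data: visible to every thread

def worker(name):
    local = name * 2  # local (stack) data: private to this thread
    shared.append(local)

t1 = threading.Thread(target=worker, args=("a",))
t2 = threading.Thread(target=worker, args=("b",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(shared))  # ['aa', 'bb'] -- both threads wrote to it
```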
Thread support
- Operating-system (kernel) threads
  - Advantage: thread scheduling is done by the OS, giving better CPU utilization
  - Disadvantage: overhead if there are many threads
- User-level threads
  - Advantage: low overhead
  - Disadvantage: not known to the OS — e.g., a thread blocked on I/O blocks all the other threads within the same process
Multiprogramming
- Multiprogramming: having multiple jobs (processes) in the system
  - Interleaved (time-sliced) on a single CPU
  - Concurrently executed on multiple CPUs
  - Or both of the above
- Why multiprogramming? Responsiveness, utilization, concurrency
- Why not? Overhead, complexity
Responsiveness
[Figure: timeline comparing batch execution (Job 1, then Job 2, then Job 3, each run to completion in arrival order) with time-sliced execution of the same three jobs; interleaving lets the shorter jobs finish earlier instead of waiting behind a long one]
Workload matters!
- Would CPU sharing improve responsiveness if all jobs took the same time? No — it makes it worse!
- For a given workload, the answer depends on the coefficient of variation (CV) of the distribution of job runtimes: CV = standard deviation / mean
  - CV < 1 => CPU sharing does not help
  - CV > 1 => CPU sharing does help
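The CV computation can be sketched with the standard library's `statistics` module; the sample runtimes below are made up for illustration:

```python
from statistics import mean, pstdev

def cv(runtimes):
    """Coefficient of variation: population std. dev. over mean."""
    return pstdev(runtimes) / mean(runtimes)

uniform = [10, 10, 10, 10]        # identical jobs
heavy   = [1, 1, 1, 1, 1, 100]    # one huge job among short ones

print(cv(uniform))  # 0.0 -> time slicing only adds overhead
print(cv(heavy))    # > 1 -> time slicing lets short jobs finish early
```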
Real workloads
- Exponential distribution: CV = 1; heavy-tailed distributions: CV > 1
- The distribution of job runtimes in real systems is heavy tailed, with CV ranging from 3 to 70
- Conclusion: CPU sharing does improve responsiveness
- CPU sharing is approximated by time slicing: interleaved execution
Utilization
[Figure: timeline of a job alternating CPU bursts and disk I/O (1st, 2nd, 3rd I/O operation) — the CPU sits idle during each I/O wait; with a second job, the CPU runs Job 2 while Job 1 waits for its I/O, reducing idle time on both the CPU and the disk]
Workload matters!
- Does it really matter? Yes, of course: if all jobs are CPU bound (or all I/O bound), multiprogramming does not help to improve utilization
- A suitable job mix is created by long-term scheduling
- Jobs are classified on-line as CPU bound or I/O bound according to the job's history
Concurrency
- Concurrent programming: several processes interact to work on the same problem, e.g., ls -l | more
- Simultaneous execution of related applications: Word + Excel + PowerPoint
- Background execution: polling/receiving email while working on something else
The cost of multiprogramming
- Switching overhead: saving/restoring context wastes CPU cycles and degrades performance
- Resource contention: cache misses
- Complexity: synchronization, concurrency control, deadlock avoidance/prevention
Short-term scheduling
[Figure: the process state diagram again — created → ready → running (schedule), running → ready (preempt), running → blocked (wait for event), blocked → ready (event done), running → terminated]
Short-term scheduling
- A process's execution pattern consists of alternating CPU cycles and I/O waits: CPU burst – I/O burst – CPU burst – I/O burst...
- Processes ready for execution are held in a ready (run) queue
- The STS schedules a process from the ready queue once the CPU becomes idle
Metrics: response time
[Figure: a job arrives/becomes ready, waits for T_wait, starts running, runs for T_run, then terminates or blocks waiting for I/O]
- T_resp = T_wait + T_run
- Response time (turnaround time) is the average of T_resp over the jobs
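The per-job response-time and slowdown metrics can be sketched as follows; the wait and run times are hypothetical:

```python
def metrics(t_wait, t_run):
    """Per-job response time and slowdown from wait and run times."""
    t_resp = t_wait + t_run        # response (turnaround) time
    slowdown = t_resp / t_run      # response ratio
    return t_resp, slowdown

# A job that waited 6 time units and ran for 2:
print(metrics(6, 2))  # (8, 4.0) -- it took 4x its pure run time
```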
Other metrics
- Wait time: average of T_wait — this parameter is under the system's control
- Response ratio or slowdown: slowdown = T_resp / T_run
- Throughput and utilization depend on the user-imposed workload => less useful
Note about running time (T_run)
- T_run is the length of the CPU burst
- When a process requests I/O it is still "running" in the system, but it is not part of the STS workload
- STS view: I/O-bound processes are short processes — although a text-editor session may last hours!
Off-line vs. on-line scheduling
- Off-line algorithms
  - Get all the information about all the jobs to schedule as their input
  - Output the scheduling sequence
  - Preemption is never needed
- On-line algorithms
  - Jobs arrive at unpredictable times
  - Very little information is available in advance
  - Preemption compensates for the lack of knowledge
First-Come-First-Served (FCFS)
- Schedules the jobs in the order in which they arrive
  - Off-line FCFS schedules in the order the jobs appear in the input
- Runs each job to completion
- Both on-line and off-line
- Simple; a base case for analysis
- Poor response time
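A minimal FCFS sketch over (arrival, runtime) pairs, assuming the job list is already sorted by arrival time:

```python
def fcfs(jobs):
    """jobs: list of (arrival, runtime) pairs, sorted by arrival.
    Returns each job's response (turnaround) time."""
    clock, resp = 0, []
    for arrival, runtime in jobs:
        clock = max(clock, arrival)   # CPU may sit idle until arrival
        clock += runtime              # run the job to completion
        resp.append(clock - arrival)  # finish time minus arrival time
    return resp

# One long job arriving first delays the two short ones -- the
# "poor response time" noted above:
print(fcfs([(0, 10), (1, 1), (2, 1)]))  # [10, 10, 10]
```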
Shortest Job First (SJF)
- Best response time
[Figure: running the short jobs before the long job minimizes the total waiting]
- Inherently off-line: all the jobs and their run times must be available in advance
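An off-line SJF sketch for jobs that are all available at time 0; sorting by runtime minimizes the mean response time, which a comparison with FCFS on the same jobs makes visible:

```python
def mean_response(runtimes, shortest_first):
    """Mean response time when all jobs are present at time 0."""
    order = sorted(runtimes) if shortest_first else runtimes
    clock, total = 0, 0
    for r in order:
        clock += r       # each job's response time = its finish time
        total += clock
    return total / len(runtimes)

jobs = [10, 1, 1]
print(mean_response(jobs, shortest_first=True))   # SJF: 5.0
print(mean_response(jobs, shortest_first=False))  # FCFS order: 11.0
```

Under SJF the short jobs finish at times 1 and 2; under FCFS everyone waits behind the 10-unit job.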
Preemption
- Preemption is the action of stopping a running job and scheduling another in its place
- Context switch: switching from one job to another
Using preemption
- Enables on-line short-term scheduling algorithms
  - Adapting to changing conditions, e.g., new jobs arrive
  - Compensating for lack of knowledge, e.g., of job run times
- Periodic preemption keeps the system in control
- Improves fairness
- Gives I/O-bound processes a chance to run
Shortest Remaining Time first (SRT)
- Job run times are known; job arrival times are not known
- When a new job arrives:
  - If its run time is shorter than the remaining time of the currently executing job: preempt the currently executing job and schedule the newly arrived job
  - Else: continue the current job and insert the new job into a queue sorted by remaining time
- When a job terminates, select the job at the queue head for execution
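The steps above can be sketched as an event-driven simulation; a min-heap keyed by remaining time plays the role of the sorted queue:

```python
import heapq

def srt(jobs):
    """Preemptive shortest-remaining-time. jobs: (arrival, runtime)
    pairs sorted by arrival. Returns the response time per job index."""
    clock, i, ready = 0, 0, []
    resp = [0] * len(jobs)
    while i < len(jobs) or ready:
        if not ready and jobs[i][0] > clock:
            clock = jobs[i][0]            # CPU idle until next arrival
        while i < len(jobs) and jobs[i][0] <= clock:
            heapq.heappush(ready, (jobs[i][1], jobs[i][0], i))
            i += 1
        remaining, arrival, idx = heapq.heappop(ready)
        # Run until this job finishes or the next job arrives,
        # whichever comes first (an arrival may preempt it).
        until = jobs[i][0] if i < len(jobs) else clock + remaining
        run = min(remaining, until - clock)
        clock += run
        if run < remaining:
            heapq.heappush(ready, (remaining - run, arrival, idx))
        else:
            resp[idx] = clock - arrival
    return resp

# A long job at t=0 is preempted by a shorter job arriving at t=1:
print(srt([(0, 10), (1, 2)]))  # [12, 2]
```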
Round Robin (RR)
- Both job arrival times and job run times are unknown
- Runs each job cyclically for a short time quantum
- Approximates CPU sharing
[Figure: three jobs arriving at different times are interleaved in quantum-sized slices (1, 2, 3, 1, 2, 3, ...); the short jobs complete after a few rounds while the long job continues]
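A minimal RR sketch for jobs all present at time 0, cycling through a FIFO queue one quantum at a time:

```python
from collections import deque

def round_robin(runtimes, quantum):
    """RR over jobs all present at time 0. Returns finish time per job."""
    queue = deque(enumerate(runtimes))   # (job index, remaining time)
    clock, finish = 0, [0] * len(runtimes)
    while queue:
        idx, remaining = queue.popleft()
        run = min(quantum, remaining)    # run one quantum (or less)
        clock += run
        if remaining > run:
            queue.append((idx, remaining - run))  # back of the queue
        else:
            finish[idx] = clock
    return finish

# The two short jobs finish after one round; the long job keeps cycling:
print(round_robin([6, 2, 2], quantum=2))  # [10, 4, 6]
```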
Responsiveness (revisited)
[Figure: the same batch vs. time-sliced timeline as before — running Jobs 1–3 to completion in arrival order, versus interleaving them so the shorter jobs terminate earlier]
Priority scheduling
- RR is oblivious to a process's past: I/O-bound processes are treated equally with CPU-bound processes
- Solution: prioritize processes according to their past CPU usage
- T_n is the duration of the n-th CPU burst; E_{n+1} is the estimate of the next CPU burst
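The estimator is presumably the standard exponential average, E_{n+1} = a*T_n + (1-a)*E_n; a sketch with the common choice a = 0.5 (the initial estimate e0 is an arbitrary guess):

```python
def next_burst_estimate(bursts, alpha=0.5, e0=10.0):
    """Exponential average of observed CPU-burst lengths:
    E[n+1] = alpha * T[n] + (1 - alpha) * E[n]."""
    e = e0                      # initial guess for the first burst
    for t in bursts:
        e = alpha * t + (1 - alpha) * e
    return e

# The estimate tracks recent behaviour: after a run of short bursts
# it predicts a short next burst (so the process gets high priority).
print(next_burst_estimate([8, 4, 2, 2]))  # 3.125
```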
Multilevel feedback queues
[Figure: new jobs enter the top queue (quantum = 10); a job that uses up its quantum drops to the next queue (quantum = 20), then to quantum = 40, and finally to an FCFS queue; jobs leave the system when terminated]
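The demotion scheme in the figure can be sketched as follows (quanta of 10/20/40 as in the diagram; a job that exhausts its quantum drops one level, and the bottom level runs FCFS):

```python
from collections import deque

def mlfq(runtimes, quanta=(10, 20, 40)):
    """Multilevel feedback queues: new jobs enter the top queue;
    using a full quantum demotes a job one level; the final level
    runs jobs FCFS to completion. Returns finish time per job."""
    levels = [deque() for _ in quanta] + [deque()]  # last = FCFS
    for job in enumerate(runtimes):                 # (index, remaining)
        levels[0].append(job)
    clock, finish = 0, [0] * len(runtimes)
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest nonempty
        idx, remaining = levels[lvl].popleft()
        run = min(quanta[lvl], remaining) if lvl < len(quanta) else remaining
        clock += run
        if remaining > run:
            levels[lvl + 1].append((idx, remaining - run))  # demote
        else:
            finish[idx] = clock
    return finish

# The short job finishes in the top queue; the long one is demoted:
print(mlfq([5, 25]))  # [5, 30]
```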
Multilevel feedback queues
- Priorities are implicit in this scheme
- Very flexible
- Starvation is possible: if short jobs keep arriving, long jobs get starved
- Solutions: let it be, or aging
Priority scheduling in UNIX
- Multilevel feedback queues: the same quantum at each queue, one queue per priority
- Priority is based on past CPU usage: pri = cpu_use + base + nice
- cpu_use is dynamically adjusted:
  - Incremented on each clock interrupt (100 times per second)
  - Halved for all processes once per second
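A sketch of how the periodic halving makes priority recover; the base value is illustrative, not the real kernel's, and a larger pri value means lower priority:

```python
def unix_priority(cpu_use, base=50, nice=0):
    """pri = cpu_use + base + nice (illustrative base; larger = lower
    priority, so heavy recent CPU users are scheduled less eagerly)."""
    return cpu_use + base + nice

def decay_tick(cpu_use):
    """Once per second every process's cpu_use is halved, so recent
    usage counts more than old usage."""
    return cpu_use // 2

use = 100              # a process that just ran for a full second
for _ in range(3):     # three seconds without running
    use = decay_tick(use)
print(use, unix_priority(use))  # 12 62 -- its priority has recovered
```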
Fair Share scheduling
- Given a set of processes with associated weights, a fair share scheduler should allocate CPU to each process in proportion to its respective weight
- Achieving pre-defined goals:
  - Administrative considerations: paying for machine usage, importance of the project, personal importance, etc.
  - Quality-of-service, soft real time: video, audio
Perfect fairness
- A fair share scheduling algorithm achieves perfect fairness in the time interval (t1, t2) if each process i with weight w_i receives CPU time W_i(t1, t2) = (w_i / sum_j w_j) * (t2 - t1)
Wellness criterion for FSS
- An ideal fair share scheduling algorithm achieves perfect fairness for all time intervals
- The goal of a practical FSS algorithm is to produce CPU allocations as close as possible to those of the ideal FSS algorithm
Fair Share scheduling: algorithms
- Weighted Round Robin
  - Shares are not uniformly spread in time
- Lottery scheduling:
  - Each process gets a number of lottery tickets proportional to its CPU allocation
  - The scheduler picks a ticket at random and runs the winning client
  - Only statistically fair; high complexity
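A sketch of the lottery draw; the ticket counts are made up, and the seeded generator is only there to make the run reproducible. Over many draws the process with 3x the tickets wins roughly 3x as often — fair statistically, not deterministically:

```python
import random
from collections import Counter

def lottery_pick(tickets, rng):
    """tickets: {process: ticket count}. Draw one ticket uniformly at
    random; its holder wins the next quantum."""
    draw = rng.randrange(sum(tickets.values()))
    for proc, count in tickets.items():
        if draw < count:
            return proc
        draw -= count

rng = random.Random(0)            # seeded for reproducibility
tickets = {"A": 30, "B": 10}      # A should win ~3x as often as B
wins = Counter(lottery_pick(tickets, rng) for _ in range(10000))
print(wins["A"] > wins["B"])      # True -- but only in expectation
```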
Fair Share scheduling: VTRR
- Virtual Time Round Robin (VTRR):
  - Order the ready queue by decreasing shares (the highest share at the head)
  - Run Round Robin as usual
  - Once a process that has exhausted its share is encountered, go back to the head of the queue
Multiprocessor scheduling
- Homogeneous vs. heterogeneous processors: homogeneity allows for load sharing
- A separate ready queue for each processor, or a common ready queue?
- Scheduling: symmetric, or master/slave
A Bank or a Supermarket?
[Figure: two queueing configurations for four CPUs — a single shared queue of arriving jobs feeding CPU1–CPU4 (an M/M/4 system, like a bank), versus a separate queue per CPU (4 independent M/M/1 systems, like supermarket checkout lines)]
It is a Bank!
- The shared-queue M/M/4 configuration gives better response times than 4 separate M/M/1 queues: with a common queue, no CPU stays idle while jobs wait in another processor's queue