Slide 1: Processes
Operating Systems, Fall 2002
Slide 2: What is a process?
- An instance of an application execution
- The process is one of the most basic abstractions provided by the OS
- An isolated computation context for each application
- Computation context = CPU state + address space + environment
Slide 3: CPU state = register contents
- Process Status Word (PSW): execution mode, last operation outcome, interrupt level
- Instruction Register (IR): the instruction currently being executed
- Program Counter (PC)
- Stack Pointer (SP)
- General-purpose registers
Slide 4: Address space
- Text: program code
- Data: predefined data (known at compile time)
- Heap: dynamically allocated data
- Stack: supports function calls
Slide 5: Environment
- External entities:
  - Terminal
  - Open files
  - Communication channels: local, and with other machines
Slide 6: Process control block (PCB)
[Diagram: the kernel-side PCB records the process state, memory information, open files, accounting, priority, and owning user, plus the saved CPU registers (PSW, IR, PC, SP, general-purpose registers); it ties the kernel's bookkeeping to the process's text, data, heap, and stack in storage.]
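As a rough sketch of the structure above, a PCB can be modeled as a record holding the scheduling state plus the saved CPU context. The field names below are illustrative, not taken from any particular kernel:

```python
from dataclasses import dataclass, field

@dataclass
class CPUContext:
    """Saved register state, restored on a context switch."""
    psw: int = 0                  # process status word
    pc: int = 0                   # program counter
    sp: int = 0                   # stack pointer
    regs: list = field(default_factory=lambda: [0] * 8)  # general-purpose

@dataclass
class PCB:
    """Per-process bookkeeping kept by the kernel."""
    pid: int
    state: str = "created"        # created/ready/running/blocked/terminated
    priority: int = 0
    open_files: list = field(default_factory=list)
    context: CPUContext = field(default_factory=CPUContext)

pcb = PCB(pid=1)
pcb.state = "ready"
```

On a context switch the kernel saves the running process's registers into its PCB's context and loads the next process's context from its PCB.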
Slide 7: Process states
[State diagram: created -> ready; ready -> running (schedule); running -> ready (preempt); running -> blocked (wait for event); blocked -> ready (event done); running -> terminated.]
Slide 8: UNIX process states
[State diagram: states are created, ready user, ready kernel, running user, running kernel, blocked, zombie. Transitions: schedule (ready -> running), preempt (running -> ready), system call or interrupt (running user -> running kernel), return (running kernel -> running user), wait for event (running kernel -> blocked), event done (blocked -> ready), terminated (running kernel -> zombie).]
Slide 9: Threads
- Thread: an execution within a process
- A multithreaded process consists of many co-existing executions
- Separate per thread: CPU state, stack
- Shared: everything else (text, data, heap, environment)
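A minimal Python illustration of the shared/separate split: two threads run on their own stacks but update one heap-allocated counter that both can see; the lock serializes the shared update:

```python
import threading

counter = 0                    # heap data, shared by all threads
lock = threading.Lock()

def work(n):
    """Runs on its own stack, but mutates the shared counter."""
    global counter
    for _ in range(n):
        with lock:             # serialize access to shared state
            counter += 1

t1 = threading.Thread(target=work, args=(10000,))
t2 = threading.Thread(target=work, args=(10000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)                 # 20000
```

Without the lock, the two read-modify-write sequences could interleave and lose updates, which is exactly why shared state across threads needs synchronization.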
Slide 10: Thread support
- Operating system (kernel-level) threads
  - Advantage: thread scheduling is done by the OS, giving better CPU utilization
  - Disadvantage: overhead if there are many threads
- User-level threads
  - Advantage: low overhead
  - Disadvantage: not known to the OS; e.g., a thread blocked on I/O blocks all the other threads within the same process
Slide 11: Multiprogramming
- Multiprogramming: having multiple jobs (processes) in the system
  - Interleaved (time-sliced) on a single CPU
  - Concurrently executed on multiple CPUs
  - Both of the above
- Why multiprogramming? Responsiveness, utilization, concurrency
- Why not? Overhead, complexity
Slide 12: Responsiveness
[Diagram: jobs 1-3 arrive at staggered times. Running each job to completion vs. interleaving their execution changes the order and times at which the jobs terminate.]
Slide 13: Workload matters!
- Would CPU sharing improve responsiveness if all jobs took the same time? No, it would make it worse!
- For a given workload, the answer depends on the coefficient of variation (CV) of the distribution of job runtimes
- CV = standard deviation / mean
- CV < 1 => CPU sharing does not help
- CV > 1 => CPU sharing does help
Slide 14: Real workloads
- Exponential distribution: CV = 1; heavy-tailed distribution: CV > 1
- The distribution of job runtimes in real systems is heavy-tailed, with CV ranging from 3 to 70
- Conclusion: CPU sharing does improve responsiveness
- CPU sharing is approximated by time slicing: interleaved execution
Slide 15: Utilization
[Diagram: with a single job, the CPU sits idle during each of the job's I/O operations; with two jobs, Job 2 uses the CPU while Job 1 waits on the disk, and vice versa, so the idle gaps shrink.]
Slide 16: Workload matters!
- Does it really matter? Yes, of course: if all jobs are CPU-bound (or all are I/O-bound), multiprogramming does not help to improve utilization
- A suitable job mix is created by long-term scheduling
- Jobs are classified on-line as CPU-bound or I/O-bound according to each job's history
Slide 17: Concurrency
- Concurrent programming: several processes interact to work on the same problem, e.g., ls -l | more
- Simultaneous execution of related applications: Word + Excel + PowerPoint
- Background execution: polling/receiving email while working on something else
Slide 18: The cost of multiprogramming
- Switching overhead: saving/restoring context wastes CPU cycles and degrades performance
- Resource contention: cache misses
- Complexity: synchronization, concurrency control, deadlock avoidance/prevention
Slide 19: Short-term scheduling
[State diagram as in Slide 7: created -> ready; ready -> running (schedule); running -> ready (preempt); running -> blocked (wait for event); blocked -> ready (event done); running -> terminated.]
Slide 20: Short-term scheduling
- A process's execution pattern consists of alternating CPU cycles and I/O waits: CPU burst, I/O burst, CPU burst, I/O burst...
- Processes ready for execution are held in a ready (run) queue
- The short-term scheduler (STS) schedules a process from the ready queue once the CPU becomes idle
Slide 21: Metrics: response time
[Diagram: a job arrives/becomes ready to run, waits for T_wait, starts running, runs for T_run, then terminates or blocks waiting for I/O.]
- T_resp = T_wait + T_run
- Response time (turnaround time) is the average of the jobs' T_resp
Slide 22: Other metrics
- Wait time: average of T_wait; this parameter is under the system's control
- Response ratio, or slowdown: slowdown = T_resp / T_run
- Throughput and utilization depend on the workload, so they are less useful
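The metrics on the last two slides follow directly from per-job (T_wait, T_run) pairs. A small sketch, assuming jobs are given as such pairs:

```python
def metrics(jobs):
    """jobs: list of (t_wait, t_run) pairs.
    T_resp = T_wait + T_run; slowdown = T_resp / T_run.
    Returns the averages of wait time, response time, and slowdown."""
    n = len(jobs)
    return {
        "wait": sum(w for w, _ in jobs) / n,
        "response": sum(w + r for w, r in jobs) / n,
        "slowdown": sum((w + r) / r for w, r in jobs) / n,
    }

# Three jobs: (wait, run) = (0, 4), (3, 1), (4, 5)
m = metrics([(0, 4), (3, 1), (4, 5)])
print(m)  # wait = 7/3, response = 17/3
```

Note how the short job (run time 1) has the worst slowdown (4.0): slowdown penalizes making short jobs wait, which is why it is a popular responsiveness metric.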
Slide 23: Running time (T_run)
- T_run is the length of the CPU burst
- When a process requests I/O it is still "running" in the system, but it is not part of the STS workload
- STS view: I/O-bound processes are short processes, although a text editor session may last hours!
Slide 24: Off-line vs. on-line scheduling
- Off-line algorithms
  - Get all the information about all the jobs to schedule as their input
  - Output the scheduling sequence
  - Preemption is never needed
- On-line algorithms
  - Jobs arrive at unpredictable times
  - Very little information is available in advance
  - Preemption compensates for the lack of knowledge
Slide 25: First-Come-First-Serve (FCFS)
- Schedules the jobs in the order in which they arrive
  - Off-line FCFS schedules in the order the jobs appear in the input
- Runs each job to completion
- Works both on-line and off-line
- Simple, a base case for analysis
- Poor response time
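A minimal FCFS sketch, assuming jobs are (arrival, runtime) pairs listed in arrival order; it shows why response time suffers when a long job arrives first:

```python
def fcfs(jobs):
    """jobs: list of (arrival, runtime) pairs in arrival order.
    Runs each job to completion; returns (start, finish) per job."""
    t = 0
    out = []
    for arrival, runtime in jobs:
        start = max(t, arrival)   # CPU may sit idle until the job arrives
        t = start + runtime
        out.append((start, t))
    return out

# A 10-unit job arriving first delays the two 1-unit jobs behind it.
print(fcfs([(0, 10), (1, 1), (2, 1)]))  # [(0, 10), (10, 11), (11, 12)]
```

The two short jobs wait 9 and 9 units for 1 unit of work each, i.e., slowdowns of 10x, the "poor response time" noted above.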
Slide 26: Shortest Job First (SJF)
- Best (average) response time
[Diagram: running the short jobs before the long job shortens their waits without delaying the long job by much.]
- Inherently off-line: all the jobs and their run-times must be available in advance
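An off-line SJF sketch, assuming all jobs are available at time 0 with known runtimes:

```python
def sjf(runtimes):
    """Off-line SJF: all jobs present at time 0 with known runtimes.
    Runs shortest first; returns the average response (turnaround) time."""
    t, total = 0, 0
    for r in sorted(runtimes):    # shortest remaining job next
        t += r                    # job finishes at time t
        total += t                # its response time equals t
    return total / len(runtimes)

# Shortest-first: responses 1, 3, 13 -> average 17/3.
# The reverse order would give 10, 12, 13 -> average 35/3.
print(sjf([1, 2, 10]))
```

Ordering short jobs first is optimal for average response time because every job queued behind another pays that job's full runtime as waiting.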
Slide 27: Preemption
- Preemption is the action of stopping a running job and scheduling another in its place
- Context switch: switching from one job to another
Slide 28: Using preemption
- Enables on-line short-term scheduling algorithms
  - Adapting to changing conditions, e.g., new jobs arriving
  - Compensating for lack of knowledge, e.g., of job run-times
- Periodic preemption keeps the system in control
- Improves fairness
- Gives I/O-bound processes a chance to run
Slide 29: Shortest Remaining Time first (SRT)
- Job run-times are known; job arrival times are not
- When a new job arrives:
  - If its run-time is shorter than the remaining time of the currently executing process, preempt the currently executing process and schedule the newly arrived job
  - Else, continue the current job and insert the new job into the sorted queue
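The rule above can be sketched as an event-driven simulation; this is an illustrative model, assuming jobs are (arrival, runtime) pairs and ties break arbitrarily:

```python
import heapq

def srt(jobs):
    """jobs: list of (arrival, runtime) pairs. Preempts the running job
    whenever a newly arrived job is shorter than its remaining time.
    Returns finish times in the input order."""
    order = sorted(enumerate(jobs), key=lambda x: x[1][0])  # by arrival
    finish = [0] * len(jobs)
    ready = []                       # min-heap of (remaining, job index)
    t, i = 0, 0
    while i < len(order) or ready:
        if not ready:                # idle: jump to the next arrival
            t = max(t, order[i][1][0])
        while i < len(order) and order[i][1][0] <= t:
            idx, (_, run) = order[i]
            heapq.heappush(ready, (run, idx))
            i += 1
        rem, idx = heapq.heappop(ready)   # shortest remaining time first
        next_arr = order[i][1][0] if i < len(order) else float("inf")
        if t + rem <= next_arr:           # finishes before next arrival
            t += rem
            finish[idx] = t
        else:                             # run until arrival, then recheck
            heapq.heappush(ready, (rem - (next_arr - t), idx))
            t = next_arr
    return finish

# Job 0 (runtime 7) is preempted by job 1 (4), which is preempted by job 2 (1).
print(srt([(0, 7), (2, 4), (4, 1)]))  # [12, 7, 5]
```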
Slide 30: Round Robin (RR)
- Neither job arrival times nor job run-times are known
- Run each job cyclically for a short time quantum
- Approximates CPU sharing
[Diagram: jobs 1-3 arrive at staggered times and are interleaved in quantum-sized slices until each completes.]
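A minimal RR sketch, assuming all jobs are ready at time 0 and context-switch time is ignored:

```python
from collections import deque

def round_robin(runtimes, quantum):
    """Run each job for at most one quantum, then move it to the back
    of the ready queue. Returns finish times per job."""
    q = deque(enumerate(runtimes))
    finish = [0] * len(runtimes)
    t = 0
    while q:
        idx, rem = q.popleft()
        slice_ = min(quantum, rem)
        t += slice_
        if rem > slice_:
            q.append((idx, rem - slice_))   # not done: back of the queue
        else:
            finish[idx] = t
    return finish

# With quantum 2, the short job (runtime 2) finishes on its first slice.
print(round_robin([3, 5, 2], quantum=2))  # [7, 10, 6]
```

The smaller the quantum, the closer this gets to ideal CPU sharing, at the price of more context-switch overhead.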
Slide 31: Responsiveness (revisited)
[Diagram: repeat of Slide 12. Jobs 1-3 arrive at staggered times; run-to-completion vs. interleaved execution changes the termination order and times.]
Slide 32: Priority scheduling
- RR is oblivious to a process's past: I/O-bound processes are treated the same as CPU-bound processes
- Solution: prioritize processes according to their past CPU usage
- T_n is the duration of the n-th CPU burst; E_{n+1} is the estimate of the next CPU burst
- Exponential averaging: E_{n+1} = a * T_n + (1 - a) * E_n, with weight 0 <= a <= 1
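The exponential average E_{n+1} = a*T_n + (1-a)*E_n is a one-liner; the sketch below shows the estimate tracking a process whose bursts shrink (the weight a = 0.5 is an illustrative choice):

```python
def next_estimate(t_n, e_n, alpha=0.5):
    """Exponential averaging of CPU-burst lengths:
    E_{n+1} = alpha * T_n + (1 - alpha) * E_n."""
    return alpha * t_n + (1 - alpha) * e_n

# Start with estimate 10; observe bursts 8, 4, 4.
e = 10.0
for t in [8, 4, 4]:
    e = next_estimate(t, e)     # 10 -> 9 -> 6.5 -> 5.25
print(e)  # 5.25
```

With alpha = 0.5, recent bursts dominate but old history decays geometrically rather than being forgotten outright, which is what lets the scheduler spot a process turning I/O-bound.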
Slide 33: Multilevel feedback queues
[Diagram: new jobs enter the top queue (quantum=10); jobs that exhaust their quantum drop to the quantum=20 queue, then to the quantum=40 queue, then to a final FCFS queue; jobs may terminate from any level.]
Slide 34: Multilevel feedback queues
- Priorities are implicit in this scheme
- Very flexible
- Starvation is possible: if short jobs keep arriving, long jobs get starved
- Solutions:
  - Let it be
  - Aging
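A toy model of the diagrammed scheme (all jobs arriving at time 0, quanta 10/20/40 and a bottom FCFS level, no aging) shows the implicit priorities: short jobs drain from the top queue before long jobs sink downward:

```python
from collections import deque

def mlfq(runtimes, quanta=(10, 20, 40)):
    """Jobs exhausting their quantum sink one level; the bottom level
    runs FCFS to completion. Returns finish times per job."""
    levels = [deque() for _ in quanta] + [deque()]   # last level = FCFS
    for idx, r in enumerate(runtimes):
        levels[0].append((idx, r))                   # new jobs enter on top
    finish = [0] * len(runtimes)
    t = 0
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest nonempty
        idx, rem = levels[lvl].popleft()
        slice_ = rem if lvl == len(quanta) else min(quanta[lvl], rem)
        t += slice_
        if rem > slice_:
            levels[lvl + 1].append((idx, rem - slice_))   # demote
        else:
            finish[idx] = t
    return finish

# The 5-unit job never leaves the top queue and finishes first.
print(mlfq([5, 30, 100]))  # [5, 45, 135]
```

No explicit priority number appears anywhere: position in the queue hierarchy encodes it, which is what "priorities are implicit" means.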
Slide 35: Priority scheduling in UNIX
- Multilevel feedback queues, with the same quantum at each queue
- A queue per priority
- Priority is based on past CPU usage: pri = cpu_use + base + nice
- cpu_use is dynamically adjusted
  - Incremented on each clock interrupt: 100 times per second
  - Halved, for all processes, once per second
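A toy model of the scheme above, collapsing one second into a single step (accumulate the tick count, then halve) with integer arithmetic; higher pri means worse priority:

```python
def decay_tick(cpu_use, ticks_running, base=0, nice=0):
    """One-second step: cpu_use grows by one per clock interrupt while
    running (up to 100/sec), then is halved; pri = cpu_use + base + nice."""
    cpu_use = (cpu_use + ticks_running) // 2
    return cpu_use, cpu_use + base + nice

# A CPU hog (running all 100 ticks each second): cpu_use converges,
# so its priority worsens but stays bounded.
use = 0
for _ in range(5):
    use, pri = decay_tick(use, ticks_running=100)
print(use)  # 96, approaching the fixed point near 100
```

The halving is what forgives past usage: a process that stops running sees its cpu_use, and hence its pri, fall by half every second, restoring its priority.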
Slide 36: Fair-share scheduling
- Achieving pre-defined goals
- Administrative considerations: paying for machine usage, importance of the project, personal importance, etc.
- Quality-of-service, soft real-time: video, audio
Slide 37: Fair-share scheduling: algorithms
- A term in the priority formula reflects cumulative CPU usage, plus aging when the share goes unused
- Lottery scheduling:
  - Each process gets a number of lottery tickets proportional to its CPU allocation
  - The scheduler picks a ticket at random and schedules the client holding the winning ticket
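A minimal lottery-scheduling sketch: one draw picks a ticket uniformly, so each process wins with probability proportional to its ticket count (the 75/25 split below is an illustrative allocation):

```python
import random

def lottery_pick(tickets, rng=random):
    """tickets: dict mapping process -> ticket count.
    Draws one ticket uniformly; returns the winning process."""
    total = sum(tickets.values())
    draw = rng.randrange(total)
    for proc, n in sorted(tickets.items()):  # deterministic iteration order
        if draw < n:
            return proc
        draw -= n

# Over many drawings, wins approximate the 3:1 ticket ratio.
random.seed(0)
wins = {"A": 0, "B": 0}
for _ in range(10000):
    wins[lottery_pick({"A": 75, "B": 25})] += 1
print(wins["A"] / 10000)  # close to 0.75
```

Randomization gives probabilistic fairness without any bookkeeping of past usage, and adding or removing a client only changes the ticket total.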
Slide 38: Fair-share scheduling: VTRR
- Virtual Time Round Robin (VTRR)
- Order the ready queue by decreasing shares (the highest share at the head)
- Run Round Robin as usual
- Once a process that has exhausted its share is encountered, go back to the head of the queue
Slide 39: Multiprocessor scheduling
- Homogeneous vs. heterogeneous processors; homogeneity allows for load sharing
- A separate ready queue per processor, or a common ready queue?
- Scheduling: symmetric, or master/slave
Slide 40: A bank or a supermarket?
[Diagram: left, arriving jobs wait in one shared queue feeding CPUs 1-4, an M/M/4 system (like a bank); right, each CPU has its own queue of arriving jobs, four independent M/M/1 systems (like supermarket checkout lines).]
Slide 41: It is a bank!
- The single shared queue (M/M/4) beats the four separate queues (4 x M/M/1): no CPU sits idle while jobs wait in another CPU's queue
Slide 42: Next: Concurrency