Rensselaer Polytechnic Institute
CSCI-4210 – Operating Systems
David Goldschmidt, Ph.D.
A system is a part of the real world that we wish to analyze; it is made up of autonomous entities interacting with one another. A model is an abstract representation of a system. We use models in analysis and design, and only relevant properties are included.
We simulate a system's behavior using a model. Models have tunable input parameters; we observe the simulation and analyze its output, and we predict the behavior of the real system by analyzing the behavior of the model. The behavior of the model depends on time, input parameters, and events generated within the environment.
We can model numerous systems using the following components:
User: a single user of the system
Service: a node providing a service
Queue: an array of users waiting for service
Mathematical models can be categorized as:
Deterministic: behavior is predictable with 100% certainty
Stochastic: behavior is uncertain, based on random events
A stochastic model is one that incorporates uncertainty into its behavior: one or more attributes change their values according to a probability distribution.
A probability distribution specifies the probability of each value of some random variable. Uniform distribution: all values are equally probable.
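As a concrete illustration, here is a minimal C sketch of drawing uniformly distributed samples with the standard library's rand(); the range [2, 8) and the idea of treating the values as service times are assumptions made for the example, not details from the slides.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Return a pseudo-random double uniformly distributed in [low, high). */
double uniform(double low, double high) {
    double u = rand() / ((double)RAND_MAX + 1.0);   /* u in [0, 1) */
    return low + u * (high - low);
}

int main(void) {
    srand((unsigned)time(NULL));            /* seed the generator */
    for (int i = 0; i < 5; i++)
        printf("%f\n", uniform(2.0, 8.0));  /* e.g., service times in [2, 8) */
    return 0;
}
```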
A probability distribution specifies the probability of each value of some random variable. Normal distribution: values are more probable at or near the mean, which forms a bell curve.
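One common way to generate normally distributed values from uniform ones is the Box-Muller transform; the sketch below assumes a mean of 10 and a standard deviation of 2 purely for illustration.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TWO_PI 6.28318530717958647692

/* Box-Muller transform: turn two uniform (0, 1] samples into one
   normally distributed sample with the given mean and standard deviation. */
double normal(double mean, double stddev) {
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 1.0);  /* (0, 1], avoids log(0) */
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 1.0);
    double z  = sqrt(-2.0 * log(u1)) * cos(TWO_PI * u2);    /* standard normal */
    return mean + stddev * z;
}

int main(void) {
    srand((unsigned)time(NULL));
    for (int i = 0; i < 5; i++)
        printf("%f\n", normal(10.0, 2.0));  /* values cluster near the mean of 10 */
    return 0;
}
```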
A probability distribution specifies the probability of each value of some random variable. Exponential distribution: values are the times between events in a Poisson process, in which events occur continuously and independently at a constant average rate.
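A standard way to sample from an exponential distribution is inverse-transform sampling: if U is uniform in [0, 1), then -ln(1 - U) / lambda is exponentially distributed with rate lambda. The sketch below uses rand() and an arrival rate of 0.5, both illustrative choices rather than course-prescribed ones.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Inverse-transform sampling: the returned value is the interarrival time
   of a Poisson process with, on average, lambda events per unit time. */
double exponential(double lambda) {
    double u = rand() / ((double)RAND_MAX + 1.0);  /* u in [0, 1) */
    return -log(1.0 - u) / lambda;
}

int main(void) {
    srand((unsigned)time(NULL));
    double lambda = 0.5;   /* on average, one arrival every 2 time units */
    for (int i = 0; i < 5; i++)
        printf("next arrival in %f\n", exponential(lambda));
    return 0;
}
```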
To study the performance of an operating system, we can:
Take measurements on the real system
▪ e.g. the Unix time command (see the timing sketch below)
Run a simulation model
Apply an analytical model
▪ e.g. queuing theory, exponential distributions, etc.
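For the first option, the time command reports real, user, and system time for an entire program. A program can also time a region of its own code; the sketch below uses the POSIX clock_gettime() for wall-clock time and clock() for CPU time, with a made-up workload function standing in for whatever code is being measured.

```c
#include <stdio.h>
#include <time.h>

/* Stand-in workload; in practice this is the code being measured. */
static void workload(void) {
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; i++) sum += i;
}

int main(void) {
    struct timespec t0, t1;
    clock_t c0, c1;

    clock_gettime(CLOCK_MONOTONIC, &t0);  /* wall-clock ("real") time */
    c0 = clock();                         /* CPU time consumed by this process */
    workload();
    c1 = clock();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double real = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double cpu  = (double)(c1 - c0) / CLOCKS_PER_SEC;
    printf("real %.3f s, cpu %.3f s\n", real, cpu);
    return 0;
}
```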
External performance goals:
Minimize user response time
Maximize throughput
▪ Number of jobs completed per unit time
Minimize turnaround time
▪ Average time to complete jobs
Maximize fairness
Maximize degree of multiprogramming
▪ Number of processes supported without degradation
Internal performance goals:
Maximize CPU utilization
Maximize disk utilization
Minimize disk access time
Enforce priorities
Minimize overhead
▪ e.g. time for the scheduling algorithm, context switching
Avoid starvation of long-running jobs
Enforce real-time deadlines (sometimes)
Some key performance measures:
Average number of jobs in the system
Average number of jobs waiting in a queue
Average time a job spends in the system
Average time a job spends in the queues
CPU utilization
Total number of jobs serviced (i.e. throughput)
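Several of these measures fall directly out of a simulation. Below is a minimal sketch of a single-CPU, single-queue simulation with exponential interarrival and service times; the rates (0.8 arrivals and 1.0 completions per unit time) and the job count are illustrative values, not parameters from the course.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define JOBS 100000

/* Exponential sample with rate lambda (inverse-transform method). */
double exponential(double lambda) {
    double u = rand() / ((double)RAND_MAX + 1.0);
    return -log(1.0 - u) / lambda;
}

int main(void) {
    srand((unsigned)time(NULL));
    double lambda = 0.8, mu = 1.0;   /* arrival rate < service rate => stable */

    double now = 0.0, server_free = 0.0, busy = 0.0;
    double total_wait = 0.0, total_in_system = 0.0;

    for (int i = 0; i < JOBS; i++) {
        now += exponential(lambda);                  /* arrival time of this job */
        double service = exponential(mu);
        double start   = now > server_free ? now : server_free;
        double finish  = start + service;

        total_wait      += start - now;              /* time spent in the queue */
        total_in_system += finish - now;             /* queueing + service time */
        busy            += service;                  /* time the CPU is in use */
        server_free      = finish;
    }

    printf("avg time in queue:  %f\n", total_wait / JOBS);
    printf("avg time in system: %f\n", total_in_system / JOBS);
    printf("CPU utilization:    %f\n", busy / server_free);
    printf("throughput:         %f jobs per unit time\n", JOBS / server_free);
    return 0;
}
```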
Processes are created by the operating system. Processes are initially added to a job queue, which contains all processes waiting to enter the system. From the job queue, processes that are ready for execution are added to the ready queue.
A long-term scheduler (i.e. job scheduler) selects processes from the job queue, adding those processes to the ready queue. A short-term scheduler (i.e. CPU scheduler) selects processes from the ready queue and allocates the CPU to them.
The long-term scheduler is invoked infrequently
The degree of multiprogramming of an operating system is defined as the number of processes in memory. In a stable operating system, the average process arrival rate equals the average process departure rate.
The short-term scheduler decides which process the CPU executes next. The dispatcher gives control of the CPU to the process selected by the CPU scheduler; it:
Performs a context switch
Switches to user mode
Jumps to the proper location in the user program to resume program execution
[Figure: the dispatcher operates here]
Processes alternate between CPU execution and I/O wait. A CPU burst is actual program execution that uses the CPU; an I/O burst is time spent in a blocked state waiting for I/O. Each process starts and ends with a CPU burst.
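In a simulator, one way to capture this pattern (an assumption made for illustration, not something prescribed by the slides) is an odd-length sequence of burst times in which even indices are CPU bursts and odd indices are I/O bursts:

```c
#include <stdio.h>

/* A process as an alternating sequence of bursts: indices 0, 2, 4, ... are
   CPU bursts and indices 1, 3, 5, ... are I/O bursts.  Because every process
   starts and ends with a CPU burst, the sequence has odd length. */
typedef struct {
    int num_bursts;
    double bursts[16];   /* illustrative fixed capacity */
} process_t;

double total_cpu_time(const process_t *p) {
    double sum = 0.0;
    for (int i = 0; i < p->num_bursts; i += 2)  /* even indices: CPU bursts */
        sum += p->bursts[i];
    return sum;
}

int main(void) {
    process_t p = { 5, { 4.0, 10.0, 3.0, 12.0, 5.0 } };  /* CPU, I/O, CPU, I/O, CPU */
    printf("total CPU time: %f\n", total_cpu_time(&p));
    return 0;
}
```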
[Figure: histogram of CPU burst time frequencies]
CPU scheduling requires an algorithm to determine which process to dispatch next. Scheduling algorithms include:
First-Come, First-Served (FCFS) (sketched below)
Shortest-Job-First (SJF)
Round-Robin (RR)
Priority
Multilevel Queue (MQ)
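As a minimal sketch of the first of these, FCFS: assuming (for brevity) that every process arrives at time 0, processes simply run in arrival order, and each one waits for the sum of the bursts ahead of it. The burst values are made up.

```c
#include <stdio.h>

#define N 4

int main(void) {
    /* CPU burst times listed in arrival order (illustrative values) */
    double burst[N] = { 6.0, 8.0, 7.0, 3.0 };
    double start = 0.0, total_wait = 0.0;

    for (int i = 0; i < N; i++) {          /* FCFS: run in arrival order */
        printf("P%d starts at %5.1f, runs for %4.1f\n", i, start, burst[i]);
        total_wait += start;               /* with arrival at 0, waiting time = start time */
        start += burst[i];                 /* next process starts when this one ends */
    }
    printf("average waiting time: %f\n", total_wait / N);
    return 0;
}
```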
Preemptive scheduling may take the CPU away from a running process before its time slice expires, or preempt the process because its time slice has expired. Non-preemptive scheduling gives a process exclusive, uninterrupted access to the CPU for the entirety of its execution.
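Round-Robin is the classic preemptive example: a process that has not finished when its time slice expires is preempted and re-queued. The sketch below assumes all processes arrive at time 0 and a time slice of 4 units (both assumptions); under that assumption, cycling through the array in index order behaves like a FIFO ready queue.

```c
#include <stdio.h>

#define N 3
#define SLICE 4.0   /* time slice (quantum); illustrative value */

int main(void) {
    double remaining[N] = { 10.0, 5.0, 8.0 };  /* remaining CPU time per process */
    double now = 0.0;
    int left = N;

    while (left > 0) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] <= 0.0) continue;           /* already finished */
            double run = remaining[i] < SLICE ? remaining[i] : SLICE;
            printf("t=%5.1f: P%d runs for %.1f\n", now, i, run);
            now += run;
            remaining[i] -= run;
            if (remaining[i] <= 0.0) {                   /* process completed */
                printf("t=%5.1f: P%d terminates\n", now, i);
                left--;
            }
            /* otherwise the process is preempted and waits for its next turn */
        }
    }
    return 0;
}
```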
Compare scheduling algorithms by measuring:
CPU utilization – keep the CPU as busy as possible
Throughput – maximize the number of processes that complete their execution per unit time
Turnaround time – minimize the elapsed time to fully execute a particular process
Waiting time – minimize the elapsed time a process waits in the ready queue
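Given per-process arrival, burst, and completion times from any of the algorithms above, the usual bookkeeping is turnaround = completion − arrival and waiting = turnaround − burst. The sample values below are made up (they correspond to a small FCFS run) and are there only to make the sketch runnable.

```c
#include <stdio.h>

#define N 3

int main(void) {
    /* Illustrative per-process data gathered from some scheduling run */
    double arrival[N]    = { 0.0, 1.0,  2.0 };
    double burst[N]      = { 5.0, 3.0,  8.0 };
    double completion[N] = { 5.0, 8.0, 16.0 };

    double sum_turnaround = 0.0, sum_waiting = 0.0;
    for (int i = 0; i < N; i++) {
        double turnaround = completion[i] - arrival[i];  /* time in the system */
        double waiting    = turnaround - burst[i];       /* time in the ready queue */
        sum_turnaround += turnaround;
        sum_waiting    += waiting;
    }
    printf("average turnaround time: %f\n", sum_turnaround / N);
    printf("average waiting time:    %f\n", sum_waiting / N);

    /* Here the last process listed also finishes last (FCFS), so its
       completion time is the total elapsed time. */
    printf("throughput: %f processes per unit time\n", N / completion[N - 1]);
    return 0;
}
```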