Performance Evaluation
Operating Systems, Fall 2002
Performance evaluation
- There are several approaches to implementing the same OS functionality: different scheduling algorithms, different memory management schemes
- Performance evaluation deals with how to compare how good the different approaches are
- It covers metrics, and methods for evaluating those metrics
Performance Metrics
- Is something wrong with the following statement: "The complexity of my OS is O(n)"?
- The statement is inherently flawed. The reason: an OS is a reactive program; it runs indefinitely, responding to a stream of requests, rather than computing one answer and terminating
- Compare: what is the performance metric for sorting algorithms? (Running time as a function of input size, which only makes sense for a program that terminates)
Performance metrics
- Response time
- Throughput
- Utilization
- Other metrics: Mean Time Between Failures (MTBF), supportable load
Response time
- The time interval between a user's request and the system's response
- Related terms: response time, reaction time, turnaround time, etc.
- Goodness criterion: being fast is good
  - For the user: less waiting
  - For the system: it is free sooner to do other things
Throughput
- Number of jobs done per time unit: applications run, files transferred, etc.
- Throughput and response time are interdependent
- Good response time usually comes at the expense of reduced throughput
Throughput vs. Response Time
- 3 jobs with service times T1, T2 = 2*T1, T3 = 3*T1
- What is wrong with this picture? (original scheduling figure not reproduced here)
Throughput vs. Response Time
- The correct picture accounts for the context-switch overhead between jobs (original figure not reproduced here)
- A small worked example of the trade-off follows below
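The following worked example is not taken from the original slides (whose figures are missing); it assumes all three jobs arrive together at time 0 and that each context switch costs some fixed overhead c. It only illustrates the same trade-off the figure showed.

% Run-to-completion schedules for jobs with service times T1, 2*T1, 3*T1,
% all arriving at t = 0 (R-bar denotes the mean response time):
\begin{align*}
\text{Shortest first:}\quad & \text{completions at } T_1,\ 3T_1,\ 6T_1
  \;\Rightarrow\; \bar{R} = \tfrac{T_1 + 3T_1 + 6T_1}{3} = \tfrac{10}{3}\,T_1 \\
\text{Longest first:}\quad & \text{completions at } 3T_1,\ 5T_1,\ 6T_1
  \;\Rightarrow\; \bar{R} = \tfrac{3T_1 + 5T_1 + 6T_1}{3} = \tfrac{14}{3}\,T_1
\end{align*}
% Both orders finish all work at 6*T1, so throughput (3 jobs per 6*T1) is
% identical; only the mean response time differs. Time-slicing with k context
% switches of cost c pushes the last completion to 6*T1 + k*c, i.e. better
% interactive response is bought with lower throughput.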
Utilization
- Percentage of time the system is busy servicing clients
- Important for expensive shared systems
- Less important (if at all) for single-user systems and for real-time systems
- Utilization and response time are interrelated: at very high utilization, response time grows steeply, without bound
Performance evaluation methods
- Mathematical analysis: based on a rigorous mathematical model
- Simulation: simulate the system's operation (usually only small parts thereof)
- Measurement: implement the system in full and measure its performance directly
Analysis: Pros and Cons
+ Provides the best insight into the effects of different parameters and their interaction
  (e.g., is it better to configure the system with one fast disk or with two slow disks? A sketch of such an analysis follows below)
+ Can be done before the system is built, and takes a short time
- Rarely accurate: it depends on a host of simplifying assumptions
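As a hedged illustration of the one-fast-disk vs. two-slow-disks question (not from the original slides): assuming Poisson arrivals at rate lambda and exponentially distributed service times, and using the M/M/1 result derived later in the deck together with the standard M/M/2 formula, the mean response times compare as follows.

% One disk of rate 2*mu (M/M/1) vs. two disks of rate mu each (M/M/2),
% same arrival rate lambda, utilization rho = lambda / (2*mu) < 1.
\begin{align*}
T_{\text{one fast}} &= \frac{1}{2\mu - \lambda} = \frac{1}{2\mu}\cdot\frac{1}{1-\rho} \\
T_{\text{two slow}} &= \frac{1}{\mu}\cdot\frac{1}{1-\rho^{2}} \\
\frac{T_{\text{two slow}}}{T_{\text{one fast}}} &= \frac{2}{1+\rho} \;\ge\; 1
\end{align*}
% Under these assumptions the single fast disk always has a lower mean response
% time; its advantage shrinks from 2x at low load toward 1x as rho approaches 1.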
Simulation: Pros and Cons
+ Flexibility: full control over the simulation model, its parameters, and the level of detail
  (e.g., model a disk by its average seek time, or by the acceleration and stabilization of the head)
+ Can be done before the system is built
- Simulation of a full system is infeasible
- Simulation of parts of the system does not take everything into account
Measurements: Pros and Cons
+ The most convincing
- Effects of varying parameter values are hard (if not impossible) to isolate; they are often confused with random changes in the environment
- High cost: the system must be implemented in full, and hardware must be bought
The bottom line
- Simulation is the most widely used technique
- Use a combination of techniques: never trust results produced by a single method; validate them with another one
- E.g., simulation + analysis, simulation + measurements, etc.
Workload
- The workload is the sequence of things to do
  - Sequence of jobs submitted to the system: arrival times, resources needed
  - File system: sequence of I/O operations, number of bytes to access
- The workload is the input to the reactive system
- The system's performance depends on the workload
Workload analysis
- Workload modeling: use past measurements to create a model, e.g., fit them to a distribution
  - Usable for analysis, simulation, and measurement
- Recorded workload: use a past workload directly to drive the evaluation
  - Usable for simulation and measurement
Statistical characterization
- Every workload item is sampled at random from a distribution
- The workload is characterized by that distribution: e.g., take all observed job times and fit them to a distribution (a fitting sketch follows below)
- Typically there are a lot of low values and a few high values
- There may be enough high values to make a difference
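A minimal sketch of such workload modeling, assuming measured job times are available in a file named job_times.txt and that an exponential model is tried first (both the file name and the model choice are illustrative assumptions, not from the slides); it uses NumPy and SciPy.

# Sketch: fit measured job times to an exponential distribution and
# resample from the fitted model to drive a simulation.
import numpy as np
from scipy import stats

# Past measurements: one job service time (seconds) per line.
job_times = np.loadtxt("job_times.txt")

# Fit an exponential distribution (location fixed at 0, so the only
# free parameter is the scale, i.e. the mean service time).
loc, scale = stats.expon.fit(job_times, floc=0)
print(f"fitted mean service time: {scale:.3f} s")

# Check the fit: many low values and a few high values are expected,
# but the test may reject the exponential model if the real tail is
# fatter (see the fat-tailed distributions on the following slides).
res = stats.kstest(job_times, "expon", args=(loc, scale))
print(f"KS statistic: {res.statistic:.3f}, p-value: {res.pvalue:.3f}")

# Drive a simulation with synthetic jobs drawn from the fitted model.
synthetic_jobs = stats.expon.rvs(loc=loc, scale=scale, size=10_000)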
Exponential (Poisson) Distribution
- Memoryless: regardless of how long you have already waited, the additional time you can expect to wait stays the same (the distribution's mean)
- A compact statement of this property is given below
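A standard statement of memorylessness for an exponential waiting time X with rate lambda (the notation is the usual one; the formula on the original slide was not preserved):

% Exponential distribution with rate lambda: P(X > t) = e^{-lambda t}.
\begin{align*}
P(X > s + t \mid X > s)
  &= \frac{P(X > s + t)}{P(X > s)}
   = \frac{e^{-\lambda (s+t)}}{e^{-\lambda s}}
   = e^{-\lambda t} = P(X > t) \\
E[X - s \mid X > s] &= \frac{1}{\lambda}
  \qquad\text{(the expected additional wait does not depend on } s\text{)}
\end{align*}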
Fat-tailed distribution
- Real-life workloads frequently do not fit the exponential distribution
- Fat-tailed distributions: the tail probability decays polynomially, roughly Pr[X > x] ~ x^(-a), rather than exponentially (the formula on the original slide was not preserved)
Pareto Distribution
- The mean is unbounded (for small enough values of the shape parameter)
- The more you have waited, the more additional time you should expect to wait
- Standard formulas for both properties are given below
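Standard Pareto formulas making both bullets concrete (the shape parameter alpha and minimum value x_m are the usual notation, not taken from the slide):

% Pareto distribution: P(X > x) = (x_m / x)^alpha for x >= x_m, shape alpha > 0.
\begin{align*}
E[X] &= \frac{\alpha\, x_m}{\alpha - 1} \quad (\alpha > 1),
  \qquad E[X] = \infty \ \text{ for } \alpha \le 1 \\
E[X - s \mid X > s] &= \frac{s}{\alpha - 1} \quad (\alpha > 1,\ s \ge x_m)
  \qquad\text{grows linearly with the time } s \text{ already waited}
\end{align*}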
Exponential vs. Pareto
- The mean additional time to wait is determined by the shape of the tail: the fatter the tail, the more additional time to wait
- Exponential: the tail keeps the same shape regardless of how long we have already waited, so the mean additional wait stays the same
- Pareto: the longer we wait, the fatter the remaining tail becomes, so the longer we wait, the more additional time we should expect to wait
Exp. vs. Pareto: Focus on tail
(Plot comparing the tails of the two distributions; figure not reproduced here)
Queuing Systems
- A computing system can be viewed as a network of queues and servers
(Diagram of a network with a CPU, Disk A, and Disk B, each fed by a queue; new jobs arrive and finished jobs depart; figure not reproduced here)
The role of randomness
- Arrivals (and departures) are random processes
- Deviations from the average are possible; the deviation probabilities depend on the inter-arrival time distribution
- Randomness is what makes you wait in queue:
  - Suppose each job takes exactly 100 ms to complete
  - If jobs arrive exactly every 100 ms, utilization is 100% and nobody waits
  - But what if both values hold only on average? (a small simulation sketch follows below)
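A minimal simulation sketch of this point, assuming exponentially distributed inter-arrival and service times with the same 100 ms means (the choice of the exponential distribution and the job count are illustrative assumptions); waiting times are tracked with the standard Lindley recursion W_{n+1} = max(0, W_n + S_n - A_n).

# Sketch: why randomness makes you wait, even at the same average rates.
import numpy as np

rng = np.random.default_rng(0)
n_jobs = 100_000
mean_ms = 100.0  # both the mean service time and the mean inter-arrival gap

def mean_wait(service, interarrival):
    """Mean waiting time in a single-server FIFO queue (Lindley recursion)."""
    wait = 0.0
    total = 0.0
    for s, a in zip(service, interarrival):
        total += wait                       # wait experienced by this job
        wait = max(0.0, wait + s - a)       # wait of the next arriving job
    return total / len(service)

# Deterministic case: every value is exactly 100 ms -> nobody ever waits.
det = np.full(n_jobs, mean_ms)
print("deterministic mean wait:", mean_wait(det, det), "ms")

# Random case: same averages, exponentially distributed -> utilization is
# still 100% on average, but waits build up (and keep growing with the
# simulation length, since utilization 1 with randomness is not stable).
rand_service = rng.exponential(mean_ms, n_jobs)
rand_gaps = rng.exponential(mean_ms, n_jobs)
print("random mean wait:", round(mean_wait(rand_service, rand_gaps), 1), "ms")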
Queuing analysis
(Diagram: arriving jobs enter a queue, are served by a single server, and depart; figure not reproduced here)
Little's Law
- The average number of jobs in the system equals the arrival rate times the average time a job spends in the system (stated formally below)
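The standard statement and a small usage example (the formula on the original slide was not preserved; N, lambda, and T are the usual notation):

% Little's Law: for any stable system, over the long run,
% average number in system = arrival rate * average time in system.
\begin{equation*}
E[N] \;=\; \lambda \cdot E[T]
\end{equation*}
% Example: if jobs arrive at lambda = 10 jobs/s and each spends E[T] = 0.5 s
% in the system (queueing + service), then on average E[N] = 5 jobs are present.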
How does response time depend on utilization?
- Write the average number of jobs in the system as a function of the arrival and service rates (queuing analysis)
- Substitute it into Little's Law to obtain the average response time
M/M/1 queue analysis
(State-transition diagram over states 0, 1, 2, 3, ...; figure not reproduced here. A summary of the standard derivation follows below.)
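The derivation slides themselves were not preserved in this capture; the following is the standard M/M/1 derivation, with lambda the arrival rate, mu the service rate, and rho = lambda/mu the utilization:

% M/M/1: birth-death chain over states 0, 1, 2, 3, ... (number of jobs in system).
\begin{align*}
\text{Balance: } \lambda\, p_k &= \mu\, p_{k+1}
  \;\Rightarrow\; p_k = (1-\rho)\,\rho^{k}, \qquad \rho = \lambda/\mu < 1 \\
E[N] &= \sum_{k \ge 0} k\, p_k = \frac{\rho}{1-\rho} \\
E[T] &= \frac{E[N]}{\lambda} = \frac{1}{\mu - \lambda}
     = \frac{1/\mu}{1-\rho}
  \qquad\text{(by Little's Law)}
\end{align*}
% As utilization rho approaches 1, the mean response time grows without bound,
% which is the shape of the response-time-vs-utilization plot that follows.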
Response time as a function of utilization
(Plot: the mean response time rises steeply as utilization approaches 1; figure not reproduced here)
Summary
- What are the three main performance evaluation metrics?
- What are the three main performance evaluation techniques?
- What is the most important thing for performance evaluation?
- Which workload models do you know?
- What makes you wait in queue?
- How does response time depend on utilization?
To read more
- Notes
- Stallings, Appendix A
- Raj Jain, The Art of Computer Systems Performance Analysis
Next: Processes