1 An Engineering Approach to Performance
- Define Performance Metrics
- Measure Performance
- Analyze Results
- Develop Cost x Performance Alternatives:
  - prototype
  - modeling
- Assess Alternatives
- Implement Best Alternative
2 Business in the Internet Age (Business Week, June 22, 1998; numbers in $US billion)
3 Caution Signs Along the Road (Gross & Sager, Business Week, June 22, 1998, p. 166)
There will be jolts and delays along the way for electronic commerce: congestion is the most obvious challenge.
4 Electronic Commerce: online sales are soaring
“… IT and electronic commerce can be expected to drive economic growth for many years to come.” (The Emerging Digital Economy, US Dept. of Commerce, 1998)
5 What are people saying about Web performance…
- “Tripod’s Web site is our business. If it’s not fast and reliable, there goes our business.” Don Zereski, Tripod’s vice-president of Technology (Internet World)
- “Computer shuts down Amazon.com book sales. The site went down at about 10 a.m. and stayed out of service until 10 p.m.” The Seattle Times, 01/08/98
6 What are people saying about Web performance…
- “Sites have been concentrating on the right content. Now, more of them -- especially e-commerce sites -- realize that performance is crucial in attracting and retaining online customers.” Gene Shklar, Keynote, The New York Times, 8/8/98
7 What are people saying about Web performance…
- “Capacity is King.” Mike Krupit, Vice President of Technology, CDnow, 06/01/98
- “Being able to manage hit storms on commerce sites requires more than just buying more plumbing.” Harry Fenik, vice president of technology, Zona Research, LANTimes, 6/22/98
8 Introduction
9 Objectives
Performance Analysis = Analysis + Computer Systems
Performance Analyst = Mathematician + Computer Systems Person
10 You Will Learn
- Specifying performance requirements
- Evaluating design alternatives
- Comparing two or more systems
- Determining the optimal value of a parameter (system tuning)
- Finding the performance bottleneck (bottleneck identification)
11 You Will Learn (cont’d)
- Characterizing the load on the system (workload characterization)
- Determining the number and size of components (capacity planning)
- Predicting the performance at future loads (forecasting)
12 Performance Analysis Objectives Involve:
- Procurement
- Improvement
- Capacity Planning
- Design
13 Performance Improvement Procedure
[Flowchart with nodes START, Understand System, Analyze Operations, Formulate Improvement Hypothesis, Test Specific Hypothesis, Analyze Cost-Effectiveness, Implement Modification, Test Effectiveness of Modification, STOP; branch outcomes are Satisfactory, None, Invalid, and Unsatisfactory]
14 Basic Terms
- System: any collection of hardware, software, and firmware
- Metrics: the criteria used to evaluate the performance of the system components
- Workloads: the requests made by the users of the system
15 Examples of Performance Indexes: External Indexes
- Turnaround Time
- Response Time
- Throughput
- Capacity
- Availability
- Reliability
16 Examples of Performance Indexes: Internal Indexes
- CPU Utilization
- Overlap of Activities
- Multiprogramming Stretch Factor
- Multiprogramming Level
- Paging Rate
- Reaction Time
17 Example I
What performance metrics should be used to compare the performance of the following systems?
1. Two disk drives?
2. Two transaction-processing systems?
3. Two packet-retransmission algorithms?
18 Example II
Which type of monitor (software or hardware) would be more suitable for measuring each of the following quantities?
1. Number of instructions executed by a processor?
2. Degree of multiprogramming on a timesharing system?
3. Response time of packets on a network?
19 Example III
The number of packets lost on two links was measured for four file sizes, as shown below:

File Size   Link A   Link B
1000        5        10
1200        7        3
1300        3        0
50          0        1

Which link is better?
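The point of the exercise is that the answer depends on the chosen criterion. Here is a short sketch using the data from the table above; the two criteria compared are my illustration, not part of the original exercise:

```python
# Packets lost per file size, from the table above.
sizes = [1000, 1200, 1300, 50]
lost_a = [5, 7, 3, 0]   # Link A
lost_b = [10, 3, 0, 1]  # Link B

# Criterion 1: total packets lost -> A loses 15, B loses 14 (B looks better).
print("totals:", sum(lost_a), sum(lost_b))

# Criterion 2: loss rate, treating file size as the number of packets sent
# (an assumption about the units) -> the mean loss rate favors Link A,
# mostly because B loses 1 of only 50 packets on the smallest file.
rate_a = [l / s for l, s in zip(lost_a, sizes)]
rate_b = [l / s for l, s in zip(lost_b, sizes)]
print("mean loss rates:", sum(rate_a) / 4, sum(rate_b) / 4)
```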
20 Example IV
In order to compare the performance of two cache replacement algorithms:
1. What type of simulation model should be used?
2. How long should the simulation be run?
3. What can be done to get the same accuracy with a shorter run?
4. How can one decide if the random-number generator in the simulation is a good generator?
21 Example V
The average response time of a database system is three seconds. During a one-minute observation interval, the idle time on the system was ten seconds. Using a queueing model for the system, determine the following:
22 Example V (cont’d)
1. System utilization
2. Average service time per query
3. Number of queries completed during the observation interval
4. Average number of jobs in the system
5. Probability of the number of jobs in the system being greater than 1
6. 90-percentile response time
7. 90-percentile waiting time
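A minimal worked sketch of all seven quantities, assuming the system is modeled as an M/M/1 queue and using the usual percentile approximations; the M/M/1 assumption is the standard one for this kind of exercise but is not stated on the slide:

```python
import math

T = 60.0     # observation interval (s)
idle = 10.0  # measured idle time (s)
R = 3.0      # mean response time (s)

rho = (T - idle) / T      # 1. utilization = busy time / total time ~ 0.833
S = R * (1 - rho)         # 2. mean service time: R = S / (1 - rho), so S = 0.5 s
C = (T - idle) / S        # 3. queries completed = busy time / service time = 100
X = C / T                 # throughput ~ 1.67 queries/s
N = X * R                 # 4. mean jobs in system (Little's law) = 5
p_gt_1 = rho ** 2         # 5. P(n > 1) = rho^2 ~ 0.69 for M/M/1
r90 = R * math.log(10)          # 6. 90-percentile response time ~ 6.9 s
w90 = R * math.log(10 * rho)    # 7. 90-percentile waiting time ~ 6.4 s

print(rho, S, C, N, p_gt_1, r90, w90)
```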
23 Sample Exercise 4.1
The measured throughput in queries/sec for two database systems on two different workloads is as follows:

             System A   System B
Workload 1   30         10
Workload 2   10         30

Compare the performance of the two systems and show that:
a. System A is better
b. System B is better
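This is the classic “ratio game”: the raw averages are equal (20 queries/sec each), so the winner is decided by which system the results are normalized to before averaging. A sketch, assuming that is the intended trick:

```python
a = [30, 10]  # System A throughput on workloads 1 and 2
b = [10, 30]  # System B throughput on workloads 1 and 2

def avg(xs):
    return sum(xs) / len(xs)

# Normalize to System A: A becomes (1, 1); B becomes (10/30, 30/10).
# B's average ratio is ~1.67 > 1, so "System B is better".
print(avg([bi / ai for ai, bi in zip(a, b)]))

# Normalize to System B: A becomes (30/10, 10/30).
# A's average ratio is ~1.67 > 1, so "System A is better".
print(avg([ai / bi for ai, bi in zip(a, b)]))
```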
24 The Bank Problem
The board of directors requires answers to several questions that are posed to the IS facility manager.
Environment:
- 3 fully automated branch offices, all located in the same city
- ATMs available at 10 locations throughout the city
- 2 mainframes (one used for on-line processing, the other for batch)
25 The Bank Problem (cont’d)
- 24 tellers in the 3 branch offices
- Each teller serves an average of 20 customers/hr during peak times; otherwise, 12 customers/hr
- Each customer generates 2 on-line transactions on average
- Thus, during peak times, the IS facility receives an average of 960 (24 x 20 x 2) teller-originated transactions/hr
26 The Bank Problem (cont’d)
- ATMs are active 24 hrs/day. During peak volume, each ATM serves an average of 15 customers/hr
- Each customer generates an average of 1.2 transactions
- Thus, during the peak period for ATMs, the IS facility receives an average of 180 (10 x 15 x 1.2) ATM transactions/hr; otherwise, the ATM transaction rate is about 50% of this
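A quick sketch of the peak arrival-rate arithmetic above; the conversion to transactions per second (useful later as input to a queueing model) is my addition, not from the slides:

```python
# Teller-originated load at peak
tellers, cust_per_teller_hr, txn_per_cust = 24, 20, 2
teller_tph = tellers * cust_per_teller_hr * txn_per_cust  # 960 transactions/hr

# ATM-originated load at peak
atms, cust_per_atm_hr, txn_per_atm_cust = 10, 15, 1.2
atm_tph = atms * cust_per_atm_hr * txn_per_atm_cust       # 180 transactions/hr

total_tph = teller_tph + atm_tph                          # 1140 transactions/hr
print(total_tph, total_tph / 3600)                        # ~0.32 transactions/sec
```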
27 The Bank Problem (cont’d)
Measured average response times:
- Tellers: 1.23 seconds during peak hrs; 3 seconds is acceptable
- ATMs: 1.02 seconds during peak hrs; 4 seconds is acceptable
28 The Bank Problem (cont’d)
The board of directors requires answers to several questions:
- Will the current central IS facility allow for the expected growth of the bank while maintaining the average response time figures at the tellers and at the ATMs within the stated acceptable limits?
- If not, when will it be necessary to upgrade the current IS environment?
- Of several possible upgrades (e.g., adding more disks, adding memory, replacing the central processor boards), which represents the best cost-performance trade-off?
- Should the data processing facility of the bank remain centralized, or should the bank consider a distributed processing alternative?
29 Response time vs. load for the car rental C/S system example
[Figure: response time (sec, 0.0-6.0) vs. load (tps) at the current load and at current + 5%, + 10%, and + 15%, for four transaction types: Local Reservation, Road Assistance, Car Pickup, and Phone Reservation]
30 Capacity Planning Concept
Capacity planning is the determination of the predicted time in the future when system saturation is going to occur, and of the most cost-effective way of delaying system saturation as long as possible.
31 Questions
- Why is saturation occurring?
- In which parts of the system (CPU, disks, memory, queues) will a transaction or job be spending most of its execution time at the saturation points?
- Which are the most cost-effective alternatives for avoiding (or, at least, delaying) saturation?
32 Capacity planning situation
[Figure: response time (sec, 0.0-7.0) vs. arrival rate (tph, 1000-4000) for the Original configuration and a Fast Disk alternative]
33 Capacity Planning Input/Output Variables
- Inputs to capacity planning: workload evolution, system parameters, desired service levels
- Outputs: saturation points, cost-effective alternatives
34 Common Mistakes in Capacity Planning
1. No Goals
2. Biased Goals
3. Unsystematic Approach
4. Analysis Without Understanding the Problem
5. Incorrect Performance Metrics
6. Unrepresentative Workload
7. Wrong Evaluation Technique
35 Common Mistakes in Capacity Planning (cont’d)
8. Overlooking Important Parameters
9. Ignoring Significant Factors
10. Inappropriate Experimental Design
11. Inappropriate Level of Detail
12. No Analysis
13. Erroneous Analysis
14. No Sensitivity Analysis
36 Common Mistakes in Capacity Planning (cont’d)
15. Ignoring Errors in Input
16. Improper Treatment of Outliers
17. Assuming No Change in the Future
18. Ignoring Variability
19. Too Complex Analysis
20. Improper Presentation of Results
21. Ignoring Social Aspects
22. Omitting Assumptions and Limitations
37 Capacity Planning Steps
1. Instrument the system
2. Monitor system usage
3. Characterize the workload
4. Predict performance under different alternatives
5. Select the lowest-cost, highest-performance alternative
38 CPU Utilization: Example

Month   Workload A   Workload B   Workload C   Total CPU Utilization
1       20           15           10           45
2       21           16           12           49
3       25           18           15           58
4       30           19           16           65
5       32           21           17           70
6       33           22           18           73
39 Future Predicted CPU Utilization

Month   Workload A   Workload B   Workload C   Total CPU Utilization
7       37.1         23.6         20.3         81.0
8       40.0         25.0         21.9         86.9
9       43.0         26.5         23.5         93.0
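The predicted values above are consistent with fitting a least-squares linear trend to each workload's first six months and extrapolating to months 7-9. A sketch of that calculation (the use of numpy.polyfit is my choice of tool, not something stated on the slides):

```python
import numpy as np

months = np.arange(1, 7)
history = {
    "A": [20, 21, 25, 30, 32, 33],
    "B": [15, 16, 18, 19, 21, 22],
    "C": [10, 12, 15, 16, 17, 18],
}

future = np.arange(7, 10)
total = np.zeros(3)
for name, util in history.items():
    slope, intercept = np.polyfit(months, util, deg=1)  # least-squares line
    pred = slope * future + intercept
    total += pred
    print(name, np.round(pred, 1))  # matches the slide's predictions up to rounding
print("Total", np.round(total, 1))  # ~[81.0, 87.0, 93.0]
```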
40 Systematic Approach to Performance Evaluation
1. State Goals and Define the System
2. List Services and Outcomes
3. Select Metrics
4. List Parameters
5. Select Factors to Study
6. Select Evaluation Technique
7. Select Workload
8. Design Experiments
9. Analyze and Interpret Data
10. Present Results
11. Repeat
41 Case Study: Remote Pipes vs. RPC
System definition: Client ↔ Network ↔ Server
42 Case Study (cont’d)
Services: small data transfer or large data transfer
Metrics:
- No errors and failures; correct operation only
- Rate, time, and resource per service
- Resource = client, server, or network
43 Case Study (cont’d)
This leads to:
1. Elapsed time per call
2. Maximum call rate per unit of time, or equivalently, the time required to complete a block of n successive calls
3. Local CPU time per call
4. Remote CPU time per call
5. Number of bytes sent on the link per call
44 Case Study (cont’d)
Parameters:
System parameters:
1. Speed of the local CPU
2. Speed of the remote CPU
3. Speed of the network
4. Operating system overhead for interfacing with the channels
5. Operating system overhead for interfacing with the networks
6. Reliability of the network, affecting the number of retransmissions required
45 Case Study (cont’d)
Parameters (cont’d):
Workload parameters:
1. Time between successive calls
2. Number and size of the call parameters
3. Number and size of the results
4. Type of channel
5. Other loads on the local and remote CPUs
6. Other loads on the network
46 Case Study (cont’d)
Factors:
1. Type of channel: remote pipes and remote procedure calls
2. Size of the network: short distance and long distance
3. Size of the call parameters: small and large
4. Number n of consecutive calls: 1, 2, 4, 8, 16, 32, …, 512, and 1024
47 Case Study (cont’d)
Notes:
- Fixed: type of CPUs and operating systems
- Retransmissions due to network errors are ignored
- Measurements are taken with no other load on the hosts and the network
48 Case Study (cont’d)
Evaluation technique:
- Prototypes implemented
- Measurements
- Analytical modeling used for validation
Workload:
- A synthetic program generating the specified types of channel requests
- Null channel requests, used to determine the resources consumed in monitoring and logging
49 Case Study (cont’d)
Experimental design: a full factorial experimental design with 2³ x 11 = 88 experiments will be used (see the sketch below)
Data analysis:
- Analysis of Variance (ANOVA) for the first three factors
- Regression for the number n of successive calls
Data presentation: the final results will be plotted as a function of the block size n
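A small sketch of how the 2³ x 11 = 88 experiment matrix can be enumerated, with factor levels taken from slide 46 (the short level names are mine):

```python
from itertools import product

channels = ["remote pipe", "rpc"]      # type of channel
distances = ["short", "long"]          # size of the network
param_sizes = ["small", "large"]       # size of the call parameters
n_calls = [2 ** i for i in range(11)]  # 1, 2, 4, ..., 1024 (11 levels)

design = list(product(channels, distances, param_sizes, n_calls))
print(len(design))  # 2 * 2 * 2 * 11 = 88 experiments
```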
50 Exercises
1. From the published literature, select an article or report that presents results of a performance evaluation study. Make a list of the good and bad points of the study. What would you do differently if you were asked to repeat the study?
51 Exercises (cont’d)
2. Choose a system for a performance study. Briefly describe the system and list:
a. Services
b. Performance metrics
c. System parameters
d. Workload parameters
e. Factors and their ranges
f. Evaluation technique
g. Workload
Justify your choices.
Suggestion: Each student should select a different system, such as a network, database, processor, and so on, and then present the solution to the class.
52 Selecting an Evaluation Technique

Criterion                 Analytical Modeling   Simulation           Measurement
1. Stage                  Any                   Any                  Post-prototype
2. Time Required          Small                 Medium               Varies
3. Tools                  Analysts              Computer languages   Instrumentation
4. Accuracy*              Low                   Moderate             Varies
5. Trade-off Evaluation   Easy                  Moderate             Difficult
6. Cost                   Small                 Medium               High
7. Saleability            Low                   Medium               High

* In all cases, results may be misleading or wrong