
1 An Engineering Approach to Performance
Define Performance Metrics
Measure Performance
Analyze Results
Develop Cost × Performance Alternatives:
–prototype
–modeling
Assess Alternatives
Implement Best Alternative

2 Business in the Internet Age (Business Week, June 22, 1998; numbers in $US billion)

3 Caution Signs Along the Road (Gross & Sager, Business Week, June 22, 1998, p. 166.) There will be jolts and delays along the way for electronic commerce: congestion is the most obvious challenge.

4 Electronic Commerce: online sales are soaring “… IT and electronic commerce can be expected to drive economic growth for many years to come.” The Emerging Digital Economy, US Dept. of Commerce, 1998.

5 What are people saying about Web performance… “Tripod’s Web site is our business. If it’s not fast and reliable, there goes our business.”, Don Zereski, Tripod’s vice-president of Technology (Internet World) “Computer shuts down Amazon.com book sales. The site went down at about 10 a.m. and stayed out of service until 10 p.m.” The Seattle Times, 01/08/98

6 What are people saying about Web performance… “Sites have been concentrating on the right content. Now, more of them -- especially e-commerce sites -- realize that performance is crucial in attracting and retaining online customers.” Gene Shklar, Keynote, The New York Times, 8/8/98

7 What are people saying about Web performance… “Capacity is King.” Mike Krupit, Vice President of Technology, CDnow, 06/01/98 “Being able to manage hit storms on commerce sites requires more than just buying more plumbing.” Harry Fenik, vice president of technology, Zona Research, LANTimes, 6/22/98

8 Introduction

9 Objectives
Performance Analysis = Analysis + Computer System
Performance Analyst = Mathematician + Computer Systems Person

10 You Will Learn
Specifying performance requirements
Evaluating design alternatives
Comparing two or more systems
Determining the optimal value of a parameter (system tuning)
Finding the performance bottleneck (bottleneck identification)

11 You Will Learn (cont’d)
Characterizing the load on the system (workload characterization)
Determining the number and size of components (capacity planning)
Predicting the performance at future loads (forecasting)

12 Performance Analysis Objectives
Involve: procurement, improvement, capacity planning, and design

13 Performance Improvement Procedure
START → Understand System → Analyze Operations → Formulate Improvement Hypothesis → Test Specific Hypothesis → Analyze Cost Effectiveness → Implement Modification → Test Effectiveness of Modification → STOP
(Loop back when a hypothesis proves invalid, when no cost-effective modification is found, or when the modification is unsatisfactory; stop when the result is satisfactory.)

14 Basic Terms
System: any collection of hardware, software, and firmware
Metrics: the criteria used to evaluate the performance of the system components
Workloads: the requests made by the users of the system

15 Examples of Performance Indexes
External indexes:
Turnaround time
Response time
Throughput
Capacity
Availability
Reliability

16 Examples of Performance Indexes
Internal indexes:
CPU utilization
Overlap of activities
Multiprogramming stretch factor
Multiprogramming level
Paging rate
Reaction time

17 Example I
What performance metrics should be used to compare the performance of the following systems?
1. Two disk drives?
2. Two transaction-processing systems?
3. Two packet-retransmission algorithms?

18 Example II
Which type of monitor (software or hardware) would be more suitable for measuring each of the following quantities?
1. Number of instructions executed by a processor?
2. Degree of multiprogramming on a timesharing system?
3. Response time of packets on a network?

19 Example III
The number of packets lost on two links was measured for four file sizes, as shown below:
[table: packets lost on Link A and Link B for each of the four file sizes; the values did not survive in this transcript]
Which link is better?

20 Example IV
In order to compare the performance of two cache replacement algorithms:
1. What type of simulation model should be used?
2. How long should the simulation be run?
3. What can be done to get the same accuracy with a shorter run?
4. How can one decide if the random-number generator in the simulation is a good generator?

21 Example V
The average response time of a database system is three seconds. During a one-minute observation interval, the idle time on the system was ten seconds. Using a queueing model for the system, determine the following:

22 Example V (cont’d)
1. System utilization
2. Average service time per query
3. Number of queries completed during the observation interval
4. Average number of jobs in the system
5. Probability of the number of jobs in the system being greater than 10
6. 90-percentile response time
7. 90-percentile waiting time
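These quantities follow from the measured numbers once a queueing model is fixed. The slide only says “a queueing model”, so the M/M/1 formulas below are an assumption; a sketch in Python:

```python
import math

# Measured quantities from the slide
interval = 60.0   # observation interval (s)
idle = 10.0       # idle time during the interval (s)
R = 3.0           # average response time (s)

rho = (interval - idle) / interval   # 1. utilization = busy time / total time
S = R * (1 - rho)                    # 2. M/M/1: R = S/(1 - rho)  ->  S = 0.5 s
completed = (interval - idle) / S    # 3. queries completed = busy time / service time
N = rho / (1 - rho)                  # 4. mean number of jobs in the system (M/M/1)
p_gt_10 = rho ** 11                  # 5. P(n > 10) = rho^(10+1) for M/M/1
r90 = R * math.log(10)               # 6. 90-percentile response time = R ln 10
w90 = R * math.log(10 * rho)         # 7. 90-percentile waiting time = R ln(10 rho)

print(f"rho={rho:.3f} S={S:.2f}s completed={completed:.0f} "
      f"N={N:.0f} P(n>10)={p_gt_10:.3f} r90={r90:.2f}s w90={w90:.2f}s")
```

With these inputs the utilization is 5/6, the service time works out to 0.5 s, and 100 queries complete in the interval; the percentile formulas rely on the exponentially distributed response and waiting times of M/M/1.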

23 Sample Exercise 4.1
The measured throughput in queries/sec for two database systems on two different workloads is as follows:
[table: throughput of System A and System B under Workload 1 and Workload 2; the values did not survive in this transcript]
Compare the performance of the two systems and show that:
a. System A is better
b. System B is better
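The point of the exercise is the “ratio game”: when performance is summarized as an average of normalized ratios, the choice of base system can decide the winner. Since the slide’s table did not survive, the throughputs below are hypothetical values chosen to make the effect symmetric:

```python
# Hypothetical throughputs (queries/sec) on workloads 1 and 2;
# the original table's values are not preserved in this transcript.
throughput = {"A": [10.0, 20.0], "B": [20.0, 10.0]}

def avg_ratio(sys, base):
    """Average of per-workload throughput ratios sys/base."""
    ratios = [s / b for s, b in zip(throughput[sys], throughput[base])]
    return sum(ratios) / len(ratios)

# Normalizing to B makes A look 25% faster; normalizing to A does the reverse.
print(avg_ratio("A", "B"))  # (0.5 + 2.0)/2 = 1.25
print(avg_ratio("B", "A"))  # (2.0 + 0.5)/2 = 1.25
```

Each system appears “25% better” when the other is taken as the base, so either conclusion (a) or (b) can be produced from the same data.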

24 The Bank Problem
The board of directors requires answers to several questions that are posed to the IS facility manager.
Environment:
3 fully automated branch offices, all located in the same city
ATMs are available at 10 locations throughout the city
2 mainframes (one used for on-line processing; the other for batch)

25 The Bank Problem (cont’d)
24 tellers in the 3 branch offices
Each teller serves an average of 20 customers/hr during peak times; otherwise, 12 customers/hr
Each customer generates 2 on-line transactions on the average
Thus, during peak times, the IS facility receives an average of 960 (24 x 20 x 2) teller-originated transactions/hr

26 The Bank Problem (cont’d)
ATMs are active 24 hrs/day. During peak volume, each ATM serves an average of 15 customers/hr. Each customer generates an average of 1.2 transactions
Thus, during the peak period for ATMs, the IS facility receives an average of 180 (10 x 15 x 1.2) ATM transactions/hr; otherwise the ATM transaction rate is about 50% of this
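The peak arrival rates above are simple products of the stated figures; a quick check:

```python
# Teller-originated transactions per hour at peak (figures from the slides)
tellers, cust_per_teller, tx_per_cust = 24, 20, 2
teller_tx = tellers * cust_per_teller * tx_per_cust   # 24 x 20 x 2 = 960 tx/hr

# ATM-originated transactions per hour at peak
atms, cust_per_atm, atm_tx_per_cust = 10, 15, 1.2
atm_tx = atms * cust_per_atm * atm_tx_per_cust        # 10 x 15 x 1.2 = 180 tx/hr
off_peak_atm = 0.5 * atm_tx                           # about 50% of peak

print(teller_tx, atm_tx, off_peak_atm)
```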

27 The Bank Problem (cont’d)
Measured average response time:
–Tellers: 1.23 seconds during peak hrs; 3 seconds are acceptable
–ATMs: 1.02 seconds during peak hrs; 4 seconds are acceptable

28 The Bank Problem (cont’d)
The board of directors requires answers to several questions:
–Will the current central IS facility allow for the expected growth of the bank while maintaining the average response times at the tellers and at the ATMs within the stated acceptable limits?
–If not, when will it be necessary to upgrade the current IS environment?
–Of several possible upgrades (e.g., adding more disks, adding memory, replacing the central processor boards), which represents the best cost-performance trade-off?
–Should the data processing facility of the bank remain centralized, or should the bank consider a distributed processing alternative?

29 Response time vs. load for the car rental C/S system example
[figure: response time (sec) vs. load (tps) for the Local Reservation, Road Assistance, Car Pickup, and Phone Reservation transactions, at the current load and at current + 5%, 10%, and 15%]

30 Capacity Planning Concept
Capacity planning is the determination of the predicted time in the future when system saturation will occur, and of the most cost-effective way of delaying saturation as long as possible

31 Questions Why is saturation occurring? In which parts of the system (CPU, disks, memory, queues) will a transaction or job be spending most of its execution time at the saturation points? Which are the best cost-effective alternatives for avoiding (or, at least, delaying) saturation?

32 Capacity planning situation
[figure: response time (sec) vs. arrival rate (tph) for the original system and for a configuration with a faster disk]

33 Capacity Planning Inputs and Outputs
Inputs: workload evolution, system parameters, desired service levels
Outputs: saturation points, cost-effective alternatives

34 Common Mistakes in Capacity Planning 1. No Goals 2. Biased Goals 3. Unsystematic Approach 4. Analysis Without Understanding the Problem 5. Incorrect Performance Metrics 6. Unrepresentative Workload 7. Wrong Evaluation Technique

35 Common Mistakes in Capacity Planning (cont’d) 8. Overlook Important Parameters 9. Ignore Significant Factors 10. Inappropriate Experimental Design 11. Inappropriate Level of Detail 12. No Analysis 13. Erroneous Analysis 14. No Sensitivity Analysis

36 Common Mistakes in Capacity Planning (cont’d) 15. Ignoring Errors in Input 16. Improper Treatment of Outliers 17. Assuming No Change in the Future 18. Ignoring Variability 19. Too Complex Analysis 20. Improper Presentation of Results 21. Ignoring Social Aspects 22. Omitting Assumptions and Limitations

37 Capacity Planning Steps
1. Instrument the system
2. Monitor system usage
3. Characterize the workload
4. Predict performance under different alternatives
5. Select the lowest-cost, highest-performance alternative

38 CPU Utilization: Example
[figure: monthly CPU utilization for Workloads A, B, and C, plus the total CPU utilization]

39 Future Predicted CPU Utilization
[figure: predicted monthly CPU utilization for Workloads A, B, and C, plus the total CPU utilization, extrapolated into future months]
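A minimal sketch of this kind of prediction: fit a least-squares trend to past total CPU utilization and extrapolate to the month where it crosses a saturation threshold. The monthly figures and the 90% threshold below are hypothetical:

```python
# Hypothetical total CPU utilization (%) for months 1..6
months = [1, 2, 3, 4, 5, 6]
util = [42.0, 46.0, 51.0, 55.0, 60.0, 64.0]

# Ordinary least-squares line: util = a + b * month
n = len(months)
mx = sum(months) / n
my = sum(util) / n
b = sum((x - mx) * (y - my) for x, y in zip(months, util)) / \
    sum((x - mx) ** 2 for x in months)
a = my - b * mx

# Extrapolate to the first month where predicted utilization reaches 90%
month = n
while a + b * (month + 1) < 90.0:
    month += 1
print(f"trend: {b:.2f}%/month; ~90% utilization reached at month {month + 1}")
```

This is the simplest possible forecast; in practice each workload would be trended separately (as the figure suggests) and the workload-evolution assumptions revisited regularly.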

40 Systematic Approach to Performance Evaluation
1. State Goals and Define the System
2. List Services and Outcomes
3. Select Metrics
4. List Parameters
5. Select Factors to Study
6. Select Evaluation Technique
7. Select Workload
8. Design Experiments
9. Analyze and Interpret Data
10. Present Results
11. Repeat

41 Case Study: Remote Pipes vs RPC
System definition: Client → Network → Server

42 Case Study (cont’d)
Services: small data transfer or large data transfer
Metrics:
–Correct operation only (errors and failures are not considered)
–Rate, time, and resource per service
–Resources: client, server, network

43 Case Study (cont’d) This leads to: 1. Elapsed time per call 2. Maximum call rate per unit of time, or equivalently, the time required to complete a block of n successive calls 3. Local CPU time per call 4. Remote CPU time per call 5. Number of bytes sent on the link per call

44 Case Study (cont’d) Parameters: –System Parameters: 1. Speed of the local CPU 2. Speed of the remote CPU 3. Speed of the network 4. Operating system overhead for interfacing with the channels 5. Operating system overhead for interfacing with the networks 6. Reliability of the network affecting the number of retransmissions required

45 Case Study (cont’d) Parameters : (cont’d) –Workload parameters: 1. Time between successive calls 2. Number and size of the call parameters 3. Number and size of the results 4. Type of channel 5. Other loads on the local and remote CPUs 6. Other loads on the network

46 Case Study (cont’d) Factors: 1. Type of channel: Remote pipes and remote procedure calls 2. Size of the network: short distance and long distance 3. Size of the call parameters: small and large 4. Number n of consecutive calls: 1, 2, 4, 8, 16, 32, ···, 512, and 1024

47 Case Study (cont’d) Note: –Fixed: type of CPUs and operating systems –Ignore retransmissions due to network error –Measure under no other load on the hosts and the network

48 Case Study (cont’d)
Evaluation technique: prototypes are implemented, so measurement is used; analytical modeling is used for validation
Workload: a synthetic program generating the specified types of channel requests; null channel requests measure the resources used in monitoring and logging

49 Case Study (cont’d)
Experimental design: a full factorial experimental design with 2³ x 11 = 88 experiments will be used
Data analysis:
–Analysis of Variance (ANOVA) for the first three factors
–Regression for the number n of successive calls
Data presentation: the final results will be plotted as a function of the block size n
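The experiment count is easy to verify by enumerating the full factorial design, using the factor levels listed on the Factors slide:

```python
from itertools import product

# Factor levels from the case study (slide 46)
channel_type = ["remote pipe", "remote procedure call"]
network_size = ["short distance", "long distance"]
param_size = ["small", "large"]
n_calls = [2 ** k for k in range(11)]   # 1, 2, 4, ..., 1024: 11 levels

# Full factorial design: every combination of every level
experiments = list(product(channel_type, network_size, param_size, n_calls))
print(len(experiments))   # 2 x 2 x 2 x 11 = 88
```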

50 Exercises
1. From the published literature, select an article or a report that presents results of a performance evaluation study. Make a list of the good and bad points of the study. What would you do differently if you were asked to repeat the study?

51 Exercises (cont’d)
2. Choose a system for performance study. Briefly describe the system and list:
a. Services
b. Performance metrics
c. System parameters
d. Workload parameters
e. Factors and their ranges
f. Evaluation technique
g. Workload
Justify your choices. Suggestion: each student should select a different system, such as a network, database, or processor, and then present the solution to the class.

52 Selecting an Evaluation Technique
Criterion                 Analytical Modeling   Simulation           Measurement
1. Stage                  Any                   Any                  Post-prototype
2. Time required          Small                 Medium               Varies
3. Tools                  Analysts              Computer languages   Instrumentation
4. Accuracy*              Low                   Moderate             Varies
5. Trade-off evaluation   Easy                  Moderate             Difficult
6. Cost                   Small                 Medium               High
7. Saleability            Low                   Medium               High
* In all cases, results may be misleading or wrong