
1 M. Keshtgary Spring 91 Chapter 3 SELECTION OF TECHNIQUES AND METRICS

2 Overview: One or more systems, real or hypothetical. You want to evaluate their performance. What technique do you choose: analytic modeling, simulation, or measurement? What metrics do you use?

3 Outline: Selecting an Evaluation Technique; Selecting Performance Metrics; Case Study; Commonly Used Performance Metrics; Setting Performance Requirements; Case Study

4 Selecting an Evaluation Technique (1 of 4). Which life-cycle stage is the system in? Measurement is possible only when something already exists; if the system is new, analytic modeling and simulation are the only options. When are the results needed? (Often, yesterday!) If time is short, analytic modeling may be the only choice; simulation and measurement can take comparable, longer times. What tools and skills are available? Perhaps languages that support simulation, tools that support measurement (e.g., packet sniffers, or source code into which monitoring hooks can be added), or skills in analytic modeling (e.g., queueing theory).

5 Selecting an Evaluation Technique (2 of 4). What level of accuracy is desired? Analytic modeling is coarse (if it turns out to be accurate, even the analysts are surprised!). Simulation carries more detail, but may still abstract away key aspects of the system. Measurement may sound like the real thing, yet the workload, configuration, etc. may still not match the target environment, so accuracy can range from high to none without proper design. Even with accurate data, you still need to draw the proper conclusions: e.g., "the response time is 10.2351 with 90% confidence." So what? What does it mean?

6 Selecting an Evaluation Technique (3 of 4). What alternatives must be compared? Exploring trade-offs is easiest with analytic models, moderate with simulation, and most difficult with measurement. What is the cost? Measurement is generally the most expensive, analytic modeling the cheapest (pencil and paper), and simulation is often cheap, although some tools (traffic generators, network simulators) are expensive.

7 Selecting an Evaluation Technique (4 of 4). Saleability of results? It is much easier to convince people with measurements; most people are skeptical of analytic modeling results because they are hard to understand, so such models are often validated with simulation before use. Two or more techniques can also be combined, using one to validate another; most high-quality performance analysis papers pair an analytic model with simulation or measurement.

8 Summary Table for Evaluation Technique Selection (criteria listed from more important to less important):

Criterion                 Modeling    Simulation       Measurement
1. Stage                  Any         Any              Prototype or later
2. Time required          Small       Medium           Varies
3. Tools                  Analysts    Some languages   Instrumentation
4. Accuracy               Low         Moderate         Varies
5. Trade-off evaluation   Easy        Moderate         Difficult
6. Cost                   Small       Medium           High
7. Saleability            Low         Medium           High

9 Hybrid modeling Sometimes it is helpful to use two or more techniques simultaneously Do not trust the results of a simulation model until they have been validated by analytical modeling or measurements. Do not trust the results of an analytical model until they have been validated by a simulation model or measurements. Do not trust the results of a measurement until they have been validated by simulation or analytical modeling.

10 Outline: Selecting an Evaluation Technique; Selecting Performance Metrics; Case Study; Commonly Used Performance Metrics; Setting Performance Requirements; Case Study

11 Selecting Performance Metrics: First, list all the services offered by the system. For each service requested, ask whether the service was done or not done and, if done, whether it was done correctly.

12 Examples A gateway in a computer network offers the service of forwarding packets to the specified destinations on heterogeneous networks. When presented with a packet, it may forward the packet correctly, it may forward it to the wrong destination, or it may be down, in which case it will not forward it at all. A database offers the service of responding to queries. When presented with a query, it may answer correctly, it may answer incorrectly, or it may be down and not answer it at all.

13 System performs the service correctly: If the system performs the service correctly, its performance is measured by the time taken to perform the service, the rate at which the service is performed, and the resources consumed while performing it. These three metrics, related to time, rate, and resource for successful performance, are also called responsiveness, productivity, and utilization metrics. The utilization indicates the percentage of time the resources of the gateway are busy at the given load level; the resource with the highest utilization is called the bottleneck.
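As a rough illustration (not part of the original slides), the sketch below computes per-resource utilization from hypothetical busy-time measurements and flags the most utilized resource as the bottleneck; all names and numbers are invented.

```python
# Minimal sketch: utilization = busy time / total observation time.
# Resource names and busy times below are hypothetical.
busy_seconds = {"cpu": 42.0, "disk": 55.0, "network": 18.0}
observation_seconds = 100.0

utilization = {r: busy / observation_seconds for r, busy in busy_seconds.items()}
bottleneck = max(utilization, key=utilization.get)

for resource, u in utilization.items():
    print(f"{resource}: {u:.0%} utilized")
print(f"bottleneck: {bottleneck}")   # disk, the most heavily utilized resource
```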

14 System performs the service incorrectly: If the system performs the service incorrectly, an error is said to have occurred. We classify the errors and determine the probability of each class of error. For example, in the case of the gateway, we may want to find the probability of single-bit errors, two-bit errors, and so on. We may also want to find the probability of a packet being only partially delivered (a fragment).

15 system does not perform the service If the system does not perform the service, it is said to be down, failed, or unavailable. Once again, we classify the failure modes and determine the probabilities of each class. For example, the gateway may be unavailable 0.01% of the time due to processor failure and 0.03% due to software failure.

16 speed, reliability, and availability The metrics associated with the three outcomes, namely successful service, error, and unavailability, are also called speed, reliability, and availability metrics.

17 Selecting Performance Metrics (1 of 3). [Diagram: possible outcomes of a service request. A request to the system is either done or not done. If done correctly, the metrics are time, rate, and resource consumed (responsiveness, productivity, utilization), i.e., speed metrics. If done incorrectly (error i), the metrics are the probability of each error class and the time between errors, i.e., reliability metrics. If not done (failure event k), the metrics are the duration of and time between failures, i.e., availability metrics.]

18 Selecting Performance Metrics (2 of 3). The mean is usually what matters, but do not overlook the effect of variability. Distinguish individual from global metrics (for systems shared by many users); the two may be at odds: improving an individual metric may degrade a global one (e.g., better response time at the cost of throughput), and improving a global metric may not be the most fair (e.g., throughput of cross traffic). Performance optimizations of the bottleneck have the most impact. E.g., for the response time of a Web request with client processing 1 s, network latency 500 ms, and server processing 10 s, the total is 11.5 s; improving the client by 50% gives 11 s, while improving the server by 50% gives 6.5 s.
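A tiny sketch (reusing the numbers from the slide) makes the bottleneck argument concrete; the helper function is just for illustration.

```python
# Components of the Web-request response time from the slide, in seconds.
components = {"client": 1.0, "latency": 0.5, "server": 10.0}

def total_after_speedup(parts, which=None, factor=0.5):
    """Total response time after scaling one component by `factor` (None = unchanged)."""
    return sum(t * factor if name == which else t for name, t in parts.items())

print(total_after_speedup(components))            # 11.5 s, no improvement
print(total_after_speedup(components, "client"))  # 11.0 s, client 50% faster
print(total_after_speedup(components, "server"))  #  6.5 s, server 50% faster
```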

19 Selecting Performance Metrics (3 of 3). There may be more than one candidate set of metrics (e.g., resource metrics such as queue size, CPU utilization, memory use, ...). Criteria for selecting a subset: low variability (fewer repetitions are needed), non-redundancy (if two metrics give essentially the same information, it is less confusing to study only one; e.g., queue size and delay may provide identical information), and completeness (the set should capture all possible outcomes; e.g., one disk may be faster but return more errors, so a reliability metric should be added). A simple redundancy check is sketched below.
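One crude way to spot redundant metrics (an assumption of this sketch, not something prescribed by the slides) is to look at how strongly correlated their measured values are across runs; the sample data is invented.

```python
# If two metrics are almost perfectly correlated across experiments,
# one of them adds little information. Measurements below are invented.
from statistics import correlation  # available in Python 3.10+

queue_size = [2.0, 4.1, 6.2, 7.9, 10.1]      # mean queue length per run
delay_s    = [0.21, 0.40, 0.62, 0.79, 1.01]  # mean delay per run, seconds

r = correlation(queue_size, delay_s)
print(f"correlation = {r:.3f}")
if abs(r) > 0.95:
    print("These metrics look redundant; studying only one is probably enough.")
```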

20 Outline: Selecting an Evaluation Technique; Selecting Performance Metrics; Case Study; Commonly Used Performance Metrics; Setting Performance Requirements; Case Study

21 Case Study (1 of 5). Consider a computer system of end-hosts sending packets through routers; congestion occurs when the number of packets at a router exceeds its buffering capacity. The goal is to compare two congestion control algorithms. A user sends a block of packets to a destination, and there are four possible outcomes: A) some packets are delivered in order, B) some are delivered out of order, C) some are delivered more than once, and D) some are dropped.

22 Case Study (2 of 5). For outcome A), straightforward metrics exist: 1) response time: the delay for an individual packet; 2) throughput: the number of packets per unit time; 3) processor time per packet at the source; 4) processor time per packet at the destination; 5) processor time per packet at the routers. Since large response times can cause extra (unnecessary) retransmissions, 6) variability in response time is also important.

23 Case Study (3 of 5). For outcome B), out-of-order packets cannot be delivered to the user immediately; they are often discarded (and counted as dropped), or alternatively stored in destination buffers awaiting the arrival of the intervening packets: 7) probability of out-of-order arrivals. For C), duplicate packets consume resources without providing any benefit: 8) probability of duplicate packets. For D), dropped packets are undesirable for many reasons: 9) probability of lost packets. Also, excessive loss can cause disconnection: 10) probability of disconnect.

24 Case Study (4 of 5). Since this is a multi-user system, we also want fairness: 11) fairness, a function of the variability of throughput across users. For any given set of user throughputs (x_1, x_2, ..., x_n), the fairness index is f(x_1, x_2, ..., x_n) = (sum_i x_i)^2 / (n * sum_i x_i^2). The index lies between 0 and 1: if all users get the same throughput, it is 1; if k users get equal throughput and the remaining n - k get zero, the index is k/n.
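A short sketch of this fairness index (commonly known as Jain's fairness index); the throughput values are invented.

```python
# Fairness index f(x1, ..., xn) = (sum xi)^2 / (n * sum xi^2), in (0, 1].
def fairness_index(throughputs):
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(fairness_index([5.0, 5.0, 5.0, 5.0]))   # 1.0  -> all users treated equally
print(fairness_index([10.0, 0.0, 0.0, 0.0]))  # 0.25 -> k/n with k = 1, n = 4
```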

25 Case Study (5 of 5). After a few pilot experiments, throughput and delay were found to be redundant (higher throughput came with higher delay), so they were combined into a single metric, power = throughput / delay; a higher power means either a higher throughput or a lower delay, and in either case is considered better than a lower power. Variance in response time was found to be redundant with the probability of duplication and the probability of disconnection, so it was dropped. This leaves nine metrics.
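For concreteness, a two-point comparison using the power metric; the operating points are hypothetical.

```python
# Power = throughput / delay; higher is better. Operating points are invented.
points = {"algorithm_A": (900.0, 0.120), "algorithm_B": (750.0, 0.080)}  # (pkts/s, s)

for name, (throughput, delay) in points.items():
    print(f"{name}: power = {throughput / delay:.0f}")
# B scores higher here: its lower delay more than compensates for its lower throughput.
```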

26 Outline: Selecting an Evaluation Technique; Selecting Performance Metrics; Case Study; Commonly Used Performance Metrics; Setting Performance Requirements; Case Study

27 Commonly Used Performance Metrics: response time, turnaround time, reaction time, stretch factor; throughput, operations/second, capacity; efficiency; utilization; reliability; uptime, MTTF.

28 Response Time (1 of 2). Response time is the interval between a user's request and the system's response. This definition is simplistic, however, since requests and responses are not instantaneous: users spend time typing the request, and the system takes time to output the response.

29 Response Time (2 of 2). There are two possible measures of response time: response time 1 is the interval from the end of the user's request to the start of the system's response, and response time 2 is the interval from the end of the user's request to the end of the system's response. Both are acceptable, but the second is preferred if the execution (output) takes long. Related intervals: the reaction time is the interval from the end of the request to the start of its execution, and the think time is the interval from the end of the system's response to the start of the user's next request.
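A small sketch of how these intervals fall out of event timestamps; the timestamps are invented.

```python
# Hypothetical event timestamps, in seconds, for one request/response cycle.
t = {
    "user_finishes_request":    0.0,
    "system_starts_execution":  0.3,
    "system_starts_response":   2.0,
    "system_finishes_response": 5.0,
    "user_starts_next_request": 9.0,
}

reaction_time   = t["system_starts_execution"]  - t["user_finishes_request"]     # 0.3 s
response_time_1 = t["system_starts_response"]   - t["user_finishes_request"]     # 2.0 s
response_time_2 = t["system_finishes_response"] - t["user_finishes_request"]     # 5.0 s
think_time      = t["user_starts_next_request"] - t["system_finishes_response"]  # 4.0 s
print(reaction_time, response_time_1, response_time_2, think_time)
```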

30 Response Time+. Turnaround time: the time between the submission of a job and the completion of its output; for batch systems, responsiveness is measured by turnaround time. Reaction time: the time between the submission of a request and the beginning of its execution; it usually has to be measured inside the system, since nothing is externally visible. Stretch factor: the ratio of the response time at a particular load to the response time at minimum load; most systems have higher response times as the load increases. For a timesharing system, for example, the stretch factor is defined as the ratio of the response time with multiprogramming to that without multiprogramming.

31 Throughput (1 of 3). Throughput is the rate at which requests can be serviced by the system (requests per unit time). Batch systems: jobs per second; interactive systems: requests per second; CPUs: millions of instructions per second (MIPS) or millions of floating-point operations per second (MFLOPS); networks: packets per second or bits per second; transaction processing: transactions per second (TPS).

32 Throughput (2 of 3). After a certain load, the throughput stops increasing and, in most cases, may even start decreasing. Nominal capacity: the maximum achievable throughput under ideal workload conditions; for computer networks, the nominal capacity is called the bandwidth and is usually expressed in bits per second. Often the response time at maximum throughput is too high to be acceptable; in such cases it is more interesting to know the maximum throughput achievable without exceeding a prespecified response-time limit, which may be called the usable capacity of the system. In many applications, the knee of the throughput or response-time curve is considered the optimal operating point. Knee capacity: the throughput at the knee.
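As a sketch of the usable-capacity idea, the snippet below scans hypothetical (throughput, response time) measurements taken at increasing load and reports the highest throughput whose response time stays under a chosen limit; all numbers are invented.

```python
# (throughput in Mb/s, response time in ms) at increasing load levels; values invented.
measurements = [(10, 5), (30, 7), (55, 10), (75, 18), (88, 45), (93, 210), (94, 900)]
response_limit_ms = 50  # prespecified response-time limit

nominal_capacity = max(thr for thr, _ in measurements)
usable_capacity = max(thr for thr, rt in measurements if rt <= response_limit_ms)
print(f"nominal ~ {nominal_capacity} Mb/s, usable ~ {usable_capacity} Mb/s "
      f"(response time kept under {response_limit_ms} ms)")
```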

33 Throughput (3 of 3). Throughput increases as the load increases, but only up to a point. [Figure: throughput vs. load and response time vs. load, with the knee capacity, usable capacity, and nominal capacity marked.] The nominal capacity is the ideal rate (e.g., 10 Mbps); the usable capacity is what is actually achievable (e.g., 9.8 Mbps); the knee is where the response time rises rapidly for a small increase in throughput.

34 Efficiency. Efficiency is the ratio of maximum achievable throughput (usable capacity) to nominal capacity. For example, if the maximum throughput of a 100-Mbps network is only 85 Mbps, its efficiency is 85%. For a multiprocessor system, it is the ratio of the performance of the n-processor system to that of a one-processor system (in MIPS or MFLOPS). [Figure: efficiency vs. number of processors.]
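A back-of-the-envelope sketch of the link-efficiency calculation from the slide, plus the multiprocessor ratio with invented MFLOPS numbers.

```python
# Link efficiency: achievable throughput over nominal capacity (numbers from the slide).
nominal_mbps, measured_mbps = 100.0, 85.0
print(f"link efficiency = {measured_mbps / nominal_mbps:.0%}")   # 85%

# Multiprocessor: performance of the n-processor system relative to one processor.
# The MFLOPS figures here are invented.
one_proc_mflops, n_proc_mflops, n = 120.0, 700.0, 8
print(f"ratio to one-processor performance = {n_proc_mflops / one_proc_mflops:.1f} with {n} processors")
```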

35 Utilization. Utilization is typically the fraction of time a resource is busy servicing requests; the time it is not being used is idle time. System managers often want to balance resources so that they have the same utilization (e.g., equal load on all CPUs), but this may not be possible. For processors, busy time / total time makes sense; for other resources, such as memory, only a fraction of the resource may be used at a given time, so their utilization is measured as the average fraction used over an interval.

36 Miscellaneous Metrics. Reliability: the probability of errors, or the mean time between errors (e.g., error-free seconds). Availability: the fraction of time the system is available to service requests (the fraction not available is the downtime); the Mean Time To Failure (MTTF) is the mean uptime. MTTF is useful because availability can be high (small total downtime) even when failures are frequent, which is no good for long-running requests. Cost/performance ratio: total cost / throughput, used to compare two systems; e.g., for a transaction processing system one may want dollars per TPS.
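A quick sketch relating these quantities, using the standard relation availability = uptime / (uptime + downtime); the MTTF and repair-time figures are invented.

```python
# Availability from mean uptime (MTTF) and mean downtime per failure; numbers invented.
mttf_hours, mean_downtime_hours = 500.0, 0.5

availability = mttf_hours / (mttf_hours + mean_downtime_hours)
failures_per_year = 8760 / (mttf_hours + mean_downtime_hours)
print(f"availability = {availability:.2%}")            # ~99.90%
print(f"failures per year ~ {failures_per_year:.1f}")  # ~17.5
# Availability looks excellent, yet there are ~17 outages a year --
# bad news for requests that take many hours to complete.
```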

37 Utility Classification. HB: higher is better (e.g., throughput). LB: lower is better (e.g., response time). NB: nominal is best (e.g., utilization). [Figure: utility vs. metric curves for the LB, HB, and NB cases.]

38 Outline: Selecting an Evaluation Technique; Selecting Performance Metrics; Case Study; Commonly Used Performance Metrics; Setting Performance Requirements; Case Study

39 Setting Performance Requirements (1 of 2). Consider these typical requirement statements: "The system should be both processing and memory efficient. It should not create excessive overhead." "There should be an extremely low probability that the network will duplicate a packet, deliver it to a wrong destination, or change the data." What's wrong with them?

40 Setting Performance Requirements (2 of 2). General problems: Nonspecific: no numbers, only qualitative words (rare, low, high, extremely small). Nonmeasurable: no way to measure and verify that the system meets the requirement. Nonacceptable: numerical values are set based on what can be achieved or on what looks good; if set on what can be achieved, they may turn out to be too low. Nonrealizable: numbers are based on what sounds good but are not realistic. Nonthorough: no attempt is made to specify all the parameters.

41 Outline: Selecting an Evaluation Technique; Selecting Performance Metrics; Case Study; Commonly Used Performance Metrics; Setting Performance Requirements; Case Study

42 Setting Performance Requirements: Case Study (1 of 2). Performance requirements for a high-speed LAN. Speed (if a packet is delivered, the time taken to do so matters): A) access delay should be less than 1 s; B) sustained throughput should be at least 80 Mb/s. Reliability: A) probability of a bit error less than 10^-7; B) probability of a frame error less than 1%; C) probability of a frame error not caught less than 10^-15; D) probability of a frame misdelivered due to an uncaught error less than 10^-18; E) probability of a duplicate frame less than 10^-5; F) probability of losing a frame less than 1%.

43 Setting Performance Requirements: Case Study (2 of 2). Availability: A) mean time to initialize the LAN < 15 ms; B) mean time between LAN initializations > 1 minute; C) mean time to repair < 1 hour. All of the above values were checked for realizability by modeling, which showed that LAN systems satisfying these requirements were possible.

44 Part I: Things to Remember. Systematic approach: define the system; list its services, metrics, and parameters; decide on the factors, evaluation technique, workload, and experimental design; analyze the data; and present the results. Selecting an evaluation technique: the life-cycle stage is the key; other considerations are the time available, tools available, accuracy required, trade-offs to be evaluated, cost, and saleability of results.

45 Part I: Things to Remember. Selecting metrics: for each service, list the time, rate, and resource consumption; for each undesirable outcome, measure the frequency and duration of the outcome; check for low variability, non-redundancy, and completeness. Performance requirements should be SMART: specific, measurable, acceptable, realizable, and thorough.

