Metrics and Techniques for Evaluating the Performability of Internet Services
Pete Broadwell, pbwell@cs.berkeley.edu
Outline
1. Introduction to performability
2. Performability metrics for Internet services
   – Throughput-based metrics (Rutgers)
   – Latency-based metrics (ROC)
3. Analysis and future directions
Motivation
Goal of ROC project: develop metrics to evaluate new recovery techniques
Problem: the concept of availability assumes a system is either “up” or “down” at a given time
Availability doesn’t capture a system’s capacity to support degraded service
– degraded performance during failures
– reduced data quality during high load
What is “performability”?
Combination of performance and dependability measures
Classical definition: a probabilistic (model-based) measure of a system’s “ability to perform” in the presence of faults [1]
– Concept from the traditional fault-tolerant systems community, ca. 1978
– Has since been applied to other areas, but still not in widespread use
[1] J. F. Meyer, Performability Evaluation: Where It Is and What Lies Ahead, 1994
Performability Example
Discrete-time Markov chain (DTMC) model of a RAID-5 disk array [1]
p_i(t) = probability that the system is in state i at time t
λ = failure rate of a single disk drive
D = number of data disks
μ = disk repair rate
w_i(t) = reward (disk I/O operations/sec)
[1] Hannu H. Kari, Ph.D. Thesis, Helsinki University of Technology, 1997
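As a rough illustration of how such a reward model is evaluated, the sketch below builds a small three-state DTMC for a RAID-5 array and accumulates expected reward over time. The state layout, per-step rates, and reward values are invented for illustration; they are not the parameters from Kari's thesis.

```python
import numpy as np

# Illustrative 3-state DTMC for a RAID-5 array with D data disks + 1 parity disk.
# state 0: all disks working; state 1: one disk failed (degraded);
# state 2: second disk failed before repair (data loss, absorbing).
lam = 1e-4   # per-step failure probability of one disk (lambda)
mu = 1e-1    # per-step repair completion probability (mu)
D = 4        # number of data disks

P = np.array([
    [1 - (D + 1) * lam, (D + 1) * lam,    0.0],
    [mu,                1 - mu - D * lam, D * lam],
    [0.0,               0.0,              1.0],
])

w = np.array([1000.0, 600.0, 0.0])   # reward w_i: disk I/O operations per step

p = np.array([1.0, 0.0, 0.0])        # start in the all-disks-working state
total_reward = 0.0
for _ in range(10_000):
    total_reward += p @ w            # E[reward at step t] = sum_i p_i(t) * w_i
    p = p @ P                        # advance the state distribution one step

print(f"Expected accumulated I/O operations: {total_reward:,.0f}")
```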
Performability for Online Services: Rutgers Study
Rich Martin (UCB alum) et al. wanted to quantify tradeoffs between web server designs, using a single metric for both performance and availability
Approach:
– Performed fault injection on PRESS, a locality-aware, cluster-based web server
– Measured throughput of the cluster during simulated faults and normal operation
Degraded Service During a PRESS Component Fault
[Figure: throughput (requests/sec) vs. time during a component fault, annotated with the phases FAILURE, DETECT, STABILIZE, REPAIR (human operator), RECOVER, and RESET (optional)]
Calculation of Average Throughput, Given Faults
[Figure: throughput (requests/sec) vs. time, showing normal throughput, degraded throughput during the fault, and the resulting average throughput]
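A small sketch of the time-weighted average implied by this figure; the segment durations and throughput levels are made-up numbers, not the Rutgers measurements.

```python
# Time-weighted average throughput across normal and degraded periods.
segments = [
    (3000.0, 500.0),   # seconds of normal operation, requests/sec
    (600.0,  150.0),   # degraded service between failure and repair
    (3000.0, 500.0),   # normal operation after recovery
]

total_time = sum(duration for duration, _ in segments)
total_requests = sum(duration * tput for duration, tput in segments)
average_throughput = total_requests / total_time

print(f"Average throughput: {average_throughput:.1f} req/s (normal: 500.0 req/s)")
```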
Behavior of a Performability Metric
Effect of improving degraded performance
[Figure: performability vs. performance during faults]
Behavior of a Performability Metric
Effect of improving component availability (shorter MTTR, longer MTTF)
[Figure: performability vs. MTTR and MTTF]
Availability = MTTF / (MTTF + MTTR)
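As an illustrative worked example (numbers invented): with MTTF = 1,000 hours and MTTR = 2 hours, availability = 1000 / (1000 + 2) ≈ 0.998; cutting MTTR to 1 hour gives 1000 / 1001 ≈ 0.999.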
Behavior of a Performability Metric
Effect of improving overall performance
[Figure: performability vs. overall performance (includes normal operation)]
Most performability metrics scale linearly as component availability, degraded performance, and overall performance increase
Results of Rutgers Study: Design Comparisons
An Alternative Metric: Response Latency
Originally, performability metrics were meant to capture end-user experience [1]
Latency better describes the experience of an end user of a web site
– response time > 8 sec → site abandonment → lost income [2]
Throughput describes the raw processing ability of a service
– best used to quantify expenses
[1] J. F. Meyer, Performability Evaluation: Where It Is and What Lies Ahead, 1994
[2] Zona Research and Keynote Systems, The Need for Speed II, 2001
Effect of Component Failure on Response Latency
[Figure: response latency (sec) vs. time between FAILURE and REPAIR, rising through an “annoyance region?” into the abandonment region above the 8-second threshold]
Issues With Latency as a Performability Metric
Modeling concerns:
– Human element: retries and abandonment
– Queuing issues: buffering and timeouts
– Unavailability of the load balancer due to faults
– Burstiness of the workload
Latency is more accurately modeled at the service, rather than end-to-end [1]
Alternate approach: evaluate an existing system
[1] M. Merzbacher and D. Patterson, Measuring End-User Availability on the Web: Practical Experience, 2002
Analysis
Queuing behavior may have a significant effect on latency-based performability evaluation
– Long component MTTRs → longer waits → lower latency-based score
– High performance in the normal case → faster queue reduction after repair → higher latency-based score
More study is needed!
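A toy FIFO-queue sketch of this effect; all arrival rates, capacities, and repair durations are invented for illustration. A longer repair window builds a larger backlog and a worse average wait, while a faster normal-case server drains the backlog sooner after repair.

```python
# Toy FIFO-queue model: average wait as a function of repair time (MTTR)
# and normal-case capacity.
def avg_wait(arrival_rate, degraded_rate, normal_rate, repair_time, horizon=900):
    backlog, waits = 0.0, []
    for t in range(horizon):
        capacity = degraded_rate if t < repair_time else normal_rate
        served = min(capacity, backlog + arrival_rate)
        backlog += arrival_rate - served
        waits.append(backlog / capacity)   # rough time to drain the current queue
    return sum(waits) / len(waits)

# Longer MTTR -> bigger backlog -> higher average wait (worse latency-based score):
print(avg_wait(arrival_rate=400, degraded_rate=200, normal_rate=500, repair_time=60))
print(avg_wait(arrival_rate=400, degraded_rate=200, normal_rate=500, repair_time=300))
# Higher normal-case performance drains the queue faster after repair:
print(avg_wait(arrival_rate=400, degraded_rate=200, normal_rate=800, repair_time=300))
```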
Future Work
– Further collaboration with Rutgers on collecting new measurements for latency-based performability analysis
– Development of more realistic fault and workload models, plus other performability factors such as data quality
– Research into methods for conducting automated performability evaluations of web services
Back-of-the-Envelope Latency Calculations
Attempted to infer average request latency for the PRESS servers from the Rutgers data set
– Required many simplifying assumptions, relying upon knowledge of the PRESS server design
– Hoped to expose areas in which throughput- and latency-based performability evaluations differ
Assumptions:
– FIFO queuing with no timeouts or overflows
– Independent faults, constant workload (also the case for the throughput-based model)
Current models do not capture the “completeness” of data returned to the user
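A sketch of the kind of estimate described here, under the same assumptions (single FIFO queue, no timeouts or overflows, constant offered load). The throughput trace and offered load are placeholders rather than the actual PRESS/Rutgers data.

```python
# Back-of-the-envelope average latency from a throughput trace: model the cluster
# as one FIFO queue and apply a Little's-law-style estimate,
#   average latency ~= average requests in system / average completion rate.
offered_load = 450.0                                      # constant requests/sec
capacity = [500.0] * 120 + [150.0] * 60 + [500.0] * 120   # serveable req/s per second

backlog = 0.0
in_system, completed = [], []
for cap in capacity:
    served = min(cap, backlog + offered_load)   # cannot serve more than is waiting
    backlog += offered_load - served
    in_system.append(backlog)
    completed.append(served)

avg_latency = (sum(in_system) / len(in_system)) / (sum(completed) / len(completed))
print(f"Estimated average request latency: {avg_latency:.2f} s")
```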
Comparison of Performability Metrics
Rutgers calculations for long-term performability
Goal: a metric that scales linearly with both
– performance (throughput) and
– availability [MTTF / (MTTF + MTTR)]
T_n = normal throughput for the server
A_I = ideal availability (0.99999)
Average throughput (AT) = T_n during normal operation + per-component throughput during failure
Average availability (AA) = AT / T_n
Performability = T_n × [log(A_I) / log(AA)]
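A transcription of these formulas into code; the throughput, MTTF, and MTTR values are placeholders, and the average-throughput step is simplified to a single failure mode rather than the per-component accounting used in the actual study.

```python
import math

def rutgers_performability(t_normal, t_degraded, mttf, mttr, a_ideal=0.99999):
    """Performability = T_n * [log(A_I) / log(AA)], with AA = AT / T_n."""
    uptime_fraction = mttf / (mttf + mttr)
    # Average throughput (AT): normal throughput while healthy, degraded
    # throughput while a component is failed (single failure mode, simplified).
    avg_throughput = t_normal * uptime_fraction + t_degraded * (1 - uptime_fraction)
    avg_availability = avg_throughput / t_normal          # AA = AT / T_n
    return t_normal * (math.log(a_ideal) / math.log(avg_availability))

# Illustrative comparison: better degraded-mode throughput -> higher performability.
print(rutgers_performability(t_normal=500, t_degraded=150, mttf=100_000, mttr=600))
print(rutgers_performability(t_normal=500, t_degraded=300, mttf=100_000, mttr=600))
```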
Results of Rutgers study: performance comparison
Results of Rutgers study: availability comparison
Results of Rutgers study: performability comparison