1 CS533 Modeling and Performance Evaluation of Network and Computer Systems Capacity Planning and Benchmarking (Chapter 9)

2 Introduction
Capacity management tries to ensure current computing resources provide the highest performance (Present)
 – Tune current resources
Capacity planning tries to ensure adequate resources will be available to meet future workload needs (Future)
 – Buy more resources
"Do not plan a bridge capacity by counting the number of people who swim across the river today." – Heard at a presentation

3 Outline
Introduction
Steps in Planning and Management
Problems in Capacity Planning
Common Mistakes in Benchmarking
Misc

4 Steps in Capacity Planning and Management
Instrument the system
 – Hooks, counters to record current usage
Monitor system usage
 – Gather data, analyze and summarize
Characterize the workload
Predict performance
Management
 – Input to a simulation model or to rules for tuning; often detailed and system specific
 – Or try out tuning decisions directly, since the cost is low
Planning
 – Models generally less detailed and coarse, since the alternatives may not exist yet
 – Ex: increase power by X every Y years (see the forecasting sketch below)
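
To make the planning step concrete, here is a minimal sketch of a growth projection, assuming a measured utilization and an assumed annual growth rate (all numbers are illustrative, not from the slides):

    # Capacity-planning sketch: project utilization growth and estimate
    # when the current configuration runs out of headroom.
    current_utilization = 0.45    # measured busy fraction of the bottleneck device (illustrative)
    annual_growth = 0.30          # assumed 30% workload growth per year
    usable_limit = 0.70           # plan to upgrade before exceeding 70% utilization

    years = 0
    u = current_utilization
    while u * (1 + annual_growth) <= usable_limit:
        u *= 1 + annual_growth
        years += 1

    print(f"Current configuration lasts about {years} more year(s) before exceeding the limit")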

5 Problems in Capacity Planning (1 of 5)
No standard terminology
 – Many vendors have different definitions
 – Some mean only tuning, while others mean both
 – Some don't include workload characterization
No standard definition of capacity
 – Could be throughput, but then jobs per second, transactions per second, instructions per second, or bits per second?
 – Could be the maximum number of users (workload components)

6 Problems in Capacity Planning (2 of 5)
Different capacities for the same system
 – Nominal, usable, knee
No standard workload unit
 – Measuring capacity in workload units (say, users) requires a detailed characterization, which varies from one environment to the next
Forecasting future applications is difficult
 – Assume the past is like the future: OK, unless new technology changes usage
 – Ex: low-cost clients may change server load, such as streaming to PDAs

7 Problems in Capacity Planning (3 of 5)
No uniformity among systems from different vendors
 – The same workload may take different resources on different systems
 – So may need vendor-specific benchmarks and simulations, but this may bring in biases
Model inputs cannot always be measured
 – For example, think time can be thinking between commands or a coffee break
 – Even tougher if the monitoring, workload analysis, and modeling tools are built separately (by different vendors), since formats may be incompatible

8 Problems in Capacity Planning (4 of 5)
Validating model projections is difficult
 – Typically, change the inputs to the model and see if the output matches the changed workload, but changing the real workload is difficult
Distributed environments are complex
 – Many components need to be modeled, making accurate modeling expensive
 – It used to be many users sharing a few components, so even with variability one could get an accurate answer; today's workstation use is very different, with few users per many components

9 Problems in Capacity Planning (5 of 5)
Performance is only one part of capacity planning
 – The other major factor is cost
 – Much of today's cost goes to installation, maintenance, floor space, etc.
Proper planning depends upon proper workload characterization and benchmarking
 – Workload characterization we can handle
 – Benchmarking often has mistakes (next)

10 Outline
Introduction
Steps in Planning and Management
Problems in Capacity Planning
Common Mistakes in Benchmarking
Misc

11 Common Mistakes in Benchmarking (1 of 5)
Only Average Behavior Represented in Test Workload
 – Variation is not represented, but high load may cause synchronization issues or determine performance
Skewness of Device Demands Ignored
 – Ex: I/O requests assumed uniform across disks, but often they go to the same disk
Loading Level Controlled Inappropriately
 – To increase load, can add users, decrease think time, or increase the resources demanded per user (see the sketch below)
 – Decreasing think time is easiest, but often ignores cache misses
 – Increasing the resources per user may change the workload
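
A minimal sketch of a closed-loop load generator that exposes those three knobs, assuming a hypothetical HTTP server under test (the host name and all parameter values are placeholders):

    import http.client
    import threading
    import time

    HOST = "server.example.com"   # hypothetical system under test
    NUM_USERS = 50                # knob 1: number of emulated users
    THINK_TIME = 2.0              # knob 2: seconds of think time between requests
    PATH = "/index.html"          # knob 3: change what each request demands
    REQUESTS_PER_USER = 100

    def emulated_user():
        for _ in range(REQUESTS_PER_USER):
            conn = http.client.HTTPConnection(HOST, timeout=10)
            conn.request("GET", PATH)
            conn.getresponse().read()
            conn.close()
            # Shrinking the think time raises the load, but also changes request
            # interleaving and therefore cache behavior on the server.
            time.sleep(THINK_TIME)

    threads = [threading.Thread(target=emulated_user) for _ in range(NUM_USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()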

12 Common Mistakes in Benchmarking (2 of 5)
Caching Effects Ignored
 – Often the order of requests is ignored, but this matters greatly for cache performance
Buffer Sizes Not Appropriate
 – A small change in buffer sizes can result in a large change in performance
 – Ex: a slightly smaller MTU can greatly increase the number of network packets (see the example below)
Inaccuracies due to Sampling Ignored
 – Sampling is periodic, which may lead to errors (example next)
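
A small worked example of the MTU point, with illustrative message and MTU sizes (header overhead is ignored for simplicity):

    import math

    message_bytes = 1460          # illustrative size of each message sent
    for mtu in (1500, 1400):      # the second MTU is only about 7% smaller
        packets = math.ceil(message_bytes / mtu)   # ignore header overhead
        print(f"MTU {mtu}: {packets} packet(s) per message")

    # MTU 1500 -> 1 packet per message; MTU 1400 -> 2 packets per message,
    # so a ~7% smaller MTU doubles the packet count for this message size.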

13 Example of Sampling Inaccuracy
Device is busy 1% of the time (p = 0.01)
Sample every second for 10 minutes (n = 600)
 – So expect n × p = 600 × 0.01 = 6 busy samples
The number of busy samples has standard deviation sqrt(np(1 − p))
 – So stddev = sqrt(600 × 0.01 × 0.99) ≈ 2.43
Roughly 33% of values fall outside one standard deviation
 – So about a 1/3 chance of measuring utilization near 0.6% or 1.4% instead of 1%
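
The arithmetic on this slide can be checked directly; a minimal sketch:

    import math

    p = 0.01                              # true device utilization
    n = 600                               # one sample per second for 10 minutes

    expected_busy = n * p                 # 6 busy samples expected
    stddev = math.sqrt(n * p * (1 - p))   # binomial standard deviation, about 2.44

    low = (expected_busy - stddev) / n    # about 0.006 -> 0.6% measured utilization
    high = (expected_busy + stddev) / n   # about 0.014 -> 1.4% measured utilization
    print(f"stddev = {stddev:.2f}, one-sigma range = {low:.1%} to {high:.1%}")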

14 Common Mistakes in Benchmarking (3 of 5)
Ignoring Monitoring Overhead
 – Data collection adds overhead and introduces error
 – Try to account for the overhead in the analysis
Not Validating Measurements
 – Analytic models and simulations are routinely validated, but measurements often are not
 – Yet there could be errors in the setup, so cross-check the results (see the consistency check below)
Not Ensuring the Same Initial Conditions
 – Each run may depend upon the starting conditions
 – Either control the conditions or check the sensitivity of the results to the starting conditions
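
One cheap cross-check (an illustration, not from the slides) is to test whether measured numbers satisfy a relationship they must obey, such as the utilization law U = X · S; the measurements below are made up:

    # Cross-check: device utilization should equal throughput x per-request service demand.
    measured_throughput = 120.0     # X: requests per second from the load generator (illustrative)
    measured_service_time = 0.005   # S: seconds of device time per request, from the monitor
    measured_utilization = 0.71     # U: device busy fraction reported by the monitor

    expected_utilization = measured_throughput * measured_service_time   # X * S = 0.60

    if abs(expected_utilization - measured_utilization) > 0.05:
        print(f"Inconsistent: monitor reports U = {measured_utilization:.2f}, "
              f"but X * S = {expected_utilization:.2f}; check the measurement setup")
    else:
        print("Measurements are mutually consistent")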

15 Common Mistakes in Benchmarking (4 of 5)
Not Measuring Transient Performance
 – Most tools predict performance under stable conditions
 – But if the system takes a long time to become stable, the performance before that point may be more important
Using Device Utilization for Comparison
 – Generally lower utilization is considered better, but higher utilization may be better if it means the system can handle more requests

16 Common Mistakes in Benchmarking (5 of 5)
Collecting Too Much Data but Doing Very Little Analysis
 – Sometimes just getting to data collection takes a lot of work, and then there is a lot of data
 – No time is left for the analysis, and there is a lot of it to do
 – Instead, form teams with analysis experts and allocate time for analysis

17 Outline
Introduction
Steps in Planning and Management
Problems in Capacity Planning
Common Mistakes in Benchmarking
Misc

18 Benchmark Games
Differing configurations
 – Memory, CPU, etc.
Modified compilers
 – Take advantage of knowing the workload that is being compiled
Very small benchmarks
 – Fit in cache, avoid memory problems
Manually translated benchmarks
 – May be further tuned
 – Valid sometimes, but machine performance may then depend heavily upon the skill of the translator

19 Remote Terminal Emulation
Often need ways to induce server load remotely
One client machine can often act like many clients
 – Need to be sure that the bottleneck is not the client machine
 – Need to be wary of contention effects
 – Ex: httperf (see the example below)
Note, network emulation may sometimes be needed, too
 – Ex: NIST Net, NS
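
For illustration, an httperf invocation that emulates many clients from one machine looks roughly like the following (the host name and numeric values are placeholders):

    httperf --server server.example.com --port 80 --uri /index.html --num-conns 1000 --rate 50 --timeout 5

The --rate and --num-conns options control the offered load; while the test runs, watch the client machine's own CPU and network usage so that the client does not become the bottleneck.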