Experimental Lifecycle
Vague idea → "groping around" experiences → Initial observations → Hypothesis → Model → Experiment → Data, analysis, interpretation → Results & final presentation

A Systematic Approach
1. Understand the problem, frame the questions, articulate the goals. A problem well-stated is half-solved.
   - Must remain objective.
   - Be able to answer "why" as well as "what".
2. Select metrics that will help answer the questions.
3. Identify the parameters that affect behavior:
   - System parameters (e.g., HW configuration)
   - Workload parameters (e.g., user request patterns)
4. Decide which parameters to study (vary).

Experimental Lifecycle, step 1: Understand the problem, frame the questions, articulate the goals. A problem well-stated is half-solved.
(Vague idea → "groping around" experiences → Initial observations → Hypothesis → Model → Experiment → Data, analysis, interpretation → Results & final presentation)

What can go wrong at this stage?
- Never understanding the problem well enough to crisply articulate the goals / questions / hypothesis.
- Getting invested in some solution before making sure a real problem exists.
- Getting invested in any desired result.
- Not being unbiased enough to follow proper methodology.
  - Any biases should be working against yourself.
- Fishing expeditions (groping around forever).
- Having no goals, but building the apparatus for it first.
  - A Swiss Army knife of simulators?

An Example
Vague idea: there should be "interesting" interactions between DVS (dynamic voltage scaling of the CPU) and memory, especially PADRAM (power-aware memory).
- DVS: in soft real-time applications, slow down the CPU speed and reduce the supply voltage so as to just meet the deadlines.
- PADRAM: when there are no memory accesses pending, transition the memory chip into a lower power state.
- Intuition: DVS will affect the length of memory idle gaps.

Back of the Envelope
What information do you need to know?
- XScale range: 50 MHz, 0.65 V, 15 mW up to 1 GHz, 1.75 V, 2.2 W
- Fully active memory: 300 mW
- Nap: 30 mW, with 60 ns extra latency
- E = P * t
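To make the back-of-the-envelope step concrete, here is a minimal sketch of the E = P * t arithmetic in Python. The power and frequency figures come from the bullet list above; the 10 ms task length and the assumption of perfectly CPU-bound scaling are invented for illustration only.

```python
# Back-of-the-envelope energy estimate, E = P * t.
# Power figures are from the slide; the 10 ms task length and the
# perfect CPU-bound scaling are assumptions for illustration only.

CPU_FAST = {"freq_mhz": 1000, "volt": 1.75, "power_w": 2.2}    # 1 GHz setting
CPU_SLOW = {"freq_mhz": 50,   "volt": 0.65, "power_w": 0.015}  # 50 MHz setting
MEM_ACTIVE_W = 0.300   # fully active memory
MEM_NAP_W    = 0.030   # nap state (60 ns extra wake-up latency)

def energy_joules(power_w: float, time_s: float) -> float:
    """E = P * t."""
    return power_w * time_s

# Hypothetical task: 10 ms of work at 1 GHz stretches to 200 ms at 50 MHz
# (assuming perfectly CPU-bound scaling, which real workloads won't show).
t_fast, t_slow = 0.010, 0.010 * (1000 / 50)

cpu_fast = energy_joules(CPU_FAST["power_w"], t_fast)
cpu_slow = energy_joules(CPU_SLOW["power_w"], t_slow)
# Memory stays powered for the whole run, so slowing the CPU keeps
# memory drawing active power for longer.
mem_fast = energy_joules(MEM_ACTIVE_W, t_fast)
mem_slow = energy_joules(MEM_ACTIVE_W, t_slow)

print(f"1 GHz : CPU {cpu_fast*1e3:.2f} mJ + mem {mem_fast*1e3:.2f} mJ")
print(f"50 MHz: CPU {cpu_slow*1e3:.2f} mJ + mem {mem_slow*1e3:.2f} mJ")
```

Even this crude estimate exposes the tension the example is built around: the slow setting shrinks CPU energy, but it stretches the time the memory must stay powered, which is exactly why the DVS/PADRAM interaction is worth studying.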

Power-Aware Memory: RDRAM Power States
- Active: 300 mW (services read/write transactions)
- Standby: 180 mW, +6 ns to return to Active
- Nap: 30 mW, +60 ns to return to Active
- Power Down: 3 mW
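To show how these numbers drive a power-state policy, here is a minimal sketch (my own illustration, not from the slides) that estimates when it pays to put a chip into Nap during an idle gap; charging the 60 ns resynchronization at active power is a simplifying assumption.

```python
# When is it worth napping during an idle gap of length t_gap?
# Power numbers are from the RDRAM power-state slide; charging the
# resynchronization delay at active power is a simplifying assumption.

ACTIVE_W = 0.300   # W
NAP_W    = 0.030   # W
RESYNC_S = 60e-9   # 60 ns extra latency to come back from Nap

def idle_energy(t_gap_s: float, nap: bool) -> float:
    if not nap:
        return ACTIVE_W * t_gap_s
    # Spend the gap in Nap, then pay the resync delay at active power.
    return NAP_W * t_gap_s + ACTIVE_W * RESYNC_S

# Break-even gap: ACTIVE*t = NAP*t + ACTIVE*RESYNC
break_even = ACTIVE_W * RESYNC_S / (ACTIVE_W - NAP_W)
print(f"break-even idle gap ≈ {break_even*1e9:.0f} ns")

for gap_ns in (20, 67, 200, 1000):
    t = gap_ns * 1e-9
    verdict = "nap wins" if idle_energy(t, True) < idle_energy(t, False) else "stay active"
    print(gap_ns, "ns:", verdict)
```

This is also why the earlier intuition matters: DVS changes the distribution of memory idle-gap lengths, and therefore how many gaps fall above or below this break-even point.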

Example Hypothesis: the best speed/voltage choice for DVS to minimize energy consumption, when idle memory can power down, is the lowest speed that still meets the deadline (i.e., the same conclusion reached by most DVS studies that do not model memory).

CPU Energy

Execution Time

A Systematic Approach
1. Understand the problem, frame the questions, articulate the goals. A problem well-stated is half-solved.
   - Must remain objective.
   - Be able to answer "why" as well as "what".
2. Select metrics that will help answer the questions.
3. Identify the parameters that affect behavior:
   - System parameters (e.g., HW configuration)
   - Workload parameters (e.g., user request patterns)

Experimental Lifecycle, steps 2-3: Select metrics that will help answer the questions; identify the parameters that affect behavior (system parameters, workload parameters).
(Vague idea → "groping around" experiences → Initial observations → Hypothesis → Model → Experiment → Data, analysis, interpretation → Results & final presentation)

What can go wrong at this stage?
- Wrong metrics (they don't address the questions at hand): what everyone else uses; easy to get.
- Not being clear about where the "system under test" boundaries are.
- Unrepresentative workload: not predictive of real usage; just what everyone else uses (adopted blindly), or NOT what anyone else uses (no comparison possible).
- Overlooking significant parameters that affect the behavior of the system.

An Example
- System under test: CPU and memory.
- Metrics: total energy used by CPU + memory, CPU energy, memory energy, execution time.

Parameters Affecting Behavior
- Hardware parameters:
  - CPU voltage/speed settings
  - Processor model (e.g., in-order, out-of-order, issue width)
  - Cache organization
  - Number of memory chips and data layout across them
  - Memory power-state transitioning policy (threshold values)
  - Power levels of the power states
  - Transitioning times in and out of power states
- Workload: periods, miss ratio, memory access pattern
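To see why the next step, deciding which of these parameters to actually vary, matters so much, here is a minimal sketch that enumerates a full-factorial design over just a few of them. The specific values are invented placeholders.

```python
# Enumerating a full-factorial design over a few of the parameters above.
# The parameter values are invented placeholders for illustration.
from itertools import product

params = {
    "cpu_speed_mhz":    [50, 200, 600, 1000],
    "mem_policy":       ["always-active", "nap-on-idle"],
    "nap_threshold_ns": [0, 100, 1000],
    "benchmark":        ["adpcm", "gsm", "mpeg2"],   # MediaBench-style names
}

configs = list(product(*params.values()))
print(f"{len(configs)} configurations before any trials are repeated")
# 4 * 2 * 3 * 3 = 72 configurations; at, say, 5 trials each that is 360 runs.
# This is why you pick the factors that matter most and hold the rest fixed
# (then check the fixed choices with a sensitivity analysis).
```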

A Systematic Approach
4. Decide which parameters to study (vary).
5. Select a technique:
   - Measurement of a prototype implementation: how invasive? Can we quantify the interference of monitoring? Can we directly measure what we want?
   - Simulation: how detailed? Validated against what?
   - Repeatability.
6. Select a workload:
   - Representative?
   - Community acceptance
   - Availability

Experimental Lifecycle, steps 4-6: Decide which parameters to vary; select a technique; select a workload.
(Vague idea → "groping around" experiences → Initial observations → Hypothesis → Model → Experiment → Data, analysis, interpretation → Results & final presentation)

What can go wrong at this stage?
- Choosing the wrong values for the parameters you aren't going to vary.
  - Not considering the effect of other values (sensitivity analysis).
- Not choosing to study the parameters that matter most: the factors.
- Wrong technique.
- Wrong level of detail.

An Example
- Choice of workload: MediaBench applications (later iterations will also use a synthetic benchmark in which the miss ratio can be varied).
- Technique: simulation using SimpleScalar augmented with RDRAM memory and PowerAnalyzer.
- Factors to study: CPU speed/voltage; comparing the nap memory policy with the base case.

A Systematic Approach
7. Run experiments:
   - How many trials? How many combinations of parameter settings?
   - Sensitivity analysis on other parameter values.
8. Analyze and interpret data:
   - Statistics, dealing with variability, outliers.
9. Data presentation.
10. Where does it lead us next? New hypotheses, new questions, a new round of experiments.

Experimental Lifecycle, steps 7-9: Run experiments; analyze and interpret data; data presentation.
(Vague idea → "groping around" experiences → Initial observations → Hypothesis → Model → Experiment → Data, analysis, interpretation → Results & final presentation)

What can go wrong at this stage?
- One trial: data from a single run when variation can arise.
- Multiple runs: reporting the average but not the variability (see the sketch after this list).
- Tricks of statistics.
- No interpretation of what the results mean.
- Ignoring errors and outliers.
- Overgeneralizing conclusions: omitting the assumptions and limitations of the study.
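As a minimal sketch of reporting variability rather than a bare average (the run times below are invented, not results from the study):

```python
# Report a mean together with its variability, not the mean alone.
# The run times below are invented values for illustration.
import statistics as stats

run_times = [41.2, 39.8, 44.1, 40.5, 42.3, 43.0, 40.9, 41.7]  # seconds, 8 trials

n = len(run_times)
mean = stats.mean(run_times)
sd = stats.stdev(run_times)        # sample standard deviation
sem = sd / n ** 0.5                # standard error of the mean
ci95 = 1.96 * sem                  # rough 95% interval (normal approximation;
                                   # with only 8 trials a t-value near 2.36
                                   # would be more appropriate)

print(f"mean = {mean:.2f} s, sd = {sd:.2f} s")
print(f"95% CI ≈ {mean - ci95:.2f} .. {mean + ci95:.2f} s over {n} trials")
```

Reporting the interval, or at least the standard deviation and the number of trials, lets a reader judge whether a difference between alternatives is larger than the run-to-run noise.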

Our Example

Experimental Lifecycle, step 10: What next?
(Vague idea → "groping around" experiences → Initial observations → Hypothesis → Model → Experiment → Data, analysis, interpretation → Results & final presentation)

An Example
New hypothesis: there is one "best" controller policy across all the different speed settings.
- Vary the miss ratio of the synthetic benchmark.
- Vary speed/voltage.

Our Example

Metrics
Criteria to compare performance:
- Quantifiable, measurable.
- Relevant to the goals.
- A complete set reflects all possible outcomes:
  - Successful operation: responsiveness (latency), productivity rate (throughput), resource utilization (%).
  - Unsuccessful operation: availability (probability of failure modes) or mean time to failure.
  - Error: reliability (probability of each error class) or mean time between errors.
- Must relate to the statement of the hypothesis.
  - If "System X makes programming easier" is the claim, what is a metric? Lines of code? Development time?

Common Performance Metrics (Successful Operation)
- Response time
- Utilization (% busy)
- Throughput (requests per unit of time): MIPS, bps, TPS
(Timeline diagram: request starts, service begins, service completes, response back, next request starts; intervals labeled reaction time, response time, and think time.)
(Throughput-vs-load curve: nominal capacity, knee, usable capacity.)
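A minimal sketch of how these three metrics fall out of the same event log; the timestamps are invented for illustration.

```python
# Deriving response time, throughput, and utilization from one event log.
# Timestamps (seconds) are invented for illustration.

# (arrival, service_start, service_end) for each request
log = [
    (0.00, 0.00, 0.40),
    (0.50, 0.50, 1.10),
    (1.00, 1.10, 1.60),   # queued 0.1 s behind the previous request
    (2.00, 2.00, 2.30),
]

observation_period = 2.5  # seconds the system was watched

response_times = [end - arrival for arrival, _, end in log]
busy_time      = sum(end - start for _, start, end in log)

avg_response = sum(response_times) / len(response_times)
throughput   = len(log) / observation_period          # requests per second
utilization  = busy_time / observation_period         # fraction of time busy

print(f"avg response time = {avg_response:.2f} s")
print(f"throughput        = {throughput:.2f} req/s")
print(f"utilization       = {utilization:.0%}")
```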

Issues
- Individual vs. system-wide (global) metrics.
- An ideal set of metrics should have low variability, non-redundancy, and completeness (all outcomes represented).
- Higher is better (HB), lower is better (LB), or nominal is better (NB).
- Counts of events, durations (all types of time), rates (normalized to a common basis).

"Good" Metrics
- Intuitive (linear with perceived or observed behavior)
  - e.g., doubling physical memory -> doubling the page hit rate
  - Nice, but not required.
- Predictive
  - e.g., MIPS(A) > MIPS(B) yet exectime(A) > exectime(B) means the metric failed to predict.
- Repeatable (no nondeterminism embedded in the measurement)
  - e.g., wall-clock time contains lots of junk.
- Easy to use or measure.
- Comparable across alternatives
  - e.g., MIPS on RISC vs. MIPS on CISC.
- Unbiased, i.e., independent of the alternatives
  - e.g., MFLOPS is biased toward processors with floating-point units.
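A worked illustration of the "predictive" pitfall; the instruction counts, CPIs, and clock rates below are invented to keep the arithmetic obvious.

```python
# MIPS(A) > MIPS(B) and yet exectime(A) > exectime(B).
# All numbers are invented; the point is only the arithmetic.
#   execution_time = instructions * CPI / clock_rate
#   MIPS           = instructions / (execution_time * 1e6)

def exec_time_s(instructions, cpi, clock_hz):
    return instructions * cpi / clock_hz

# Machine A runs a "simpler" instruction set: more instructions, low CPI.
A = {"instructions": 10e9, "cpi": 1.0, "clock_hz": 2e9}
# Machine B needs fewer, richer instructions at a higher CPI.
B = {"instructions": 2e9,  "cpi": 2.0, "clock_hz": 1e9}

for name, m in (("A", A), ("B", B)):
    t = exec_time_s(m["instructions"], m["cpi"], m["clock_hz"])
    mips = m["instructions"] / (t * 1e6)
    print(f"{name}: {mips:,.0f} MIPS, {t:.1f} s")
# A: 2,000 MIPS but 5.0 s; B: 500 MIPS but 4.0 s.
# The higher-MIPS machine is the slower one.
```

The same arithmetic previews the means-vs-ends distinction on the next slide: clock rate and instruction count are means-based, while execution time is the end the user actually cares about.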

Means vs. Ends Metrics
- Means-based metrics measure what was done (counts of page faults, clock frequency).
- Ends-based metrics measure progress toward the goal.

  Means        Ends
  Clock rate   Execution time

For Discussion Next Tuesday Survey the types of metrics used in your proceedings (10 papers). © 2003, Carla Ellis

Hints about Metrics Discussion
- Be precise.
- Categories of metrics, e.g., performance. There are many precisely defined performance metrics:
  - Rates (bandwidth, IPC, power)
  - Durations (latency, response time, overflow)
  - Counts (deadlines missed, faults)
- Normalized data: normalized to what?
- Ratios can be dangerous (misleading, confusing):
  - Improvements (speedups)
  - Percentages (efficiency, utilization)
  - Rates (cache miss rate)
- Beware ratios of ratios of ... What would a "% improvement in average miss rate" mean? (See the sketch below.)
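As a small illustration of why ratios of ratios are treacherous (my own example with invented run times):

```python
# Why an "average speedup" depends on how you average the ratios.
# Run times (seconds) are invented for illustration.
base = {"prog1": 10.0, "prog2": 100.0}
new  = {"prog1":  5.0, "prog2": 100.0}   # prog1 is 2x faster, prog2 unchanged

speedups = [base[p] / new[p] for p in base]             # [2.0, 1.0]
arith = sum(speedups) / len(speedups)                   # 1.5 "average speedup"
geo   = (speedups[0] * speedups[1]) ** 0.5              # about 1.41

total_speedup = sum(base.values()) / sum(new.values())  # 110 / 105, about 1.05

print(f"arithmetic mean of ratios: {arith:.2f}")
print(f"geometric mean of ratios : {geo:.2f}")
print(f"speedup of total run time: {total_speedup:.2f}")
# Three defensible "averages" give three different answers from the same two
# measurements, so say exactly what was normalized to what before reporting
# any ratio of ratios.
```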

Not Metrics
- Intuitive goals: "I know it when I see it" (like great art or the "right" behavior), e.g., fairness, ease of programming.
- Analysis methods: a cumulative distribution function (CDF). Ask: of what data?
- Presentation approaches: a pie chart. Again, ask: of what data?