
1 © 2003, Carla Ellis Experimental Lifecycle: Vague idea → "groping around" / experiences / initial observations → Hypothesis / Model → Experiment → Data, analysis, interpretation → Results & final presentation.

2 © 2003, Carla Ellis A Systematic Approach
1. Understand the problem, frame the questions, articulate the goals. A problem well-stated is half-solved. Must remain objective; be able to answer "why" as well as "what".
2. Select metrics that will help answer the questions.
3. Identify the parameters that affect behavior: system parameters (e.g., HW config) and workload parameters (e.g., user request patterns).
4. Decide which parameters to study (vary).

3 © 2003, Carla Ellis Experimental Lifecycle, step 1: Understand the problem, frame the questions, articulate the goals. A problem well-stated is half-solved. (This is the "vague idea" stage of the cycle: Vague idea → "groping around" / experiences / initial observations → Hypothesis / Model → Experiment → Data, analysis, interpretation → Results & final presentation.)

4 © 2003, Carla Ellis An Example
Vague idea: there should be "interesting" interactions between DVS (dynamic voltage scaling of the CPU) and PADRAM (power-aware memory).
– DVS: in soft real-time applications, slow down the CPU and reduce the supply voltage so as to just meet the deadlines.
– PADRAM: when no memory accesses are pending, transition the memory chip into a lower power state.
– Intuition: DVS will affect the length of the memory idle gaps.

5 © 2003, Carla Ellis Back of the Envelope
What information do you need to know?
– XScale range: 50 MHz, 0.65 V, 15 mW up to 1 GHz, 1.75 V, 2.2 W.
– Fully active memory: 300 mW; nap: 30 mW with 60 ns extra latency.
– E = P * t
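To make the back-of-the-envelope step concrete, here is a minimal sketch of the E = P * t arithmetic using the XScale and memory numbers above. The 10 ms work quantum, the 200 ms deadline, and the assumption that CPU power is constant at each speed setting are hypothetical simplifications introduced only for illustration.

```python
# Back-of-the-envelope sketch of E = P * t using the numbers on this slide.
# Hypothetical assumptions (not from the slide): 10 ms of work at 1 GHz,
# a 200 ms deadline, and constant CPU power at each speed setting.

def energy_mj(cpu_power_w, cpu_time_s, mem_active_s, mem_idle_s,
              mem_active_w=0.300, mem_idle_w=0.030):
    """Total CPU + memory energy in millijoules (E = P * t per component)."""
    cpu_e = cpu_power_w * cpu_time_s
    mem_e = mem_active_w * mem_active_s + mem_idle_w * mem_idle_s
    return 1000.0 * (cpu_e + mem_e)

WORK_AT_1GHZ_S = 0.010   # hypothetical: 10 ms of computation at 1 GHz
DEADLINE_S = 0.200       # hypothetical soft real-time deadline

# Strategy A: run flat out (1 GHz, 2.2 W), then let memory nap for the slack.
fast = energy_mj(2.2, WORK_AT_1GHZ_S,
                 mem_active_s=WORK_AT_1GHZ_S,
                 mem_idle_s=DEADLINE_S - WORK_AT_1GHZ_S)

# Strategy B: lowest speed that just meets the deadline (50 MHz, 15 mW);
# the memory never idles long enough to nap, so it stays active.
slow_time = WORK_AT_1GHZ_S * (1000 / 50)   # 200 ms, exactly the deadline
slow = energy_mj(0.015, slow_time, mem_active_s=slow_time, mem_idle_s=0.0)

print(f"run fast + nap memory:        {fast:.1f} mJ")   # ~30.7 mJ
print(f"slowest speed, memory active: {slow:.1f} mJ")   # ~63.0 mJ
```

With these made-up task parameters, the slowest speed that meets the deadline is not the lowest-energy choice once memory can nap, which is exactly the tension the hypothesis on slide 7 states.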

6 © 2003, Carla Ellis Power Aware Memory: RDRAM Power States
– Active: 300 mW (read/write transactions are serviced in this state)
– Standby: 180 mW, +6 ns to return to Active
– Nap: 30 mW, +60 ns to return to Active
– Power Down: 3 mW, +6000 ns to return to Active
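As a rough sanity check on these numbers, the sketch below estimates how long an idle gap has to be before dropping from Standby into Nap pays off. The assumptions that the alternative is Standby and that the 60 ns exit is spent at Active power are simplifications of mine, not part of the slide.

```python
# Rough break-even gap length for Nap, using the power numbers on this slide.
# Simplifying assumptions (mine, not the slide's): the alternative to Nap is
# staying in Standby, and the 60 ns Nap exit is spent at Active power.

P_STANDBY = 0.180   # W
P_NAP = 0.030       # W
P_ACTIVE = 0.300    # W
NAP_EXIT_S = 60e-9  # extra latency to leave Nap

# Napping through a gap of length g costs P_NAP*g + P_ACTIVE*NAP_EXIT_S,
# versus P_STANDBY*g for staying in Standby.  Break-even gap length:
g_star = P_ACTIVE * NAP_EXIT_S / (P_STANDBY - P_NAP)
print(f"break-even idle gap ~ {g_star * 1e9:.0f} ns")   # ~120 ns
```

Under those assumptions, gaps longer than roughly 120 ns are worth napping through, which is why the length of the memory idle gaps (and hence the CPU speed setting) matters.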

7 © 2003, Carla Ellis Example
Hypothesis: the best speed/voltage choice for DVS to minimize energy consumption when idle memory can power down is not necessarily the lowest speed that is able to meet the deadline – counter to the assumption made by most DVS studies.

8 © 2003, Carla Ellis Example
Hypothesis restated so it can be disproved: the best speed/voltage choice for DVS to minimize energy consumption when idle memory can power down is still the lowest speed that is able to meet the deadline – the assumption made by most DVS studies.

9 © 2003, Carla Ellis What can go wrong at this stage?
– Never understanding the problem well enough to crisply articulate the goals / questions / hypothesis.
– Getting invested in some solution before making sure a real problem exists.
– Getting invested in any desired result; not being unbiased enough to follow proper methodology.
– Fishing expeditions (groping around forever).
– Having no goals, but building the apparatus first.

10 © 2003, Carla Ellis A Systematic Approach
1. Understand the problem, frame the questions, articulate the goals. A problem well-stated is half-solved. Must remain objective; be able to answer "why" as well as "what".
2. Select metrics that will help answer the questions.
3. Identify the parameters that affect behavior: system parameters (e.g., HW config) and workload parameters (e.g., user request patterns).

11 © 2003, Carla Ellis Experimental Lifecycle, steps 2 and 3: select metrics that will help answer the questions; identify the parameters that affect behavior (system parameters, workload parameters). (Cycle: Vague idea → "groping around" / experiences / initial observations → Hypothesis / Model → Experiment → Data, analysis, interpretation → Results & final presentation.)

12 © 2003, Carla Ellis An Example
System under test: CPU and memory.
Metrics: total energy used by CPU + memory, CPU energy, memory energy, leakage, execution time, average memory gap.

13 © 2003, Carla Ellis Parameters Affecting Behavior
Hardware parameters:
– CPU voltage/speed settings
– Processor model (e.g., in-order vs. out-of-order, issue width)
– Cache organization
– Number of memory chips and data layout across them
– Memory power state transitioning policy (threshold values)
– Power levels of the power states
– Transitioning times in and out of the power states
Workload parameters: periods, miss ratio, memory access pattern.
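One way to keep these straight in a simulation study is to write the parameter space down explicitly, so it is clear later which knobs become factors (varied) and which are held fixed. The sketch below is purely illustrative; the field names and default values are hypothetical, not taken from the actual study.

```python
# Hypothetical sketch: the parameter space from this slide written down as an
# explicit configuration.  Field names and defaults are illustrative only.
from dataclasses import dataclass

@dataclass
class SystemParams:
    cpu_mhz: int = 1000          # CPU speed/voltage setting
    cpu_volts: float = 1.75
    out_of_order: bool = True    # processor model
    issue_width: int = 4
    cache_kb: int = 32           # cache organization
    mem_chips: int = 4           # number of chips / data layout
    mem_policy: str = "nap"      # power state transitioning policy
    nap_threshold_ns: int = 0    # threshold before transitioning

@dataclass
class WorkloadParams:
    period_ms: float = 33.3      # soft real-time period
    miss_ratio: float = 0.01     # cache miss ratio
    access_pattern: str = "mediabench"
```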

14 © 2003, Carla Ellis What can go wrong at this stage?
– Wrong metrics (they don't address the questions at hand): chosen because they are what everyone else uses, or because they are easy to get.
– Not being clear about where the "system under test" boundaries are.
– Unrepresentative workload: not predictive of real usage; just what everyone else uses (adopted blindly), or NOT what anyone else uses (no comparison possible).
– Overlooking significant parameters that affect the behavior of the system.

15 © 2003, Carla Ellis A Systematic Approach
4. Decide which parameters to study (vary).
5. Select technique:
– Measurement of prototype implementation: how invasive? Can we quantify interference of monitoring? Can we directly measure what we want?
– Simulation: how detailed? Validated against what?
– Repeatability.
6. Select workload: representative? Community acceptance; availability.

16 © 2003, Carla Ellis Experimental Lifecycle, steps 4–6: decide which parameters to vary, select technique, select workload. (Cycle: Vague idea → "groping around" / experiences / initial observations → Hypothesis / Model → Experiment → Data, analysis, interpretation → Results & final presentation.)

17 © 2003, Carla Ellis An Example
– Choice of workload: MediaBench applications (later iterations will also use a synthetic benchmark in which the miss ratio can be varied).
– Technique: simulation using SimpleScalar augmented with RDRAM memory, plus PowerAnalyzer.
– Factors to study: CPU speed/voltage; comparing the nap memory policy with the base case.
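A small sketch of the full-factorial run list implied by these choices. The speed settings are illustrative points in the 50 MHz to 1 GHz XScale range mentioned earlier and the benchmark names are a sample of MediaBench; the actual study may have used a different set.

```python
# Sketch of a full-factorial run list for the factors named on this slide.
# Speed settings and benchmark names are illustrative, not the study's own.
from itertools import product

benchmarks = ["adpcm", "epic", "g721"]      # illustrative MediaBench members
policies = ["base", "nap"]                  # base case vs. nap memory policy
speeds_mhz = [50, 200, 400, 600, 800, 1000]

runs = list(product(benchmarks, policies, speeds_mhz))
print(f"{len(runs)} simulations to run")    # 3 * 2 * 6 = 36
for bench, policy, mhz in runs:
    print(f"simulate {bench} policy={policy} cpu={mhz} MHz")
```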

18 © 2003, Carla Ellis What can go wrong at this stage?
– Choosing the wrong values for parameters you aren't going to vary. Not considering the effect of other values (sensitivity analysis).
– Not choosing to study the parameters that matter most – the factors.
– Wrong technique.
– Wrong level of detail.

19 © 2003, Carla Ellis A Systematic Approach
7. Run experiments: how many trials? How many combinations of parameter settings? Sensitivity analysis on other parameter values.
8. Analyze and interpret data: statistics, dealing with variability, outliers.
9. Data presentation.
10. Where does it lead us next? New hypotheses, new questions, a new round of experiments.
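A minimal sketch of the statistics called for in step 8: report the variability across trials, not just the average. The five per-run energy readings are invented for illustration.

```python
# Minimal sketch for step 8: report variability, not just the average.
# The five per-run energy readings below are invented for illustration.
import statistics as st

trials_mj = [31.2, 30.8, 31.9, 30.5, 31.4]   # energy per trial, hypothetical

mean = st.mean(trials_mj)
sd = st.stdev(trials_mj)
# ~95% confidence half-width; 2.776 is the t value for n - 1 = 4 deg. of freedom
half_width = 2.776 * sd / len(trials_mj) ** 0.5

print(f"energy = {mean:.1f} +/- {half_width:.1f} mJ (95% CI, n={len(trials_mj)})")
```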

20 © 2003, Carla Ellis Experimental Lifecycle, steps 7–9: run experiments, analyze and interpret data, data presentation. (Cycle: Vague idea → "groping around" / experiences / initial observations → Hypothesis / Model → Experiment → Data, analysis, interpretation → Results & final presentation.)

21 © 2003, Carla Ellis An Example

22 © 2003, Carla Ellis What can go wrong at this stage?
– One trial: data from a single run when variation can arise.
– Multiple runs: reporting the average but not the variability.
– Tricks of statistics.
– No interpretation of what the results mean.
– Ignoring errors and outliers.
– Overgeneralizing conclusions: omitting assumptions and limitations of the study.

23 © 2003, Carla Ellis A Systematic Approach
7. Run experiments: how many trials? How many combinations of parameter settings? Sensitivity analysis on other parameter values.
8. Analyze and interpret data: statistics, dealing with variability, outliers.
9. Data presentation.
10. Where does it lead us next? New hypotheses, new questions, a new round of experiments.

24 © 2003, Carla Ellis Experimental Lifecycle, step 10: what next? (Cycle: Vague idea → "groping around" / experiences / initial observations → Hypothesis / Model → Experiment → Data, analysis, interpretation → Results & final presentation.)

25 © 2003, Carla Ellis An Example
New hypothesis: different controller policies are appropriate at different speed settings.
– Vary the miss ratio of the synthetic benchmark.
– Vary speed/voltage.

26 © 2003, Carla Ellis Metrics
Criteria to compare performance:
– Quantifiable, measurable
– Relevant to goals
– Complete set reflects all possible outcomes:
  Successful – responsiveness, productivity rate (throughput), resource utilization
  Unsuccessful – availability (probability of failure mode) or mean time to failure
  Error – reliability (probability of error class) or mean time between errors

27 © 2003, Carla Ellis Common Performance Metrics (Successful Operation)
– Response time
– Throughput (requests per unit of time): MIPS, bps, TPS
(Figures: a request/response timeline marking request start/end, service begin/complete, and response back, with reaction time, response time, and think time; and a throughput vs. load curve showing nominal capacity, the knee, and usable capacity.)
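A sketch of how these two metrics fall out of per-request timestamps. The trace is invented for illustration; "response time" here is taken as completion minus arrival, matching the request/response timeline in the slide's figure.

```python
# Sketch of the two metrics on this slide computed from per-request timestamps.
# The trace is invented; response time = completion minus arrival.
requests = [  # (arrival_s, completion_s) -- hypothetical trace
    (0.00, 0.12), (0.05, 0.21), (0.10, 0.26), (0.20, 0.31),
]

response_times = [done - start for start, done in requests]
elapsed = max(done for _, done in requests) - min(start for start, _ in requests)

print(f"mean response time = {sum(response_times) / len(requests):.3f} s")
print(f"throughput = {len(requests) / elapsed:.1f} requests/s")
```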

28 © 2003, Carla Ellis Discussion: Sampling of Metrics from Literature

29 © 2003, Carla Ellis Discussion
Next time: Destination Initial Hypothesis.
Pre-proposal 1: sketch out what information you would need to collect (or have already gathered) in a "groping around" phase to get from a vague idea to the hypothesis stage for your planned project.
(Lifecycle fragment shown: Vague idea → "groping around" / experiences / initial observations → Hypothesis.)

