Lecture 10 Page 1 CS 239, Spring 2007 Experiment Designs for Categorical Parameters CS 239 Experimental Methodologies for System Software Peter Reiher May 10, 2007

Lecture 10 Page 2 CS 239, Spring 2007 Outline Categorical parameters One factor designs Two factor full factorial designs

Lecture 10 Page 3 CS 239, Spring 2007 Categorical Parameters Some experimental parameters don’t represent a range of values They represent discrete alternatives With no necessary relationship between alternatives

Lecture 10 Page 4 CS 239, Spring 2007 Examples of Categorical Variables Different processors Different compilers Different operating systems Different denial of service defenses Different applications On/off settings for configurations

Lecture 10 Page 5 CS 239, Spring 2007 Why Different Treatment? Most models we’ve discussed imply there is a continuum of parameter values Essentially, they’re dials you can set to any value You test a few, and the model tells you what to expect at other settings

Lecture 10 Page 6 CS 239, Spring 2007 The Difference With Categorical Parameters Each is a discrete entity There is no relationship between the different members of the category There is no “in between” value –Models that suggest there are can be deceptive

Lecture 10 Page 7 CS 239, Spring 2007 Basic Differences in Categorical Models Need separate effects for each element in a category –Rather than one effect multiplied by the parameter's value No claim for predictive value of model –Used to analyze differences in alternatives Slightly different methods of computing effects –Most other analysis techniques are similar to what we've seen elsewhere

Lecture 10 Page 8 CS 239, Spring 2007 One Factor Experiments If there's only one important categorical factor But it has more than two interesting alternatives –Methods work for two alternatives, but they reduce to 2^1 factorial designs If the single variable isn't categorical, use regression instead The method allows multiple replications

Lecture 10 Page 9 CS 239, Spring 2007 What is This Good For? Comparing truly comparable options –Evaluating a single workload on multiple machines –Or with different options for a single component –Or single suite of programs applied to different compilers

Lecture 10 Page 10 CS 239, Spring 2007 What Isn’t This Good For? Incomparable “factors” –Such as measuring vastly different workloads on a single system Numerical factors –Because it won’t predict any untested levels

Lecture 10 Page 11 CS 239, Spring 2007 An Example One Factor Experiment You are buying a VPN server to encrypt/decrypt all external messages –Everything is padded to a single message size, for security purposes Four different servers are available Performance is measured in response time –Lower is better This choice could be assisted by a one-factor experiment

Lecture 10 Page 12 CS 239, Spring 2007 The Single Factor Model y_ij = μ + α_j + e_ij y_ij is the i th response with the factor at alternative j μ is the mean response α_j is the effect of alternative j e_ij is the error term

Lecture 10 Page 13 CS 239, Spring 2007 One Factor Experiments With Replications Initially, assume r replications at each alternative of the factor Assuming a alternatives of the factor, a total of ar observations The model is thus y_ij = μ + α_j + e_ij, for i = 1, …, r and j = 1, …, a

Lecture 10 Page 14 CS 239, Spring 2007 Sample Data for Our Example Four alternatives (servers A, B, C, and D), with four replications each, measured in seconds [table of the sixteen measured response times]

Lecture 10 Page 15 CS 239, Spring 2007 Computing Effects We need to figure out μ and the α_j We have the various y_ij's So how to solve the equation? Well, the errors should add to zero And the effects should add to zero

Lecture 10 Page 16 CS 239, Spring 2007 Calculating μ Since the sum of the errors and the sum of the effects are both zero, summing the model over all observations gives Σ_ij y_ij = ar·μ And thus μ = (1/(ar))·Σ_ij y_ij –μ is equal to the grand mean of all responses

Lecture 10 Page 17 CS 239, Spring 2007 Calculating μ for Our Example Averaging all 16 observations gives μ = 1.018

Lecture 10 Page 18 CS 239, Spring 2007 Calculating α_j α_j is a vector of effects –One for each alternative of the factor To find the vector, find the column means For each j, of course We can calculate these directly from observations

Lecture 10 Page 19 CS 239, Spring 2007 Calculating a Column Mean The column mean is ȳ.j = (1/r)·Σ_i y_ij But we also know that y_ij is defined to be μ + α_j + e_ij So ȳ.j = μ + α_j + (1/r)·Σ_i e_ij

Lecture 10 Page 20 CS 239, Spring 2007 Calculating the Parameters Remember, the sum of the errors for any given column is zero, so ȳ.j = μ + α_j So we can solve for α_j: α_j = ȳ.j − μ

Lecture 10 Page 21 CS 239, Spring 2007 Parameters for Our Example Subtract μ (1.018) from the column means for servers A, B, C, and D to get the parameters α_j [table of column means and resulting parameters]
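The effect computation on the last few slides is easy to mechanize. Below is a minimal Python sketch; since the transcript lost the actual measurement table, the four-replication data here is hypothetical and only the structure matches the lecture's example.

```python
# Minimal sketch of one-factor effect computation (hypothetical data;
# the lecture's real measurements did not survive transcription).
data = {                      # seconds; 4 replications per server
    "A": [0.96, 1.05, 0.82, 0.92],
    "B": [0.75, 1.22, 1.13, 0.98],
    "C": [1.01, 0.89, 1.16, 1.10],
    "D": [0.93, 1.15, 1.29, 0.95],
}

all_obs = [y for col in data.values() for y in col]
mu = sum(all_obs) / len(all_obs)                      # grand mean
col_mean = {j: sum(ys) / len(ys) for j, ys in data.items()}
alpha = {j: m - mu for j, m in col_mean.items()}      # per-alternative effects

# Sanity check: the effects must sum to zero by construction.
assert abs(sum(alpha.values())) < 1e-9
```

Note that no model-fitting machinery is needed: for a one-factor categorical design the least-squares effects are just column means minus the grand mean.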

Lecture 10 Page 22 CS 239, Spring 2007 Estimating Experimental Errors Estimated response is ŷ_ij = μ + α_j But we measured actual responses –Multiple ones per alternative So we can estimate the amount of error in the estimated response Using methods similar to those used in other types of experiment designs

Lecture 10 Page 23 CS 239, Spring 2007 Finding Sum of Squared Errors SSE estimates the variance of the errors We can calculate SSE directly from the model and observations Or indirectly from its relationship to other error terms

Lecture 10 Page 24 CS 239, Spring 2007 SSE for Our Example Calculated directly: SSE = Σ_ij (y_ij − μ − α_j)² = (0.96 − (μ + α_A))² + ⋯ + (0.93 − (μ + α_D))² = 0.3425

Lecture 10 Page 25 CS 239, Spring 2007 Allocating Variation To allocate variation for this model, start by squaring both sides of the model equation and summing over all observations Cross-product terms add up to zero –Why?

Lecture 10 Page 26 CS 239, Spring 2007 Variation In Sum of Squares Terms SSY = SS0 + SSA + SSE where SSY = Σ_ij y_ij², SS0 = ar·μ², and SSA = r·Σ_j α_j² Giving us another way to calculate SSE

Lecture 10 Page 27 CS 239, Spring 2007 Sum of Squares Terms for Our Example Computing SSY, SS0, and SSA from the observations and parameters, SSE must equal SSY − SS0 − SSA –Which is 0.3425 –Matching our earlier SSE calculation

Lecture 10 Page 28 CS 239, Spring 2007 Assigning Variation SST is the total variation SST = SSY - SS0 = SSA + SSE Part of the total variation comes from our model Part of the total variation comes from experimental errors A good model explains a lot of variation
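The allocation of variation can be sketched in a few lines of Python. As before, the data below is hypothetical (the lecture's measurement table was lost in transcription); the point is the mechanics of SSY, SS0, SSA, SSE, and SST.

```python
# Sketch: allocating variation in a one-factor design with replications.
# Data is hypothetical; a = 4 alternatives, r = 4 replications each.
data = [
    [0.96, 1.05, 0.82, 0.92],   # server A
    [0.75, 1.22, 1.13, 0.98],   # server B
    [1.01, 0.89, 1.16, 1.10],   # server C
    [0.93, 1.15, 1.29, 0.95],   # server D
]
a, r = len(data), len(data[0])
mu = sum(sum(col) for col in data) / (a * r)
alpha = [sum(col) / r - mu for col in data]

SSY = sum(y * y for col in data for y in col)   # sum of squared observations
SS0 = a * r * mu ** 2
SSA = r * sum(aj * aj for aj in alpha)
SSE = SSY - SS0 - SSA                           # indirect route to SSE
SST = SSY - SS0                                 # total variation

# The direct calculation of SSE must agree with the indirect one.
direct = sum((y - mu - aj) ** 2 for col, aj in zip(data, alpha) for y in col)
assert abs(SSE - direct) < 1e-9
print(f"SSA explains {100 * SSA / SST:.1f}% of variation")
```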

Lecture 10 Page 29 CS 239, Spring 2007 Assigning Variation in Our Example SST = SSY − SS0 = SSA + SSE, with SSE = 0.3425 The percentage of variation explained by server choice is 100·SSA/SST

Lecture 10 Page 30 CS 239, Spring 2007 Analysis of Variance The percentage of variation explained can be large or small Regardless of which, it may be statistically significant or insignificant To determine significance, use an ANOVA procedure –Assumes normally distributed errors...

Lecture 10 Page 31 CS 239, Spring 2007 Running an ANOVA Procedure Easiest to set up a tabular method Like the method used in regression models –With slight differences Basically, determine the ratio of the Mean Square of A (the parameters) to the Mean Square of the Errors Then check against the F-table value for the appropriate degrees of freedom

Lecture 10 Page 32 CS 239, Spring 2007 ANOVA Table for One-Factor Experiments
Component   Sum of Squares    % Variation     Deg. of Freedom   Mean Square          F-Comp    F-Table
y           SSY                               ar
ȳ           SS0                               1
y − ȳ       SST = SSY − SS0   100             ar − 1
A           SSA               100·SSA/SST     a − 1             MSA = SSA/(a−1)      MSA/MSE   F[1−α; a−1, a(r−1)]
e           SSE = SST − SSA   100·SSE/SST     a(r−1)            MSE = SSE/(a(r−1))

Lecture 10 Page 33 CS 239, Spring 2007 ANOVA Procedure for Our Example [ANOVA table for the server data, with rows for y, A, and e; F-computed for A = 0.394]

Lecture 10 Page 34 CS 239, Spring 2007 Analysis of Our Example ANOVA Done at 90% level Since F-computed is.394, and the table entry at the 90% level with n=3 and m=12 is 2.61, –The servers are not significantly different
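The F computation itself is a one-liner once the sums of squares are known. In this sketch, SSE = 0.3425, a = 4, and r = 4 are the lecture's values; the SSA value was lost from the transcript and is back-computed here from the slide's F = 0.394, so treat it as approximate.

```python
# Sketch of the one-factor ANOVA F computation.
SSE, a, r = 0.3425, 4, 4   # from the lecture's example
SSA = 0.0337               # approximate; back-computed from the slide's F = 0.394

MSE = SSE / (a * (r - 1))  # mean square error, a(r-1) = 12 degrees of freedom
MSA = SSA / (a - 1)        # mean square of A, a-1 = 3 degrees of freedom
F = MSA / MSE

# Compare against F[1-alpha; a-1, a(r-1)]; the lecture's 90% table value
# for (3, 12) degrees of freedom is 2.61.
significant = F > 2.61
print(F, significant)
```

Since F is well below the table value, the server choice is not significant at the 90% level, matching the slide's conclusion.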

Lecture 10 Page 35 CS 239, Spring 2007 One-Factor Experiment Assumptions Analysis of one-factor experiments makes the usual assumptions –Effects of factor are additive –Errors are additive –Errors are independent of factor alternative –Errors are normally distributed –Errors have same variance at all alternatives How do we tell if these are correct?

Lecture 10 Page 36 CS 239, Spring 2007 Visual Diagnostic Tests Similar to the ones done before –Residuals vs. predicted response –Normal quantile-quantile plot –Perhaps residuals vs. experiment number

Lecture 10 Page 37 CS 239, Spring 2007 Residuals vs. Predicted For Our Example

Lecture 10 Page 38 CS 239, Spring 2007 Residuals vs. Predicted, Slightly Revised

Lecture 10 Page 39 CS 239, Spring 2007 What Does This Plot Tell Us? This analysis assumed the size of the errors was unrelated to the factor alternatives The plot tells us something entirely different –Vastly different spread of residuals for different alternatives For this reason, one-factor analysis is not appropriate for this data –Compare individual alternatives instead –Using methods discussed earlier

Lecture 10 Page 40 CS 239, Spring 2007 Could We Have Figured This Out Sooner? Yes! Look at the original data Look at the calculated parameters The model says C & D are identical Even cursory examination of the data suggests otherwise

Lecture 10 Page 41 CS 239, Spring 2007 Looking Back at the Data [table of observations for servers A–D and the computed parameters]

Lecture 10 Page 42 CS 239, Spring 2007 Quantile-Quantile Plot for Example

Lecture 10 Page 43 CS 239, Spring 2007 What Does This Plot Tell Us? Overall, the errors are normally distributed If we only did the quantile-quantile plot, we’d think everything was fine The lesson - test all the assumptions, not just one or two

Lecture 10 Page 44 CS 239, Spring 2007 One-Factor Experiment Effects Confidence Intervals Estimated parameters are random variables –So we can compute their confidence intervals Basic method is the same as for confidence intervals on 2^k r design effects Find the standard deviation of the parameters –Use that to calculate the confidence intervals

Lecture 10 Page 45 CS 239, Spring 2007 Confidence Intervals For Example Parameters s_e = 0.158 Standard deviation of μ = 0.040 Standard deviation of α_j = 0.068 95% confidence interval for μ = (0.932, 1.10) 95% CI for α_A = (−0.225, 0.074) 95% CI for α_B = (−0.148, 0.151) 95% CI for α_C = (−0.113, 0.186) 95% CI for α_D = (−0.113, 0.186) None of the effects are statistically significant
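These intervals follow the standard one-factor formulas (s_μ = s_e/√(ar), s_αj = s_e·√((a−1)/(ar)), interval = estimate ± t·s). The sketch below takes s_e = 0.158 and μ = 1.018 from the slides and uses the t-table value t[0.975; 12] = 2.179; small differences from the slide's numbers are rounding.

```python
import math

# Sketch of CI computation for one-factor effects (standard formulas).
# s_e and mu come from the slides; t[0.975; 12] is a standard t-table entry
# for the a(r-1) = 12 error degrees of freedom.
a, r = 4, 4
s_e = 0.158
mu = 1.018

s_mu = s_e / math.sqrt(a * r)                    # ~0.040, as on the slide
s_alpha = s_e * math.sqrt((a - 1) / (a * r))     # ~0.068, per effect
t = 2.179

ci_mu = (mu - t * s_mu, mu + t * s_mu)
print(round(ci_mu[0], 3), round(ci_mu[1], 3))    # ~0.932 and ~1.104
```

An effect is significant only if its interval excludes zero; here none of the α_j intervals do.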

Lecture 10 Page 46 CS 239, Spring 2007 Unequal Sample Sizes in One- Factor Experiments Can you evaluate a one-factor experiment in which you have different numbers of replications for alternatives? Yes, with little extra difficulty See book example for full details

Lecture 10 Page 47 CS 239, Spring 2007 Changes To Handle Unequal Sample Sizes The model is the same The effects are weighted by the number of replications for that alternative: the constraint Σ_j α_j = 0 becomes Σ_j r_j·α_j = 0, where r_j is the number of replications at alternative j And things related to the degrees of freedom are often weighted by N (the total number of observations)
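The weighted constraint falls out naturally if you compute the grand mean over all N observations rather than averaging the column means. A small sketch with hypothetical, deliberately unequal replication counts:

```python
# Sketch: one-factor effects with unequal replication counts (hypothetical
# data). The effects satisfy the weighted constraint sum_j r_j * alpha_j = 0.
data = {"A": [0.9, 1.1], "B": [1.0, 1.2, 0.8], "C": [1.3]}

N = sum(len(obs) for obs in data.values())             # total observations
mu = sum(y for obs in data.values() for y in obs) / N  # grand mean over all N
alpha = {j: sum(obs) / len(obs) - mu for j, obs in data.items()}

# The replication-weighted sum of effects is zero by construction.
weighted = sum(len(obs) * alpha[j] for j, obs in data.items())
assert abs(weighted) < 1e-9
```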

Lecture 10 Page 48 CS 239, Spring 2007 Two-Factor Full Factorial Design Without Replications Used when you have only two parameters But multiple levels for each Test all combinations of the levels of the two parameters At this point, without replicating any observations For factors A and B with a and b levels, ab experiments required

Lecture 10 Page 49 CS 239, Spring 2007 What is This Design Good For? Systems that have two important factors Factors are categorical More than two levels for at least one factor Examples - –Performance of different processors under different workloads –Characteristics of different compilers for different benchmarks –Effects of different reconciliation topologies and workloads on a replicated file system

Lecture 10 Page 50 CS 239, Spring 2007 What Isn't This Design Good For? Systems with more than two important factors –Use general factorial design Non-categorical variables –Use regression Only two levels per factor –Use 2^2 designs

Lecture 10 Page 51 CS 239, Spring 2007 Model For This Design y_ij = μ + α_j + β_i + e_ij y_ij is the observation μ is the mean response α_j is the effect of factor A at level j β_i is the effect of factor B at level i e_ij is an error term Sums of the α_j's and the β_i's are both zero

Lecture 10 Page 52 CS 239, Spring 2007 What Are the Model’s Assumptions? Factors are additive Errors are additive Typical assumptions about errors –Distributed independently of factor levels –Normally distributed Remember to check these assumptions!

Lecture 10 Page 53 CS 239, Spring 2007 Computing the Effects Need to figure out μ, the α_j, and the β_i Arrange observations in a two-dimensional matrix –With b rows and a columns Compute effects such that the error has zero mean –Sum of error terms across all rows and columns is zero

Lecture 10 Page 54 CS 239, Spring 2007 Two-Factor Full Factorial Example We want to expand the functionality of a file system to allow automatic compression We examine three choices - –Library substitution of file system calls –A VFS built for this purpose –UCLA stackable layers file system Using three different benchmarks –With response time as the metric

Lecture 10 Page 55 CS 239, Spring 2007 Sample Data for Our Example [table of response times: columns for Library, VFS, and Layers; rows for the compile benchmark, a second benchmark, and the web server benchmark]

Lecture 10 Page 56 CS 239, Spring 2007 Computing μ Averaging the j th column gives ȳ.j = μ + α_j + (1/b)·Σ_i (β_i + e_ij) By design, the error terms add to zero Also, the β_i's add to zero, so ȳ.j = μ + α_j Averaging rows produces ȳ_i. = μ + β_i Averaging everything produces ȳ.. = μ

Lecture 10 Page 57 CS 239, Spring 2007 So the Parameters Are... μ = ȳ.. (the grand mean) α_j = ȳ.j − μ β_i = ȳ_i. − μ

Lecture 10 Page 58 CS 239, Spring 2007 Calculating Parameters for Our Example μ = grand mean of the nine observations α_j = (−6.5, −16.3, 22.8) β_i = (−264.1, …, 386.9) So, for example, the model states that a given benchmark run using the special-purpose VFS will take μ − 16.3 + β_i seconds
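The row-mean/column-mean computation above can be sketched directly. The 3×3 response matrix here is hypothetical (the transcript lost the real measurements), but the structure mirrors the lecture's example: columns are the three compression approaches, rows are the three benchmarks.

```python
# Sketch: effects for an unreplicated two-factor full factorial design.
# Hypothetical data; rows = benchmarks (factor B), columns = Library,
# VFS, Layers (factor A).
y = [
    [110.0, 100.0, 140.0],      # compile benchmark
    [210.0, 195.0, 240.0],      # second benchmark
    [1400.0, 1380.0, 1460.0],   # web server benchmark
]
b, a = len(y), len(y[0])
mu = sum(map(sum, y)) / (a * b)                       # grand mean
col_mean = [sum(y[i][j] for i in range(b)) / b for j in range(a)]
row_mean = [sum(row) / a for row in y]
alpha = [m - mu for m in col_mean]                    # factor A effects
beta = [m - mu for m in row_mean]                     # factor B effects

# Both effect vectors must sum to zero.
assert abs(sum(alpha)) < 1e-9 and abs(sum(beta)) < 1e-9
```

The model's prediction for cell (i, j) is then mu + alpha[j] + beta[i], and the residuals are the e_ij.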

Lecture 10 Page 59 CS 239, Spring 2007 Estimating Experimental Errors Similar to estimation of errors in previous designs Take the difference between the model’s predictions and the observations Calculate a Sum of Squared Errors Then allocate the variation

Lecture 10 Page 60 CS 239, Spring 2007 Allocating Variation Using the same kind of procedure we’ve used on other models, SSY = SS0 + SSA + SSB + SSE SST = SSY - SS0 We can then divide the total variation between SSA, SSB, and SSE

Lecture 10 Page 61 CS 239, Spring 2007 Calculating SS0, SSA, and SSB SS0 = ab·μ² SSA = b·Σ_j α_j² SSB = a·Σ_i β_i² where a and b are the number of levels for factors A and B

Lecture 10 Page 62 CS 239, Spring 2007 Allocation of Variation For Our Example SSE = 2512 SSY = 1,858,390 SS0 = 1,149,827 SSA = 2489 SSB = 703,561 SST = 708,562 Percent variation due to A – 0.35% Percent variation due to B – 99.3% Percent variation due to errors – 0.35% So very little variation is due to the compression technology used

Lecture 10 Page 63 CS 239, Spring 2007 Analysis of Variation Again, similar to previous models –With slight modifications As before, use an ANOVA procedure –With an extra row for the second factor –And changes in degrees of freedom But the end steps are the same –Compare F-computed to F-table –Compare for each factor

Lecture 10 Page 64 CS 239, Spring 2007 Analysis of Variation for Our Example MSE = SSE/[(a-1)(b-1)]=2512/[(2)(2)]=628 MSA = SSA/(a-1) = 2489/2 = 1244 MSB = SSB/(b-1) = 703,561/2 = 351,780 F-computed for A = MSA/MSE = 1.98 F-computed for B = MSB/MSE = 560 The 95% F-table value for A & B = 6.94 A is not significant, B is –Remember, significance and importance are different things
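This arithmetic can be checked mechanically. The sums of squares below are the lecture's own values for the two-factor example; only the code around them is new.

```python
# Reproducing the lecture's two-factor ANOVA arithmetic (a = b = 3 levels).
SSE, SSA, SSB = 2512, 2489, 703561

a = b = 3
MSE = SSE / ((a - 1) * (b - 1))   # 628.0
MSA = SSA / (a - 1)               # 1244.5 (the slide rounds to 1244)
MSB = SSB / (b - 1)               # 351780.5

F_A = MSA / MSE                   # ~1.98
F_B = MSB / MSE                   # ~560
F_table = 6.94                    # 95% F value for (2, 4) degrees of freedom

print(F_A > F_table, F_B > F_table)   # prints: False True
```

So factor A (compression technology) is not significant and factor B (benchmark) is, matching the slide.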

Lecture 10 Page 65 CS 239, Spring 2007 Checking Our Results With Visual Tests As always, check if the assumptions made by this analysis are correct Using the residuals vs. predicted and quantile-quantile plots

Lecture 10 Page 66 CS 239, Spring 2007 Residuals Vs. Predicted Response for Example

Lecture 10 Page 67 CS 239, Spring 2007 What Does This Chart Tell Us? Do we or don’t we see a trend in the errors? Clearly they’re higher at the highest level of the predictors But is that alone enough to call a trend? –Perhaps not, but we should take a close look at both the factors to see if there’s a reason to look further –And take results with a grain of salt

Lecture 10 Page 68 CS 239, Spring 2007 Quantile-Quantile Plot for Example

Lecture 10 Page 69 CS 239, Spring 2007 Confidence Intervals for Effects Need to determine the standard deviation for the data as a whole From which standard deviations for the effects can be derived –Using different degrees of freedom for each Complete table in Jain, pg. 351

Lecture 10 Page 70 CS 239, Spring 2007 Standard Deviations for Our Example s_e = 25 Standard deviation of μ: s_e/√(ab) = 25/3 ≈ 8.3 Standard deviation of α_j: s_e·√((a−1)/(ab)) ≈ 11.8 Standard deviation of β_i: s_e·√((b−1)/(ab)) ≈ 11.8

Lecture 10 Page 71 CS 239, Spring 2007 Calculating Confidence Intervals for Our Example Just the file system alternatives are shown here At the 95% level, with 4 degrees of freedom CI for library solution – (−39, 26) CI for VFS solution – (−49, 16) CI for layered solution – (−10, 55) So none of these solutions is significantly different from the mean at the 95% level
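The library-solution interval can be reproduced from the slide's numbers: α_library = −6.5 and s_e = 25 come from the lecture, and t[0.975; 4] = 2.776 is a standard t-table value. The formula for the effect's standard deviation is the one sketched on the previous slide.

```python
import math

# Reproducing the lecture's 95% CI for the library alternative's effect.
s_e, a, b = 25.0, 3, 3
alpha_lib = -6.5

s_alpha = s_e * math.sqrt((a - 1) / (a * b))   # ~11.79
t = 2.776                                      # t[0.975; 4] from a table
ci = (alpha_lib - t * s_alpha, alpha_lib + t * s_alpha)
print(round(ci[0]), round(ci[1]))              # prints: -39 26
```

Since the interval straddles zero, the library alternative is not significantly different from the mean, exactly as the slide concludes.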