VLSI Systems Design—Experiments
Necessary steps:
• Explore the problem space
• Design experiment(s)
• Carry out experiment(s)
• Analyze results (software packages: R, Matlab, …)
• Report results
Example: design a “better” transistor
• What do we mean by “better”?
• What FACTORS influence design?
– fabrication
– design
– environmental
• For which of these is there random variation?
Which “factors” do we want to investigate?
SUMMARY—15 IMPORTANT POINTS FOR EXPERIMENTERS:
1. Even careful experimentation and observation may miss important facts; new experiments may cause old conclusions to be discarded; EXPERIMENTS ARE NOT PROOFS.
2. It is just as important to report NEGATIVE results as POSITIVE results. The experimenter must always accurately record and thoroughly report ALL results.
3. IGNORING IMPORTANT FACTORS CAN LEAD TO ERRONEOUS CONCLUSIONS, SOMETIMES WITH TRAGIC RESULTS.
4. YOUR RESULTS ARE ONLY VALID FOR THE PART OF THE DATA-TREATMENT SPACE YOU HAVE EXPLORED; YOU CANNOT CLAIM KNOWLEDGE OF WHAT YOU HAVE NOT EXPLORED.
5. An experiment is worthless unless it can be REPEATED by other researchers using the same experimental setup; experimenters have a duty to the research community to report enough about their experiment and data that other researchers can verify their claims.
6. YOU ONLY GET ANSWERS TO THE QUESTIONS YOU ASK.
7. If you are going to use a (pseudo-)RANDOM NUMBER GENERATOR, make sure its output behaves enough like a sequence of TRUE RANDOM NUMBERS.
8. An experiment must be repeated a SUFFICIENT NUMBER OF TIMES for the results to be attributed to more than random error.
9. Choosing the CORRECT MEASURE for the question you are asking is an important part of the experimental design.
10. Reporting CORRECT results, PROPERLY DISPLAYED, is an integral part of a well-done experiment.
11. MISUSE OF GRAPH LABELING can lead to MISLEADING RESULTS AND INCORRECT CONCLUSIONS.
12. INTERPOLATING your results to regions you have not explored can lead to INCORRECT CONCLUSIONS.
13. IGNORING the “NULL HYPOTHESIS” when reporting your results can be very misleading.
14. Don’t mistake CORRELATION for DEPENDENCE.
15. Justify your choice of CURVE using VALID STATISTICS, not “appearance.”
Topics
• Analyzing and Displaying Data
– Simple Statistical Analysis
– Comparing Results
– Curve Fitting
• Designing Experiments: Factorial Designs
– 2^k Designs Including Replications
– Full Factorial Designs
• Ensuring Data Meets Analysis Criteria
• Presenting Your Results; Drawing Conclusions
Example: A System
[Block diagram: Factors (Experimental Conditions) drive the System Inputs of a System (“Black Box”), which produces System Outputs, the Responses (Experimental Results)]
Experimental Research
Define System
● Define system outputs first
● Then define system inputs
● Finally, define behavior (i.e., transfer function)
Identify Factors and Levels
● Identify system parameters that vary (many)
● Reduce parameters to important factors (few)
● Identify values (i.e., levels) for each factor
Identify Response(s)
● Identify time, space, etc. effects of interest
Design Experiments
● Identify factor-level experiments
Create and Execute System; Analyze Data
Define Workload
● Workload can be a factor (but often isn't)
● Workloads are inputs that are applied to the system
Create System
● Create the system so it can be executed:
– Real prototype
– Simulation model
– Empirical equations
Execute System
● Execute the system for each factor-level binding
● Collect and archive response data
Analyze & Display Data
● Analyze data according to the experiment design
● Evaluate raw and analyzed data for errors
● Display raw and analyzed data to draw conclusions
Some Examples
Analog Simulation
• Which of three solvers is best?
• What is the system?
• Responses:
– Fastest simulation time
– Most accurate result
– Most robust to types of circuits being simulated
• Factors:
– Solver
– Type of circuit model
– Matrix data structure
Epitaxial growth
• New method using a non-linear temperature profile
• What is the system?
• Responses:
– Total time
– Quality of layer
– Total energy required
– Maximum layer thickness
• Factors:
– Temperature profile
– Oxygen density
– Initial temperature
– Ambient temperature
Basic Descriptive Statistics for a Random Sample X
• Mean
• Median
• Mode
• Variance / standard deviation
• Z scores: Z = (X – mean) / (standard deviation)
• Quartiles, box plots
• Q-Q plot
Note: these can be deceptive. For example, if P(X = 0) = P(X = 100) = 0.5 and P(Y = 50) = 1, then X and Y have the same mean (and nastier examples can be constructed).
home.oise.utoronto.ca/~thollenstein/Exploratory%20Data%20Analysis.ppt
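All of these statistics are one-liners in base R; a minimal sketch, using a made-up sample x:
> x <- c(14, 18, 21, 19, 17, 22, 25, 19, 16, 20)  # hypothetical sample
> mean(x); median(x)                   # central tendency
> var(x); sd(x)                        # spread
> (x - mean(x)) / sd(x)                # Z scores
> quantile(x)                          # quartiles
> boxplot(x)                           # box plot
> qqnorm(x); qqline(x)                 # Q-Q plot against normal quantiles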
SIMPLE MODELS OF DATA
Example: evaluation of a new wireless network protocol
• System: wireless network with the new protocol
• Workload: 10 messages applied at a single source; each message has an identical configuration
• Experiment output: roundtrip latency per message (ms), recorded in data file “latency.dat” (columns: Msg #, Latency)
• Mean: 19.6 ms
• Variance: 10.7 ms² (= 3.27²)
• Std Dev: 3.27 ms
Hypothesis: the latency distribution is N(μ, σ²)
Verify Model Preconditions
• Check randomness: use a plot of the residuals around the mean; the residuals should “appear” random
• Check normal distribution: use a quantile-quantile (Q-Q) plot; the pattern should adhere consistently to the ideal quantile-quantile line
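Both checks are short in R; a sketch, assuming the latencies have been read into a vector lat (the layout of “latency.dat” is an assumption):
> lat <- scan("latency.dat")                     # assumed: one latency value (ms) per line
> plot(lat - mean(lat), ylab = "Residual (ms)")  # residuals should scatter randomly
> abline(h = 0)
> qqnorm(lat); qqline(lat)                       # points should track the line if ~ normal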
Confidence Intervals
Sample mean vs. population mean: if many samples are collected, about a fraction 1 − α of the resulting intervals will contain the “true mean.”
CI, n > 30 samples: x̄ ± z(1−α/2) · s/√n
CI, n ≤ 30 samples: x̄ ± t(1−α/2; n−1) · s/√n
For the latency data, n = 10, α = 0.05: (17.26, 21.94)
Raj Jain, “The Art of Computer Systems Performance Analysis,” Wiley, 1991.
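The same interval in R (sketch; lat as read above):
> n <- length(lat)
> mean(lat) + c(-1, 1) * qt(1 - 0.05/2, df = n - 1) * sd(lat) / sqrt(n)  # 95% CI
> t.test(lat)$conf.int                                                   # same interval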
Scatter and Line Plots
[Scatter/line plot: Resistance vs. Depth]
Resistance profile of a doped silicon epitaxial layer
Expect a linear resistance increase as depth increases
Linear Regression Statistics (hypothesis: resistance = β0 + β1·depth + error)
> model = lm(Resistance ~ Depth)
> summary(model)
Residuals:
    Min      1Q  Median      3Q     Max     (values omitted)
Coefficients:
            Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)   (values omitted)
Depth         (values omitted)             1.249e-07 ***
---
Signif. codes: 0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1
Residual standard error: 1.118 on 8 degrees of freedom   — “variance of error: (1.118)²”
Multiple R-Squared, Adjusted R-squared: (omitted)        — “evidence this estimate is valid”
F-statistic on 1 and 8 DF, p-value: 1.249e-07            — “reject hypotheses β0 = 0, β1 = 0”
(Using the R system; based on …)
Validating Residuals
[Q-Q plot of the regression residuals]
The errors are only marginally normally distributed, due to the “tails”
Comparing Two Sets of Data
Example: Consider two different wireless access points. Which one is faster?
• Inputs: the same set of 10 messages communicated through both access points
• Responses (usecs): Latency1 and Latency2 for each message
• Approach: take the difference of the paired data and determine the CI of the difference. If the CI straddles zero, we cannot tell which access point is faster.
CI(95%) = (−1.27, 2.87) usecs
The confidence interval straddles zero; thus we cannot determine which access point is faster with 95% confidence.
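A paired-comparison sketch in R; the latency vectors here are hypothetical stand-ins for the measured data:
> lat1 <- c(52, 48, 51, 50, 49, 53, 47, 50, 52, 48)  # hypothetical latencies, AP 1 (usecs)
> lat2 <- c(50, 49, 50, 51, 48, 51, 49, 49, 50, 47)  # hypothetical latencies, AP 2 (usecs)
> d <- lat1 - lat2                    # per-message differences
> t.test(d)$conf.int                  # 95% CI of the mean difference
> t.test(lat1, lat2, paired = TRUE)   # equivalent paired t-test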
Plots with error bars
Execution time of SuperLU linear system solution (Ax = b) on a parallel computer
• For each number of processors p, ran the problem multiple times with the same matrix size but different matrix-density values
• Determined the mean and CI for each p to obtain the curve and its error intervals
[Plot: execution time vs. p, with error bars; one curve per matrix density]
Curve Fitting
> model = lm(t ~ poly(p, 4))
> summary(model)
Call: lm(formula = t ~ poly(p, 4))
Residuals: (omitted)
Coefficients:
             Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)  (values omitted)                e-10 ***
poly(p, 4)1  (values omitted)                e-10 ***
poly(p, 4)2  (values omitted)                e-08 ***
poly(p, 4)3  (values omitted)                e-05 ***
poly(p, 4)4  (values omitted)
---
Signif. codes: 0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1
Residual standard error: (omitted) on 4 degrees of freedom
Multiple R-Squared: 1, Adjusted R-squared: (omitted)
F-statistic: 2.38e+04 on 4 and 4 DF, p-value: 5.297e-09
Model Validation: y’ = ax + b
R² – Coefficient of Determination: “How well does the data fit your model?”
What proportion of the “variability” is accounted for by the statistical model? (What is the ratio of explained variation to total variation?)
Suppose we have measurements y1, y2, …, yn with mean m,
and predicted values y1’, y2’, …, yn’ (yi’ = a·xi + b; error ei = yi − yi’)
SSE = error (residual) sum of squares = ∑(yi − yi’)² = ∑ei²
SST = total sum of squares = ∑(yi − m)²
SSR = SST − SSE = regression (explained) sum of squares = ∑(m − yi’)²
R² = SSR/SST = (SST − SSE)/SST
R² is a measure of how good the model is; the closer R² is to 1, the better.
Example: Let SST = 1499 and SSE = 97. Then R² = (1499 − 97)/1499 = 93.5%
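A direct computation of these quantities in R (sketch; the data here are made up):
> x <- 1:10                            # hypothetical predictor
> y <- 3 * x + 2 + rnorm(10)           # hypothetical noisy response
> model <- lm(y ~ x)
> SSE <- sum(residuals(model)^2)       # residual sum of squares
> SST <- sum((y - mean(y))^2)          # total sum of squares
> (SST - SSE) / SST                    # R^2; matches summary(model)$r.squared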
Using the t-test
Consider the following data (“sleep.R”): columns extra (increase in hours of sleep) and group (which drug was given)
[Data table omitted]
From “An Introduction to R”
T.test result
> t.test(extra ~ group, data = sleep)
        Welch Two Sample t-test
data:  extra by group
t = -1.8608, df = 17.776, p-value = 0.07939
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -3.3654832  0.2054832
sample estimates:
mean of x mean of y
     0.75      2.33
The p-value is the smallest α (= 1 − confidence) at which the null hypothesis can be rejected. Here p = 0.079, so the difference can be called nonzero only at confidence levels below about 92%; at 95% confidence we cannot reject a zero difference.
Factorial Design
• What “factors” need to be taken into account?
• How do we design an efficient experiment to test all these factors?
• How much do the factors, and the interactions among the factors, contribute to the variation in results?
Example: 3 factors a, b, c, each with 2 values: 8 combinations
• But what if we want a random order of experiments?
• What if each of a, b, c has 3 values? Do we need to run all experiments?
Standard Procedure—Full Factorial Design (Example)
Variables A, B, C: each with 3 values, Low, Medium, High (coded as −1, 0, 1)
“Signs Table” (one row per ±1 combination):
Run   A   B   C
 1   −1  −1  −1
 2   +1  −1  −1
 3   −1  +1  −1
 4   +1  +1  −1
 5   −1  −1  +1
 6   +1  −1  +1
 7   −1  +1  +1
 8   +1  +1  +1
1. Run the experiments in the table (“2-level, full factorial design”)
2. Repeat the experiments in this order n times by using rows 1,…,8, 1,…,8, … (“replication”)
3. Use step 2, but choose the rows randomly (“randomization”)
4. Use step 3, but add some “center point runs”: for example, run the case 0,0,0, then use 8 rows, then run 0,0,0, …, and finish with a 0,0,0 case
In general, for 5 or more factors, use a “fractional factorial design”
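A sketch of generating, replicating, and randomizing such a design in R (the names are illustrative):
> design <- expand.grid(A = c(-1, 1), B = c(-1, 1), C = c(-1, 1))  # 8-row sign table
> reps   <- design[rep(1:8, times = 3), ]   # 3 replications: rows 1..8, 1..8, 1..8
> rand   <- reps[sample(nrow(reps)), ]      # randomized run order
> rand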
2^k Factorial Design
Example: k = 2, the factors are A and B, and the x’s are computed from the signs table:
y = q0 + qA·xA + qB·xB + qAB·xA·xB
SST = total variation around the mean = ∑(yi − mean)² = SSA + SSB + SSAB,
where SSA = 2²·qA² (variation allocated to A), and SSB, SSAB are defined similarly
Note: var(y) = SST/(2^k − 1)
Fraction of variation explained by A = SSA/SST
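A worked 2² sketch in R; the four responses are hypothetical, one per sign-table row:
> xA  <- c(-1,  1, -1, 1)
> xB  <- c(-1, -1,  1, 1)
> y   <- c(15, 45, 25, 75)              # hypothetical responses
> q0  <- mean(y)
> qA  <- sum(xA * y) / 4                # effect of A
> qB  <- sum(xB * y) / 4                # effect of B
> qAB <- sum(xA * xB * y) / 4           # interaction effect
> SSA <- 4 * qA^2; SSB <- 4 * qB^2; SSAB <- 4 * qAB^2
> SST <- SSA + SSB + SSAB               # equals sum((y - mean(y))^2)
> SSA / SST                             # fraction of variation explained by A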
Example: 2^k Design
Are all factors needed? If a factor has little effect on the variability of the output, why study it further?
Method?
a. Evaluate the variation for each factor using only two levels each
b. Must consider interactions as well
Interaction: the effect of a factor depends on the levels of another
Factor               Levels
Line Length (L)      32, 512 words
No. Sections (K)     4, 16 sections
Control Method (C)   multiplexed (mux), linear (lin)
Experiment design: a cache address trace is run for each of the 8 (L, K, C) level combinations and the misses are recorded; the same design is also written in encoded (−1/+1) form for analysis. (The miss counts are not reproduced here.)
/ex-design/ex-design/ExChapter6.ppt
Example: 2^k Design (continued)
Obtain responses: table of L, K, C → Misses (values omitted)
Analyze results (sign table with columns I, L, K, C, LK, LC, KC, LKC and miss rate yj):
qi = (1/2³) ∑j (sign_ij × yj)
Ex: y1 = 14 = q0 − qL − qK − qC + qLK + qLC + qKC − qLKC; solve for the q’s
SSL = 2³·qL² = 800
SST = SSL + SSK + SSC + SSLK + SSLC + SSKC + SSLKC = 4512
%variation(L) = SSL/SST = 800/4512 = 17.7%
Full Factorial Design
Model: y_ij = m + a_j + b_i + e_ij
Effects computed such that ∑a_j = 0 and ∑b_i = 0:
m = mean(y..) (grand mean)
a_j = mean(y.j) − m (effect of level j of factor A)
b_i = mean(yi.) − m (effect of level i of factor B)
Experimental errors: SSE = ∑ij e_ij²
With a levels of factor A and b levels of factor B:
SS0 = a·b·m²
SSA = b·∑a_j²
SSB = a·∑b_i²
SST = SS0 + SSA + SSB + SSE (where SST = ∑ij y_ij²)
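A sketch of these computations in R, assuming the responses sit in a matrix y with one column per level of A and one row per level of B (the numbers are hypothetical):
> y <- matrix(c(10, 14, 12,
+               20, 22, 24), nrow = 2, byrow = TRUE)  # hypothetical 2x3 responses
> m <- mean(y)                         # grand mean
> a <- colMeans(y) - m                 # column (A) effects; sum to 0
> b <- rowMeans(y) - m                 # row (B) effects; sum to 0
> e <- y - m - outer(b, a, "+")        # residual errors e_ij
> SSE <- sum(e^2)
> SS0 <- length(y) * m^2               # a*b*m^2
> SSA <- nrow(y) * sum(a^2)            # b * sum(a_j^2)
> SSB <- ncol(y) * sum(b^2)            # a * sum(b_i^2)
> sum(y^2) - (SS0 + SSA + SSB + SSE)   # ~0: verifies the identity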
Full-Factorial Design Example
Determination of the speed of light: the Morley experiments
• Factors: Experiment No. (Expt), Run No. (Run)
• Levels: Expt – 5 experiments; Run – 20 repeated runs per experiment
• Data table columns: Expt, Run, Speed (values omitted)
Box Plots of Factors
[Box plots of Speed grouped by Expt and by Run]
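These plots can be reproduced with R’s built-in morley data set (a sketch; the slide’s data frame mm is assumed to match it):
> mm <- morley                    # built-in speed-of-light data: Expt, Run, Speed
> mm$Expt <- factor(mm$Expt)      # treat level numbers as categories
> mm$Run  <- factor(mm$Run)
> boxplot(Speed ~ Expt, data = mm, xlab = "Experiment No.")
> boxplot(Speed ~ Run,  data = mm, xlab = "Run No.")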
Two-Factor Full Factorial
> fm <- aov(Speed~Run+Expt, data=mm) # Determine ANOVA
> summary(fm)                        # Display ANOVA of factors
            Df  Sum Sq  Mean Sq  F value  Pr(>F)
Run         19  (values omitted)
Expt         4  (values omitted)              **
Residuals   76  (values omitted)
---
Signif. codes: 0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1
Conclusion: Variation across runs within an experiment is acceptably small (Run is not significant), but variation across experiments is significant (Expt is)
Visualizing Results: Tufte’s Principles
• Have a properly chosen format and design
• Use words, numbers, and drawings together
• Reflect a balance, a proportion, a sense of relevant scale
• Display an accessible complexity of detail
• Have a story to tell about the data
• Draw in a professional manner
• Avoid content-free decoration, including “chart junk”
Back to the transistor:
• What factors are there? Which ones do we want to investigate?
• How should we define our experiments?
• What role will randomness play? (simulation/actual)
• How should we report the results?