MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #18 3/6/02 Taguchi’s Orthogonal Arrays

Additive Model

So from the previous result, we see that $m_{A_3} = m + a_3 + \text{(some error term)}$. This is sensible enough: recall that our representation of $a_i$ was originally $m_{A_i} - m$. So $m_{A_3}$, as we have derived it here, is an estimate of $m + a_3$, and thus our approach was in fact the use of an additive model.

Additive Model A note on the error term. We have shown that the error is actually the average of 3 error terms (one corresponding to each experiment). We typically treat each of the individual experiment error terms as having zero mean and some variance.

Additive Model Let $\sigma^2$ be the average variance of the 3 experiment error terms in our estimate of $m_{A_3}$. The error variance of our estimate of $m_{A_3}$ is then approximately $\sigma^2/3$. So this represents an approximate 3-fold reduction in error variance over a single experiment with factor A at level 3.
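Spelled out (a one-line derivation, assuming the three per-experiment errors $e_1, e_2, e_3$ are independent with zero mean and common variance $\sigma^2$):

```latex
\operatorname{Var}\!\left(\frac{e_1 + e_2 + e_3}{3}\right)
  = \frac{1}{9}\sum_{i=1}^{3}\operatorname{Var}(e_i)
  = \frac{3\sigma^2}{9}
  = \frac{\sigma^2}{3}
```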

Additive Model Define Replication Number $n_r$: –The replication number is the number of times a particular factor level is repeated in an orthogonal array. It can be shown that the error variance of the average effect of a factor level is smaller than the error variance of a single experiment by a factor equal to its $n_r$.
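A minimal Monte Carlo sketch of this variance-reduction claim (the numbers here are hypothetical; it assumes independent, identically distributed errors):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0        # assumed error std. dev. of a single experiment
n_r = 3            # replication number: times a level appears in the array
trials = 200_000   # Monte Carlo repetitions

# Error of a single experiment vs. error of the average of n_r experiments.
single = rng.normal(0.0, sigma, size=trials)
averaged = rng.normal(0.0, sigma, size=(trials, n_r)).mean(axis=1)

print(single.var())    # ~ sigma^2        = 4.0
print(averaged.var())  # ~ sigma^2 / n_r ~= 1.33
```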

Additive Model So, to obtain the same accuracy in our factor level averages using a one-at-a-time approach, we would have to conduct 3 experiments at each of the 3 levels of each factor. In our example, if we choose temperature (factor A) to be the first factor to vary (holding all others constant), we would need to conduct:

Additive Model $E(A_1, B_1, C_1, D_1)$: 3 times; $E(A_2, B_1, C_1, D_1)$: 3 times; and $E(A_3, B_1, C_1, D_1)$: 3 times; for a total of 9 experiments. We would then hold A constant at its best level and vary B, for example. We would then need to conduct:

Additive Model $E(A_*, B_1, C_1, D_1)$: 0 times; $E(A_*, B_2, C_1, D_1)$: 3 times; and $E(A_*, B_3, C_1, D_1)$: 3 times; for a total of 6 additional experiments. We would then hold A and B constant at their best levels and vary C, for example. We would then need to conduct:

Additive Model $E(A_*, B_*, C_1, D_1)$: 0 times; $E(A_*, B_*, C_2, D_1)$: 3 times; and $E(A_*, B_*, C_3, D_1)$: 3 times; for a total of 6 additional experiments. We would then hold A, B, and C constant at their best levels and vary D. We would then need to conduct:

Additive Model $E(A_*, B_*, C_*, D_1)$: 0 times; $E(A_*, B_*, C_*, D_2)$: 3 times; and $E(A_*, B_*, C_*, D_3)$: 3 times; for a total of 6 additional experiments. The grand total is then $9 + 6 + 6 + 6 = 27$ experiments.
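For comparison with the 9 runs of the orthogonal array, here is the same one-at-a-time tally in code (a sketch; the factor names and level counts follow the example above):

```python
# One-at-a-time plan: 3 replicates per new level. The first factor needs all
# 3 of its levels; each later factor reuses the best level found so far.
levels, replicates = 3, 3
factors = ["A", "B", "C", "D"]

total = 0
for i, f in enumerate(factors):
    new_levels = levels if i == 0 else levels - 1  # later factors reuse one level
    runs = new_levels * replicates
    total += runs
    print(f"vary {f}: {runs} experiments")
print("grand total:", total)  # 9 + 6 + 6 + 6 = 27, vs. 9 runs for the L9 array
```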

Statistical Notes A statistical advantage of using orthogonal arrays is that if the errors ($e_i$) are independent with zero mean and equal variance, then the estimated factor effects are mutually uncorrelated, and we can determine the best level of each factor separately. You must conduct all of the experiments in the array; otherwise the design will not be balanced, and balance is an essential property (we would lose orthogonality).
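As an illustration, a short script that checks both properties, balance and pairwise orthogonality, for the standard L9(3⁴) array (the array listing is the commonly tabulated one):

```python
from itertools import combinations, product

# Standard L9(3^4) orthogonal array (levels coded 1..3), as commonly tabulated.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Balance: every level appears the same number of times (n_r = 3) per column.
for col in range(4):
    counts = {lvl: sum(row[col] == lvl for row in L9) for lvl in (1, 2, 3)}
    assert counts == {1: 3, 2: 3, 3: 3}

# Orthogonality: every pair of columns contains all 9 level combinations once.
for c1, c2 in combinations(range(4), 2):
    pairs = {(row[c1], row[c2]) for row in L9}
    assert pairs == set(product((1, 2, 3), repeat=2))

print("L9 is balanced and pairwise orthogonal")
```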

ANOVA We can get a general idea of the degree to which each factor affects the performance of our product/process by looking at the level means. This was suggested by our use of the factor effects in predicting observation values.

ANOVA An even better approach is to use a process called decomposition of variance, more commonly known as analysis of variance (ANOVA). ANOVA will also provide us with the means to estimate the error variance of the factor effects and the variance of the error in a prediction.

ANOVA As a point of interest, we will discuss the analogy between ANOVA and Fourier analysis. Fourier analysis can be used to determine the relative strengths of the many harmonics that typically exist in an electrical signal. The larger the amplitude of a harmonic, the more important it is to the overall signal.

ANOVA ANOVA is used to determine the relative importance of the various factors affecting a product or process. The analogy is as follows: –The obs. values are like the observed signal. –The sum-of-squared obs. values is like the power of the signal. –The overall mean is like the dc portion of the signal. –The factors are like harmonics. –The harmonics of the signal are orthogonal, as are the columns in our matrix.

ANOVA We’ll talk more about this analogy as we progress. For now, we need to define a few terms for use in our analysis of variance.

ANOVA Grand total sum-of-squares: –The sum of the squares of all the observation values: $\text{GTSOS} = \sum_{i=1}^{n} \eta_i^2$. (It is the GTSOS that is analogous to the total power of our electrical signal.)

ANOVA The GTSOS can be decomposed into 2 parts: –The sum of squares due to mean: $\text{SOSDTM} = n m^2$. –The total sum of squares: $\text{TSOS} = \sum_{i=1}^{n} (\eta_i - m)^2$.

ANOVA Drawing our analogy to Fourier: –The sum of squares due to mean is analogous to the dc power of our signal and –The total sum of squares is analogous to the ac power of our signal.

ANOVA Because m is the mean of the observation values, the total sum of squares can be written as: $\text{TSOS} = \sum_{i=1}^{n} (\eta_i - m)^2 = \sum_{i=1}^{n} \eta_i^2 - n m^2.$ The derivation is shown on the following slide.

ANOVA Expanding the square and using $\sum_{i=1}^{n} \eta_i = n m$: $\sum_{i=1}^{n} (\eta_i - m)^2 = \sum_{i=1}^{n} \eta_i^2 - 2m \sum_{i=1}^{n} \eta_i + n m^2 = \sum_{i=1}^{n} \eta_i^2 - 2n m^2 + n m^2 = \sum_{i=1}^{n} \eta_i^2 - n m^2.$

So we can see from our previous results that the GTSOS is simply the sum of the SOSDTM and the TSOS: $\sum_{i=1}^{n} \eta_i^2 = n m^2 + \sum_{i=1}^{n} (\eta_i - m)^2.$ In the Fourier analogy, the above result corresponds to the fact that the ac power of the signal is equal to the total power minus the dc power.
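A quick numerical check of this identity (the observation values here are made up; any data vector works):

```python
import numpy as np

eta = np.array([12.0, 15.0, 9.0, 14.0, 11.0, 13.0, 10.0, 16.0, 8.0])  # hypothetical
n, m = eta.size, eta.mean()

gtsos = np.sum(eta**2)        # grand total sum of squares ("total power")
sosdtm = n * m**2             # sum of squares due to mean ("dc power")
tsos = np.sum((eta - m)**2)   # total sum of squares ("ac power")

assert np.isclose(gtsos, sosdtm + tsos)
print(gtsos, sosdtm, tsos)
```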

ANOVA What we need is the sum of squares due to each factor. These values will be a measure of the relative importance of the factors in changing the values of η. They are analogous to the relative strengths of our harmonics in our Fourier analysis. They can be computed as follows:

ANOVA Sum of squares due to factor A: $SS_A = n_r \sum_{i=1}^{l} (m_{A_i} - m)^2,$ where (recall that) $n_r$ is the replication number and $l$ is the number of levels.
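A sketch of this computation, assuming we are given the vector of observations and the level of the factor used in each run (the function and variable names are illustrative):

```python
import numpy as np

def factor_sum_of_squares(eta, levels):
    """Sum over the factor's levels of n_r * (level mean - grand mean)^2,
    with n_r the replication number of each level."""
    eta = np.asarray(eta, dtype=float)
    levels = np.asarray(levels)
    m = eta.mean()                       # grand mean of all observations
    ss = 0.0
    for lvl in np.unique(levels):
        mask = levels == lvl
        n_r = mask.sum()                 # replication number of this level
        m_lvl = eta[mask].mean()         # m_{A_i}: mean at this level
        ss += n_r * (m_lvl - m) ** 2
    return ss
```

For a balanced orthogonal array every level of a factor has the same replication number, so this per-level form agrees with the $n_r \sum_{i} (m_{A_i} - m)^2$ formula above.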

ANOVA Let's go through and compute these values for our example. First, notice that all of our replication numbers are 3. Our grand total sum of squares is $\text{GTSOS} = \sum_{i=1}^{9} \eta_i^2$, computed from the 9 observation values.

ANOVA Our sum of squares due to the mean is $\text{SOSDTM} = 9 m^2$. Our total sum of squares is $\text{TSOS} = 3800\ (\text{dB})^2$.

ANOVA The sum of squares due to factor A is: $SS_A = 3 \sum_{i=1}^{3} (m_{A_i} - m)^2 = 2450\ (\text{dB})^2.$

ANOVA Using the same approach as for factor A, we can show that the remaining factor sums of squares are: –For B: 950 (dB)² –For C: 350 (dB)² –For D: 50 (dB)²

ANOVA So from these calculations, we can determine exactly what percentage of the variation in η is due to each of the factors. This value is given by dividing the sum of squares for the factor by the total sum of squares. So we can say that (2450/3800) × 100 = 64.5% of the variation in η is caused by factor A.

ANOVA Likewise: –B: (950/3800) × 100 = 25.0% –C: (350/3800) × 100 = 9.2% –D: (50/3800) × 100 = 1.3% So we see that together, C and D account for only 10.5% of the variation in η.
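The percentage arithmetic from these last two slides, as a short check (the sums of squares are the values computed above for this example):

```python
factor_ss = {"A": 2450.0, "B": 950.0, "C": 350.0, "D": 50.0}  # (dB)^2
tsos = 3800.0                                                  # (dB)^2

assert sum(factor_ss.values()) == tsos  # factor SS fully account for TSOS here
for f, ss in factor_ss.items():
    print(f"{f}: {100 * ss / tsos:.1f}%")  # A: 64.5, B: 25.0, C: 9.2, D: 1.3
```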