Simulation Modeling and Analysis Session 12 Comparing Alternative System Designs.

Outline Comparing Two Designs Comparing Several Designs Statistical Models Metamodeling

Comparing Two Designs Let the average measures of performance for designs 1 and 2 be μ1 and μ2. Goal of the comparison: find point and interval estimates for μ1 − μ2.

Example Auto inspection system design. Arrivals: exponential with mean 6.316 min. Service: – Brake check N(6.5, 0.5) min – Headlight check N(6.0, 0.5) min – Steering check N(5.5, 0.5) min. Two alternatives: – The same service person does all checks – A service person is devoted to each check

Comparing Two Designs -contd Run length for design i = T_Ei. Number of replications for design i = R_i. Average response time for replication r of design i = Y_ri. The averages over all replications, Y1* = Σ_r Y_r1/R_1 and Y2* = Σ_r Y_r2/R_2, are unbiased estimators of μ1 and μ2.

Possible outcomes Confidence interval for μ1 − μ2 lies well to the left of zero: most likely μ1 < μ2. Confidence interval lies well to the right of zero: most likely μ1 > μ2. Confidence interval contains zero: most likely μ1 ≈ μ2. Confidence interval: (Y1* − Y2*) ± t_{α/2, ν} s.e.(Y1* − Y2*), where ν is the degrees of freedom.

Independent Sampling with Equal Variances Different and independent random-number streams are used to simulate the two designs. Var(Y_i*) = Var(Y_ri)/R_i = σ_i²/R_i. Var(Y1* − Y2*) = Var(Y1*) + Var(Y2*) = σ1²/R1 + σ2²/R2 = V_IND. Assume the run lengths can be adjusted to produce σ1² ≈ σ2².

Independent Sampling with Equal Variances -contd Then Y1* − Y2* is a point estimate of μ1 − μ2. S_i² = Σ_r (Y_ri − Y_i*)²/(R_i − 1). S_p² = [(R1 − 1)S1² + (R2 − 1)S2²]/(R1 + R2 − 2). s.e.(Y1* − Y2*) = S_p (1/R1 + 1/R2)^(1/2). Degrees of freedom ν = R1 + R2 − 2.

Independent Sampling with Unequal Variances s.e.(Y1* − Y2*) = (S1²/R1 + S2²/R2)^(1/2). Degrees of freedom ν = (S1²/R1 + S2²/R2)²/M, where M = (S1²/R1)²/(R1 − 1) + (S2²/R2)²/(R2 − 1). Here R1 and R2 must be > 6.
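The two standard errors (pooled and Welch) can be sketched in plain Python. The function name is illustrative, and the t critical values are passed in as arguments because they come from a t-table: R1 + R2 − 2 d.o.f. for the pooled case, the approximate ν above for Welch.

```python
import math

def two_sample_ci(y1, y2, t_pooled, t_welch):
    """Point estimate and confidence intervals for mu1 - mu2 under
    independent sampling (sketch; t_* are t-table critical values)."""
    r1, r2 = len(y1), len(y2)
    m1 = sum(y1) / r1
    m2 = sum(y2) / r2
    s1sq = sum((y - m1) ** 2 for y in y1) / (r1 - 1)
    s2sq = sum((y - m2) ** 2 for y in y2) / (r2 - 1)
    # Pooled (equal-variance) standard error, nu = R1 + R2 - 2
    sp = math.sqrt(((r1 - 1) * s1sq + (r2 - 1) * s2sq) / (r1 + r2 - 2))
    se_pooled = sp * math.sqrt(1 / r1 + 1 / r2)
    # Welch (unequal-variance) standard error and approximate d.o.f.
    se_welch = math.sqrt(s1sq / r1 + s2sq / r2)
    m = (s1sq / r1) ** 2 / (r1 - 1) + (s2sq / r2) ** 2 / (r2 - 1)
    nu_welch = (s1sq / r1 + s2sq / r2) ** 2 / m
    diff = m1 - m2
    return {
        "diff": diff,
        "pooled": (diff - t_pooled * se_pooled, diff + t_pooled * se_pooled),
        "welch": (diff - t_welch * se_welch, diff + t_welch * se_welch),
        "nu_welch": nu_welch,
    }
```

Note that the Welch ν is generally non-integer; in practice it is rounded down before the table lookup.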

Correlated Sampling Correlated sampling induces positive correlation between Y_r1 and Y_r2 and thereby reduces the variance of the point estimator Y1* − Y2*. The same random-number streams are used for both systems in each replication r (R1 = R2 = R). The estimates Y_r1 and Y_r2 are correlated, but Y_r1 and Y_s2 (r ≠ s) are mutually independent.

Recall: Covariance Var(Y1* − Y2*) = Var(Y1*) + Var(Y2*) − 2 Cov(Y1*, Y2*) = σ1²/R + σ2²/R − 2ρ12σ1σ2/R = V_CORR = V_IND − 2ρ12σ1σ2/R. Recall the definition of covariance: Cov(X1, X2) = E(X1X2) − μ1μ2 = corr(X1, X2) σ1σ2 = ρ12σ1σ2.

Correlated Sampling -contd Let D_r = Y_r1 − Y_r2, D* = (1/R) Σ_r D_r = Y1* − Y2*, S_D² = (1/(R − 1)) Σ_r (D_r − D*)². Standard error for the 100(1 − α)% confidence interval: s.e.(D*) = s.e.(Y1* − Y2*) = S_D/√R, giving (Y1* − Y2*) ± t_{α/2, R−1} S_D/√R.
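A minimal sketch of this paired-t interval for common random numbers (the helper name is illustrative; t_crit would come from a t-table with R − 1 d.o.f.):

```python
import math

def paired_t_ci(y1, y2, t_crit):
    """Confidence interval for mu1 - mu2 under correlated sampling:
    y1[r] and y2[r] come from replication r of each design run with
    the same random-number streams."""
    assert len(y1) == len(y2)
    r = len(y1)
    d = [a - b for a, b in zip(y1, y2)]          # D_r = Y_r1 - Y_r2
    dbar = sum(d) / r                            # D* = point estimate
    sd = math.sqrt(sum((x - dbar) ** 2 for x in d) / (r - 1))
    half = t_crit * sd / math.sqrt(r)            # t * S_D / sqrt(R)
    return dbar - half, dbar + half
```

If the common streams succeed in inducing positive correlation, the D_r vary less than independent samples would, so this interval is narrower than the independent-sampling one.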

Correlated Sampling -contd Random-Number Synchronization Guides – Dedicate a random-number stream to each specific purpose and use as many streams as needed; assign independent seeds to each stream at the beginning of each run. – Assign a dedicated random-number stream to each cyclic task subsystem. – If synchronization is not possible for a subsystem, use an independent stream for it.

Example: Auto inspection A_n = interarrival time between vehicles n and n + 1. S_n(1) = brake inspection time for vehicle n in model 1. S_n(2) = headlight inspection time for vehicle n in model 1. S_n(3) = steering inspection time for vehicle n in model 1. Select R = 10, Total_time = 16 hrs.

Example: Auto inspection Independent runs: … < μ1 − μ2 < 7.3. Correlated runs: … < μ1 − μ2 < 8.5. Synchronized runs: −0.5 < μ1 − μ2 < 1.3.

Confidence Intervals with Specified Precision Here the problem is to determine the number of replications R required to achieve a desired precision ε (half-width) in the confidence interval, based on results obtained from an initial R0 replications: R = max{R0, ⌈(t_{α/2, R0−1} S_D/ε)²⌉}.
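The sample-size rule above can be sketched as a one-line helper (the function name is illustrative; t_crit = t_{α/2, R0−1} comes from a t-table and s_d from the pilot replications):

```python
import math

def required_replications(r0, s_d, t_crit, epsilon):
    """Replications needed so the confidence-interval half-width is at
    most epsilon, given a pilot of r0 replications with sample standard
    deviation s_d; never fewer than the r0 already made."""
    return max(r0, math.ceil((t_crit * s_d / epsilon) ** 2))
```

Because S_D is itself an estimate, the result is approximate; a conservative refinement recomputes the half-width after the extra replications are run.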

Comparing Several System Designs Consider K alternative designs with performance measures μ_i. Procedures: – Fixed sample size – Sequential sampling (multistage)

Comparing Several System Designs -contd Possible goals: – Estimation of each μ_i – Comparing each μ_i to a control μ_1 – All possible pairwise comparisons – Selection of the best μ_i

Bonferroni Method for Multiple Comparisons Consider C confidence intervals with confidence levels 1 − α_j. Overall error probability: α_E = Σ_j α_j. Probability that all statements are true (the parameter is contained inside all C intervals): P ≥ 1 − α_E. Probability that one or more statements are false: P ≤ α_E.
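The error-probability bookkeeping is simple enough to sketch directly (function names are illustrative):

```python
def bonferroni_bound(alphas):
    """Given individual error probabilities alpha_j for C confidence
    statements, return the overall error bound alpha_E = sum(alpha_j)
    and the resulting lower bound 1 - alpha_E on joint coverage."""
    alpha_e = sum(alphas)
    return alpha_e, 1.0 - alpha_e

def split_alpha(alpha_overall, c):
    """Divide a target overall error probability equally over C
    intervals, so each is built at confidence level 1 - alpha_overall/C."""
    return [alpha_overall / c] * c
```

The equal split is the common choice, but the inequality holds for any allocation of the α_j, so more error budget can be given to the comparisons that matter most.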

Example: Auto inspection (contd) Alternative designs for the addition of one holding space: – Parallel stations – No space between stations in series – One space between brake and headlight inspection – One space between headlight and steering inspection

Bonferroni Method for Selecting the Best The system with maximum expected performance is to be selected; that is, the system whose expected performance exceeds that of the second best by at least the indifference amount ε: μ_i − max_{j≠i} μ_j ≥ ε.

Bonferroni Method for Selecting the Best -contd 1.- Specify α, ε, and R0. 2.- Make R0 replications for each of the K systems. 3.- For each system i calculate Y_i*. 4.- For each pair of systems i and j calculate S_ij² and select the largest, S_max². 5.- Calculate R = max{R0, ⌈t² S_max²/ε²⌉}, with t = t_{α/(K−1), R0−1}. 6.- Make R − R0 additional replications for each of the K systems. 7.- Calculate the overall means Y_i** = (1/R) Σ_r Y_ri. 8.- Select the system with the largest Y_i** as the best.
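Step 5 of the procedure above can be sketched as a helper. The function name is illustrative, and the t critical value is passed in because it comes from a table; its subscript t_{α/(K−1), R0−1} is an assumption based on the usual two-stage Bonferroni selection procedure.

```python
import math

def second_stage_r(r0, s_sq_pairs, t_crit, epsilon):
    """Second-stage sample size R = max{R0, ceil(t^2 * Smax^2 / eps^2)},
    where s_sq_pairs holds the sample variances S_ij^2 of the pairwise
    differences observed in the first R0 replications."""
    s_max_sq = max(s_sq_pairs)
    return max(r0, math.ceil(t_crit ** 2 * s_max_sq / epsilon ** 2))
```

Each of the K systems is then run for R − R0 additional replications before the overall means are compared.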

Statistical Models to Estimate the Effect of Design Alternatives Statistical design of experiments: a set of principles to evaluate and maximize the information gained from an experiment. Factors (qualitative and quantitative), levels, and treatments. Decision or policy variables.

Single Factor, Randomized Designs Single-factor experiment: – Single decision factor D (k levels) – Response variable Y – Effect of level j of factor D: τ_j. Completely randomized design: – Different random-number streams are used for each replication at every level and for all levels.

Single Factor, Randomized Designs -contd Statistical model: Y_rj = μ + τ_j + ε_rj, where Y_rj = observation r for level j, μ = overall mean effect, τ_j = effect due to level j, ε_rj = random variation in observation r at level j, and R_j = number of observations for level j.

Single Factor, Randomized Designs -contd Fixed-effects model: – levels of the factor are fixed by the analyst – the ε_rj are normally distributed – null hypothesis H0: τ_j = 0 for all j = 1, 2, …, k – statistical test: ANOVA (F statistic). Random-effects model: – levels chosen at random – the τ_j are normally distributed.

ANOVA Test Arrange the data in a levels-by-replications matrix. Compute the level means (over replications) Y_.j* and the grand mean Y_..*. Variation of the response w.r.t. Y_..*: Y_rj − Y_..* = (Y_.j* − Y_..*) + (Y_rj − Y_.j*). Squaring and summing over all r and j: SS_TOT = SS_TREAT + SS_E.

ANOVA Test -contd The mean square MS_E = SS_E/(R − k), where R is the total number of observations, is an unbiased estimator of Var(Y), i.e. E(MS_E) = σ². The mean square MS_TREAT = SS_TREAT/(k − 1) is also an unbiased estimator of σ² when H0 is true. Test statistic: F = MS_TREAT/MS_E. If H0 is true, F has an F distribution with k − 1 and R − k d.o.f. Find the critical value F_{1−α, k−1, R−k} and reject H0 if F > F_{1−α}.
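The sum-of-squares decomposition and F statistic can be sketched in plain Python (illustrative helper; looking up the critical value F_{1−α, k−1, R−k} would still require a table):

```python
def one_way_anova(groups):
    """Single-factor ANOVA: groups is a list of observation lists, one
    per level. Returns the F statistic and its degrees of freedom."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_total
    # SS_TREAT: between-level variation about the grand mean
    ss_treat = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # SS_E: within-level variation about each level mean
    ss_e = sum(sum((y - sum(g) / len(g)) ** 2 for y in g) for g in groups)
    ms_treat = ss_treat / (k - 1)
    ms_e = ss_e / (n_total - k)
    return ms_treat / ms_e, (k - 1, n_total - k)
```

By construction ss_treat + ss_e equals the total sum of squares about the grand mean, mirroring the SS_TOT = SS_TREAT + SS_E identity.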

Metamodeling Independent (design) variables x_i, i = 1, 2, …, k. Output response (random) variable Y. Metamodel: – a simplified approximation to the actual relationship between the x_i and Y – fitted by regression analysis (least squares) – via the normal equations.

Linear Regression One independent variable x and one dependent variable Y. For a linear relationship, E(Y | x) = β0 + β1 x. Simple linear regression model: Y = β0 + β1 x + ε.

Linear Regression -contd Observations (data points) (x_i, Y_i), i = 1, 2, …, n. Sum of squares of the deviations ε_i: L = Σ ε_i² = Σ [Y_i − β0′ − β1(x_i − x*)]². Minimizing w.r.t. β0′ and β1 gives β0′^ = Σ Y_i/n, β1^ = Σ Y_i(x_i − x*)/Σ(x_i − x*)², and β0^ = β0′^ − β1^ x*.

Significance Testing Null hypothesis H0: β1 = 0. Statistic (n − 2 d.o.f.): t0 = β1^/√(MS_E/S_xx), where MS_E = Σ(Y_i − Ŷ_i)²/(n − 2) and S_xx = Σ x_i² − (Σ x_i)²/n. H0 is rejected if |t0| > t_{α/2, n−2}.
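Putting the least-squares fit and the slope test together in one sketch (the function name is illustrative, and t_crit = t_{α/2, n−2} is taken from a t-table):

```python
import math

def fit_and_test(xs, ys, t_crit):
    """Fit Y = b0 + b1*x by least squares and test H0: beta1 = 0.
    Returns (b0, b1, t0, reject). Assumes the fit is not exact, so
    MS_E > 0."""
    n = len(xs)
    xbar = sum(xs) / n
    sxx = sum(x * x for x in xs) - sum(xs) ** 2 / n
    b1 = sum(y * (x - xbar) for x, y in zip(xs, ys)) / sxx
    b0 = sum(ys) / n - b1 * xbar
    # MS_E: mean squared residual with n - 2 degrees of freedom
    mse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    t0 = b1 / math.sqrt(mse / sxx)
    return b0, b1, t0, abs(t0) > t_crit
```

A large |t0| means the estimated slope is many standard errors away from zero, so the linear term genuinely explains variation in Y.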

Multiple Regression Models Y = β0 + β1 x1 + β2 x2 + … + βm xm + ε. Y = β0 + β1 x + β2 x² + ε. Y = β0 + β1 x1 + β2 x2 + β3 x1 x2 + ε.
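All three models are linear in the β's, so the same normal-equations machinery fits any of them once the appropriate columns (x², x1x2, …) are added to the design matrix. A self-contained sketch (names illustrative) that solves (X'X)b = X'y by Gaussian elimination:

```python
def multiple_regression(x_rows, ys):
    """Least-squares coefficients for Y = b0 + b1*x1 + ... + bm*xm.
    x_rows is a list of predictor tuples; an intercept column of ones
    is prepended automatically."""
    rows = [[1.0] + list(r) for r in x_rows]
    p = len(rows[0])
    # Form the normal equations (X'X) b = X'y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda i: abs(xtx[i][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for i in range(col + 1, p):
            f = xtx[i][col] / xtx[col][col]
            for c in range(col, p):
                xtx[i][c] -= f * xtx[col][c]
            xty[i] -= f * xty[col]
    # Back substitution
    b = [0.0] * p
    for i in range(p - 1, -1, -1):
        b[i] = (xty[i] - sum(xtx[i][j] * b[j] for j in range(i + 1, p))) / xtx[i][i]
    return b
```

For the quadratic model, pass (x, x²) as each predictor tuple; for the interaction model, (x1, x2, x1·x2).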