
Slide 1 DESIGN OF EXPERIMENT (DOE) OVERVIEW. Dedy Sugiarto

Slide 2 DOE is a methodology for systematically applying statistics to experimentation (Lye, 2005). DOE techniques include the single-factor experiment, general full factorial, two-level full factorial, fractional factorial, response surface, mixture, and Taguchi designs.

Slide 3 An experiment is a test or series of tests in which purposeful changes are made to the input variables of a process or system so that we may observe and identify the reasons for changes that may be observed in the output response (Montgomery, 2005).

Slide 4 The General Model

Slide 5 Examples of simple experiments
No. | Response or output | Factors or input variables
1 | taste (maximize) and un-popped kernels (minimize) of microwave popcorn | brand, time, power
2 | virus scan time | RAM cache, program size, operating system
3 | time to boil water | pan type, burner size, cover, amount of water, lid on or off, size of pan

Slide 6 Examples of simple experiments (continued)
No. | Response | Factors
4 | distance paper aeroplane flew | design, paper weight, angle
5 | blending time for soy beans | blending speed, amount of water, temperature of water, soaking time before blending
6 | height of cake | oven temperature, length of heating, amount of water
7 | length of rubber band before it broke | brand of rubber band, size, temperature

Slide 7 Introduction. We can usually visualize the process as a combination of machines, methods, people, and other resources that transform some input (often a material) into an output that has one or more observable responses. Some of the process variables x1, x2, …, xp are controllable.

Slide 8 Introduction. Other variables z1, z2, …, zq are uncontrollable (although they may be controllable for the purposes of a test). The objectives of the experiment may include the following:
- Determining which variables are most influential on the response y.

Slide 9 Objectives of the Experiment
- Determining where to set the influential x's so that y is almost always near the desired nominal value.
- Determining where to set the influential x's so that variability in y is small.
- Determining where to set the influential x's so that the effects of the uncontrollable variables z1, z2, …, zq are minimized.

Slide 10 Basic Principles. In order to perform the experiment most efficiently, a scientific approach to planning the experiment must be employed. Statistical design of experiments refers to the process of planning the experiment so that appropriate data that can be analyzed by statistical methods will be collected, resulting in valid and objective conclusions.

Slide 11 Basic Principles. Thus, there are two aspects to any experimental problem: the design of the experiment and the statistical analysis of the data. The three basic principles of experimental design are:
- Replication: the repetition of the basic experiment.

Slide 12 Introduction (Basic Principles)
- Randomization: the random determination of both the allocation of experimental material and the order in which the runs are performed.
- Blocking: grouping runs into sets of relatively homogeneous experimental conditions.

Slide 13 Introduction (Replication). Replication has two important properties.
- First, it allows the experimenter to obtain an estimate of the experimental error. This estimate of error becomes a basic unit of measurement for determining whether the observed differences in the data are really statistically different.

Slide 14 Introduction (Replication, continued)
- Second, if the sample mean is used to estimate the effect of a factor in the experiment, replication permits the experimenter to obtain a more precise estimate of this effect.
There is an important distinction between replication and repeated measurements: replication reflects sources of variability both between runs and (potentially) within runs.
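To make the second point concrete, here is a minimal Python sketch (the replicate values are hypothetical, not data from these slides): the estimated standard error of a treatment mean equals s divided by the square root of n, so it shrinks as the number of replicates n grows.

import math
import statistics

# Hypothetical replicate observations of a single treatment (illustration only)
replicates = [9.8, 10.4, 10.1, 9.9, 10.3]

s = statistics.stdev(replicates)        # estimate of the experimental error (standard deviation)
for n in (2, 3, 5, 10):
    se = s / math.sqrt(n)               # standard error of a treatment mean based on n replicates
    print(n, round(se, 3))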

Slide 15 Introduction (Randomization). By randomization we mean that both the allocation of the experimental material and the order in which the individual runs or trials of the experiment are to be performed are randomly determined. Statistical methods require that the observations (or errors) be independently distributed random variables.

Slide 16 Introduction (Randomization, continued). Randomization usually makes this assumption valid. By properly randomizing the experiment, we also assist in "averaging out" the effects of extraneous factors that may be present.

Slide 17 Introduction (Blocking). Blocking is a design technique used to improve the precision with which comparisons among the factors of interest are made. Often blocking is used to reduce or eliminate the variability transmitted from nuisance factors. Generally, a block is a set of relatively homogeneous experimental conditions.
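A minimal Python sketch of the blocking idea, using hypothetical operators as the nuisance factor (none of these names appear on the slides): each block contains every treatment once, and randomization is restricted to the run order within each block.

import random

blocks = ["Operator 1", "Operator 2", "Operator 3"]        # hypothetical blocks (nuisance factor)
treatments = ["Treatment 1", "Treatment 2", "Treatment 3"]

random.seed(2)                     # seeded only so the example is reproducible
for block in blocks:
    order = treatments[:]          # every treatment appears once in each block
    random.shuffle(order)          # randomize the run order within the block only
    print(block, order)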

Slide 18 Fixed or Random Factors? Fixed factor: we wish to test hypotheses about the treatment means, and our conclusions will apply only to the factor levels considered in the analysis.
H0: μ1 = μ2 = … = μa
H1: μi ≠ μj for at least one pair (i, j)

Slide 19 Experiments with fixed factors. Example: popcorn experiment. Objective: to investigate the influence of popcorn brand on the proportion of un-popped kernels (to be minimized).

Slide 20 Random Factor. We wish to test hypotheses about the variability of the treatment effects and to estimate this variability. We are able to extend the conclusions (which are based on the sample of treatments) to all treatments in the population.
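For comparison with the fixed-factor hypotheses above, the hypotheses for a random factor are usually stated in terms of the variance component of the treatment effects (standard notation; this is not written out on the slide): H0: στ² = 0 versus H1: στ² > 0.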

Slide 21 Experiments with random factors. Example: measurement system analysis. Objective: to determine how much of the observed process variation is due to measurement system variation.


Slide 23 The Guidelines. An outline of the recommended procedure is as follows:
- Recognition of and statement of the problem.
- Selection of the response variable.
- Choice of factors, levels, and ranges.
- Choice of experimental design.

Slide 24 Introduction (The Guidelines)
- Performing the experiment.
- Statistical analysis of the data.
- Conclusions and recommendations.

Slide 25 Example of a single-factor experiment with a qualitative factor (popcorn experiment). We want to investigate the influence of popcorn brand on the proportion of un-popped kernels (to be minimized). We use a completely randomized design (no blocking of experimental units) for this single-factor experiment. There are three levels for brand (A, B, and C) and three replications for each level. We use one hundred kernels for each trial and 3.5 minutes of popping time on the stove.

Slide 26 We wish to test hypotheses about the treatment means, and our conclusions will apply only to the factor levels considered in the analysis.
H0: μ1 = μ2 = … = μa
H1: μi ≠ μj for at least one pair (i, j)
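These hypotheses refer to the treatment means μi = μ + τi of the standard single-factor (one-way ANOVA) effects model, stated here for reference in Montgomery's usual notation (the model itself is not written out on the slides):
yij = μ + τi + εij,  i = 1, …, a,  j = 1, …, n,
where μ is the overall mean, τi is the effect of the i-th treatment, and εij is the random error term.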

Slide 27 Randomization using Minitab:
Run order | Brand | Un-popped kernels proportion
1 | Brand C |
2 | Brand C |
3 | Brand B |
4 | Brand A |
5 | Brand C |
6 | Brand B |
7 | Brand A |
8 | Brand A |
9 | Brand B |
(The proportion column is filled in after the runs are performed; see Slide 29.)
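The slide generates the random run order with Minitab; a minimal Python sketch of the same idea (the brand names come from the slides, everything else is illustrative):

import random

brands = ["Brand A", "Brand B", "Brand C"]
runs = [brand for brand in brands for _ in range(3)]   # 3 brands x 3 replicates = 9 runs

random.seed(1)          # seeded only so the example is reproducible
random.shuffle(runs)    # completely randomized run order

for i, brand in enumerate(runs, start=1):
    print(i, brand)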

Slide 28 Picture 1. Three brands of popcorn. Picture 2. Processing of popcorn.

Slide 29 The results of the experiment (Picture 3. Popped and un-popped kernels from the three brands):
Run order | Brand | Un-popped kernels proportion
1 | Brand C | 0.04
2 | Brand C | 0.05
3 | Brand B | 0.11
4 | Brand A | 0.00
5 | Brand C | 0.08
6 | Brand B | 0.13
7 | Brand A | 0.03
8 | Brand A | 0.03
9 | Brand B | 0.08

Slide 30 Minitab Output: One-way ANOVA

Source  DF        SS        MS      F      P
Brand    2  0.011356  0.005678  12.46  0.007
Error    6  0.002733  0.000456
Total    8  0.014089

S = 0.02134   R-Sq = 80.60%   R-Sq(adj) = 74.13%   Pooled StDev = 0.02134

Individual 95% CIs for Mean Based on Pooled StDev:
Level    N   Mean     StDev
Brand A  3   0.02000  0.01732
Brand B  3   0.10667  0.02517
Brand C  3   0.05667  0.02082
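As a cross-check on the Minitab output, a minimal Python sketch of the same one-way ANOVA (assuming SciPy is available; the data are the proportions from Slide 29):

from scipy import stats

brand_a = [0.00, 0.03, 0.03]
brand_b = [0.11, 0.13, 0.08]
brand_c = [0.04, 0.05, 0.08]

f_stat, p_value = stats.f_oneway(brand_a, brand_b, brand_c)
print(round(f_stat, 2), round(p_value, 3))   # should agree with Minitab: F = 12.46, p = 0.007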

Slide 31 Interpreting the Results: The small p-value for brand (p = 0.007), which is lower than α (0.05), indicates a significant effect of brand on the proportion of un-popped kernels. The individual 95% confidence intervals for the three brand means suggest that brand A differs significantly from brand B.
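The slide bases the brand A versus brand B conclusion on the individual 95% confidence intervals. A related follow-up, shown here only as an illustration and not used in the presentation, is Tukey's pairwise comparison (available in SciPy 1.8 or later):

from scipy import stats

brand_a = [0.00, 0.03, 0.03]
brand_b = [0.11, 0.13, 0.08]
brand_c = [0.04, 0.05, 0.08]

result = stats.tukey_hsd(brand_a, brand_b, brand_c)
print(result)   # pairwise mean differences with adjusted p-values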

Slide 32 Example of a single-factor experiment with a quantitative factor (cotton experiment). A product development engineer is interested in investigating the tensile strength of a new synthetic fiber that will be used to make cloth for men's shirts. The engineer knows from previous experience that the strength is affected by the weight percent of cotton used in the blend of materials for the fiber.

Slide 33 The Example. Furthermore, she suspects that increasing the cotton content will increase the strength, at least initially. She also knows that cotton content should range between about 10 and 40 percent if the final product is to have other quality characteristics that are desired (such as the ability to take a permanent-press finishing treatment).

Slide 34 The Example. The engineer decides to test specimens at five levels of cotton weight percent: 15, 20, 25, 30, and 35 percent. She also decides to test five specimens at each level of cotton content. This is a single-factor experiment with a = 5 levels of the factor and n = 5 replicates. The 25 runs should be made in random order.

Slide 35 The Example. The tensile strength data:
Cotton weight percentage | Obs. 1 | Obs. 2 | Obs. 3 | Obs. 4 | Obs. 5
15 | 7 | 7 | 15 | 11 | 9
20 | 12 | 17 | 12 | 18 | 18
25 | 14 | 18 | 18 | 19 | 19
30 | 19 | 25 | 22 | 19 | 23
35 | 7 | 10 | 11 | 15 | 11
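A minimal Python sketch of the one-way ANOVA table an analyst might build for these data (assuming pandas and statsmodels are available; this is an illustration, not the analysis shown in the presentation):

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Tensile strength data from Slide 35, five replicates per cotton weight percentage
data = {
    15: [7, 7, 15, 11, 9],
    20: [12, 17, 12, 18, 18],
    25: [14, 18, 18, 19, 19],
    30: [19, 25, 22, 19, 23],
    35: [7, 10, 11, 15, 11],
}
df = pd.DataFrame(
    [(pct, y) for pct, values in data.items() for y in values],
    columns=["cotton", "strength"],
)

# Cotton percentage is treated as a categorical factor with a = 5 levels
model = ols("strength ~ C(cotton)", data=df).fit()
print(sm.stats.anova_lm(model, typ=1))   # ANOVA table: df, sum of squares, F, p-value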


