Research vs Experiment


Research A careful search An effort to obtain new knowledge in order to answer a question or to solve a problem A protocol for measuring the values of a set of variables (response variables) under a set of conditions (study conditions)

Three Purposes of Research Exploration Description Explanation RESEARCH DESIGN (a plan for research): The outline, plan, or strategy specifying the procedure to be used in answering research questions

Method of Research Case Studies Field Research Archival Research Aggregate Data Analysis (existing or field collection) Surveys (observation) Experiments Use multiple methods whenever possible

Research design Observational A design in which the levels of all the explanatory variables are determined as part of the observational process Experimental A study in which the investigator selects the levels of at least one factor An investigation in which the investigator applies some treatments to experimental units and then observes the effect of the treatments on the experimental units by measuring one or more response variables An inquiry in which an investigator chooses the levels (values) of input or independent variables and observes the values of the output or dependent variable(s).

Strengths of observation It can be used to generate hypotheses It can be used to negate a proposition It can be used to identify contingent relationships Limitations of observation It cannot be used to test hypotheses Poor representativeness Poor replicability Observer bias

Strengths of experiments Causation can be determined (if properly designed) The researcher has considerable control over the variables of interest It can be designed to evaluate multiple independent variables Limitations of experiments Not ethical in many situations Often more difficult and costly

Design of Experiments Define the objectives of the experiment and the population of interest. Identify all sources of variation. Choose an experimental design and specify the experimental procedure.

Defining the Objectives What questions do you hope to answer as a result of your experiment? To what population do these answers apply?

Identifying Sources of Variation (input variables → output variable)

Choosing an Experimental Design

Experimental Design A controlled study in which one or more treatments are applied to experimental units A plan and a structure to test hypotheses in which the analyst controls or manipulates one or more variables A protocol for measuring the values of a set of variables It contains independent and dependent variables

What is a statistical experimental design? Determine the levels of independent variables (factors) and the number of experimental units at each combination of these levels according to the experimental goal. What is the output variable? Which (input) factors should we study? What are the levels of these factors? What combinations of these levels should be studied? How should we assign the studied combinations to experimental units?
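The questions above (which factors, which levels, which combinations) can be made concrete by enumerating the candidate treatment combinations; a minimal sketch with hypothetical factor names:

```python
from itertools import product

# Hypothetical factors and levels, for illustration only
factors = {
    "nitrogen_dose": ["low", "medium", "high"],
    "variety": ["A", "B"],
}

# All level combinations: the candidate treatments for a full factorial study
combinations = list(product(*factors.values()))
print(len(combinations))   # 3 levels x 2 levels = 6 combinations
print(combinations)
```

The design then specifies how many experimental units receive each combination and how the combinations are assigned to units.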

The Six Steps of Experimental Design Plan the experiment. Design the experiment. Perform the experiment. Analyze the data from the experiment. Confirm the results of the experiment. Evaluate the conclusions of the experiment.

Plan the Experiment Identify the dependent or output variable(s). Translate output variables into measurable quantities. Determine the factors (input or independent variables) that potentially affect the output variables and are to be studied. Identify potential interactions between factors.

Well-planned Experiment Simplicity Degree of Precision Absence of systematic error Range of validity of conclusion Calculation of degree of uncertainty

Well-planned Experiment Simplicity The selection of treatments and the experimental arrangement should be as simple as possible, consistent with the objectives of the experiment Degree of precision The probability should be high that the experiment will be able to measure differences with the degree of precision the experimenter desires. This implies an appropriate design and sufficient replication Absence of systematic error The experiment must be planned to ensure that the experimental units receiving one treatment differ in no systematic way from those receiving another treatment, so that an unbiased estimate of each treatment effect can be obtained

Well-planned Experiment Range of validity of conclusions Conclusions should have as wide a range of validity as possible. An experiment replicated in time and space would increase the range of validity of the conclusions that could be drawn from it. A factorial set of treatments is another way of increasing the range of validity of an experiment. In a factorial experiment, the effects of one factor are evaluated under varying levels of a second factor Calculation of degree of uncertainty In any experiment, there is always some degree of uncertainty as to the validity of the conclusions. The experiment should be designed so that it is possible to calculate the probability of obtaining the observed results by chance alone

Important steps in experiment Definition of the problem Statement of the objectives Selection of treatment Selection of experimental material Selection of experimental design Selection of the unit of observation and the number of replication Control the effect of the adjacent units on each other Consideration of data to be collected Outlining statistical analysis and summarization of results Conducting the experiment Analyzing data and interpreting results Preparation of a complete, readable and correct report of the research

Important steps in experiment Definition of the problem State the problem clearly and concisely Statement of the objectives Objectives should be written out in precise terms Selection of treatments Careful selection of treatments Selection of experimental material The material used should be representative of the population on which the treatments will be tested Selection of experimental design Choose the simplest design that is likely to provide the required precision Selection of the unit of observation and the number of replications Plot size and the number of replications should be chosen to produce the required precision of treatment estimates

Important steps in experiment Control the effect of adjacent units on each other Use border rows and randomization of treatments Consideration of data to be collected The data collected should properly evaluate treatment effects in line with the objectives of the experiment Outlining statistical analysis and summarization of results Write out the SV, DF, SS, MS and F-test Conducting the experiment Use procedures that are free from bias Analyzing data and interpreting results Preparation of a complete, readable and correct report of the research

Syllabus
Week 2: Terminology and basic concepts
Week 3: t-test, ANOVA and CRD
Week 4: RCBD and Latin square
Week 7: Mean comparison
Weeks 8-9: Midterm
Weeks 10-12: Factorial experiment
Week 13: Special topic in factorial experiment

Grading system
Grade: 0 – 100
A > 80
B – D → 45 – 80 (normal distribution)
E < 45
Grade composition
Assignment: 30
Mid-term + Final Exam: 40
Practical Work: 33

Terminology Variable A characteristic that varies (e.g., weight, body temperature, bill length, etc.) Treatment/input/independent variable Set at predetermined levels decided by the experimenter A condition or set of conditions applied to experimental units The variable that the experimenter either controls or modifies What you manipulate (a single factor or ≥ 2 factors)

Terminology Factors Another name for the independent variables of an experimental design An explanatory variable whose effect on the response is a primary objective of the study A variable upon which the experimenter believes that one or more response variables may depend, and which the experimenter can control An explanatory variable that can take any one of two or more values. The design of the experiment will largely consist of a policy for determining how to set the factors in each experimental trial

Terminology Levels or Classifications The subcategories of the independent variable used in the experimental design The different values of a factor Dependent/response/output variable A quantitative or qualitative variable that represents the variable of interest. The response to the different levels of the independent variables A characteristic of an experimental unit that is measured after treatment and analyzed to assess the effects of treatments on experimental units

Terminology Treatment Factor A factor whose levels are chosen and controlled by the researcher to understand how one or more response variables change in response to varying levels of the factor Treatment Design The collection of treatments used in an experiment Full Factorial Treatment Design Treatment design in which the treatments consist of all possible combinations involving one level from each of the treatment factors

Terminology Experimental unit The unit of the study material to which treatment is applied The smallest unit of the study material sharing a common treatment The physical entity to which a treatment is randomly assigned and independently applied A person, object or some other well-defined item to which a treatment is applied Observational unit (sampling unit) The smallest unit of the study material for which responses are measured The unit on which a response variable is measured. There is often a one-to-one correspondence between experimental units and observational units, but that is not always true.

Basic principles Comparison/control Replication Randomization Stratification (blocking)

Comparison/control Good experiments are comparative Comparing the effect of different nitrogen dosages on rice yield Comparing the potential yield of cassava clones Comparing the effectiveness of pesticides Ideally, the experimental group is compared to concurrent controls (rather than to historical controls).

Replication Applying a treatment independently to two or more experimental units The number of experimental units for which responses to a particular treatment are observed Replication reduces the effect of uncontrolled variation (i.e., increases precision), estimates the variability in response that is not associated with treatment differences, improves the reliability of the conclusions drawn from the data, and quantifies uncertainty
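How replication increases precision can be seen in a small simulation: the standard deviation of a treatment mean shrinks roughly as σ/√n as the number of replicate units grows (the numbers below are purely illustrative):

```python
import random
import statistics

random.seed(1)

def sd_of_treatment_mean(n, mu=50.0, sigma=5.0, reps=2000):
    """Simulate many experiments; return the SD of the n-replicate mean."""
    means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
             for _ in range(reps)]
    return statistics.stdev(means)

# More replication -> a more precise treatment mean (SE ~ sigma / sqrt(n))
sds = {n: sd_of_treatment_mean(n) for n in (2, 8, 32)}
for n, sd in sds.items():
    print(n, round(sd, 2))
```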

Replication

Randomization Random assignment of treatments to experimental units. Experimental subjects (“units”) should be assigned to treatment groups at random. At random does not mean haphazardly. One needs to explicitly randomize using A computer, or Coins, dice or cards.
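Explicit randomization "using a computer" can be as simple as shuffling a list of treatment labels; a sketch for 4 treatments × 4 replications (the unit numbering is hypothetical):

```python
import random

random.seed(42)  # record the seed so the allocation is reproducible

treatments = ["A", "B", "C", "D"] * 4   # 4 treatments x 4 replications
units = list(range(1, 17))              # 16 experimental units

random.shuffle(treatments)              # random assignment, not haphazard
allocation = dict(zip(units, treatments))
for unit, trt in allocation.items():
    print(f"unit {unit:2d} -> treatment {trt}")
```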

Why randomize? Allows the observed responses to be regarded as a random sample from an appropriate population Eliminates the influence of systematic bias on the measured values Controls the role of chance Randomization allows the later use of probability theory, and so gives a solid foundation for statistical analysis.

Stratification (Blocking) Grouping similar experimental units together and assigning different treatments within such groups of experimental units A technique used to eliminate the effects of selected confounding variables when comparing treatments If you anticipate a difference between morning and afternoon measurements: Ensure that within each period, there are equal numbers of subjects in each treatment group. Take account of the difference between periods in your analysis.
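A blocked randomization can be sketched by shuffling the treatments separately within each block; the morning/afternoon example above might look like this:

```python
import random

random.seed(7)

treatments = ["A", "B", "C", "D"]
blocks = ["morning", "afternoon"]  # hypothetical blocking variable

# Randomize separately within each block: every treatment appears
# exactly once per block, so block differences cannot bias comparisons.
layout = {}
for block in blocks:
    order = treatments[:]   # copy, so the original list is untouched
    random.shuffle(order)
    layout[block] = order
print(layout)
```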

Cage positions Completely randomized design (4 treatments x 4 replications): each of the treatments A, B, C and D is assigned completely at random to 4 of the 16 cage positions

Cage positions Randomized block design (4 treatments x 4 blocks = 16 experimental units): within each block, the treatments A, B, C and D are each assigned at random to one position

Randomization and stratification If you can (and want to), fix a variable. e.g., use only 8 week old male mice from a single strain. If you don’t fix a variable, stratify it. e.g., use both 8 week and 12 week old male mice, and stratify with respect to age. If you can neither fix nor stratify a variable, randomize it.

Experiment Single Factor Experiment 1. Treatments consist of one factor 2. All treatments are of a single type 3. There is no treatment design Multiple Factor Experiment (Factorial experiment) 1. Treatments consist of ≥ 2 factors 2. We are interested in identifying interactions 3. There is a treatment design

Interactions
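The interaction figure is not reproduced here, but the idea can be shown numerically. With hypothetical cell means for a 2×2 factorial, an interaction means the effect of one factor changes across the levels of the other:

```python
# Hypothetical cell means for a 2x2 factorial (factor A x factor B)
means = {
    ("a1", "b1"): 10.0, ("a1", "b2"): 14.0,
    ("a2", "b1"): 12.0, ("a2", "b2"): 22.0,
}

# Simple effect of A at each level of B
effect_A_at_b1 = means[("a2", "b1")] - means[("a1", "b1")]   # 2.0
effect_A_at_b2 = means[("a2", "b2")] - means[("a1", "b2")]   # 8.0

# Interaction: does the effect of A depend on the level of B?
interaction = effect_A_at_b2 - effect_A_at_b1                # 6.0
print(interaction)  # nonzero -> the factors interact
```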

Significance test Based on a statistical distribution which depends on the tested parameter δ = true difference in the averages of two samples (the treatment effect) H0: δ = 0 (i.e., no effect) Test statistic, D. If |D| > C, reject H0. C (critical value) is chosen so that the chance you reject H0, if H0 is true, is 5% Distribution of D when δ = 0
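The rule "reject H0 if |D| > C" can be illustrated by simulating the null distribution of D and taking its 95th percentile as C; the group size n = 8 and σ = 1 below are assumptions for illustration:

```python
import random
import statistics

random.seed(0)
n, sigma = 8, 1.0

def diff_in_means():
    """One simulated experiment under H0 (delta = 0): two groups, same mean."""
    g1 = [random.gauss(0.0, sigma) for _ in range(n)]
    g2 = [random.gauss(0.0, sigma) for _ in range(n)]
    return statistics.fmean(g1) - statistics.fmean(g2)

null_d = sorted(abs(diff_in_means()) for _ in range(10000))
C = null_d[int(0.95 * len(null_d))]   # reject H0 when |D| > C (5% level)
print(round(C, 2))
```

With these assumptions C lands near the theoretical value 1.96·σ·√(2/n) ≈ 0.98.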

Statistical power Power: The chance that you reject H0 when H0 is false (i.e., you [correctly] conclude that there is a treatment effect when there really is a treatment effect).
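Power can be estimated by simulating experiments in which the treatment effect is real and counting how often H0 is rejected. The values below (n = 8 per group, σ = 1, δ = 1, and C = 1.96·σ·√(2/n) ≈ 0.98 as the 5% critical value) are illustrative assumptions:

```python
import random
import statistics

random.seed(1)
n, sigma, delta = 8, 1.0, 1.0
C = 0.98   # approx. 5% critical value for |D|: 1.96 * sigma * sqrt(2/n)

def rejects():
    """One experiment with a true effect delta; do we reject H0?"""
    g1 = [random.gauss(0.0, sigma) for _ in range(n)]
    g2 = [random.gauss(delta, sigma) for _ in range(n)]
    return abs(statistics.fmean(g2) - statistics.fmean(g1)) > C

# Power = fraction of simulated experiments in which H0 is rejected
power = sum(rejects() for _ in range(5000)) / 5000
print(round(power, 2))
```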

Power depends on… The structure of the experiment The method for analyzing the data The size of the true underlying effect The variability in the measurements The chosen significance level (α) The sample size Note: We usually try to determine the sample size to give a particular power (often 80%).

Effect of sample size: 6 per group vs. 12 per group

Various effects Desired power ↑ → sample size ↑ Stringency of statistical test ↑ → sample size ↑ Measurement variability ↑ → sample size ↑ Treatment effect ↑ → sample size ↓

Determining sample size The things you need to know: Structure of the experiment Method for analysis Chosen significance level, α (usually 5%) Desired power (usually 80%) Variability in the measurements - if necessary, perform a pilot study The smallest meaningful effect
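Once these ingredients are known, a common normal-approximation formula for comparing two means is n ≈ 2(z₁₋α/₂ + z_power)² σ²/δ² per group; a small sketch (that this formula fits your design is an assumption):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for comparing two means."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided test
    z_beta = z(power)
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Smallest meaningful effect = 1 SD -> about 16 per group at 80% power
print(n_per_group(delta=1.0, sigma=1.0))   # -> 16
```

Note the inverse-square dependence on the effect size: halving the smallest meaningful effect roughly quadruples the required sample size.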

Reducing sample size Reduce the number of treatment groups being compared. Find a more precise measurement (e.g., average time to effect rather than proportion sick). Decrease the variability in the measurements. Make subjects more homogeneous. Use stratification. Control for other variables (e.g., weight). Average multiple measurements on each subject.