EVAL 6000: Foundations of Evaluation, Dr. Chris L. S. Coryn and Kristin A. Hobson, Fall 2011

Agenda
– Stage One theories: Donald T. Campbell
– Questions and discussion
– Encyclopedia of Evaluation entries

“We would improve program evaluation if we were alert to opportunities to move closer to the experimental model” — Donald T. Campbell

Biographical Sketch
– Born in 1916, died in 1996
– Ph.D. in Psychology, University of California, Berkeley
– Author of more than 235 publications
– Recipient of numerous honorary degrees, awards, and prizes
– Intellectual work included psychological theory, methods, sociology of science, and epistemology

Campbell’s View of Evaluation
– Evaluation should be part of a rational society in which decisions depend on the results of rigorous tests of bold attempts to solve social problems
– Evaluators should play a servant-methodologist role commensurate with democratic values, rather than an advisory role

Campbell’s Influence
– Lionized as the father of scientific evaluation
– Developed and legitimated scientific methods of evaluation
– The utopian view of an ‘experimenting society’

Campbell’s Major Contributions
– Evolutionary epistemology
– Validity theory and threats to validity
– Experimental and quasi-experimental methods
– Open, mutually reinforcing but critical commentary on knowledge claims (a disputatious community of truth seekers)

Randomized Experiments
– Provide the ‘best’ scientific evidence of cause-and-effect relationships
– Premised on the expected equivalence of units achieved by randomly assigning units to two or more conditions
– Priority is to reduce threats to internal validity
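The logic of random assignment described above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the slides; the function name and condition labels are invented for the example:

```python
import random

def randomly_assign(units, conditions=("treatment", "control"), seed=42):
    """Randomly assign units to conditions so that the groups are
    expected to be equivalent on all variables, measured or not."""
    rng = random.Random(seed)  # fixed seed only for reproducibility
    shuffled = list(units)
    rng.shuffle(shuffled)
    # Deal the shuffled units round-robin into the conditions
    assignment = {c: [] for c in conditions}
    for i, unit in enumerate(shuffled):
        assignment[conditions[i % len(conditions)]].append(unit)
    return assignment

groups = randomly_assign(range(20))
print({c: len(g) for c, g in groups.items()})  # 10 units per condition
```

Because assignment depends only on chance, any pre-existing difference between the groups is itself a chance difference, which is what licenses the causal inference.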

Validity
The approximate truthfulness or correctness of an inference or conclusion
– Supported by relevant evidence as being true or correct
– Such evidence comes from empirical findings and from the consistency of those findings with other sources of knowledge
– Is a human judgment, and fallible
– Not an either-or claim, but one of degree

Major Types of Validity
– Internal validity: The validity of inferences about whether the relationship between two variables is causal
– Construct validity: The degree to which inferences are warranted from the observed persons, settings, treatments, and cause-effect operations sampled within a study to the constructs that these samples represent
– External validity: The validity of inferences about whether a causal relationship holds over variations in persons, settings, treatment variables, and measurement variables
– Statistical conclusion validity: The validity of inferences about the covariation between two variables

Threats to Internal Validity
– Ambiguous temporal precedence: Lack of clarity about which variable occurred first may yield confusion about which variable is the cause and which is the effect
– Selection: Systematic differences over conditions in respondent characteristics that could also cause the observed effect
– History: Events occurring concurrently with treatment that could cause the observed effect
– Maturation: Naturally occurring changes over time that could be confused with a treatment effect

Threats to Internal Validity
– Regression: When units are selected for their extreme scores, they will often have less extreme scores on other variables, an occurrence that can be confused with a treatment effect
– Attrition: Loss of respondents to treatment or to measurement can produce artifactual effects if that loss is systematically correlated with conditions
– Testing: Exposure to a test can affect scores on subsequent exposures to that test, an occurrence that can be confused with a treatment effect
– Instrumentation: The nature of a measure may change over time or conditions in a way that could be confused with a treatment effect
– Additive and interactive threats: The impact of a threat can be additive to that of another threat, or may depend on the level of another threat
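The regression threat listed above can be demonstrated with a small simulation (not from the slides; all numbers are synthetic). Units selected for extreme scores at one measurement drift back toward the population mean at the next measurement, even with no treatment at all:

```python
import random

random.seed(1)
N = 10_000

# Each unit's observed score = stable true score + transient noise.
true_scores = [random.gauss(0, 1) for _ in range(N)]
time1 = [t + random.gauss(0, 1) for t in true_scores]
time2 = [t + random.gauss(0, 1) for t in true_scores]

# Select the "extreme" units: the top 5% at time 1,
# as if they were chosen for a program on that basis.
cutoff = sorted(time1)[int(0.95 * N)]
extreme = [i for i in range(N) if time1[i] >= cutoff]

mean1 = sum(time1[i] for i in extreme) / len(extreme)
mean2 = sum(time2[i] for i in extreme) / len(extreme)
print(f"time 1 mean: {mean1:.2f}, time 2 mean: {mean2:.2f}")
# With no treatment, the group's mean falls back toward the population
# mean, a change that could be mistaken for a treatment effect.
```

The same artifact works in reverse for units selected for extremely low scores, which is why programs targeting the worst-off can appear to "work" under weak designs.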

Flow of units through a typical randomized experiment

Basic Design Notation
R: Random assignment
NR: Nonrandom assignment
O: Observation
X: Treatment
X̶: Removed treatment
X+: Treatment expected to produce an effect in one direction
X−: Conceptually opposite treatment expected to reverse an effect
C: Cutting score
- - -: Non-randomly formed groups
…: Cohort
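As a worked example of this notation (a standard design from the Campbellian literature, not one diagrammed in the slides), the pretest–posttest nonequivalent comparison group design is written with one row per group, read left to right over time:

```
NR  O  X  O
NR  O     O
```

Here two nonrandomly formed groups are each observed before and after the study period, but only the first receives the treatment.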

Campbell’s Theory of Social Programming
Three worlds:
1. The current world: Client needs are not the driving force behind political and administrative behavior
2. The current world as it can be marginally modified: Improvement through demonstrations
3. The utopian world: Critical reality checks and the experimenting society

Campbell’s Theory of Knowledge Construction
– Grounded in epistemological relativism (knowledge is impossible without active knowers)
– Never knowing what is true and imperfectly knowing what is false
– Evolutionary theory of knowledge growth
– Not all methods yield equally strong inferences

Campbell’s Theory of Valuing
– Valuing should be left to the political process, not researchers (descriptive valuing)
– Evaluators are not the arrogant guardians of truth
– Multidimensional measurement that is inclusive of democratic values

Campbell’s Theory of Knowledge Use
– Use is the concern of the political process, not evaluators
– Evaluations are only worth using if they have withstood the most rigorous tests
– Most concerned with misuse
  – Methodological biases
  – Control of content or dissemination

Campbell’s Theory of Evaluation Practice
– Application of experimental design to answer summative questions
– Priority given to internal validity
– Theoretical explanation is best left to basic researchers
– Evaluation resources should be focused on pilot and demonstration projects

Encyclopedia Entries
– Bias
– Causation
– Checklists
– Chelimsky, Eleanor
– Conflict of Interest
– Countenance Model of Evaluation
– Critical Theory Evaluation
– Effectiveness
– Efficiency
– Empiricism
– Independence
– Evaluability Assessment
– Evaluation Use
– Fournier, Deborah
– Positivism
– Relativism
– Responsive Evaluation
– Stake, Robert
– Thick Description
– Utilization of Evaluation
– Weiss, Carol
– Wholey, Joseph