Chapter 7 Flashcards

overall plan that describes all of the elements of a research or evaluation study, and ideally the plan allows the researcher or evaluator to reach valid conclusions (e.g., questions or hypotheses to be addressed, number and types of participants to be included, number and types of variables to be studied, collection and analysis of data) Evaluation/research design

family of research and evaluation designs characterized by the systematic repeated measurement of a client's outcome(s) at regular, frequent, predesignated intervals under different conditions (baseline and intervention), and the evaluation of outcomes over time and under different conditions in order to monitor client progress, identify intervention effects, and more generally, learn when, why, how, and the extent to which client change occurs. Also known as single-subject designs, single-system designs, N = 1 designs, or sometimes time series or interrupted time series designs Single-case design

period of time during which an outcome is measured repeatedly in the absence of an intervention in order to (1) describe the naturally occurring pattern of outcome data (e.g., level, trend, variability) and (2) determine the effect of an intervention on that outcome. Typically symbolized by the letter A Baseline phase

period of time during which an intervention is implemented while an outcome is measured repeatedly Intervention phase

period of time after an intervention has ended during which outcome data are collected to determine the extent to which a client's progress has been maintained. Also known as a maintenance phase Follow-up phase

a variable (e.g., intervention) that produces an effect or is responsible for events or results (e.g., outcome) Cause

change in one variable (e.g., outcome) that occurred at least in part as the result of another variable (e.g., intervention) Effect

measure of the strength of the relationship between variables (e.g., effect of an intervention on an outcome, as quantified by any one of a number of different statistics). Effect size
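One widely used effect-size statistic for phase comparisons is the standardized mean difference (Cohen's d). A minimal sketch in Python; the function name and the outcome scores are illustrative, not taken from the text:

```python
from statistics import mean, stdev

def cohens_d(baseline, intervention):
    """Standardized mean difference between two phases (Cohen's d)."""
    n_a, n_b = len(baseline), len(intervention)
    # Pooled standard deviation across both phases
    pooled = (((n_a - 1) * stdev(baseline) ** 2 +
               (n_b - 1) * stdev(intervention) ** 2) /
              (n_a + n_b - 2)) ** 0.5
    return (mean(intervention) - mean(baseline)) / pooled

# Hypothetical daily outcome scores (e.g., problem behaviors per day)
A = [8, 7, 9, 8, 8]   # baseline phase
B = [4, 3, 5, 4, 4]   # intervention phase
print(round(cohens_d(A, B), 2))  # negative value: outcome dropped
```

A negative d here indicates the outcome decreased from baseline to intervention; which direction counts as improvement depends on the outcome being measured.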

conclusion based on evidence and reasoning that one variable (e.g., intervention) causes another (e.g., outcome) Causal inference

accuracy of conclusions based on evidence and reasoning about the presence, direction, and strength of relationships between variables (e.g., outcome is different during baseline than intervention). The ability to establish that one variable (e.g., intervention) is related to another (e.g., outcome) is a requirement for inferring that one variable caused another Statistical conclusion validity

accuracy of conclusions based on evidence and reasoning about causal relationships between variables (e.g., extent to which an intervention, as opposed to other factors, caused a change in an outcome). Internal validity

uncertainty about which of several events or processes caused an outcome Causal ambiguity

variable that is associated with the independent variable and inadvertently influences the outcome, consequently making it difficult to determine the effect of the intervention on the outcome (e.g., an unknown event that occurs during intervention but not baseline and causes the pattern of outcome data to change from baseline to intervention). Also known as a confound or confounding variable, and such a result is said to be confounded Extraneous variable

plausible reasons for a relationship between an intervention and an outcome, other than that the intervention caused the outcome. Also known as alternative hypotheses. The ability to rule out alternative hypotheses is a requirement for inferring that one variable (intervention) caused another (outcome) Alternative explanations

reasons why it might be partly or completely wrong (i.e., invalid) to conclude that one variable (e.g., an intervention) caused another (e.g., an outcome). See also Ambiguous temporal precedence, History effect, Instrumentation effect, Maturation effect, Regression effect, and Testing effect Threats to internal validity

potential threat to internal validity in which change in an outcome could be misinterpreted as an intervention effect, when in fact it is caused by an external event that occurs at the same time as the intervention (e.g., a student who has trouble completing his homework for a new teacher improves his performance as he becomes accustomed to her and her expectations, and the improvement is misinterpreted as being due to the rewards he earns working with a social worker). History effect

potential threat to internal validity in which an apparent change in an outcome could be misinterpreted as an intervention effect, when in fact it is caused by a change in how the outcome is measured (e.g., an older man's weight stabilizes when he begins weighing himself at his physician's office rather than at home). Instrumentation effect

potential threat to internal validity in which change in an outcome could be misinterpreted as an intervention effect, when in fact it is caused by naturally occurring changes in clients over time (e.g., a toddler's tantrums diminish not in response to an intervention, but due to maturing out of the terrible twos, or a child outgrows enuresis). Maturation effect

potential threat to internal validity in which change in an outcome could be misinterpreted as an intervention effect, when in fact it is caused by repeated measurement of the outcome (e.g., a pretest about health behaviors may sensitize a client to the need to make changes in diet and exercise). See also Fatigue effect and Practice effect. Testing effect

deterioration in an outcome caused by fatigue associated with repeated measurement of the outcome (e.g., a mother who is self-recording each instance of time-out with her preschooler reduces those time-outs to avoid recording the behavior) Fatigue effect

improvement in an outcome caused by repeated measurement of the outcome (e.g., taking multiple practice exams may improve a student's score on the Graduate Record Exam simply because he or she becomes familiar with the format of the exam) Practice effect

potential threat to internal validity in which change in an outcome could be misinterpreted as an intervention effect, when in fact it is caused by the tendency of an individual with unusually high or low scores on a measure to subsequently have scores closer to the mean (e.g., clients who are depressed frequently seek help when they have hit bottom, and their scores are likely to improve somewhat in the following weeks, even without intervention). Also known as regression toward the mean Regression effect
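Regression toward the mean is easy to demonstrate by simulation: take two equally noisy measurements of a stable trait with no intervention in between, and the group selected for extreme first scores still drifts back toward the mean on the second measurement. A sketch with made-up numbers (the trait level, noise, and cutoff are illustrative assumptions):

```python
import random

random.seed(0)
# Two noisy measurements of the same stable trait (true level 50)
first = [50 + random.gauss(0, 10) for _ in range(10_000)]
second = [50 + random.gauss(0, 10) for _ in range(10_000)]

# Select only people with extreme first scores, as when clients
# seek help after "hitting bottom"
extreme = [(f, s) for f, s in zip(first, second) if f > 65]
mean_first = sum(f for f, _ in extreme) / len(extreme)
mean_second = sum(s for _, s in extreme) / len(extreme)

# The selected group's second-measurement mean falls back toward 50
# even though nothing changed between measurements
print(round(mean_first, 1), round(mean_second, 1))
```

The apparent "improvement" between the two measurements is purely a selection artifact, which is why extreme pre-intervention scores invite this threat.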

result due to the order in which different interventions are administered (e.g., a couple may be more successful in an intervention designed to increase their pleasant time together each day if they first complete an intervention designed to increase their reflective listening). Also known as a sequence effect Order effect

single-case design (arguably) consisting of an intervention phase (B) during which the outcome is measured repeatedly B-only design

single-case design (arguably, since there isn't repeated measurement during baseline) consisting of one pre-intervention outcome measurement followed by an intervention phase (B) during which the outcome is measured repeatedly B+ design

two-phase single-case design consisting of a pre-intervention baseline phase (A) followed by an intervention phase (B) A-B design

three-phase single-case design consisting of a pre-intervention baseline phase (A1); an intervention phase (B); and a second baseline phase (A2) in which the intervention is withdrawn to determine if the outcome reverses to the initial baseline pattern A-B-A design

three-phase single-case design beginning with the intervention phase (B1), followed by the withdrawal of the intervention (A) to determine if the outcome changes in the absence of the intervention, and reintroduction of the intervention (B2) to see whether the initial intervention effects are replicated B-A-B design

four-phase single-case design consisting of a pre-intervention baseline phase (A1); an intervention phase (B1); a second baseline phase (A2) in which the intervention is withdrawn to determine if the outcome reverses to the initial baseline pattern; and a reintroduction of the intervention (B2) to see whether the initial intervention effects are replicated. Also known as a reversal or withdrawal design A-B-A-B design
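The logic of a reversal design can be sketched as a comparison of phase means: the outcome should change when the intervention is introduced, revert when it is withdrawn, and change again on reintroduction. The phase labels and scores below are hypothetical:

```python
# Hypothetical A-B-A-B series: one score per session
phases = ["A1"] * 4 + ["B1"] * 4 + ["A2"] * 4 + ["B2"] * 4
scores = [8, 9, 8, 9,  4, 3, 4, 3,  7, 8, 8, 7,  3, 4, 3, 4]

def phase_mean(label):
    vals = [s for p, s in zip(phases, scores) if p == label]
    return sum(vals) / len(vals)

means = {p: phase_mean(p) for p in ("A1", "B1", "A2", "B2")}
# Reversal pattern: outcome worsens when the intervention is
# withdrawn (A2) and improves again on reintroduction (B2)
print(means)
```

Visual analysis of level, trend, and variability within each phase would normally accompany such a summary, not replace it.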

single-case design that begins with a baseline during which the same problem is measured for a single client in two or more settings at the same time. Baseline is followed by the application of the intervention in one setting while baseline conditions remain in effect for other settings, then the intervention is applied sequentially across the remaining settings to see whether intervention effects are replicated across different settings Multiple baseline across settings design

single-case design that begins with a baseline during which the same problem is measured for two or more clients at the same time in a particular setting. Baseline is followed by the application of the intervention to one client while baseline conditions remain in effect for other clients, then the intervention is applied sequentially to remaining clients to see whether intervention effects are replicated across different clients Multiple baseline across subjects (clients) design

single-case design that begins with a baseline during which two or more problems are measured at the same time for a single client in a particular setting. Baseline is followed by the application of the intervention to one problem with baseline conditions remaining in effect for other problems, then the intervention is applied sequentially to the remaining problems to see whether intervention effects are replicated across different problems Multiple baseline across behaviors (problems) design
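The staggered structure shared by the three multiple baseline designs above can be sketched as a schedule: each series (setting, client, or behavior) stays in baseline until its own start point, and intervention effects are expected to appear only at each staggered introduction. The client names and start weeks are hypothetical:

```python
# Multiple-baseline-across-clients schedule: each client remains in
# baseline ('A') until a staggered start week, then switches to
# intervention ('B')
start_week = {"client_1": 3, "client_2": 5, "client_3": 7}
weeks = range(1, 9)

schedule = {
    client: ["B" if w >= start else "A" for w in weeks]
    for client, start in start_week.items()
}
for client, phase_labels in schedule.items():
    print(client, "".join(phase_labels))
```

If the outcome changes only when each series enters intervention, and not before, a history effect common to all series becomes an implausible alternative explanation.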

three-phase single-case design consisting of a pre-intervention baseline (A); an intervention phase (B); and a second intervention phase (C) in which a new intervention is introduced in response to the failure of the first intervention to produce sufficient improvement in the outcome A-B-C design

three-phase single-case design consisting of a pre-intervention baseline (A); an intervention phase (B); and a second intervention phase in which a new intervention (C) is added to the first intervention in response to the failure of the first intervention to produce sufficient improvement in the outcome A-B-BC design

accuracy of conclusions based on evidence and reasoning about how well a causal relationship applies across or beyond the people, settings, treatment variables, and measurement variables that were studied (e.g., extent to which a causal relationship between an intervention and outcome is the same with different people) External validity