DENT 514: Research Methods

DENT 514: Research Methods Masahiro Heima, DDS, PhD

Lecture 6: Methodology 2: Cross-Sectional and Longitudinal Studies, Split-Mouth Design, Crossover Designs, and Questionnaires (Reliability and Validity)

Cross-Sectional Study vs. Longitudinal Study
Cross-sectional study: an observational study involving observation of variables in subjects at one specific point in time. It can show differences in the variables between groups.
Longitudinal study: an observational study involving repeated observations of the same variables in the same subjects over a period of time. It can show differences in the variables between groups as well as over time.

Split-Mouth Design
One side of the mouth is the "control" side and the other is the "study" side (a within-subject design). This design removes all between-subject differences.
Carry-over effects: treatment on one side of the mouth affects the other side (e.g., fluoride applied on one side of the mouth can affect the other side).
Counterbalancing: necessary to control carry-over/order effects, achieved through randomization.

Dental Caries Examination: Isaacs (Caries Research, 1999)
Subjects: 150 children, 9–12 years old (high-risk population)
Methods: Split mouth. Half of the mouth was examined with loupes and the other half with an explorer. DFS was counted at baseline and at 8 months.
Results: Loupe side = 2.1-fold increase; Explorer side = 4.5-fold increase

Crossover Design
Each participant receives both a control condition and an intervention.
Carry-over effects ("order" effects, "learning" effects) need a "washout": any carry-over effect is washed out by allowing more than sufficient time between Visit 1 and Visit 2.

          Visit 1        Visit 2
Group 1   Control        Intervention
Group 2   Intervention   Control
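The counterbalancing described above — randomly assigning participants to the two visit sequences — can be sketched as follows (participant IDs and group labels are hypothetical, for illustration only):

```python
import random

def assign_sequences(participants, seed=0):
    """Randomly counterbalance participants across the two
    crossover sequences (control-first vs. intervention-first)."""
    rng = random.Random(seed)          # fixed seed for a reproducible sketch
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "Group 1 (Visit 1: Control, Visit 2: Intervention)": shuffled[:half],
        "Group 2 (Visit 1: Intervention, Visit 2: Control)": shuffled[half:],
    }

groups = assign_sequences([f"P{i}" for i in range(1, 9)])
for name, members in groups.items():
    print(name, members)
```

With an even number of participants, each sequence gets the same number of people, so order effects are balanced across the two groups.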

Surveys (Questionnaires)
"Easy" and relatively low cost.
Methods: face to face, telephone, letter, email (text), web-based, and mixed.
Considerations: sampling method (possible bias), sample size (large), response rate (at least 60–70% is considered "adequate"), reliability and validity.

Methodology
Development of a questionnaire: PROMIS® Instrument Development and Validation Scientific Standards, Version 2.0 (revised May 2013) http://www.nihpromis.org/Documents/PROMISStandards_Vers2.0_Final.pdf
Design of a questionnaire: the Total Design Method/Tailored Design Method (TDM) (Dillman), to improve the quality of responses and to increase the response rate.

Dental Visit: compare two question wordings
In the past 12 months, how many times did you see a dentist?
Last year, how many times did you see a dentist?

Questionnaires (Reliability and Validity)
Reliability (consistency) and validity (accuracy).
Reliability is the degree to which an assessment tool produces stable and consistent results. Validity refers to how well a test measures what it purports to measure.
Four possible combinations: high reliability/low validity, low reliability/high validity, low reliability/low validity, high reliability/high validity.

Reliability
For tools (questionnaire, assessment, evaluation, etc.):
Stability reliability (test-retest)
Internal consistency reliability
Parallel-forms reliability
For researchers (the reliability of researchers):
Interobserver reliability (interrater reliability)
Intraobserver reliability

Reliability: Stability Reliability (Test-Retest)
Measures the stability of an instrument over time: the same test at different times. It is used when the phenomenon of interest is stable/unchanging. It works for the Trait Anxiety Inventory, which measures how easily a person becomes anxious, but it does not work for the State Anxiety Inventory, which measures the level of anxiety at a given point in time.
You want to know how similar the two sets of scores (first time and second time) are. What kind of statistics would you use? Answer: a correlation analysis.

Reliability: Internal Consistency Reliability (Inter-Item Reliability)
Tests whether questions designed to measure the same concept do so consistently (Cronbach's alpha; the split-half reliability test is also used).
Example:
How long do you study? (less than 1 min,…)
Do you study hard? (yes or no)
Do you discuss your questions with your professor? (yes or no)
Do you drive a car? (yes or no) — this item measures a different concept.
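Cronbach's alpha can be computed from item-level scores; a minimal stdlib sketch with hypothetical 1–5 responses to three "study habits" items:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns,
    each column holding the respondents' scores on one question."""
    k = len(items)
    item_vars = sum(statistics.variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent totals
    total_var = statistics.variance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses from six respondents to three related items
items = [
    [4, 5, 2, 3, 4, 1],
    [4, 4, 2, 3, 5, 2],
    [5, 4, 1, 3, 4, 2],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Adding an unrelated item (such as "Do you drive a car?") would lower alpha, which is exactly what inter-item reliability is designed to detect.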

Reliability: Parallel-Forms Reliability
Tests two measurements of identical (or similar) concepts using different forms at the same time, e.g., an anxiety questionnaire and a fear questionnaire, or comparing a newly developed form with the "standard" form.
You want to know how similar the two variables are. What kind of statistics would you use? Answer: a correlation analysis.

Reliability: Interobserver Reliability (Interrater Reliability)
Tests whether observers (raters) on a research team measure the same thing; addresses the consistency of the implementation of a rating system. Multiple observers rate the same subject.
Kappa statistics (2×2 table: two examiners, "Yes" or "No")
Correlation coefficients (two examiners using a continuous or ordinal scale)
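A minimal, dependency-free sketch of Cohen's kappa for the two-examiner Yes/No case described above (the ratings are hypothetical):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters on categorical calls,
    corrected for the agreement expected by chance."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[cat] * c2[cat] for cat in set(rater1) | set(rater2)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical caries calls ("Yes"/"No") by two examiners on eight teeth
exam1 = ["Yes", "Yes", "No", "No", "Yes", "No", "Yes", "No"]
exam2 = ["Yes", "Yes", "No", "Yes", "Yes", "No", "Yes", "No"]
print(f"kappa = {cohens_kappa(exam1, exam2):.2f}")  # prints "kappa = 0.75"
```

Here the examiners agree on 7 of 8 teeth (87.5% raw agreement), but kappa = 0.75 because half of that agreement would be expected by chance alone.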

Reliability: Intraobserver Reliability
Tests whether an observer rates in the same manner every time: the same assessment done twice. You want to know how similar the two ratings are. What kind of statistics would you use? Answer: a correlation analysis.

Validity
Construct validity: convergent validity, discriminant validity
Criterion validity: concurrent validity, predictive validity
Content validity: representation validity, face validity

Validity: Construct Validity
Whether the measurement tool (e.g., a questionnaire) measures the construct being investigated → factor analysis.
Convergent validity: two (conceptually) similar constructs correspond with one another.
Discriminant validity (divergent validity): two (conceptually) dissimilar constructs do not correspond with one another.
(Diagram: factor structures of Questionnaire 1 and Questionnaire 2 compared.)

Validity: Criterion Validity
A measure of how well a set of variables predicts an outcome. Example: a researcher develops a new "behavior questionnaire" intended to predict a child's behavior in the dental chair.
(Diagram: questionnaire outcome plotted against behavior rating, illustrating good vs. poor prediction.)

Validity: Content Validity
The extent to which the content of the test reflects the specific intended (theoretical) domain of content. E.g., a semester or quarter exam that only includes content covered during the last six weeks is not a valid measure of the course's overall objectives; it has very low content validity.

Questions?