DENT 514: Research Methods
Masahiro Heima, DDS, PhD
Lecture 6: Methodology 2: Cross-Sectional and Longitudinal Studies, Split-Mouth Design, Crossover Designs, and Questionnaires (Reliability and Validity)
Cross-Sectional Study vs. Longitudinal Study
Cross-sectional study: an observational study involving observation of variables from subjects at one specific point in time. We can see differences in the variables between groups.
Longitudinal study: an observational study involving repeated observations of the same variables from the same subjects over a period of time. We can see differences in the variables between groups as well as over time.
Split-Mouth Design
One side of the mouth is the "control" side and the other is the "study" side (a within-subject design). This design removes all differences between subjects.
Carry-over effects: treatment on one side affects the other side (e.g., fluoride applied on one side of the mouth can affect the other side).
Counterbalancing: necessary in order to control the carryover/order effect, through randomization.
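Counterbalancing by randomization can be sketched in a few lines. This is an illustrative sketch only; the function name and the left/right coding are hypothetical, not from the lecture:

```python
import random

def counterbalance(subject_ids, seed=42):
    """Balanced randomization for a split-mouth design: exactly half
    the subjects get "left" as the control side and half get "right",
    in random order, so side effects cannot be confounded with the
    control/study assignment."""
    n = len(subject_ids)
    assert n % 2 == 0, "use an even number of subjects for exact balance"
    controls = ["left", "right"] * (n // 2)
    random.Random(seed).shuffle(controls)
    return {sid: {"control": c, "study": "right" if c == "left" else "left"}
            for sid, c in zip(subject_ids, controls)}

plan = counterbalance(list(range(1, 9)))  # 8 hypothetical subjects
```

Shuffling a pre-balanced list (rather than coin-flipping per subject) guarantees an exact 50/50 split of control sides.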
Dental Caries Examination
Isaacs (Caries Research, 1999)
Subjects: 150 children, 9–12 years old (a high-risk population)
Methods: split mouth; half of the mouth was examined with loupes and the other half with an explorer. DFS was counted at baseline and at 8 months.
Results: loupe side = 2.1-fold increase; explorer side = 4.5-fold increase
Crossover Design
Each participant receives both a control condition and an intervention.
Carry-over effects ("order" effects, "learning" effects) need a "washout": any carryover effect is washed out by allowing more than sufficient time between Visit 1 and Visit 2.

            Visit 1        Visit 2
Group 1     Control        Intervention
Group 2     Intervention   Control
Surveys (Questionnaires)
"Easy": relatively low cost
Methods: face to face, telephone, letter, (text), web-based, and mixed
Sampling method (possible bias)
Sample size (large)
Response rate (at least %: "adequate")
Reliability and validity
Methodology: Development of a Questionnaire
PROMIS® Instrument Development and Validation Scientific Standards, Version 2.0 (revised May 2013)
Design of the questionnaire: the Total Design Method/Tailored Design Method (TDM) (by Dillman), used to improve the quality of responses and to increase the response rate.
Dental Visit
"In the past 12 months, how many times did you see a dentist?"
"Last year, how many times did you see a dentist?"
Questionnaires (reliability and validity)
Reliability (consistency) and validity (accuracy)
Reliability is the degree to which an assessment tool produces stable and consistent results.
Validity refers to how well a test measures what it purports to measure.
[Diagram: four targets illustrating the combinations: high reliability/low validity, low reliability/high validity, low reliability/low validity, and high reliability/high validity]
Reliability
For tools (questionnaire, assessment, evaluation, etc.):
Stability reliability (test-retest)
Internal consistency reliability
Parallel forms reliability
For researchers (the reliability of the researchers themselves):
Interobserver reliability (interrater reliability)
Intraobserver reliability
Reliability
Stability reliability (test-retest): measures the stability of an instrument over time (the same test at different times). It is used when the phenomenon of interest is stable/unchanging.
It works for the Trait Anxiety Inventory, which measures how easily a person becomes anxious. However, it does not work for the State Anxiety Inventory, which measures the level of anxiety at various points in time.
You want to know how similar the two variables (first time and second time) are. What kind of statistics do you want to use? Answer: a correlation analysis.
[Scatterplot: Time 1 vs. Time 2]
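The "how similar are the Time 1 and Time 2 scores" question is answered with a correlation coefficient. A minimal Pearson correlation in Python (the score lists are made-up illustration data, not from any real study):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two lists of scores,
    e.g. Time 1 vs. Time 2 in a test-retest study."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [10, 12, 9, 15, 11, 14]   # hypothetical first administration
time2 = [11, 12, 10, 14, 11, 15]  # hypothetical second administration
r = pearson_r(time1, time2)       # close to 1.0 = good test-retest reliability
```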
Reliability
Internal consistency reliability (inter-item reliability): tests whether questions designed to measure the same concept do so. (Cronbach's alpha; the split-half reliability test is also used.)
Example:
How long do you study? (less than 1 min, …)
Do you study hard? (yes or no)
Do you discuss your questions with your professor? (yes or no)
Do you drive a car? (yes or no) (this one measures a different concept)
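Cronbach's alpha compares the variance of the individual item scores with the variance of the respondents' total scores. A small self-contained sketch; the item data are hypothetical:

```python
def cronbach_alpha(items):
    """Cronbach's alpha. `items` is a list of k item-score lists,
    each with one score per respondent."""
    k, n = len(items), len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Three 1-5 Likert items intended to measure the same concept,
# answered by five respondents (made-up data):
scores = [[4, 5, 3, 4, 2],
          [4, 4, 3, 5, 2],
          [5, 4, 2, 4, 3]]
alpha = cronbach_alpha(scores)  # roughly 0.86 here: good internal consistency
```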
Reliability
Parallel forms reliability: tests two measurements of identical (similar) concepts, given as different forms at the same time. E.g., an anxiety questionnaire and a fear questionnaire, or comparing a newly developed form against the "standard" form.
You want to know how similar the two variables are. What kind of statistics do you want to use? Answer: a correlation analysis.
[Scatterplot: Variable 1 vs. Variable 2]
Reliability
Interobserver reliability (interrater reliability): tests whether the observers (raters) of a research team measure the same thing; addresses the consistency of the implementation of a rating system. Multiple observers rate one subject.
Kappa statistics (2×2 table; two examiners and "yes" or "no" ratings)
Correlation coefficients (two examiners using a continuous or ordinal scale)
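For the 2×2 yes/no case, Cohen's kappa corrects the examiners' observed agreement for the agreement expected by chance. A sketch with made-up counts:

```python
def cohens_kappa(table):
    """Cohen's kappa for two examiners with yes/no ratings.
    table[i][j] = number of subjects rated i by examiner 1 and
    j by examiner 2 (index 0 = "no", 1 = "yes")."""
    n = sum(sum(row) for row in table)
    p_o = (table[0][0] + table[1][1]) / n                 # observed agreement
    row = [sum(r) / n for r in table]                     # examiner 1 marginals
    col = [sum(table[i][j] for i in range(2)) / n for j in range(2)]
    p_e = row[0] * col[0] + row[1] * col[1]               # chance agreement
    return (p_o - p_e) / (1 - p_e)

# 50 hypothetical subjects: both examiners say "no" 20 times and
# "yes" 15 times; they disagree on the remaining 15 subjects.
kappa = cohens_kappa([[20, 5], [10, 15]])  # ≈ 0.4
```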
Reliability
Intraobserver reliability: tests whether the observer rates in the same manner every time (the same assessment performed twice).
You want to know how similar the two variables are. What kind of statistics do you want to use? Answer: a correlation analysis.
[Scatterplot: Time 1 vs. Time 2]
Validity
Construct validity
  Convergent validity
  Discriminant validity
Criterion validity
  Concurrent validity
  Predictive validity
Content validity
  Representation validity
  Face validity
Validity
Construct validity: whether the measurement tool (e.g., a questionnaire) measures the constructs being investigated (assessed with factor analysis).
Convergent validity: two (conceptually) similar constructs correspond with one another.
Discriminant validity (divergent validity): two (conceptually) dissimilar constructs do not correspond with one another.
[Diagram: items of Questionnaire 1 and Questionnaire 2 loading onto Factors 1–4]
Validity
Criterion validity: a measure of how well a set of variables predicts an outcome.
Example: a researcher is developing a new "behavior questionnaire" that tries to predict a child's behavior in the dental chair.
[Scatterplots: questionnaire outcome vs. observed behavior rate, one showing good prediction and one showing poor prediction]
Validity
Content validity: the extent to which the content of the test reflects the specific intended (theoretical) domain of content.
E.g., a semester or quarter exam that only includes content covered during the last six weeks is not a valid measure of the course's overall objectives; it has very low content validity.
Questions?