Design Into Practice
These lectures tie into Terre Blanche chapters 4 and 5. Now you have a design – how do you run the study? Many practical issues are involved in converting a design into a study you can run.



Conceptualisation
We want to speak of abstract things: “intelligence”, “ability to cope”, “life satisfaction”. We cannot research these things until we know exactly what they are. Conceptualisation is the process of defining terms before research. Once a thing has been conceptualised it is a “construct”.

Making conceptual definitions
Begin with the lay understanding of the term – this is what the subjects will understand. Then consult the experts (the literature), which can be confusing and contradictory. Create a preliminary definition and “test it” hypothetically, using thought experiments.

The danger of reification
You must not make constructs out of things that don’t exist – reification. Careful grounding of the construct in established theory will prevent this. E.g., is homophobia a construct, or is it just prejudice? Using reified constructs leads to empty, disconnected research.

Operationalising variables
Your design specifies the variables – but how do you measure them? How do you put a number to “intelligence”, or to “capacity to cope”? We need to convert abstract variables into things we can measure in the real world: operationalisation.

Operationalisation (2)
Turn your variable into a directly measurable thing. E.g., how would you operationalise “success at university”? Often there are developed scales available. If you operationalise badly, you end up not studying what you want – e.g., operationalising “success in career” by looking only at the paycheque.
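A toy sketch of what an operational definition amounts to in practice: a concrete recipe that turns observable quantities into a number. The formula, names, and figures below are hypothetical illustrations, not an established measure.

```python
# A hypothetical operationalisation of "success at university":
# GPA weighted by the proportion of required credits completed.
# Both the formula and the variable names are illustrative assumptions.

def university_success(gpa, credits_completed, credits_required):
    """Return a single 'success' score from two observable quantities."""
    return gpa * (credits_completed / credits_required)

# A student with a 3.5 GPA who has completed half the required credits:
print(university_success(3.5, 60, 120))  # prints 1.75
```

Note how a bad recipe (e.g. dropping the credits term) would measure something narrower than intended, which is exactly the danger the slide describes.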

Measuring variables
The operationalisation implies what to measure – but how do you do it? If at all possible, use an established scale; if no scale exists, construct one. Scales must be valid and reliable – the more of each property, the better the scale. Validity and reliability need to be sorted out before you run your study.

Reliability in scales
Reliability is the stability of a measure over time: if I measure you now and again in half an hour, do I get the same reading? The maximum reliability depends on the construct – some constructs are unstable (e.g. heart rate). Low reliability implies that other variables (“noise variables”) are also being measured. Reliability speaks to the “accuracy” of the scale.
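Test–retest reliability of this kind is usually summarised as the correlation between the two measurement occasions. A minimal sketch in Python; the subjects' scores are made up for illustration.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for six subjects, measured now and half an hour later.
scores_now   = [12, 15, 9, 20, 17, 11]
scores_later = [13, 14, 10, 19, 18, 12]

# A value near 1 means the measure is stable over time (reliable);
# a value near 0 would suggest mostly "noise variables" are being picked up.
print(round(pearson_r(scores_now, scores_later), 2))
```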

Ensuring reliability
Reliability suffers when subjects have to interpret items, because everyone’s interpretation is slightly different. Objective scales, which allow little interpretation, are more reliable. Using a fixed response format (e.g. multiple choice or Likert-type items) helps: the researcher does not have to interpret what the subject meant.

Examples of response types
Open-ended item: Briefly describe your most frightening experience.
MCQ: The most frightening for me is:
A) Dogs
B) Snakes
C) Spiders
D) None of the above

More examples
Likert type: Circle the option which best describes your experience. “I find dogs to be:” with response anchors ranging from “Not frightening at all” to “Terrifyingly frightening”.
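Fixed response formats also make scoring mechanical. A minimal sketch of scoring a small set of Likert-type items; the fear items and the reverse-keying convention are hypothetical assumptions for illustration.

```python
def score_likert(responses, reverse_items=(), n_points=5):
    """Sum Likert responses (1..n_points), reverse-keying the listed
    item indices so a higher total always means 'more frightened'."""
    total = 0
    for i, r in enumerate(responses):
        total += (n_points + 1 - r) if i in reverse_items else r
    return total

# Three hypothetical fear items on a 1-5 scale; item 1 is worded in
# reverse ("I feel calm around dogs"), so a response of 2 becomes 6 - 2 = 4.
print(score_likert([4, 2, 5], reverse_items={1}))  # prints 13
```

Because no human judgement enters the scoring, two researchers scoring the same answer sheet will always get the same number, which is the objectivity the slide is after.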

Validity in scales
Validity is the degree to which a scale measures what it is supposed to measure. Validity is subdivided into many types; we will look at the two most important: criterion-related validity and construct validity.

Criterion-related validity
The degree to which your scale matches other established scales. By comparing it to a scale known to be valid, you can be confident yours is valid too. Why make a new scale if one already exists? Maybe yours is quicker to administer, or the established one is not suitable for group testing.

How to check for criterion-related validity
This is done through a set of studies. Run a sub-study in which you give the subjects both your scale and the established one, then run a correlation between the two sets of scores. If the correlation is statistically significant, your scale compares well to the established one. It is better to run several of these validity studies rather than just one.

Example: intelligence test
An accepted test is the WAIS-R, but it is very long to run (3 hours). You need something quicker (20 minutes), so you create the QIQ. Create the test, select a group of subjects, and have them take the WAIS-R and then the QIQ. Compare the results with a correlation: if they correlate well, your test is measuring intelligence.
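The WAIS-R/QIQ comparison boils down to a single correlation. A sketch with invented scores – the QIQ itself, the subjects' numbers, and the idea that "high r means it correlates well" are all assumptions from the slide, not real data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x)) *
                  sqrt(sum((b - my) ** 2 for b in y)))

# Hypothetical IQ scores for eight subjects on the established WAIS-R
# and on the new, quicker QIQ.
wais_r = [95, 110, 102, 130, 88, 121, 99, 107]
qiq    = [97, 108, 100, 127, 91, 118, 103, 105]

# A strong positive correlation suggests the QIQ is measuring the same
# thing as the WAIS-R, i.e. it has criterion-related validity.
print(round(pearson_r(wais_r, qiq), 2))
```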

Construct validity
Construct validity: does the scale actually measure the construct? E.g., measuring cranial circumference as a measure of intelligence. Construct validity is closely tied to the theory of the construct. It is the most difficult form of validity to achieve, and the most important: measures lacking construct validity are almost useless.

How to check for construct validity
Think about it for a minute: how can you show that a scale truly measures what it claims to? How would you show that your depression scale has construct validity? Hint: compare it not to scales of the same thing, but to scales of similar and dissimilar things.

The strategy
The procedure is similar to criterion-related validity: before your actual study, run a set of sub-studies to check your measure. You will need two sets of studies: concurrent construct validity and discriminant construct validity.

Quick aside: direction of correlations
A correlation is the degree of relationship between two variables, A and B. Positive correlation: when A has a high value, B has a high value; when A has a low value, B has a low value. Negative correlation: when A has a high value, B has a low value; when A has a low value, B has a high value.
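The three directions can be seen with tiny made-up data sets (the numbers are arbitrary):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation: +1 perfect positive, -1 perfect negative."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x)) *
                  sqrt(sum((b - my) ** 2 for b in y)))

a = [1, 2, 3, 4, 5]

print(pearson_r(a, [2, 4, 6, 8, 10]))  # B rises with A: r is about +1
print(pearson_r(a, [10, 8, 6, 4, 2]))  # B falls as A rises: r is about -1
print(pearson_r(a, [5, 3, 6, 3, 5]))   # no consistent pattern: r is about 0
```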

Correlations example
Positive correlation: the relationship between amount smoked and probability of heart disease. Negative correlation: the relationship between amount of daily exercise and probability of heart disease. No correlation: the relationship between whether you drink tea or coffee and the probability of heart disease.

Concurrent validity
Show that your scale relates positively to related concepts. For example, people who are depressed will have many sad thoughts (the mood congruency effect). Establish concurrent validity against several other constructs.

Discriminant validity
Show that your scale relates negatively to opposite concepts. For example, people who are depressed will have very low energy. Establish discriminant validity against several other constructs.
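Both checks come down to the sign of a correlation: positive against a related construct (concurrent), negative against an opposite one (discriminant). A sketch with fabricated scores; the three scales and all the numbers are hypothetical.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x)) *
                  sqrt(sum((b - my) ** 2 for b in y)))

# Hypothetical scores for six subjects.
depression   = [20, 5, 14, 8, 17, 11]   # the new scale being validated
sad_thoughts = [18, 6, 12, 9, 16, 10]   # related construct
energy       = [4, 19, 9, 15, 6, 12]    # opposite construct

print(pearson_r(depression, sad_thoughts) > 0)  # concurrent: True
print(pearson_r(depression, energy) < 0)        # discriminant: True
```

In a real validation you would run these sub-studies against several related and several opposite constructs, as the slides recommend, not just one of each.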

Ensuring construct validity
The best way is to be an expert on that construct: theory should tell you what to include – but only if the theory is well established! The second way is to consult the experts and the literature closely, staying with the uncontroversial aspects of the construct.

Validity & reliability summary
Aim: make sure that your variables are correctly operationalised. Reliability: the scale is stable over time and place. Validity: the scale truly measures the construct, not something else.

Validity & reliability summary (2)
Ensuring reliability: require very little interpretation / increase objectivity. Ensuring validity: base the measure closely on the current understanding of the construct. Measuring validity: positive correlations with related scales, negative correlations with opposite scales.