Part 2: Quantitative Methods


Part 2: Quantitative Methods October 2, 2006

Notice where stats fits into this ONE process. What stats can and cannot do to address a research question.

Sampling

Target vs. Accessible Population
Target: High School Sports Officials / Accessible: National Association of Sports Officials membership
Target: Students enrolled in 5th grade in NM schools / Accessible: New Mexico 5th graders
Target: Albuquerque residents / Accessible: Albuquerque phone book
Target: The U.S. electorate / Accessible: registered voters

[Overview concept map: sampling processes and procedures; simple random, systematic, stratified random, cluster, and convenience sampling (pros and cons); biased sampling; volunteers; size of sample; sampling error; selection and assignment; population validity; the inferential leap.]

The Inferential Leap

Simple Random Sampling [Slide shows a six-row random number table and a numbered roster of 20 students: Andrea, Tina, Paul, Wilbur, Sandra, Kathy, Jim, George, Emir, Becky, Sharon, Gladys, Jose, Bill, Sue, Erica, Aaron, Fred, Pam, Roger. Numbers drawn from the table pick cases from the roster, so every member has an equal chance of selection.]
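The same draw can be sketched in Python; the roster names come from the slide, and the seed is arbitrary, chosen only so the example is reproducible:

```python
import random

roster = ["Andrea", "Tina", "Paul", "Wilbur", "Sandra", "Kathy", "Jim",
          "George", "Emir", "Becky", "Sharon", "Gladys", "Jose", "Bill",
          "Sue", "Erica", "Aaron", "Fred", "Pam", "Roger"]

random.seed(42)                    # fixed seed so the draw is reproducible
sample = random.sample(roster, k=5)  # every member has an equal chance
print(sample)
```

`random.sample` draws without replacement, which mirrors picking distinct cases from a random number table.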

Stratified Random Sampling [Slide shows the same random number table with the roster split into two strata of 10: Andrea, Tina, Kathy, Sandra, Becky, Sharon, Gladys, Sue, Erica, Pam in one list, and Paul, Wilbur, Jim, George, Emir, Jose, Bill, Aaron, Fred, Roger in the other. A random sample is drawn from each stratum separately.]
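A proportional stratified draw can be sketched as follows; the two strata reuse the slide's two rosters, and the 40% sampling fraction is an assumption chosen only for illustration:

```python
import random

strata = {
    "group_a": ["Andrea", "Tina", "Kathy", "Sandra", "Becky",
                "Sharon", "Gladys", "Sue", "Erica", "Pam"],
    "group_b": ["Paul", "Wilbur", "Jim", "George", "Emir",
                "Jose", "Bill", "Aaron", "Fred", "Roger"],
}

random.seed(0)
fraction = 0.4
sample = []
for name, members in strata.items():
    k = round(len(members) * fraction)      # proportional allocation
    sample.extend(random.sample(members, k))  # random draw within stratum
print(sample)
```

Sampling within each stratum guarantees that both groups appear in the sample in their population proportions, which simple random sampling only achieves on average.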

Systematic Sampling Say your target population has 100,000 members, a list of them is available, and you need 1,000 cases for your sample. 100,000 / 1,000 = 100, so select a random starting number from the table, then select every 100th case.
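The steps above can be sketched in Python; the population is a made-up list of case labels standing in for the list mentioned on the slide:

```python
import random

population = [f"case_{i}" for i in range(100_000)]  # hypothetical list
n = 1_000
interval = len(population) // n       # 100,000 / 1,000 = 100
start = random.randrange(interval)    # random start between 0 and 99
sample = population[start::interval]  # then every 100th case
print(len(sample))
```

Because the start is random and the interval divides the list evenly, every case still has a 1-in-100 chance of selection.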

Cluster Sampling Sample naturally occurring groups rather than individuals. Multistage sampling: randomly select clusters at one level (e.g., state, district, school, classroom, student), then randomly select within each chosen cluster, and survey, interview, etc.
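A two-stage (multistage) draw might look like this; the schools and classrooms are hypothetical names invented for the sketch:

```python
import random

# Hypothetical two-stage frame: schools, then classrooms within each school.
schools = {
    "school_1": ["class_1a", "class_1b", "class_1c"],
    "school_2": ["class_2a", "class_2b"],
    "school_3": ["class_3a", "class_3b", "class_3c", "class_3d"],
}

random.seed(1)
# Stage 1: randomly select schools (the clusters).
chosen_schools = random.sample(list(schools), k=2)
# Stage 2: randomly select one classroom within each chosen school.
chosen_classes = [random.choice(schools[s]) for s in chosen_schools]
print(chosen_schools, chosen_classes)
```

Only the selected clusters need to be visited, which is what makes cluster designs practical when no full list of individuals exists.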

Convenience Sampling Why convenient? The sample is located near the researcher, the researcher has connections with an administrator or staff, the researcher is familiar with the setting, or the data are already available. What are the shortcomings of convenience samples?

Volunteers in Sampling How might volunteers differ? Children with parental permission tend to be: more academically competent; more popular with peers; more physically attractive; less likely to smoke or use drugs; more likely to be white; more likely to come from a two-parent household; more likely to be involved in extracurricular activities; less likely to be socially withdrawn; less likely to be aggressive.

Size of the Sample Bigger is (usually) better. Unless? How big is big? Power analysis. Practical issues: attrition, reliability, cost/benefit.
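One common back-of-envelope answer to "how big is big?" is the sample-size formula for estimating a proportion. This is a rough sketch, not a substitute for a full power analysis; the 1.96 z-value assumes 95% confidence, and p = 0.5 is the conservative worst case:

```python
import math

def sample_size(margin, confidence_z=1.96, p=0.5):
    """Sample size needed to estimate a proportion within +/- margin.

    Uses the conservative p = 0.5; a back-of-envelope check only,
    not a power analysis for a specific statistical test.
    """
    return math.ceil(confidence_z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size(0.05))  # 5-point margin at 95% confidence
```

Halving the margin of error roughly quadruples the required sample, which is why precision is expensive.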

Correlation & Instrumentation Reliability and Validity

Correlation Coefficients Pearson product-moment correlation: the degree of relationship between two variables. Positive: as one variable increases (or decreases), so does the other. Negative: as one variable increases, the other decreases. Magnitude or strength of relationship ranges from -1.00 to +1.00. Correlation does not imply causation.
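The coefficient can be computed directly from its definition; the hours and scores data below are made up purely for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours = [1, 2, 3, 4, 5]       # hypothetical hours studied
scores = [55, 61, 68, 74, 82]  # hypothetical test scores
print(round(pearson_r(hours, scores), 3))  # strongly positive, near +1
```

A strong r says the points fall near a line; it says nothing about which variable, if either, causes the other.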

Positive Correlation

Negative Correlation

No Correlation

Correlations The tightness of the scatter plot determines the strength of a correlation, not the slope of the line. For example see: http://noppa5.pc.helsinki.fi/koe/corr/cor7.html Remember: correlation does not imply causation.

Negative Correlation

Operationism vs. Essentialism According to Stanovich What are they? How do they differ?

Essentialism Like to argue about the meaning of our terms: "What does the theoretical concept really mean?" Must have a complete and unambiguous understanding of the language involved. Operationism Links concepts to observable events that can be measured. Concepts in science are defined by a set of operations. Several slightly different tasks and behavioral events are used to converge on a concept.

Validity and Reliability Validity is an important consideration in the choice of an instrument to be used in a research investigation: the instrument should measure what it is supposed to measure. Researchers want instruments that will allow them to make warranted conclusions about the characteristics of the subjects they study. Reliability is another important consideration, since researchers want consistent results from instrumentation. Consistency gives researchers confidence that the results actually represent the achievement of the individuals involved.

Reliability Test-retest reliability. Inter-rater reliability. Parallel-forms reliability. Internal consistency (commonly estimated with Cronbach's alpha).
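Cronbach's alpha can be computed from item and total-score variances; the three items and five respondents below are invented for illustration, and population variance is used throughout:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of items, each a list of respondent scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three items answered by five respondents (hypothetical ratings, 1-5).
item1 = [4, 5, 3, 4, 2]
item2 = [4, 4, 3, 5, 2]
item3 = [5, 5, 2, 4, 3]
print(round(cronbach_alpha([item1, item2, item3]), 2))
```

Higher values mean the items vary together, i.e., they seem to tap the same underlying attribute.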

Validity Face: Does it appear to measure what it purports to measure? Content: Do the items cover the domain? Construct: Does it measure the unobservable attribute that it purports to measure?

Validity (cont.) Criterion: predictive, concurrent. Consequential.

Types of validity (cont.) Here the instrument samples some, and only some, of the construct

Types of validity Here the instrument samples all and more of the construct

The construct Here the instrument fails to sample ANY of the construct The instrument

The construct Here the instrument samples some but not all of the construct The instrument

Perfection!

Reliability and Validity

In groups of 3 to 4 Sampling: What is the target population? What sampling procedure was used? Do you think the sample is representative? Why or why not? Measurement: What types of reliability and validity evidence are provided? What else would you like to know?

Ways to Classify Instruments Who provides the information? The subjects themselves, directly or indirectly: self-report data. Informants: people who are knowledgeable about the subjects and provide information about them.

Types of Researcher-completed Instruments Rating scales Interview schedules Tally sheets Flowcharts Performance checklists Observation forms

Excerpt from a Behavior Rating Scale for Teachers Instructions: For each of the behaviors listed below, circle the appropriate number, using the following key: 5 = Excellent, 4 = Above Average, 3 = Average, 2 = Below Average, 1 = Poor.
A. Explains course material clearly. 1 2 3 4 5
B. Establishes rapport with students. 1 2 3 4 5
C. Asks high-level questions. 1 2 3 4 5
D. Varies class activities. 1 2 3 4 5

Excerpt from a Graphic Rating Scale Instructions: Indicate the quality of the student's participation in the following class activities by placing an X anywhere along each line.
1. Listens to teacher's instructions. Always | Frequently | Occasionally | Seldom | Never
2. Listens to the opinions of other students. Always | Frequently | Occasionally | Seldom | Never
3. Offers own opinions in class discussions. Always | Frequently | Occasionally | Seldom | Never

Sample Observation Form

Discussion Analysis Tally Sheet

Performance Checklist Noting Student Actions

Types of Subject-completed Instruments Questionnaires Self-checklists Attitude scales Personality inventories Achievement/aptitude tests Performance tests Projective devices

Example of a Self-Checklist

Example of Items from a Likert Scale

Example of the Semantic Differential

Pictorial Attitude Scale for Use with Young Children

Sample Items from a Personality Inventory

Sample Items from an Achievement Test

Sample Item from an Aptitude Test

Sample Items from an Intelligence Test

Item Formats Questions used in a subject-completed instrument can take many forms but are classified as either selection or supply items. Examples of selection items are: True-false items Matching items Multiple choice items Interpretive exercises Examples of supply items are: Short answer items Essay questions

Unobtrusive Measures Many instruments require the cooperation of the respondent in one way or another, and an intrusion into an ongoing activity can provoke a negative reaction from the respondent. To avoid this, researchers use unobtrusive measures: data collection procedures that involve no intrusion into the naturally occurring course of events. In most cases no instrument is used, but good record keeping is necessary. Unobtrusive measures are valuable as supplements to interviews and questionnaires, often providing a useful way to corroborate what more traditional data sources reveal.

Norm-Referenced vs. Criterion-Referenced Instruments All derived scores give meaning to individual scores by comparing them to the scores of a group. The group used to determine derived scores is called the norm group, and instruments that provide such scores are referred to as norm-referenced instruments. An alternative to norm-referenced achievement or performance instruments is a criterion-referenced test, which is based on a specific goal or target (criterion) for each learner to achieve. The key difference is that criterion-referenced tests focus more directly on instruction.