Presentation on theme: "Measurement, EDRS 6301, Summer 2001, Dr. Kielborn" — Presentation transcript:

1 Measurement, EDRS 6301, Summer 2001, Dr. Kielborn

2 Measurement
• All measures contain error
• Random error leads to unreliability
• Systematic error leads to invalidity
• Obtained Score = True Score + Random Error
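In classical test theory, each obtained score is the true score plus random error, so the random errors average toward zero across repeated measurements. A short simulation illustrates this (a Python sketch, not part of the original slides; the true score of 75 and error SD of 5 are illustrative assumptions):

```python
import random

random.seed(42)

# Classical test theory: Obtained Score = True Score + Random Error.
# Simulate 1,000 administrations of a test to one person whose true
# score is 75, with normally distributed random error (SD = 5).
true_score = 75.0
obtained = [true_score + random.gauss(0, 5) for _ in range(1000)]

# Random error averages toward zero, so the mean obtained score
# approaches the true score.
mean_obtained = sum(obtained) / len(obtained)
print(round(mean_obtained, 1))
```

Any single obtained score can be far from 75, but the mean over many administrations lands close to the true score, which is why random error harms reliability rather than introducing bias.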

3 True Scores and Error Scores
• A true score is the real and unchanging measure of the human characteristic
• The error score is a positive or negative value that results from uncontrolled and unrealized variability in the measurement

4 Error
• Error can result from the way we observe or test the individual
• Observational (overly difficult items; a broad test with few items sampling each concept; items unclear to the participant)
• Procedural (inconsistent administration, recording, scoring, or interpretation)

5 Error (continued)
• Subject (individuals performing differently; reaction of the participant to the instrument or experiment)

6 Reliability
• Reliability is consistency in measurement
• Consistency is specific to the group being assessed
• If there is consistency, there is confidence in the results

7 Reliability
• The consistency of measurement
• The extent to which observations or an experimental design can be replicated by another independent researcher
• How consistently a data collection process measures whatever it measures
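One common way to quantify this consistency is test-retest reliability: correlate the scores from two administrations of the same instrument. The sketch below (Python, standard library only; the sample size, score distribution, and error SD are illustrative assumptions, not from the slides) simulates two administrations and computes the correlation:

```python
import random

random.seed(0)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical true scores for 200 participants, plus independent
# random error on each of two administrations (test and retest).
true_scores = [random.gauss(70, 10) for _ in range(200)]
test1 = [t + random.gauss(0, 4) for t in true_scores]
test2 = [t + random.gauss(0, 4) for t in true_scores]

# Test-retest reliability: correlation between the two sets of
# obtained scores. Larger random error pushes this toward 0.
r = pearson(test1, test2)
print(round(r, 2))
```

With true-score SD 10 and error SD 4, the expected reliability is roughly 100 / (100 + 16) ≈ 0.86; increasing the error SD in the sketch shows how random error drives reliability down.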

8 Sources of Unreliability
• Faulty items and observations (tricky, ambiguous, or confusing questions or format)
• Excessively difficult elements of the data collection process (participants guess)
• Excessively easy elements of the data collection process
• Inadequate number of observations or items

9 Sources of Unreliability
• Accidentally measuring multiple outcomes (reliability assumes all or most test items or interview/survey questions refer to the same characteristic)
• Faulty scoring
• Characteristics of the respondents (inability to concentrate, mood)
• Faulty administration (a room that is hot, cold, or full of distractions)

10 Validity
• Does it measure what we think it is measuring?
• High validity means a high degree of accuracy

11 Validity
• Establish rapport
• Minimize disruptions
• Use unobtrusive methods for recording data
• Triangulation: confirming results through more than one data source

12 Internal Validity
• The extent to which the results of a study are supported by the methodology
• A well-controlled study is said to have high internal validity and "believable" conclusions

13 Threats to Internal Validity
• History: an event occurring between the pretest and the posttest
• Maturation: a change that occurs because a participant has grown older or gained experience
• Instrumentation: a change that occurs because the testing procedures are unreliable or have been altered unintentionally

14 Threats to Internal Validity
• Testing: a change that occurs because the test has sensitized the participants to the nature of the research
• Regression: the tendency of a very low or very high score to move toward the mean on retesting
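Regression toward the mean follows directly from the error model: an extreme obtained score is usually partly extreme random error, which is unlikely to recur on retesting. The simulation below (a Python sketch with illustrative parameters, not from the slides) selects the lowest pretest scorers and shows their posttest mean rising with no intervention at all:

```python
import random

random.seed(1)

# Regression toward the mean: participants selected for extreme
# pretest scores tend to score closer to the group mean on the
# posttest, even with no treatment, because part of the extreme
# score was random error that does not repeat.
true_scores = [random.gauss(50, 10) for _ in range(1000)]
pre = [t + random.gauss(0, 8) for t in true_scores]
post = [t + random.gauss(0, 8) for t in true_scores]

# Select roughly the lowest-scoring 10% on the pretest.
cutoff = sorted(pre)[99]
low_group = [i for i in range(1000) if pre[i] <= cutoff]

mean_pre = sum(pre[i] for i in low_group) / len(low_group)
mean_post = sum(post[i] for i in low_group) / len(low_group)

# The same people score higher on the posttest with no intervention.
print(mean_post > mean_pre)
```

This is why a study that selects participants for extreme scores and then reports "improvement" has a built-in threat to internal validity unless it uses a comparable control group.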

15 References
• Marshall, J. (2001, June 16). Assessment for educational improvement workshop. UWG, Carrollton, GA.
• Schloss, P. J., & Smith, M. A. (1999). Conducting research. Upper Saddle River, NJ: Merrill.
• Vockell, E. L., & Asher, J. W. (1995). Educational research (2nd ed.). Upper Saddle River, NJ: Merrill.

