2 Reading Assessments for Elementary Schools
Tracey E. Hall, Center for Applied Special Technology
Marley W. Watkins, Pennsylvania State University
Frank C. Worrell, University of California, Berkeley

3 REVIEW: Major Concepts
–Nomothetic and Idiographic
–Samples
–Norms
–Standardized Administration
–Reliability
–Validity

4 Nomothetic Relating to the abstract, the universal, the general. Nomothetic assessment focuses on the group as a unit and on finding principles that apply broadly. For example, boys report higher math self-concepts than girls; girls report more depressive symptoms than boys.

5 Idiographic Relating to the concrete, the individual, the unique. Idiographic assessment focuses on the individual student. For example: What phonemic awareness skills does Joe possess?

6 Populations and Samples I A population consists of all the representatives of a particular domain that you are interested in. The domain could be people, behavior, or curriculum (e.g., reading, math, spelling, ...

7 Populations and Samples II A sample is a subgroup that you actually draw from the population of interest. Ideally, you want your sample to represent your population –the people polled or examined, the test content, the manifestations of behavior

8 Samples A random sample is one in which each member of the population had an equal and independent chance of being selected. Random samples are important because the goal is a sample that represents the population fairly; an unbiased sample can then stand in for the population. A probability sample is one in which elements are drawn according to some known probability structure. Probability samples are typically used when the sample must represent subgroups (e.g., ethnicity, socioeconomic status, gender).
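A minimal sketch of the two sampling schemes above, in Python with NumPy; the population, the sample sizes, and the urban/rural strata are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical population of 10,000 students, 60% urban / 40% rural.
population = np.array(["urban"] * 6000 + ["rural"] * 4000)

# Simple random sample: every student has an equal, independent chance.
srs = rng.choice(population, size=100, replace=False)

# Stratified probability sample: draw from each subgroup in proportion
# to its share of the population, so no stratum is over-represented.
strata = {"urban": 60, "rural": 40}   # target counts for a sample of 100
stratified = np.concatenate([
    rng.choice(population[population == group], size=n, replace=False)
    for group, n in strata.items()
])

print("SRS urban share:", np.mean(srs == "urban"))                # varies by chance
print("Stratified urban share:", np.mean(stratified == "urban"))  # exactly 0.60
```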

9 Norms I Norms describe how the “average” individual performs. Many of the tests and rating scales used to compare children in the US are norm-referenced. –An individual child’s performance is compared to the norms established using a representative sample.

10 Norms II For the score on a normed instrument to be valid, the person being assessed must belong to the population for which the test was normed. If we wish to apply the test to another group of people, we need to establish norms for the new group.

11 Norms III To create new norms, we need to do a number of things: –Get a representative sample of the new population –Administer the instrument to the sample in a standardized fashion –Examine the reliability and validity of the instrument with that new sample –Determine how we are going to report scores and create the appropriate conversion tables (a sketch of this last step appears below)
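As an illustration of that last step, a minimal sketch (Python/NumPy; the 700 raw scores are simulated, not real norming data) that builds a raw-score-to-percentile-rank conversion table:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
# Hypothetical norming sample: 700 raw scores on a 40-item test.
norming_sample = rng.binomial(n=40, p=0.6, size=700)

def percentile_rank(raw, sample):
    """Percentage of the norming sample scoring at or below `raw`."""
    return 100.0 * np.mean(sample <= raw)

# Conversion table: one row per raw score of interest.
for raw in range(18, 31, 2):
    print(f"raw score {raw:2d} -> percentile rank {percentile_rank(raw, norming_sample):5.1f}")
```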

12 Standardized Administration All measurement has error. Standardized administration is one way to reduce error due to examiner/clinician effects. For example, consider how differently the same question lands with different facial expressions and tones: Please define a noun for me :-) versus DEFINE a noun if you can? :-(

13 Normal Curve Many distributions of human traits form a normal curve. Most cases cluster near the middle, with fewer individuals at the extremes, and the curve is symmetrical. Because the shape of the normal curve is known, we know how the population is distributed.
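Because the curve's shape is known, the proportions quoted on the next slide can be checked directly from the standard normal CDF; a quick sketch using scipy, purely illustrative and not tied to any particular test:

```python
# Verify the proportion of a normal distribution within ±k SD of the mean.
from scipy.stats import norm

for k in (1, 2, 3):
    pct = 100 * (norm.cdf(k) - norm.cdf(-k))
    print(f"within ±{k} SD of the mean: {pct:.2f}%")
# prints 68.27%, 95.45%, 99.73% (the familiar 68.26/95.44/99.72 figures
# come from doubling the rounded half-curve areas 34.13/47.72/49.86)
```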

14 Ways of Reporting Scores
–Mean, standard deviation
–Distribution of scores: 68.26% of cases fall within ±1 SD of the mean; 95.44% within ±2 SD; 99.72% within ±3 SD
–Stanines (1, 2, 3, 4, 5, 6, 7, 8, 9)
–Standard scores: linear transformations of raw scores, but easier to interpret (sketched below)
–Percentile ranks*
–Box and whisker plots*
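A minimal sketch of these transformations on hypothetical raw scores: z-scores, one common standard-score scale (T-scores, mean 50, SD 10), and approximate stanines via the usual mean-5, SD-2 rounding rule (true stanines are assigned from percentile bands):

```python
import numpy as np

raw = np.array([12, 18, 25, 31, 38], dtype=float)  # hypothetical raw scores
z = (raw - raw.mean()) / raw.std()                 # z-scores: mean 0, SD 1

t_scores = 50 + 10 * z                             # T-scores: mean 50, SD 10
# Approximate stanines (mean 5, SD 2), rounded and bounded to 1..9.
stanines = np.clip(np.round(5 + 2 * z), 1, 9).astype(int)

for r, zi, ti, si in zip(raw, z, t_scores, stanines):
    print(f"raw {r:4.0f}  z {zi:+.2f}  T {ti:5.1f}  stanine {si}")
```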

15 Percentiles A way of reporting where a person falls on a distribution. The percentile rank of a score tells you the percentage of people who obtained a score equal to or lower than that score. Box and whisker plots are visual displays, or graphic representations, of the shape of a distribution using percentiles.
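A short sketch of the five-number summary that a box-and-whisker plot displays, computed with NumPy percentiles on hypothetical scores:

```python
import numpy as np

scores = np.array([55, 61, 64, 68, 70, 73, 75, 79, 83, 90], dtype=float)

# The five percentiles that define a box-and-whisker plot.
labels = ["min", "25th percentile (Q1)", "50th percentile (median)",
          "75th percentile (Q3)", "max"]
for name, value in zip(labels, np.percentile(scores, [0, 25, 50, 75, 100])):
    print(f"{name:>24}: {value:.1f}")
```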

17 Correlation We need to understand the correlation coefficient to understand the manual. The correlation coefficient, r, quantifies the relationship between two sets of scores. A correlation coefficient can range from -1 to +1 –Zero means the two sets of scores are not related –An absolute value of one means the two sets of scores are perfectly related (a perfect correlation)
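A minimal sketch computing r for two hypothetical sets of scores, both from the definitional formula (the mean product of z-scores) and with NumPy's built-in:

```python
import numpy as np

x = np.array([10, 12, 15, 18, 20], dtype=float)  # e.g., scores on test A
y = np.array([11, 14, 14, 19, 21], dtype=float)  # e.g., scores on test B

# Definitional formula: r is the mean product of paired z-scores.
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()
r_manual = np.mean(zx * zy)

r_numpy = np.corrcoef(x, y)[0, 1]
print(f"r (by hand) = {r_manual:.3f},  r (numpy) = {r_numpy:.3f}")
```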

18 Correlation 2 Correlations can be positive or negative. A positive correlation tells us that as one set of scores increases, the second set of scores also increases. A negative correlation tells us that as one set of scores increases, the other set decreases. Think of some examples of variables with negative r’s. The absolute value of a correlation indicates the strength of the relationship; thus, .55 is equal in strength to -.55.

19 How would you describe the correlations shown by these charts?

20 Reliability Reliability addresses the stability, consistency, or reproducibility of scores. –Internal consistency (split-half, Cronbach’s alpha; see the sketch below) –Test-retest –Parallel/alternate forms –Inter-rater
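As one concrete example, a sketch of Cronbach's alpha on a hypothetical item-response matrix (rows are students, columns are items; the data are invented for illustration):

```python
import numpy as np

# Hypothetical right/wrong responses: 6 students x 5 items.
items = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 1, 1],
], dtype=float)

k = items.shape[1]                          # number of items
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of students' total scores

# alpha = k/(k-1) * (1 - sum of item variances / total-score variance)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```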

21 Validity Validity addresses the accuracy or truthfulness of scores. Are they measuring what we want them to? –Content –Criterion - Concurrent –Criterion - Predictive –Construct –Face –(Cash)

22 Content Validity Is the assessment tool representative of the domain (behavior, curriculum) being measured? An assessment tool is scrutinized for its (a) completeness or representativeness, (b) appropriateness, (c) format, and (d) bias –E.g., the MS-PAS

23 Criterion-related Validity What is the correlation between our instrument, scale, or test and another variable that measures the same thing, or measures something that is very close to ours? In concurrent validity, we compare scores on the instrument we are validating to scores on another variable that are obtained at the same time. In predictive validity, we compare scores on the instrument we are validating to scores on another variable that are obtained at some future time.
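A tiny sketch (hypothetical scores throughout) showing that concurrent and predictive validity are both criterion correlations; only the timing of the criterion measure differs:

```python
import numpy as np

new_test    = np.array([12, 15, 18, 22, 25, 27], dtype=float)  # instrument being validated
established = np.array([40, 47, 52, 60, 66, 70], dtype=float)  # criterion given the same week
later_gpa   = np.array([2.1, 2.4, 2.9, 3.2, 3.4, 3.8])         # criterion obtained a year later

print("concurrent validity r:", round(np.corrcoef(new_test, established)[0, 1], 2))
print("predictive validity r:", round(np.corrcoef(new_test, later_gpa)[0, 1], 2))
```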

24 Construct Validity The overarching question: Is the instrument measuring what it is supposed to measure? –Dependent on reliability, content validity, and criterion-related validity. We also sometimes look at other types of validity –Convergent validity: r with a similar construct –Discriminant validity: r with an unrelated construct –Structural validity: What is the structure of the scores on this instrument?

25 Elementary Normative Sample Stratified by educational region. Males and females represented equally. Schools, classes, and individuals chosen at random. Final sample consists of 700 students (50% female).

26–31 [Image-only slides showing pages 2–3 of the manual; no transcript text]

32 Measures
First and Second Year/Infants 1 and 2
–Mountain Shadows Phonemic Awareness Scale (MS-PAS), group administered
–Individual Phonemic Analysis
Second Year/Infant 2 to Standard 5
–Oral Reading Fluency
Standards 1 and 2
–The Cloze Procedure

33 Assessment Instruction Cycle (Madigan, Hall, & Glang, 1997)
Assessment
–Determine starting point
–Analyze errors
–Monitor progress
–Modify instruction
Instructional Delivery
–Secure student attention
–Pace instruction appropriately
–Monitor student performance
–Provide feedback
Instructional Design
–Determine content
–Select language of instruction
–Select examples
–Schedule scope and sequence
–Provide for cumulative review
Initial Evaluation
–Archival assessment
–Diagnostic assessments
–Formal standardized measures

