Examining the Content Validity for a Preschool Mathematics Assessment. Carol Sparber, M.Ed., Kent State University; Pam Elwood, M.Ed., Kent State University; Kristie Pretti-Frontczak, Ph.D., B2K Solutions.

Introduction

There is a need to conduct content validity studies in order to establish valid and reliable measures for new assessments. New measures must be appropriate for use with a targeted population. A content validity study provides information on the clarity and representativeness of each item, as well as a preliminary analysis of factorial validity. The current study establishes the content validity of a preschool mathematics assessment for the forthcoming 3rd edition of the Assessment, Evaluation, and Programming System for Infants and Young Children (AEPS) to ensure that the measure is appropriate for this population (Rubio et al., 2003). A review of the literature provided criteria for establishing the content validity questions, and assessment item terms were obtained directly from the AEPS literature (2002). The skills assessed for instructional planning decisions include those that are functional, teachable, and relevant (Hosp & Ardoin, 2008): functional skills increase independence, teachable skills increase performance, and relevant skills are essential to instructional planning decisions.

Purpose

Due to increasing demands for rigor in educational research, it is imperative that assessments be developed using established criteria from the literature (Odom et al., 2005). This study examined the content validity of the widely used AEPS to determine whether the assessment questions measure the domain they are intended to measure. Establishing content validity:
- clarifies each individual element of an assessment
- indicates where individual assessment elements need modification
- offers information on the representativeness of the measure
This is critical in the development of a valid and reliable instrument to assess the mathematical development of infants and young children.

Methods

Initial content validation of the AEPS was established through the use of experts in the field via an online survey. Six content experts were selected and invited to review the mathematics items of the AEPS. Three mathematical strands and the related goals/objectives for skills and outcomes were rated in terms of their developmental sequence. Experts rated assessment items on five critical elements (i.e., functional, teachable, relevance, item criterion, and examples) using a four-point Likert scale. Interrater agreement (IRA) was calculated across experts to determine agreement on the representativeness and clarity of each goal (Davis, 1992). A content validity index (CVI) was calculated to determine expert agreement on the representativeness of the measures (Grant & Davis, 1997). Qualitative responses provided additional information for determining whether the measures are relevant and developmentally appropriate for the construct being measured. Feedback was summarized to provide clarification and direction to inform further development of practitioner instructional supports.

Results

Findings provided critical information for evaluating the new mathematics area of the forthcoming 3rd edition of the AEPS. Overall interrater agreement on survey items considered reliable was 98.6%. The CVI for the entire measure, obtained by averaging the item-level CVIs across all items, was 97%; new measures should have a CVI of at least .80 (Lynn, 1986). The graphical display indicates agreement means across goals and objectives. The goals for 'counting by ones from memory' and 'one-to-one correspondence' had the lowest agreement means; however, both exceeded the .80 criterion for expert agreement.

Discussion

Establishing a high level of content validity adds a measure of objectivity when validating a new measure. Furthermore, it is essential that new assessments be critically reviewed to determine whether the measure is relevant for the construct being measured.

Conclusion

Results of this study indicate that the assessment items for early mathematics skills meet the content validity criterion of .80 for interrater agreement as well as for the CVI, and are developmentally appropriate for evaluating math readiness. The high level of agreement across critical elements indicates that the Mathematics Area of the forthcoming 3rd edition of the AEPS demonstrates an overall high level of content validity. A subsequent pilot study should be conducted to evaluate the technical adequacy, usability, and relevance of this new measure.
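The CVI calculation described above can be sketched briefly. Under the common reading of Lynn's (1986) index, an item-level CVI is the proportion of experts who rate the item 3 or 4 on a four-point scale, and the scale-level CVI is the average of the item-level CVIs; the ratings below are hypothetical examples, not data from this study.

```python
# A minimal sketch of the CVI computation, assuming the common Lynn (1986)
# convention: an expert "endorses" an item by rating it 3 or 4 on the
# four-point Likert scale. The ratings are hypothetical, not the study's data.

ratings = [            # rows = assessment items, columns = six experts
    [4, 4, 3, 4, 2, 3],
    [3, 4, 4, 3, 4, 4],
    [4, 2, 4, 4, 2, 4],
]

def item_cvi(item_ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4."""
    return sum(1 for r in item_ratings if r >= 3) / len(item_ratings)

def scale_cvi(all_ratings):
    """Scale-level CVI: mean of the item-level CVIs across all items."""
    return sum(item_cvi(item) for item in all_ratings) / len(all_ratings)

item_cvis = [item_cvi(item) for item in ratings]
overall = scale_cvi(ratings)

# Flag items that fall below the .80 criterion (Lynn, 1986).
flagged = [i for i, v in enumerate(item_cvis) if v < 0.80]
```

With these hypothetical ratings, the third item (CVI = 4/6, about .67) would be flagged for revision, while the scale-level CVI (about .83) clears the .80 criterion.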

