Examining the Content Validity for a Preschool Mathematics Assessment. Carol Sparber, M.Ed., Kent State University; Pam Elwood, M.Ed., Kent State University; Kristie Pretti-Frontczak, Ph.D., B2K Solutions.

Similar presentations
Ed-D 420 Inclusion of Exceptional Learners. CAT time Learner-Centered - Learner-centered techniques focus on strategies and approaches to improve learning.

Studying at postgraduate level Student Services Get Ahead 2012 Angela Dierks.
Iowa Assessment Update School Administrators of Iowa November 2013 Catherine Welch Iowa Testing Programs.
NECTAC Webinar Series on Early Identification and Part C Eligibility Session 2: A Rigorous Definition of Developmental Delay March 10, 2010 Steven Rosenberg,
Greenville Technical College Assessing and Developing Student Computing Technology Skills September 19, 2012 Dr. Lenna Young, Mark Krawczyk, and Mary Locke.
Developing Rubrics Presented by Frank H. Osborne, Ph. D. © 2015 EMSE 3123 Math and Science in Education 1.
Factor Analysis There are two main types of factor analysis:
1 A Comparison of Traditional, Videoconference-based, and Web-based Learning Environments A Dissertation Proposal by Ming Mu Kuo.
Principles of High Quality Assessment
FOUNDATIONS OF NURSING RESEARCH Sixth Edition CHAPTER Copyright ©2012 by Pearson Education, Inc. All rights reserved. Foundations of Nursing Research,
Jan Weiss, PT, DHS, CLT-LANA
University of Hertfordshire, Division of Sports Science, Sports Studies, and Sports Therapy, College Lane, Hatfield, Hertfordshire, AL10 9AB.
Understanding Validity for Teachers
Test Validity S-005. Validity of measurement Reliability refers to consistency –Are we getting something stable over time? –Internally consistent? Validity.
Linguistic Demands of Preschool Cognitive Assessments Glenna Bieno, Megan Eparvier, Anne Kulinski Faculty Mentor: Mary Beth Tusing Method We employed three.
Proposal Writing.
Measurement and Data Quality
Dr Sally Boa and Dr Joan Murphy Professor Pam Enderby Funded by NHS Education Scotland Conducted by Talking Mats Limited © Talking Mats Ltd 2014.
DEVELOPING ALGEBRA-READY STUDENTS FOR MIDDLE SCHOOL: EXPLORING THE IMPACT OF EARLY ALGEBRA PRINCIPAL INVESTIGATORS:Maria L. Blanton, University of Massachusetts.
Topic 4: Formal assessment
Ch 6 Validity of Instrument
OUTCOMES ASSESSMENT VIA RUBRICS: A PILOT STUDY IN AN MIS COURSE AS A PRECOURSOR TO A MULTIPLE MEASURE APPROACH By W. R. Eddins, York College of Pennsylvania.
Office of Institutional Research, Planning and Assessment January 24, 2011 UNDERSTANDING THE DIAGNOSTIC GUIDE.
Background Scientifically based measures are key to monitoring and advancing students’ learning. The use of such assessments has been called for in national.
Standardization and Test Development Nisrin Alqatarneh MSc. Occupational therapy.
Classroom Assessments Checklists, Rating Scales, and Rubrics
Classroom Assessment A Practical Guide for Educators by Craig A
 Based on progressions points - learning statements that indicate what a student should be able to achieve at each level.  No set assessment, the way.
The inspection of local area responsibilities for disabled children and young people and those who have special educational needs Charlie Henry HMI National.
The Analysis of the quality of learning achievement of the students enrolled in Introduction to Programming with Visual Basic 2010 Present By Thitima Chuangchai.
Scientific Validation Of A Set Of Instruments Measuring Fidelity Of Implementation (FOI) Of Reform-based Science And Mathematics Instructional Materials.
CCSSO Criteria for High-Quality Assessments Technical Issues and Practical Application of Assessment Quality Criteria.
Issues in Selecting Assessments for Measuring Outcomes for Young Children Issues in Selecting Assessments for Measuring Outcomes for Young Children Dale.
Reliability vs. Validity.  Reliability  the consistency of your measurement, or the degree to which an instrument measures the same way each time it.
THE MEASUREMENT OF USER INFORMATION SATISFACTION (BLAKE IVES ET.AL) Presented by: IRA GERALDINA
Evaluating Impacts of MSP Grants Hilary Rhodes, PhD Ellen Bobronnikov February 22, 2010 Common Issues and Recommendations.
Chapter 16 Early Childhood Assessment. Assessment of Young Children Establish family priorities Familiar environments Assessments should Provide information.
An Analysis of Three States Alignment Between Language Arts and Math Standards and Alternate Assessments Claudia Flowers Diane Browder* Lynn Ahlgrim-Delzell.
Evaluating Impacts of MSP Grants Ellen Bobronnikov Hilary Rhodes January 11, 2010 Common Issues and Recommendations.
Copyright © 2008 Wolters Kluwer Health | Lippincott Williams & Wilkins Chapter 17 Assessing Measurement Quality in Quantitative Studies.
Assessing Responsiveness of Health Measurements Ian McDowell, INTA, Santiago, March 20, 2001.
Evaluation Requirements for MSP and Characteristics of Designs to Estimate Impacts with Confidence Ellen Bobronnikov February 16, 2011.
Chapter 14: Affective Assessment
Designing New Programs Design & Chronological Perspectives (Presentation of Berk & Rossi’s Thinking About Program Evaluation, Sage Press, 1990)
Classroom Assessment Chapters 4 and 5 ELED 4050 Summer 2007.
Rubrics, and Validity, and Reliability: Oh My! Pre Conference Session The Committee on Preparation and Professional Accountability AACTE Annual Meeting.
1 Chapter 22 Assessing Motor Behavior © Gallahue, D.L., & Ozmun, J.C.. Understanding Motor Development. McGraw-Hill.
“Excuse Me Sir, Here’s Your Change”
Chapter 15 Early Childhood Assessment
EVALUATING EPP-CREATED ASSESSMENTS
Evaluation Requirements for MSP and Characteristics of Designs to Estimate Impacts with Confidence Ellen Bobronnikov March 23, 2011.
Our Children, Our Community, Our Change
Evaluating Student-Teachers Using Student Outcomes
Partnership for Practice
Lecture 5 Validity and Reliability
Chapter 6: Checklists, Rating Scales & Rubrics
Oleh: Beni Setiawan, Wahyu Budi Sabtiawan
Test Design & Construction
Test Validity.
A Meta-Analysis of Video Modeling Interventions that Teach Employment Related Skills to Individuals with Autism Carol Sparber, M.Ed. Intervention Specialist.
META ANALYSIS OF VIDEO MODELING INTERVENTIONS
Using Data To Learn About Our Young Children: Mapping Early Development Instrument (EDI) Results in Miami-Dade County. Zafreen Jaffery, Ed.D.
Assessment Literacy: Test Purpose and Use
TESTING AND EVALUATION IN EDUCATION GA 3113 lecture 1
HIGHLIGHTS FOR READERS
Doug Glasshoff Greg Gaden Darin Kelberlau
Chapter 3: How Standardized Test….
Presentation transcript:

Examining the Content Validity for a Preschool Mathematics Assessment
Carol Sparber, M.Ed., Kent State University; Pam Elwood, M.Ed., Kent State University; Kristie Pretti-Frontczak, Ph.D., B2K Solutions

Introduction
Content validity studies are needed to establish valid and reliable measures for new assessments, and new measures must be appropriate for use with their targeted population. A content validity study provides information on the clarity and representativeness of each item as well as a preliminary analysis of factorial validity. The current study establishes the content validity of a preschool mathematics assessment for the forthcoming 3rd edition of the Assessment, Evaluation, and Programming System for Infants and Young Children (AEPS) to ensure that the measure is appropriate for this population (Rubio et al., 2003).

A review of the literature provided criteria for establishing the content validity questions. Assessment item terms were obtained directly from the AEPS literature (2002). The skills assessed for instructional planning decisions include those that are functional, teachable, and relevant (Hosp & Ardoin, 2008): functional skills increase independence, teachable skills increase performance, and relevant skills are those essential to instructional planning decisions.

Purpose
Given increasing demands for rigor in educational research, it is imperative that assessments be developed using established criteria from the literature (Odom et al., 2005). This study examined the content validity of the widely used AEPS to determine whether its assessment questions measure the domain they are intended to measure. Establishing content validity provides clarification of each individual element of an assessment, indicates which elements need modification, and offers information on the representativeness of the measure. This is critical to developing a valid and reliable instrument for assessing the mathematical development of infants and young children.

Methods
Initial content validation of the AEPS was established through the use of experts in the field via an online survey. Six content experts were selected and invited to review the mathematics items of the AEPS. Three mathematical strands and their related goals/objectives for skills and outcomes were rated in terms of their developmental sequence. Experts rated assessment items on five critical elements (i.e., functional, teachable, relevance, item criterion, and examples) using a four-point Likert scale. Interrater agreement (IRA) was calculated across experts to determine agreement on the representativeness and clarity of each goal (Davis, 1992). A content validity index (CVI) was calculated to determine expert agreement on the representativeness of the measures (Grant & Davis, 1997). Qualitative responses provided additional information for determining whether the measures are relevant and developmentally appropriate for the construct being measured. Feedback was summarized to provide clarification and direction to inform further development of practitioner instructional supports.

Results
Findings provided critical information for evaluating the new mathematics area of the forthcoming 3rd edition of the AEPS. Overall interrater agreement on survey items considered reliable was 98.6%. The CVI for the entire measure, obtained by averaging the item-level CVIs across all items, was 97%; new measures should have a CVI of at least .80 (Lynn, 1986). The graphical display indicates agreement means across goals and objectives. The goals 'counting by ones from memory' and 'one-to-one correspondence' had the lowest agreement means; however, both exceeded the .80 criterion for expert agreement.

Discussion and Conclusion
Establishing a high level of content validity adds a measure of objectivity in validating a new measure. Furthermore, it is essential that new assessments be critically reviewed to determine whether the measure is relevant for the construct being measured. Results of this study indicate that the assessment items for early mathematics skills meet the content validity criterion of .80 for both interrater agreement and the CVI, and that they are developmentally appropriate for evaluating math readiness. The high level of agreement across critical elements indicates that the Mathematics Area of the forthcoming 3rd edition of the AEPS demonstrates an overall high level of content validity. A subsequent pilot study should be conducted to evaluate the technical adequacy, usability, and relevance of this new measure.
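The CVI and IRA calculations described in the poster can be illustrated with a short sketch. The ratings below are hypothetical, not the study's data; the 3-or-4 relevance cutoff and the .80 threshold follow Lynn (1986), while the pairwise formulation of interrater agreement is an assumption, since the poster does not give the exact IRA formula.

```python
from itertools import combinations

# Hypothetical ratings: rows = assessment items, columns = six experts,
# on the four-point scale described above (1-2 = not relevant, 3-4 = relevant).
ratings = [
    [4, 3, 4, 4, 3, 4],  # item 1
    [3, 4, 4, 3, 4, 4],  # item 2
    [4, 4, 3, 2, 4, 3],  # item 3
]

def item_cvi(item_ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4."""
    return sum(r >= 3 for r in item_ratings) / len(item_ratings)

def scale_cvi(all_ratings):
    """Scale-level CVI: average of the item-level CVIs (as in the poster)."""
    cvis = [item_cvi(item) for item in all_ratings]
    return sum(cvis) / len(cvis)

def item_agreement(item_ratings):
    """One common IRA formulation (an assumption here): the proportion of
    expert pairs that agree on the dichotomized relevant/not-relevant call."""
    relevant = [r >= 3 for r in item_ratings]
    pairs = list(combinations(relevant, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

print(f"Scale CVI: {scale_cvi(ratings):.3f}")  # 0.944 for these ratings
print(f"Meets .80 criterion: {scale_cvi(ratings) >= 0.80}")
```

With these made-up ratings, items 1 and 2 reach an item CVI of 1.0 and item 3 (one rating of 2) reaches 5/6, so the scale CVI of about .94 clears the .80 criterion, mirroring the logic of the study's reported 97%.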