Analyzing COSF Data in Support of Validity Charles R. Greenwood July 25, 2007.


Validity of Test Indicators
Validity refers to the degree to which evidence and theory support the interpretations of test (COSF) scores entailed by the proposed uses of those tests (AERA, APA, & NCME, 1999).

Validity of an Accountability System
An accountability system can be said to have validity when evidence is judged to be strong enough to support inferences that:
– The components of the system are aligned to the purposes, and are working in harmony to help the system accomplish those purposes.
– The system is accomplishing what was intended (and did not accomplish what was not intended). (Marion et al., 2002, p. 105)

COSF Validity Questions and Evidence
The Anchor Indicators Used in the COSF Process Are Mapped to the OSEP Outcomes via a Crosswalk
– Have the Anchor Indicators been crosswalked to the three OSEP outcomes?
– Do the Anchor Indicators have evidence of validity and reliability?
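One concrete way to document this evidence is to audit the crosswalk itself. The sketch below is a minimal illustration in Python, assuming a hypothetical file crosswalk.csv with columns anchor_item and osep_outcome (the outcome labels used here are shorthand, not official OSEP wording); it flags anchor items that are not mapped to one of the three outcomes and outcomes that have no items mapped to them.

```python
import pandas as pd

# Hypothetical crosswalk: one row per anchor-assessment item, with the
# OSEP outcome it has been mapped to. File and column names are assumptions.
crosswalk = pd.read_csv("crosswalk.csv")  # columns: anchor_item, osep_outcome

# Shorthand labels for the three OSEP child outcomes (illustrative only).
OSEP_OUTCOMES = {
    "positive social-emotional skills",
    "acquisition and use of knowledge and skills",
    "use of appropriate behaviors to meet needs",
}

# Anchor items whose mapping is missing or does not name one of the three outcomes.
unmapped = crosswalk[~crosswalk["osep_outcome"].str.lower().isin(OSEP_OUTCOMES)]
print("Anchor items not crosswalked to an OSEP outcome:")
print(unmapped[["anchor_item", "osep_outcome"]])

# OSEP outcomes with no anchor items mapped to them (a coverage gap).
covered = set(crosswalk["osep_outcome"].str.lower())
print("Outcomes with no anchor items:", OSEP_OUTCOMES - covered)
```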

COSF Validity Questions and Evidence
The COSF Process Involves Multiple Participants and Sources of Evidence
– Are multiple adults participating in the process?
– Are parents participating in the process?
– Are multiple sources of evidence being used?
COSF Ratings Should Display Differences in Children's Performance
– Is the distribution of COSF ratings approximately normal?
– Are fewer children scored 1 and 7, and more children scored 3, 4, and 5?
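A simple first check on the rating distributions is to tabulate them for each outcome at entry. The sketch below is a minimal example, assuming a hypothetical file cosf_ratings.csv with one row per child and columns entry_social, entry_knowledge, and entry_needs holding 1-7 COSF ratings; it shows whether the extreme ratings (1 and 7) are used less often than the middle of the scale.

```python
import pandas as pd

# Hypothetical data: one row per child, entry COSF ratings (1-7) for each outcome.
# File and column names are assumptions for illustration.
ratings = pd.read_csv("cosf_ratings.csv")
outcome_cols = ["entry_social", "entry_knowledge", "entry_needs"]

for col in outcome_cols:
    # Frequency and percentage of each rating value 1 through 7.
    counts = ratings[col].value_counts().reindex(range(1, 8), fill_value=0)
    percent = (counts / counts.sum() * 100).round(1)
    print(f"\n{col}")
    print(pd.DataFrame({"n": counts, "percent": percent}))
```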

COSF Validity Questions and Evidence
OSEP Outcomes Are Defined Functionally and Therefore Should Be Highly Intercorrelated
– Are the three outcomes highly intercorrelated?
– Does each outcome contribute unique information?
COSF Ratings Should Be at Least Moderately (but Not Strongly) Correlated with the Anchor (Primary) Assessment Measure
– What is the concurrent validity correlation with the primary assessment measure?
– Is there a linear, increasing relationship between ratings and mean test scores?
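Both sets of questions on this slide reduce to correlations. The sketch below, under the same hypothetical cosf_ratings.csv layout (with an added anchor_score column for the anchor assessment's standard score), computes Spearman intercorrelations among the three outcomes, the concurrent correlation of each rating with the anchor score, and the mean anchor score at each rating level to check whether the relationship increases roughly linearly.

```python
import pandas as pd

# Hypothetical data: COSF ratings for the three outcomes plus the standard
# score from the anchor (primary) assessment. Column names are assumptions.
df = pd.read_csv("cosf_ratings.csv")
outcome_cols = ["entry_social", "entry_knowledge", "entry_needs"]

# Intercorrelations among the three outcomes (Spearman, since ratings are ordinal).
print(df[outcome_cols].corr(method="spearman").round(2))

# Concurrent validity: correlation of each rating with the anchor test score.
for col in outcome_cols:
    rho = df[col].corr(df["anchor_score"], method="spearman")
    print(f"{col} vs. anchor_score: rho = {rho:.2f}")

# Is the relationship increasing? Mean anchor score at each rating level (1-7).
print(df.groupby("entry_knowledge")["anchor_score"].agg(["count", "mean"]).round(1))
```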

COSF Validity Questions and Evidence
COSF Ratings Should Not Be Affected by Conditions in the State's COSF Process
– Are there differences due to the use of different anchor tests?
– Are there differences by region or program?
– Are there differences due to the quality or intensity of training, or to fidelity of the COSF process?
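Questions of this kind can be checked with a group comparison on the ratings. The sketch below is one way to do it, assuming hypothetical columns anchor_test and region in the same cosf_ratings.csv file; a Kruskal-Wallis test is used because the ratings are ordinal, and the group means are printed alongside the test so the size of any difference is visible, not just its significance.

```python
import pandas as pd
from scipy import stats

# Hypothetical data: each child's rating plus the anchor test and region used.
df = pd.read_csv("cosf_ratings.csv")

def compare_groups(df, rating_col, group_col):
    """Kruskal-Wallis test for rating differences across groups (anchor test, region, program)."""
    groups = [g[rating_col].dropna() for _, g in df.groupby(group_col)]
    h, p = stats.kruskal(*groups)
    print(f"\n{rating_col} by {group_col}: H = {h:.2f}, p = {p:.4f}")
    print(df.groupby(group_col)[rating_col].agg(["count", "mean"]).round(2))

compare_groups(df, "entry_knowledge", "anchor_test")
compare_groups(df, "entry_knowledge", "region")
```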

COSF Validity Questions and Evidence
Theoretically, We Might Expect COSF Ratings to Be Influenced by Severity, Gender, Home Language Differences, etc.
– Are there differences in COSF ratings due to type of disability?
– Are boys rated lower than girls on the Social Outcome? (Boys tend to have more behavior problems than girls.)
– Are English language learners rated lower on the Knowledge and Skills Outcome?
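Each of these questions is a two-group comparison on an ordinal rating. The sketch below, again with hypothetical column names (gender coded "M"/"F" and ell coded 0/1 in cosf_ratings.csv), uses Mann-Whitney U tests for the boys-versus-girls comparison on the social outcome and the ELL comparison on the knowledge and skills outcome.

```python
import pandas as pd
from scipy import stats

# Hypothetical data: gender ("M"/"F"), ELL status (0/1), and entry COSF ratings.
df = pd.read_csv("cosf_ratings.csv")

# Boys vs. girls on the social-emotional outcome (Mann-Whitney U for ordinal ratings).
boys = df.loc[df["gender"] == "M", "entry_social"].dropna()
girls = df.loc[df["gender"] == "F", "entry_social"].dropna()
u, p = stats.mannwhitneyu(boys, girls, alternative="two-sided")
print(f"Social: boys mean {boys.mean():.2f} vs. girls mean {girls.mean():.2f}, "
      f"U = {u:.0f}, p = {p:.4f}")

# English language learners vs. others on the knowledge and skills outcome.
ell = df.loc[df["ell"] == 1, "entry_knowledge"].dropna()
non_ell = df.loc[df["ell"] == 0, "entry_knowledge"].dropna()
u, p = stats.mannwhitneyu(ell, non_ell, alternative="two-sided")
print(f"Knowledge: ELL mean {ell.mean():.2f} vs. non-ELL mean {non_ell.mean():.2f}, "
      f"U = {u:.0f}, p = {p:.4f}")
```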

COSF Validity Questions and Evidence
Theoretically, We Expect COSF Ratings to Be Sensitive to Growth and Early Intervention Over Time
– Are COSF exit rating distributions shifted toward the upper end of the scale, indicating that children score higher at exit than at entry?
– Are there gains in COSF ratings when comparing entry to exit?
– Are these gains statistically significant, and what are the effect sizes?
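The entry-to-exit questions call for a paired comparison with an effect size. The sketch below assumes matched entry and exit ratings for the same children in the hypothetical cosf_ratings.csv file (entry_knowledge and exit_knowledge); it runs a paired t-test and reports Cohen's d computed as the mean gain divided by the standard deviation of the gains.

```python
import pandas as pd
from scipy import stats

# Hypothetical data: matched entry and exit ratings for the same children.
df = pd.read_csv("cosf_ratings.csv").dropna(subset=["entry_knowledge", "exit_knowledge"])

entry = df["entry_knowledge"]
exit_rating = df["exit_knowledge"]
gain = exit_rating - entry

# Paired t-test on entry vs. exit ratings.
t, p = stats.ttest_rel(exit_rating, entry)

# Cohen's d for paired data: mean gain divided by the SD of the gains.
d = gain.mean() / gain.std(ddof=1)

print(f"Mean entry = {entry.mean():.2f}, mean exit = {exit_rating.mean():.2f}, "
      f"mean gain = {gain.mean():.2f}")
print(f"Paired t({len(df) - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```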

COSF Validity Questions and Evidence
Theoretically, We Expect Gains in COSF Exit Ratings to Be Explained by Early Intervention Factors
– Are gains in COSF ratings explained by length of service?
– Are gains in COSF ratings explained by intervention/program quality features (e.g., models, evidence-based practices)?
– Are gains in COSF ratings explained by family outcomes?
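One way to examine whether intervention factors account for the gains is a regression of gain scores on those factors. The sketch below is a minimal illustration, assuming hypothetical columns months_of_service and program_quality in cosf_ratings.csv; it fits an ordinary least squares model using the statsmodels formula interface, and additional predictors (e.g., family outcome measures) could be added to the formula in the same way.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: entry/exit ratings plus candidate explanatory variables.
df = pd.read_csv("cosf_ratings.csv")
df["gain_knowledge"] = df["exit_knowledge"] - df["entry_knowledge"]

# OLS regression: do months of service and a program-quality rating
# account for variation in COSF gains?
model = smf.ols("gain_knowledge ~ months_of_service + program_quality", data=df).fit()
print(model.summary())
```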