Patterns in Child Outcomes Summary Data: Cornelia Taylor, Lauren Barton, Donna Spiker September 19-21, 2011 Measuring and Improving Child and Family Outcomes.


Patterns in Child Outcomes Summary Data: Analytic Approaches and Early Findings from the ENHANCE Project. Cornelia Taylor, Lauren Barton, Donna Spiker. September 19-21, 2011, Measuring and Improving Child and Family Outcomes Conference, New Orleans, LA.

Today's session
– Provide a brief update about ENHANCE
– Identify the purpose and approach of the state data study
– Describe some preliminary findings from initial states involved in the state data study
– Explain how other states could examine their own data in the same way as that presented
– Discuss any emerging implications for validity of the COS and for interpreting individual state data

Origins of ENHANCE
– Need for outcomes data, but challenging to collect
– COS process implemented in more than 40 states, with little systematic validation for use in accountability
– Hence the project: investigate and learn

Early Evidence
Belief in the potential for the COS process to be valid, based on:
– Existing literature: team-based decision-making can be reliable and valid
– Existing literature: teams are effective in identifying individual children's functioning so that they can plan and deliver appropriate services
– Early data from states: pilot sites with small n's showing similarity in distributions and sensible patterns for subgroups
– Anecdotal data from trainers: participants reach decisions fairly easily and consistently

ENHANCE
Project launched by the Early Childhood Outcomes Center (ECO) and SRI International. Funded by the U.S. Department of Education, Institute of Education Sciences; began July 1, 2009. A series of studies designed to find out:
– the conditions under which the Child Outcomes Summary (COS) process produces meaningful and useful data for accountability and program improvement
– the positive and/or negative impact of the COS process on programs and staff
– what revisions to the form and/or the process are needed

Four ENHANCE Studies
1) Comparison with Child Assessments
2) Team Decision-Making
3) Provider Survey
4) State Data Study

Studies 1-3: 34 Project Data Collection Sites
17 Part C (Birth to 3): Illinois, Maine, Minnesota, New Mexico, Texas, North Carolina
17 Part B Preschool (3-5): Illinois, Maine, Minnesota, New Mexico, Texas, South Carolina

Comparison with Child Assessments Study
Goals: Compare COS ratings to BDI-2 and Vineland-II scores at program entry and program exit; compare conclusions from the COS and the assessments.
Sample: 108 children, birth to 3; children, 3-5 years.
Study status: Recruiting families; about half of the sample enrolled; seeing expected variability in the sample (ages, disability types) and in initial COS ratings/assessment scores.

Team Decision-Making Study
Goals: Learn more about the implementation of the COS process, including how the team reaches a decision about a rating and what is discussed. Do the COS ratings assigned match the developmental level of the behaviors presented in the meeting? What is the team's understanding of the outcomes and rating criteria?
Sample: 180 children each from Part C and Part B 619; half entry and half exit meetings.
Study status: Starting data collection now in about half the sites; 19 videos received; expect to start coding videos Summer 2012.

Provider Survey
Goals: What processes are being used to determine COS ratings? What is the impact of the COS process on practice? What have providers learned about the COS? What else would be helpful?
Sample: All providers in the program who participate in the COS process are invited to participate.
Study status: Developing survey content; survey expected Spring 2012.

State Data Study
Goals: Analyze characteristics of COS data and relationships to other variables; look for consistency in patterns across states.
Examples of questions: Are patterns in COS data across states consistent with those predicted for high quality data? How are COS ratings related to hypothesized variables (e.g., disability type) and not to other variables (e.g., gender)? How are team variables related to COS ratings?
Sample: All valid COS data within the state for a reporting year; states conducting all analyses; additional states sharing select analyses.

State Data Study: Status
– Refined procedures for gathering data tables by collecting data from a preliminary group of 6 states: most states used the procedures and generated data tables, and a few provided formatted data files for SRI to analyze
– Beginning to analyze data from that preliminary group
– Soon will request data from other states in the state data study, and permission to use relevant data additional states have already analyzed and shared

State Data Study: Preliminary Data from 5 States 3 Part C (Birth to 3) 3 Part B Preschool (3-5)

How would these data analyses be conducted? States would send data to SRI annually:
– de-identified data files, OR
– aggregate output or reports from a set of requested analyses
Examples of analyses include:
– the distributions of entry and exit COSF scores
– relationships between outcomes
– relationships between outcomes across time
– relationships of outcome scores to other factors such as disability and gender
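The first of the example analyses, a distribution of entry COSF scores, can be sketched in plain Python. The ratings below are made-up illustration data, not findings from the study:

```python
from collections import Counter

# Hypothetical entry COS ratings on the 1-7 scale (illustration only).
entry_ratings = [3, 4, 4, 5, 2, 6, 4, 3, 5, 1, 4, 5, 3, 7, 4]

# Distribution of entry ratings: percentage of children at each scale point.
counts = Counter(entry_ratings)
n = len(entry_ratings)
distribution = {r: round(100 * counts.get(r, 0) / n, 1) for r in range(1, 8)}
print(distribution)

# Share of ratings at the extremes of the scale (1 or 7).
extreme_pct = 100 * (counts.get(1, 0) + counts.get(7, 0)) / n
print(f"{extreme_pct:.1f}% of entries at rating 1 or 7")
```

The same tallies, broken out by outcome and program year, are what the requested data tables would contain.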

What data would I need to submit? Data collected at entry and exit from Part C and Part B 619 programs:
– COSF ratings
– Additional child descriptors (e.g., race, gender, primary disability)
– Variables that describe the setting or composition of the services

How will I submit data?
De-identified data files: templates developed in MS Excel, submitted through a secure server.
Analyzed data: table shells developed in MS Word and MS Excel, submitted through the secure server or e-mailed.

Who do I contact for more information? Cornelia Taylor (650)

Questions? Comments? Reactions?

Entry rating distributions

Entry Rating Expectations
What should entry ratings look like?
– Should they differ across outcomes?
– Where do most of the ratings fall?
– How much should the extremes of the scale (1 or 7) be used?

Entry Data Analysis
The following data are from 3 Part C programs and 2 Part B programs. All data are from FFY 2008-09. The data are entry cohorts, i.e., all children who entered during the FFY.

Part C entry ratings across states; Outcome A

Part C entry ratings across states; Outcome B

Part C entry ratings across states; Outcome C

Outcome A – Average Entry Ratings

Outcome B – Average Entry Ratings

Outcome C – Average Entry Ratings

Things to notice
The difference in distributions between Part C and Part B is largest for Outcome C: children in Part B enter with higher ratings.

Part C average ratings across outcomes

Things to Notice
Variations in patterns across outcomes

Conclusions Across Part C and Part B
More than half of all children enter with a COS rating of 3, 4, or 5 across outcomes. An average of 12% of children enter with the very lowest rating (1) or the very highest (7) across outcomes. The typical entry distribution has most children towards the middle of the scale.

Pattern check: the distribution of entry scores in your state seems to be heavily weighted towards one end or the other of the distribution.
No-action interpretation: You may be serving a population that is higher or lower functioning than other states'.
Action interpretation: Your providers may be systematically misunderstanding the definitions of the COS rating points.

Additional Entry Analysis
– Correlations between entry ratings
– Cross tabs of entry ratings by: program, primary disability, race/ethnicity
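One of the cross tabs listed above, entry ratings by primary disability, can be sketched with a nested counter; a spreadsheet pivot table or pandas' crosstab would produce the same table. The disability categories and ratings here are hypothetical illustration data:

```python
from collections import defaultdict

# Hypothetical (primary disability, entry rating) pairs (illustration only).
records = [
    ("speech/language", 4), ("speech/language", 5), ("speech/language", 4),
    ("developmental delay", 3), ("developmental delay", 2),
    ("autism", 2), ("autism", 3),
]

# Cross tab: rows = disability category, columns = rating, cells = counts.
crosstab = defaultdict(lambda: defaultdict(int))
for disability, rating in records:
    crosstab[disability][rating] += 1

for disability in sorted(crosstab):
    row = {r: crosstab[disability][r] for r in range(1, 8) if crosstab[disability][r]}
    print(disability, row)
```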

Exit distributions

Part C exit ratings across states; Outcome A

Part C exit ratings across states; Outcome B

Part C exit ratings across states; Outcome C

Part B exit ratings across states; Outcome A

Part B exit ratings across states; Outcome B

Part B exit ratings across states; Outcome C

Outcome A – Average Exit Ratings

Outcome B – Average Exit Ratings

Outcome C – Average Exit Ratings

Part C average exit scores across outcomes (state n = 3)

Part B average exit scores across outcomes (state n=3)

Things to Notice
– Variation in ratings across outcomes
– The exit distribution is shifted toward higher ratings than the entry distribution
– For Part B, the average percent of children with a rating of 7 is much higher for Outcome C than for the other two outcomes

Pattern check: the distribution of exit scores in your state is not skewed towards the higher end of the rating scale.
No-action interpretation: You may be serving a lower functioning group than other states; if this interpretation is true, it should also be apparent in your entry distribution.
Action interpretation: The children in your programs may not be making expected gains.

Entry-Exit Paired Distribution
Choosing a metric for looking at paired distributions:
– Progress categories
– Side-by-side entry-exit comparisons
Both of the above can be completed using the COS Calculator 2.0.

How many points did the child's rating change between entry and exit? What would you expect to see? The change score is the exit rating minus the entry rating, tabulated as: Exit Rating | Entry Rating | Exit Rating - Entry Rating.
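The change score described above can be computed directly from paired ratings. The (entry, exit) pairs below are hypothetical illustration data:

```python
from collections import Counter

# Hypothetical paired (entry rating, exit rating) COS data (illustration only).
pairs = [(3, 5), (4, 4), (2, 5), (5, 6), (4, 3), (3, 6), (1, 4), (4, 5)]

# Change score for each child: exit rating minus entry rating.
changes = [exit_r - entry_r for entry_r, exit_r in pairs]
change_dist = Counter(changes)
print(dict(sorted(change_dist.items())))

# Red-flag check: how many children's ratings decreased?
decreased = sum(1 for c in changes if c < 0)
print(f"{decreased} of {len(pairs)} children decreased")
```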

Part C exit score – entry score;

Part B exit score – entry score; 08-09

Things to Notice
– Most children's ratings increase 1, 2, or 3 points, or they stay the same
– Very few children have ratings that decrease
– However, more children have ratings that decrease in Part C than in Part B

Pattern check: a large percentage of children in your state make large increases in their ratings.
No-action interpretation: Your programs are very effective and children make large gains (verify!).
Action interpretation: Providers are not using the scale correctly and may be inflating exit ratings and/or deflating entry ratings.

Additional entry-exit analysis
– Correlations between entry and exit ratings
– Progress categories by other variables (e.g., disability type, primary language)

Summary of pattern checks
– The distribution of entry scores in your state seems to be heavily weighted towards one end or the other of the distribution
– The distribution of exit scores in your state is not skewed towards the higher end of the rating scale
– A large percentage of children in your state make large increases in their ratings
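The three pattern checks could be scripted as rough screening heuristics. The thresholds used here (40% of entries at either end of the scale, half of exits at 5 or above, 60% of children gaining 3 or more points) are illustrative assumptions, not criteria from the presentation:

```python
def pattern_checks(entry, exit_):
    """Flag the three red-flag patterns from the summary of pattern checks.
    Thresholds are illustrative assumptions, not official criteria."""
    n = len(entry)
    flags = []
    # 1. Entry distribution heavily weighted toward one end (1-2 or 6-7).
    low = sum(1 for r in entry if r <= 2) / n
    high = sum(1 for r in entry if r >= 6) / n
    if low > 0.4 or high > 0.4:
        flags.append("entry distribution weighted toward one end")
    # 2. Exit distribution not skewed toward the higher end of the scale.
    if sum(1 for r in exit_ if r >= 5) / n < 0.5:
        flags.append("exit distribution not skewed high")
    # 3. A large percentage of children make very large rating increases.
    if sum(1 for e, x in zip(entry, exit_) if x - e >= 3) / n > 0.6:
        flags.append("many children with very large gains")
    return flags

# Hypothetical paired entry/exit ratings for one program (illustration only).
entry = [3, 4, 4, 5, 2, 3, 4]
exit_ = [5, 6, 5, 6, 4, 5, 6]
print(pattern_checks(entry, exit_))  # no flags for this example
```

A flagged pattern is a prompt to investigate, not a verdict: each check has both a no-action and an action interpretation, as the preceding slides describe.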

Find out more
– ENHANCE Website
– ECO Center Website
– Contact ENHANCE staff