Are your C4 data reflective of the families you serve? Joy Markowitz, Director Jean Dauphinee, TA Specialist Measuring Child and Family Outcomes Conference, August 28, 2008

Purpose of Collecting C4 Data
To figure out how to serve our families better, specifically related to Indicator C4: the percent of families participating in Part C who report that early intervention services have helped the family:
– A. Know their rights;
– B. Effectively communicate their children's needs; and
– C. Help their children develop and learn.

Why do we care who responds to our surveys?
We don't want to invest time and money in improvement activities if the data do not reflect the opinions and experiences of the families and children we serve. This is an issue whether you collect data from all families (a census) or from a sample of families, because not all families who receive a survey respond.

Population
The population may be defined by the state and could be, for example:
– all Part C families,
– all Part C families exiting in a given year, or
– all Part C families in the program for more than one year.

Other Terms Used
– Population
– Target group
– Target population
– Respondents
– Response pool
– Respondent pool
– Respondent group

Nonresponse Bias
"Representativeness" is the word we have been using, but it is not a statistical term; the correct term is nonresponse bias. Meaning: are the families who did not respond different from the families who did?

What Is a Sample?
A sample is a subset of the population that you define. However, the sample must be drawn using an approved sampling plan. OSEP collaborates with the DAC to review and approve submitted sampling plans.

Questions about Response Rates
What response rate is high enough? How do you determine whether your respondents reflect the population you defined? Whether you use a census or a sample, your response rate is unlikely to be as high as you want it to be; therefore, you must address nonresponse bias.

Example 1 – Do these data reflect the population?
Families served (n=100):    Survey respondents (n=70):
  Black   30                  Black   24
  White   50                  White   44
  AI/AN   20                  AI/AN    2
  Total  100                  Total   70
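One way to answer the slide's question is to compare each group's response rate and its share of respondents against its share of the population. The counts below come from Example 1; the comparison itself is a minimal sketch, not a method the slides prescribe.

```python
# Checking Example 1 for nonresponse bias: compare subgroup response rates
# and respondent shares against the composition of the families served.
# Counts are taken from the slide; the analysis approach is an illustration.

served = {"Black": 30, "White": 50, "AI/AN": 20}
responded = {"Black": 24, "White": 44, "AI/AN": 2}

n_served = sum(served.values())        # 100 families served
n_responded = sum(responded.values())  # 70 survey respondents

for group in served:
    rate = responded[group] / served[group]      # subgroup response rate
    pop_share = served[group] / n_served         # share of families served
    resp_share = responded[group] / n_responded  # share of respondents
    print(f"{group:6s} response rate {rate:5.1%}  "
          f"population {pop_share:5.1%} vs respondents {resp_share:5.1%}")
```

Run on these counts, the gap jumps out: AI/AN families are 20% of those served but under 3% of respondents, because only 2 of 20 returned the survey — exactly the situation where conclusions drawn from respondents may not reflect the families served.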

Example 2 – Do these data reflect the population?
Families exiting (n=100):    Survey respondents (n=70):
  English          75          English          65
  Other language   25          Other language    5
  Total           100          Total            70

Example 3 – Do these data reflect the population?
Families served by agencies for at least one year (n=400):    Survey respondents (n=199):
  Agency 1  100                                                 Agency 1   49
  Agency 2  100                                                 Agency 2   63
  Agency 3  100                                                 Agency 3   66
  Agency 4  100                                                 Agency 4   21
  Total     400                                                 Total     199
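The question "do these data reflect the population?" can also be formalized statistically. The slides do not prescribe a test, so the chi-square goodness-of-fit test below is an assumption — one common way to ask whether the agencies' respondent counts are consistent with a uniform response rate. Counts come from Example 3.

```python
# Sketch: chi-square goodness-of-fit test for Example 3. The slides name no
# statistical test; this is one standard way to check whether respondent
# counts match what a uniform response rate across agencies would produce.

served = {"Agency 1": 100, "Agency 2": 100, "Agency 3": 100, "Agency 4": 100}
responded = {"Agency 1": 49, "Agency 2": 63, "Agency 3": 66, "Agency 4": 21}

n_served = sum(served.values())        # 400
n_responded = sum(responded.values())  # 199

# Expected respondent counts if every agency responded at the same rate.
expected = {a: n_responded * served[a] / n_served for a in served}

chi2 = sum((responded[a] - expected[a]) ** 2 / expected[a] for a in served)
print(f"chi-square = {chi2:.1f} on {len(served) - 1} degrees of freedom")
# The 0.05 critical value for 3 degrees of freedom is about 7.81; a statistic
# far above it indicates agency response rates differ more than chance allows.
```

Here each agency's expected count is 199/4 = 49.75, and Agency 4's 21 respondents alone contribute most of the chi-square statistic, flagging it as the source of the imbalance.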

State Data Collection Methods/Strategies
Methods:
– Census (n=41 states)
– Sample (n=15 states), including oversampling populations known to be hard to reach
Analysis strategies:
– Frequencies and percentages
– Weighting
– Sampling among respondents
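The "weighting" strategy listed above can be sketched with Example 1's counts. The slides give no formula, so the cell-weighting rule used here (each group's weight = families served / respondents in that group) is an assumption, though it is the most common simple adjustment for nonresponse.

```python
# Sketch of simple nonresponse (cell) weighting, assuming the common rule
# weight = group's served count / group's respondent count. The slides name
# "weighting" as a strategy but give no formula. Counts are from Example 1.

served = {"Black": 30, "White": 50, "AI/AN": 20}
responded = {"Black": 24, "White": 44, "AI/AN": 2}

weights = {g: served[g] / responded[g] for g in served}

# Each respondent now "stands in" for weight-many served families, so the
# weighted respondent counts reproduce the population's composition exactly.
for g, w in weights.items():
    print(f"{g:6s} weight {w:6.3f}  weighted count {responded[g] * w:5.1f}")
```

Note the caveat this makes visible: the two AI/AN respondents each receive a weight of 10, so their answers are amplified heavily. Weighting corrects composition on the variables you weight by, but it cannot substitute for addressing severe nonresponse at collection time.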

Summary of C4 Findings (based on analysis of 54 state APRs)
67% (n=36 states) reported nonresponse bias; 26% (n=14 states) did not. Among the states that reported nonresponse bias, most reported it by a single variable (e.g., race/ethnicity, region, gender, or child's age).

Summary of C4 Findings (cont.)
Most states measured nonresponse bias by:
– race/ethnicity, or
– race/ethnicity and other factors, such as:
  a. length of time in program,
  b. region of the state,
  c. child's age, or
  d. gender.

From the SPP/APR Instruction Sheet
States are allowed to use sampling when so indicated on the Part C Indicator Measurement Table. When sampling is used, a description of the sampling methodology outlining how the design will yield valid and reliable estimates must be submitted to OSEP. The description must describe: (a) the sampling procedures followed (e.g., random/stratified, forms validation); and (b) the similarity or differences of the sample to the population of children with disabilities in the early intervention program (e.g., how all aspects of the population, such as disability category, race, age, and gender, will be represented).

From the SPP/APR Instruction Sheet (cont.)
The description must also include how the Lead Agency addresses any problems with: (1) response rates; (2) missing data; and (3) selection bias. Samples from EIS programs must be representative of each of the EIS programs sampled, considering such variables as eligibility definition (diagnosed condition or developmental delay), age, race, and gender.

From the SPP/APR Instruction Sheet (cont.)
In reporting on the performance of small EIS programs, the Lead Agency shall not report to the public or the Secretary any information on performance that would result in the disclosure of personally identifiable information about individual children, or where the available data are insufficient to yield statistically reliable information (i.e., the numbers are too small).
Source: Part C State Performance Plan (SPP) and Annual Performance Report (APR) Instruction Sheet dated 10/19/2007.

Examples from States
How have states tackled these issues? State representatives:
– Wendy Whipple, NV
– Sue Campbell and Rosanne Griff-Cabelli, DE
– Alice Ridgeway, CT

Questions?
What questions are you getting from OSEP that you need help to answer? What guidance do you need from OSEP? What are realistic expectations of states for reporting nonresponse bias?