Evaluating Capabilities: Building consensus around capability measures and indicators
Jason Pollard, Think:Learn:Do
Background
– Integrative Development Model (Kia-Keating, 2009)
– Young Foundation report
– Adoption by commissioners
– A challenge for service deliverers
– An opportunity for the sector
The challenges of evaluation
– Bringing capabilities together into a single tool
– Developing a set of (valid) indicators
– Scoring a capability measure
– Getting the right data out of the service
Meeting the Challenge
– Local evaluation network
– Feasibility study: asked whether operationally useful information can be collected to support operational decision making
Evaluation Network
– Recruited six youth providers through VAL
– Jointly developed a set of indicators
– Pilot evaluation and pre-test of the data collection tool
– Full evaluation to ‘road test’ the tools
Developing the Indicator Set
– Definitions
– Measures
– Indicators
Lewisham – Capability Measures
1. Being Creative
2. Communication
3. Relationships and Leadership
4. Resilience
5. Confidence and Agency
6. Managing Feelings
7. Planning and Problem Solving
8. Strengthening Citizenship
9. Making the most of London
Delphi – Agreeing Definitions
– Example: Being Creative
Delphi – Agreeing Measures
– How to measure Being Creative?
Delphi – Agreeing Indicators
– What would we expect to see?
The Indicator Set
– 27 indicators, equally split across the 9 outcomes (three per outcome)
– Describe what we would expect to see
– Conceptually linked to the outcomes
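As an illustration only, the sketch below shows one way an indicator set of this shape might be held and scored: three indicators per outcome, a 1–5 rating for each, and an unweighted average roll-up. The indicator names, scale, and aggregation rule are assumptions for the example, not the tool the network actually developed.

```python
# A minimal sketch, assuming: three indicators per capability outcome,
# each rated 1-5, and an unweighted mean roll-up per outcome.
# Indicator names and the aggregation rule are illustrative only.
from statistics import mean

# Hypothetical mapping of outcomes to their three indicators
INDICATOR_SET = {
    "Being Creative": [
        "tries new ways of doing things",
        "produces original work",
        "builds on others' ideas",
    ],
    "Resilience": [
        "keeps going after setbacks",
        "asks for help when stuck",
        "manages frustration",
    ],
    # ...remaining outcomes, three indicators each, to a total of 27
}

def outcome_score(ratings: dict, outcome: str) -> float:
    """Average the 1-5 ratings of the indicators linked to one outcome."""
    indicators = INDICATOR_SET[outcome]
    return mean(ratings[i] for i in indicators)

# Example: ratings recorded for one young person
ratings = {
    "tries new ways of doing things": 3,
    "produces original work": 4,
    "builds on others' ideas": 2,
}
print(outcome_score(ratings, "Being Creative"))  # 3.0
```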
Developing the Data Collection Tool
– A ‘composite’ tool that brings together several capability scales
– Valid and reliable (in pre-test)
– Soft data: questions relating to the development of the young person and their engagement with their project
– Hard data: age-specific questions relating directly to the capabilities
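To make the ‘composite’ idea concrete, here is one possible shape for a single completed record that combines soft engagement questions with hard, age-banded capability questions. The field names, age bands, and scales are assumptions for illustration, not the pre-tested tool itself.

```python
# A minimal sketch of one completed record from a composite tool.
# Field names, age bands, and scales are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class CompositeRecord:
    participant_id: str
    age: int
    provider: str                      # the delivering organisation
    # Soft data: development and engagement with the project
    sessions_attended: int = 0
    engagement_notes: str = ""
    # Hard data: capability ratings keyed by outcome, 1-5 scale
    capability_ratings: dict = field(default_factory=dict)

    def age_band(self) -> str:
        """Route the participant to an age-specific question set."""
        return "11-15" if self.age < 16 else "16-19"

record = CompositeRecord(
    participant_id="P001",
    age=14,
    provider="Example Youth Project",
    sessions_attended=6,
    capability_ratings={"Being Creative": 3, "Resilience": 4},
)
print(record.age_band())  # "11-15"
```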
Next Steps
– Pilot to scale-up: 10+ organisations to collect common data
– Explore thematic indicators (talent, financial capability, etc.)
– Establish evaluation communities of interest (co-ops, CICs)
End