1 Evaluating Technology Programs
Philip Shapira, School of Public Policy, Georgia Institute of Technology, Atlanta, USA
Email: ps25@prism.gatech.edu
Evvy Award: The US MEP Program - A System Model?
Workshop on "Research Assessment: What's Next," Arlie House, May 18, 2001

2 Overview

3 Evaluation case: The US Manufacturing Extension Partnership (MEP)
- MEP program aims:
  - "Improve the technological capability, productivity, and competitiveness of small manufacturers."
  - "Transform a larger percentage of the Nation's small manufacturers into high performance enterprises."
- Policy structure: federal-state collaboration
- Management: decentralized partnership - 70 MEP centers
- Services: 25,000 firms assisted per year
  - Assessments 18%; projects 60%; training 22%
- Revenues, 1999-2000: ~$280m (shares recomputed in the sketch below)
  - Federal $98m (35%); state $101m (35%); private $81m (29%)
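The funding split above is simple share arithmetic. A minimal Python sketch, assuming only the rounded dollar figures quoted on the slide (the dictionary and variable names are illustrative, not from the source):

# Illustrative sketch: recompute MEP revenue shares (1999-2000) from the
# rounded dollar figures quoted on the slide. Names are hypothetical.
revenues_millions = {
    "federal": 98,   # NIST/federal contribution, $m
    "state": 101,    # state contributions, $m
    "private": 81,   # private/fee revenues, $m
}

total = sum(revenues_millions.values())  # 280, matching the ~$280m on the slide

for source, amount in revenues_millions.items():
    print(f"{source:8s} ${amount}m  {amount / total:.0%} of total")
# Shares round to about 35% federal, 36% state, and 29% private, close to the
# percentages reported on the slide.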

4 MEP Program Model
[Diagram: program logic model linking Centers, Projects, and Companies to Intermediate Actions, Business Outcomes, and Development Outcomes]

5 MEP Evaluation System
[Diagram: actors in the MEP evaluation system - NIST, Federal Oversight (e.g. GAO), State Evaluations, Independent Researchers & Consultants, 3rd Party Sponsors, and the MEP Program itself]
NIST evaluation activities:
- Telephone survey of customers of projects, based on center activity data reports
- Panel reviews of centers and staff oversight
- National Advisory Board review of program
- Special studies

6 Complex Management Context for Evaluation
[Diagram: the program logic model (Centers, Projects, Companies, Intermediate Actions, Business Outcomes, Development Outcomes) surrounded by management and evaluation elements - Center Reviews, Needs Assessments, Activity Reporting, Customer Surveys, Center Benchmarking, Special Evaluation Studies, Center Boards, Center Plans, NIST Plans, GPRA Goals, National Advisory Board]

7 30 MEP Evaluation Studies, 1994-99: Multiple Methods

8 30 MEP Evaluation Studies, 1994-99: Varied Performers
MEP revenues, 1994-99: ~$1.0B - $1.2B
Evaluation expenditures: ~$3m - $6m (rough estimate), i.e. approximately 0.25% - 0.50% of program revenues
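The percentage range above is back-of-envelope arithmetic. A minimal Python sketch, assuming only the approximate ranges quoted on the slide:

# Illustrative sketch: evaluation spending as a share of MEP revenues, 1994-99,
# using the approximate ranges quoted on the slide (not audited figures).
revenues_millions = (1000.0, 1200.0)   # ~$1.0B - $1.2B total program revenues
evaluation_millions = (3.0, 6.0)       # ~$3m - $6m spent on evaluation studies

shares = [spend / revenue
          for spend in evaluation_millions
          for revenue in revenues_millions]
print(f"evaluation intensity: {min(shares):.2%} - {max(shares):.2%}")
# Prints roughly 0.25% - 0.60%, bracketing the slide's ~0.25%-0.50% estimate.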

9 Utility of Evaluative Methods (with schematic ranking, based on GaMEP experience, 1994-2000)
Note: Ranking (schematic): 5 = extremely important; 3 = somewhat important; 1 = not important. Weights are schematic, based on experience.

10 30 MEP Evaluation Studies, 1994-99: Summary of Key Findings
- More than two-thirds of customers act on program recommendations.
- Enterprise staff time committed exceeds staff time provided (leverage).
- More firms report impacts on knowledge and skills than are able to report hard dollar impacts.
- Networked companies using multiple public and private resources have higher value-added than more isolated firms (raises issues of attribution).
- Robust studies show skewed results: important impacts for a few customers, moderate impacts for most (illustrated in the sketch below).
- Service mix and duration matter in generating impacts.
- Case studies show that management and strategic change in companies is often a factor in high-impact projects.
- In comparative studies, there is evidence of improvements in productivity, but these improvements are modest. Impact on jobs is mixed.
- Cost-benefit analyses show moderate to strongly positive paybacks.
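To make the "skewed results" finding concrete, here is a minimal Python sketch with entirely invented impact figures (not MEP data); it only illustrates why mean impact per customer can far exceed the median when a few firms account for most of the gains:

# Hypothetical illustration of a skewed impact distribution: most customers
# report modest dollar impacts, a few report very large ones. All numbers are
# invented for illustration and are not drawn from MEP evaluation data.
from statistics import mean, median

impacts_thousands = [5, 8, 10, 12, 15, 20, 25, 30, 40, 500, 1200]  # $k per firm

print(f"mean impact per firm:   ${mean(impacts_thousands):,.1f}k")
print(f"median impact per firm: ${median(impacts_thousands):,.1f}k")
# The mean is pulled up by the few high-impact firms, so aggregate paybacks can
# look strong even though the typical (median) customer sees moderate gains.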

11 30 MEP Evaluation Studies, 1994-99: Assessment
Advantages:
- Multiple methods and perspectives
- Encourages methodological innovation
- Discursive: findings promote exchange and learning
- Can signpost improved practice
Challenges:
- Fragmented: many results, some contradictory
- Program justification still the prime motive
- Variations in quality; reliable measurement often a problem
- Dissemination
- Different evaluation methods are received and valued differently by particular stakeholders
- Agency interest in sponsorship may be waning - fear of "non-standard" results

12 Insights from the MEP case (1)
- Technology program evaluation should not focus exclusively on narrow economic impacts; it should also assess knowledge transfer and strategic change, and stimulate learning and improvement.
  - Multiple evaluation methods and performers are key to achieving this goal.
  - A strong internal dynamic promotes assessment, benchmarking, and discursive evaluation.
- The MEP illustrates a "networked evaluation partnership":
  - Balancing of federal and state perspectives, with the federal role adding resources and consistency to the evaluation system.
  - Local experimentation is possible and can be assessed.
  - Emergence of an evaluation cadre and culture - development of methodologies.
  - Highly discursive: signposts improved practice.
  - Evaluation becomes a forum to negotiate program direction.

13 Insights from the MEP case (2)
- The case also illustrates threats:
  - Variations in robustness, effectiveness, and awareness of the multiple evaluation studies.
  - Oversight "demand" for a complex evaluation system is weakly expressed - GPRA is a "low" hurdle to satisfy.
  - Agency push for "results" and performance measurement (rather than evaluation) - fear of non-standard results.
  - Vulnerability to fluctuations in agency will to support independent outside evaluators.
  - Translating evaluation findings into implementable program change is a challenge, especially as the program matures.
  - Threats to the learning mode: maturation; bureaucratization; standard-result expectations; political support.

