Measurable, Meaningful, and Motivational Metrics
Miriam L. Goldberg, AESP WI Chapter, November 5, 2008
Comments on Peters-McRae
- Provocative and thoughtful; provides useful insights on how we assess markets
- The case against "net" is overstated
  - "Current FR estimation approaches" is a straw man, weaker than most actual practice
  - There are biases in both directions in end-user self-report methods
  - Greater interest in the environment and energy efficiency means actual free ridership is likely up, not just apparent "social desirability" bias
- Good design doesn't aim for minimum FR; it aims for high cost-effectiveness
Some principles
- Using public money requires evidence that we've accomplished something that wouldn't have happened without it. That is, we need net, not just gross savings.
- What gets accomplished depends on what gets rewarded. What gets rewarded depends on what gets measured, and how.
Measurement matters.
Options
- Let what we can measure rigorously drive policy goals and program design
- Set the policy we want regardless of measurement capability
- Work from both these extremes toward an acceptable middle ground
Need to balance the ideal and the practical for both policy and measurement.
Why Measure Attribution?
- Cost-effectiveness assessment
  - Determining net program value
  - Not just go/no-go, but how does this use of public money compare with other possible uses?
- Portfolio design and resource allocation
- Program design/improvement
- Tracking against (net) goals
- Determining incentive payments
- Increasing accomplishments
Some of these purposes require net savings; others don't (the gross-to-net arithmetic is sketched below).
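For reference, a minimal sketch of the gross-to-net arithmetic these purposes hinge on, using the common net-to-gross (NTG) convention NTG = 1 − free ridership + spillover. The function name and numbers below are illustrative only and are not taken from the presentation:

```python
def net_savings(gross_kwh: float, free_ridership: float, spillover: float = 0.0) -> float:
    """Convert gross program savings to net savings.

    Uses the common net-to-gross convention: NTG = 1 - free_ridership + spillover,
    where free_ridership and spillover are fractions of gross savings.
    """
    ntg = 1.0 - free_ridership + spillover
    return gross_kwh * ntg

# Illustrative numbers only: 10 GWh gross, 30% free ridership, 5% spillover
print(net_savings(10_000_000, free_ridership=0.30, spillover=0.05))  # 7,500,000 kWh net
```

Under this convention the gross figure is what the program delivered, while the net figure is what would not have happened without it, which is why the cost-effectiveness and portfolio purposes above need net.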
Problem: Social research is imperfect
Response: We know more than nothing.
- Programs where end-user self-reports can't give useful information are only a part of today's portfolios
- Regions new to DSM start with basic designs
- Many programs will never yield large market effects
- For programs that affect suppliers and multiple market dimensions over time, we have many other tools
- Imperfect measurement can still be useful for tracking improvement over time and guiding decisions
Problem: Asking operating programs to limit FR puts sales teams in a strange bind: sell only to customers who don't want your product much.
Response:
- Design programs to move markets to higher efficiency adoption, based on good market information
- Measure and reward program accomplishment based on gross savings
- Leave program operators free to pursue customers within the program design
- Cost-effectiveness, program re-design, and portfolio re-allocation still require attribution assessment
Problem: Emphasis on short-term attribution discourages investment in long-term market changes and broader efficiency improvements.
Response:
- Understanding program effectiveness requires both short- and long-term perspectives
- If today's free riders are yesterday's spillover, that doesn't make today's program still worth running
- Mechanisms for quantifying and rewarding delayed effects ideally should be built into the original program
- Creating lasting change in markets is difficult
Dynamic Baseline
A Framework for Planning and Assessing Publicly Funded Energy Efficiency, Ch. 7, PG&E, 2001
A Modest Proposal: Moving Beyond Widget Tracking
- Set regional energy use targets, possibly as functions of population, production, weather, and economic conditions
- Design programs to lower regional energy use
- Measure and reward success based on regional indicators (a baseline sketch follows this list)
- Use market studies to figure out which programs are working and how to fine-tune design parameters
- Move the attribution argument to understanding how to make things work better, not fighting for payments
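A hypothetical sketch of what "measure success based on regional indicators" could look like: fit a baseline of regional energy use on the drivers the slide suggests (population, production, weather) from pre-program years, then compare the target year's actual use against the predicted baseline. All data and variable names here are made up for illustration; this is one possible implementation, not the method proposed in the presentation.

```python
import numpy as np

# Hypothetical pre-program years: drivers are population (millions),
# industrial production index, and heating degree days (thousands).
X_hist = np.array([
    [5.0,  98.0, 6.8],
    [5.1, 101.0, 7.2],
    [5.2, 103.0, 6.5],
    [5.3, 105.0, 7.0],
    [5.3, 104.0, 7.4],
    [5.4, 107.0, 6.7],
])
use_hist = np.array([68.0, 70.5, 70.2, 72.1, 72.8, 72.5])  # regional use, TWh/year

# Ordinary least squares with an intercept: use = b0 + b1*pop + b2*prod + b3*hdd
A = np.column_stack([np.ones(len(X_hist)), X_hist])
coef, *_ = np.linalg.lstsq(A, use_hist, rcond=None)

# Target year: plug that year's actual drivers into the fitted model to get the
# counterfactual baseline, then compare with observed regional use.
x_target = np.array([1.0, 5.5, 108.0, 6.9])
baseline = x_target @ coef
actual = 71.8  # observed regional use in the target year, TWh

print(f"baseline {baseline:.1f} TWh, actual {actual:.1f} TWh, "
      f"reduction vs. baseline {baseline - actual:.1f} TWh")
```

Plain least squares is used only to keep the sketch self-contained; a real regional indicator would need a richer model, more years of data, and uncertainty bounds, with market studies layered on top to tie the regional change back to specific programs.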