
Empirical Data on Settlement of Weather Sensitive Loads Josh Bode, M.P.P. ERCOT Demand Side Working Group Austin, TX September 20, 2012

Presentation Overview Page 1
- Why is settlement of weather sensitive loads an issue?
- Testing accuracy of settlement methods
- Empirical results
- Using smart meter data and control groups for evaluation

Baselines are a tool to estimate demand reductions Page 2
- Measuring demand reductions is an entirely different task than measuring power production.
- Power production is metered and thus is measured directly.
- Demand reductions cannot be metered; they must be estimated by indirect approaches.
- In principle, the reduction is simply the difference between electricity use with and without the load curtailment.
- However, it is not possible to directly observe or meter what electricity use would have been in the absence of the curtailment – the counterfactual.
- Instead, the counterfactual must be estimated.

The accuracy of baseline estimates for large C&I customers has been studied multiple times Page 3
- KEMA Baseline Analysis for CEC
- WG2 Baseline accuracy analysis (Quantum)
- WG2 Report Baseline Accuracy Analysis (Quantum)
- LBNL Study (proxy events)
- Ontario Power Authority Study (FSC)
- California Aggregator and DBP Evaluations (CAEC)
- Highly volatile load customer study (CAEC)
- ISO-NE Baseline Study (KEMA)
- PJM Baseline Study (KEMA)
- California Aggregator Programs (FSC)

Settlement of reductions from weather sensitive loads like AC has been studied less Page 4
- Weather sensitive loads have demonstrated the ability to support multiple grid functions.
- 4.8 million residential AC units and more than 500,000 water heaters in the U.S. have load control devices.
- Recent technological innovations enable aggregation and real-time visibility of small scale loads.
- Many load control devices now include over- and under-frequency relays, providing an automated fail-safe mechanism that is synchronized with the grid.

Visibility of loads has been tested Page 5
- Because of the sheer number of AC units, it is not practical to monitor all data points.
- Real-time monitoring of AC units is expensive, even for a sample.
[Figure: feeder load, AC end-use sample, and estimated loads for the population]

Load control programs have shown the ability to provide contingency reserves Page 6
- Fast start-up time and fast ramp up to full resource capability.
- Highly granular dispatch is possible – it is possible to dispatch all or some of the resources in a specific area.
- No one has tested the built-in under-frequency relays, which provide a fail-safe mechanism, do not rely on central dispatch, and should respond even faster.
- Customers that were curtailed repeatedly over the summer report the same comfort and frequency of events as customers that were curtailed once.

Water heaters have demonstrated the ability to follow regulation signals Page 7
- Graph is from a PJM pilot (Joe Callis).
- The initial test was for a single unit and has since expanded to wider-scale testing.

However, these loads are highly variable Page 8
[Figure: Average hourly residential AC loads by temperature]
The fact that some loads are very weather sensitive does not mean they are unpredictable.

Testing the accuracy of settlement methods Page 9

We tested 11 different settlement alternatives for baselines for short curtailments Page 10

Within-subject estimators
- Day-matching baselines (tested with individual AC, aggregate AC, feeder, and whole-house data)
  1. 10-in-10 with a 20% in-day adjustment cap
  2. 10-in-10 without an in-day adjustment cap
  3. Top 3-in-10 without an in-day adjustment cap
- Weather-matching baseline (individual AC, aggregate AC, feeder, and whole-house data)
  4. Profile selected based on daily maximum temperature, without an in-day adjustment cap
- Regression (individual AC, aggregate AC, feeder, and whole-house data)
  5. Treatment variables and no day or hourly lags or leads
  6. Treatment variables with a day lag
  7. Treatment variables with hourly lags and leads
  8. No treatment variables but use of hourly lags and leads

Between-subject estimators
- Random assignment of load control operations (AC end-use and whole-house data)
  9. Comparison of means
  10. Difference-in-differences
- Pre-calculated load reduction estimate tables (AC end-use and whole-house data)
  11. Multiply the number of AC units in each geographic location by the corresponding estimate of demand reductions per AC unit for the corresponding area, hour of day, and temperature bin
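
To make the day-matching rows concrete, here is a minimal Python sketch of methods 1 and 2 (a 10-in-10 baseline with and without a capped in-day adjustment). The data layout, column names, and three-hour adjustment window are illustrative assumptions, not the exact settlement rules tested in the study.

```python
import pandas as pd

def ten_in_ten_baseline(hourly_kw, event_day, event_hours, adj_cap=0.20):
    """Sketch: 10-in-10 day-matching baseline with a capped in-day adjustment.

    hourly_kw   -- DataFrame indexed by date string, columns 0..23 = hourly kW
    event_day   -- date label of the curtailment day
    event_hours -- list of curtailed hours, e.g. [15, 16]
    adj_cap     -- in-day adjustment cap (0.20 = +/-20%); None disables it (method 2)
    """
    # Average the 10 most recent prior days to form the unadjusted baseline profile.
    prior_days = hourly_kw.loc[hourly_kw.index < event_day].tail(10)
    baseline = prior_days.mean()

    # In-day adjustment: ratio of actual to baseline load in the three hours
    # before the event (the exact adjustment window is an assumption here).
    pre_hours = [min(event_hours) - k for k in (3, 2, 1)]
    adj = hourly_kw.loc[event_day, pre_hours].mean() / baseline[pre_hours].mean()

    # Cap the adjustment so it cannot move the baseline by more than +/- adj_cap.
    if adj_cap is not None:
        adj = max(1 - adj_cap, min(1 + adj_cap, adj))

    return baseline * adj  # adjusted baseline; reduction = baseline - actual load
```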

To test accuracy, one needs to know the correct values Page 11
Because the demand reduction values are artificially introduced and known, we can determine the accuracy of each baseline alternative.

For the tables and approaches that relied on control groups, we used a split sample approach Page 12
1. Randomly split the data into two groups
2. Simulate the reduction in one group
3. Use the second group to produce the counterfactual or baseline
4. Calculate impacts and store the results
5. Repeat 100 times
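
A minimal sketch of this split-sample test, assuming hypothetical hourly load data and a fixed simulated reduction; the 35% reduction, column layout, and variable names are illustrative only.

```python
import numpy as np
import pandas as pd

def split_sample_test(loads, event_hours, true_reduction=0.35, n_iter=100, seed=0):
    """Sketch of the split-sample accuracy test.

    loads          -- DataFrame: one row per customer, one column per hour (kW)
    event_hours    -- columns (hours) in which a reduction is simulated
    true_reduction -- artificially introduced reduction (35% is an assumption)
    """
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_iter):
        # 1. Randomly split customers into a "treatment" and a "control" half.
        treat_mask = rng.random(len(loads)) < 0.5
        treat, control = loads[treat_mask].copy(), loads[~treat_mask]

        # 2. Simulate the reduction in the treatment group during event hours.
        treat.loc[:, event_hours] *= (1 - true_reduction)

        # 3. Use the control group mean as the counterfactual (baseline).
        counterfactual = control[event_hours].mean()

        # 4. Estimated impact = counterfactual minus observed treatment load;
        #    the true impact is known because it was introduced in step 2.
        estimated = counterfactual - treat[event_hours].mean()
        actual = loads[treat_mask][event_hours].mean() * true_reduction

        # 5. Store the percentage error for this draw.
        errors.append(float((estimated - actual).mean() / actual.mean()))

    # Repeat 100 times and summarize the distribution of errors.
    return pd.Series(errors).describe()
```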

There are two key issues in assessing accuracy – bias and goodness of fit Page 13

Bias
- Mean Percentage Error (MPE): the percentage by which the measurement, on average, tends to over- or underestimate the true demand reduction.

Goodness of fit
- Mean Absolute Percentage Error (MAPE): a measure of the relative magnitude of errors across event days, regardless of positive or negative direction. It is normalized, allowing comparison of results across different data sources.
- CV(RMSE): similar to MAPE, except that it penalizes large errors more than small ones.
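
For reference, a short sketch of how these metrics can be computed from paired estimated and true event-day reductions (the arrays and example numbers are illustrative):

```python
import numpy as np

def accuracy_metrics(estimated, actual):
    """Bias and goodness-of-fit metrics for demand reduction estimates.

    estimated, actual -- arrays of estimated and true reductions, one per event day.
    """
    estimated, actual = np.asarray(estimated, float), np.asarray(actual, float)
    pct_err = (estimated - actual) / actual

    return {
        # Bias: average signed percentage error (over/underestimation tendency).
        "MPE": pct_err.mean(),
        # Goodness of fit: average magnitude of error, ignoring sign.
        "MAPE": np.abs(pct_err).mean(),
        # CV(RMSE): RMSE normalized by the mean true reduction; squaring the
        # errors penalizes large misses more heavily than MAPE does.
        "CV(RMSE)": np.sqrt(((estimated - actual) ** 2).mean()) / actual.mean(),
    }

# Example with illustrative numbers: three event days.
print(accuracy_metrics([24, 69, 40], [30, 75, 38]))
```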

Limitations of analysis Page 14
- The analysis of settlement approaches focuses on ancillary services: short-term reductions to stabilize the grid, usually triggered by generation or transmission outages and sometimes by unexpected changes in wind or loads, and not always in the hottest hours.
- The errors in the demand reduction estimates depend on the magnitude of the reduction signal; the estimates here were based on 50% standard cycling.
- Air conditioner use is lower in California than in Texas; the direction of the findings likely holds up, but the magnitude of the errors will be different.

How does the magnitude of the demand reduction signal affect accuracy? An example Page 15

Metric                      Customer A     Customer B
Baseline estimate           294 kW         294 kW
True reference load         300 kW         300 kW
Load with DR                270 kW         225 kW
Demand reduction estimate   24 kW          69 kW
True demand reduction       30 kW (10%)    75 kW (25%)
% Error                     -20%           -8%

The customers are nearly identical, but the estimation errors differ because they reduce different amounts of demand.
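
The percentage errors in the table follow directly from these definitions; the short check below reproduces them (values taken from the slide):

```python
# Both customers share the same baseline estimate (294 kW) and true reference load (300 kW).
baseline_estimate, true_reference = 294, 300

for name, load_with_dr, true_pct in [("Customer A", 270, 0.10), ("Customer B", 225, 0.25)]:
    estimated = baseline_estimate - load_with_dr        # 24 kW / 69 kW
    true_reduction = true_reference * true_pct          # 30 kW / 75 kW
    pct_error = (estimated - true_reduction) / true_reduction
    print(f"{name}: estimate {estimated} kW, true {true_reduction:.0f} kW, "
          f"error {pct_error:.0%}")                     # -20% / -8%
```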

Empirical Results Page 16

What is the value of more complex approaches? Page 17
- In each case, we compare results to the simplest approach – the pre-calculated load reduction tables.
- We present the results for within-subject and control group approaches separately.
- We present the bias and goodness-of-fit metrics separately.
- All graphs use the same scale.

Matching and regression approaches with individual AC data Page 18

Matching and regression approaches with aggregated AC data Page 19

Matching and regression approaches with whole house data Page 20

Matching and regression approaches with feeder data Page 21

Why are feeder results so inaccurate? An example Page 22

Feeder characteristics
- 2,672 accounts on the feeder
- 266 AC load control accounts (10% of the feeder)
- 292 controllable AC units (likely includes commercial accounts)
- Penetration higher than 90% of feeders

Event day characteristics
- August 24, 2010; maximum temperature 103 F
- Simulated event period 12:00-14:00
- AC load per unit: 0.63 kW
- Load impact: 35%
- Controllable AC load: ~184 kW (0.63 kW per unit x 292 AC units)
- Feeder impact: 64 kW
- Actual load without DR: 7,772 kW
- Simulated load with DR: 7,708 kW
- Percent impact on feeder: 0.83%!
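
For reference, the arithmetic behind the 0.83% figure (values from the slide; the controllable load is rounded):

```python
ac_units, load_per_unit, impact_pct = 292, 0.63, 0.35   # event-hour values from the slide
feeder_load_no_dr = 7772                                # kW, actual feeder load without DR

controllable_kw = ac_units * load_per_unit              # ~184 kW of controllable AC load
feeder_impact_kw = controllable_kw * impact_pct         # ~64 kW reduction
print(f"Feeder impact: {feeder_impact_kw:.0f} kW "
      f"({feeder_impact_kw / feeder_load_no_dr:.2%} of feeder load)")
```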

Control group methods with AC end use data (500 control group, 500 treatment) Page 23

Control group methods with whole house data (2,000 control group, 2,000 treatment) Page 24

Implications of study Page 25
- Don't rely on feeder data for settlement.
- Day-matching baselines are the least accurate approach with weather sensitive loads; they are not well suited for measuring demand reductions from highly weather sensitive loads.
- More granular meters do not necessarily increase the accuracy of demand reduction measurement, because measuring a demand reduction is fundamentally different from metering consumption.
- Complex methods provide limited improvement.
- Pre-calculated load reduction tables can produce results that on average are correct, but may err on individual days, especially if those days are cooler.
- Methods with control groups and large sample sizes perform best.

Using Smart Meter Data and Control Groups for Evaluation Page 26

Impact estimate tables are not developed in a vacuum Page 27
- They should be based on a history of results.
- Ideally this includes systematic testing of load control devices under different conditions and with different operation and control strategies.
- The estimates from the operations underlying the tables need to be unbiased and, ideally, precise; the more data points, the better the results.
- One may need to account for changes in the customer mix, if relevant.
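
As a concrete illustration of what such a table looks like when applied, here is a minimal sketch of a per-unit lookup keyed by area, hour, and temperature bin (slide method 11); all table values, keys, and the function name are hypothetical.

```python
# Hypothetical per-unit reduction table: (area, hour, temperature bin) -> kW per AC unit.
IMPACT_TABLE = {
    ("austin", 16, "95-100F"): 0.55,
    ("austin", 16, "100-105F"): 0.68,
    ("houston", 16, "100-105F"): 0.71,
}

def estimated_reduction(area, hour, temp_bin, n_units):
    """Multiply the enrolled AC units in an area by the per-unit estimate for
    that hour and temperature bin (the pre-calculated table approach)."""
    return n_units * IMPACT_TABLE[(area, hour, temp_bin)]

print(estimated_reduction("austin", 16, "100-105F", n_units=292))  # ~199 kW
```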

With large samples and random assignment, estimation error is virtually eliminated Page 28
- Actual example from the 2011 PG&E SmartAC evaluation.
- Wide availability of smart meter data and individually addressable devices is a prerequisite.
- Randomly assign the population into 10 groups.
- For each test event, one group was activated and the other 9 were held as control groups.
- For a few events, we tested different operation strategies side by side.
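
A minimal sketch of this randomized-group design, assuming smart meter interval data indexed by customer; the group count, seed, and column names are illustrative.

```python
import numpy as np
import pandas as pd

def assign_groups(customer_ids, n_groups=10, seed=2011):
    """Randomly assign customers to n_groups roughly equal groups."""
    rng = np.random.default_rng(seed)
    groups = np.tile(np.arange(n_groups), len(customer_ids) // n_groups + 1)[:len(customer_ids)]
    rng.shuffle(groups)
    return pd.Series(groups, index=customer_ids, name="group")

def event_impact(loads, groups, activated_group, event_hours):
    """Estimate the reduction as the control-group mean load minus the
    activated-group mean load during the event hours (comparison of means)."""
    control = loads.loc[groups != activated_group, event_hours].mean()
    treated = loads.loc[groups == activated_group, event_hours].mean()
    return control - treated  # kW reduction per customer, by hour
```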

It enables side by side testing of different operation strategies Page 29

It also enables side by side testing of different control strategies Page 30

By using control groups and short events, one can get a substantial history of results Page 31
- In the PG&E study, each customer only experienced one event, but we obtained results from 7 days, including 3 with side-by-side testing.
- It is reasonable to call up to 10 events per customer, especially if the curtailments are short (e.g., 1-2 hours).
- This can yield results for up to 100 different curtailments to inform the impact estimate tables.

For any questions, feel free to contact: Page 32
Josh Bode, M.P.P.
Freeman, Sullivan & Co.
101 Montgomery Street, 15th Floor, San Francisco, CA