Overview of the Current Status and Future Prospects of Wind Power Production Forecasting for the ERCOT System
ERCOT Workshop, Austin, TX, June 26, 2009
John W. Zack, Director of Forecasting, AWS Truewind LLC, Albany, New York
jzack@awstruewind.com

Overview
- How Forecasts are Produced
  - Overview of forecasting technology, issues and products
  - ERCOT-specific issues and forecast products
- WGR Data Issues
- Forecast Performance
  - Recent performance statistics
  - Comparison to other regions
  - Analysis of cases with much worse than average performance
- Road to Improved Forecasts

How Forecasts are Produced
- Forecasting Tools
- Input Data
- Configuration for ERCOT
- ERCOT Forecast Products

Major Components of State-of-the-Art Forecast Systems
[Figure: input data, forecast model components and data flow for a state-of-the-art forecast system]
- Combination of physics-based (NWP) and statistical models
- Diverse set of input data with widely varying characteristics
- The importance of specific models and data types varies with the look-ahead period
- Forecast providers vary significantly in how they use forecast models and input data
© 2009 AWS Truewind, LLC

Physics-based Models (also known as Numerical Weather Prediction (NWP) Models)
- Differential equations for basic physical principles are solved on a 3-D grid
- Initial values of all variables must be specified for each grid cell
- Simulates the evolution of all of the basic atmospheric variables over a 3-D volume
- Some forecast providers rely solely on government-run models; others run their own model(s)
Roles of Provider-run NWP Models
- Optimize the model configuration for the forecasting of near-surface winds
- Use higher vertical or horizontal resolution over the area of interest
- Execute simulations more frequently
- Incorporate data not used by government-run models
- Execute ensembles customized for near-surface wind forecasting
© 2009 AWS Truewind, LLC

Statistical Models (e.g. MOS)
- Empirical equations are derived from historical predictor and predictand data (the "training sample")
- Current predictor data and the empirical equations are then used to make forecasts
- Many different model-generating methods are available (linear regression, neural networks, etc.)
Roles of Statistical Models
- Account for processes on a scale smaller than NWP grid cells (downscaling)
- Correct systematic errors in the NWP forecasts
- Incorporate additional (onsite and offsite) observational data received after the initialization of the most recent NWP runs and not effectively included in the NWP simulations
- Combine met forecasts and power data into power predictions
© 2009 AWS Truewind, LLC
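To make the "empirical equations" idea concrete, here is a minimal sketch assuming a simple linear-regression MOS-style correction of NWP hub-height wind speed; the training values and function names are hypothetical, not AWS Truewind's actual procedure:

```python
import numpy as np

# Hypothetical training sample: raw NWP forecasts (predictor) and WGR observations (predictand)
nwp_train = np.array([4.0, 6.5, 8.0, 10.2, 12.5, 14.0])   # m/s
obs_train = np.array([3.2, 5.8, 7.9, 10.8, 13.4, 15.1])   # m/s

# Least-squares fit of obs = a * nwp + b (the "empirical equation" from the training sample)
a, b = np.polyfit(nwp_train, obs_train, deg=1)

def mos_correct(nwp_speed):
    """Apply the trained correction to a new raw NWP wind speed forecast."""
    return a * nwp_speed + b

print(mos_correct(9.0))   # corrected forecast for a raw 9.0 m/s NWP value
```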

An Important Consideration: Statistical Model Optimization Criterion
- Statistical models are optimized to a specified criterion (either explicitly or by default)
- Result: prediction errors have different statistical properties depending on the criterion
- Sensitivity to forecast error is dependent on the user's application
- Forecast users and providers should communicate to determine and implement the best criterion
Notation: I = input variables; W = weights (from the training sample); O = output variables (the forecast); T = target (observed) variables; e = forecast error
- A performance criterion (PC) and an optimizing algorithm are needed to obtain the W's in the model
- A typical PC is mean square error (MSE), but is it a good one?
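A small illustration of why the criterion matters, using made-up numbers: for a skewed set of possible outcomes, the single-value forecast that minimizes MSE is the mean, while the one that minimizes mean absolute error is the median, so the two criteria can deliver noticeably different forecasts:

```python
import numpy as np

outcomes = np.array([2.0, 3.0, 3.5, 4.0, 12.0])   # hypothetical possible MW values

# Evaluate candidate forecasts against both criteria
candidates = np.linspace(outcomes.min(), outcomes.max(), 1001)
mse = [np.mean((outcomes - c) ** 2) for c in candidates]
mae = [np.mean(np.abs(outcomes - c)) for c in candidates]

best_mse = candidates[int(np.argmin(mse))]   # ~ the mean (4.9)
best_mae = candidates[int(np.argmin(mae))]   # ~ the median (3.5)
print(best_mse, best_mae)
```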

Plant Output Models
- Relationship of met variables to power production for a specific wind plant
- Many possible formulations: implicit or explicit; statistical or physics-based; single- or multi-parameter
Roles of Plant Output Models
- Facility-scale variations in wind (among turbine sites)
- Turbine layout effects (e.g. wake effects)
- Operational factors (availability, turbine performance, etc.)
© 2009 AWS Truewind, LLC
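As a concrete (and deliberately simplified) example of a single-parameter, explicit plant output model, the sketch below interpolates a forecast hub-height wind speed onto an empirical WGR-scale power curve; the curve values and plant size are assumptions for illustration only:

```python
import numpy as np

# Hypothetical WGR-scale power curve (wind speed in m/s -> plant output in MW, 160 MW plant)
pc_speed = np.array([0, 3, 5, 7, 9, 11, 13, 25])
pc_power = np.array([0, 0, 15, 45, 90, 140, 160, 160])

def plant_output(wind_speed_ms):
    """Map a forecast wind speed onto the empirical power curve by interpolation."""
    return np.interp(wind_speed_ms, pc_speed, pc_power)

print(plant_output(8.2))   # forecast MW for an 8.2 m/s wind speed forecast
```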

Forecast Ensembles
- Uncertainty is present in any forecast method due to input data, model type, and model configuration
- Approach: perturb input data and model parameters within their range of uncertainty and produce a set of forecasts (i.e. an ensemble)
Roles of Ensembles
- The composite of the ensemble members is often the best-performing forecast
- The ensemble spread provides a case-specific measure of forecast uncertainty
- Can be used to estimate the structure of the forecast probability distribution
© 2009 AWS Truewind, LLC
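The sketch below illustrates the ensemble idea with a toy model: the initial wind speed is perturbed within an assumed uncertainty range, the composite of the members serves as the deterministic forecast, and their spread serves as a case-specific uncertainty estimate (all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(initial_speed):
    # Stand-in for an NWP/statistical forecast step (purely illustrative)
    return 0.9 * initial_speed + 1.0

initial_speed = 8.0                                           # m/s analysis value
members = toy_model(initial_speed + rng.normal(0.0, 0.7, size=30))

composite = members.mean()   # often the best-performing single forecast
spread = members.std()       # case-specific uncertainty measure
print(composite, spread)
```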

Power Forecast Uncertainty
Two primary sources of uncertainty:
- Position on the power curve
- Predictability of the weather regime
Useful to distinguish between the sources:
- Estimate weather-related uncertainty from the spread of the forecast ensemble
- Estimate power-curve-position uncertainty by developing statistics over all weather regimes
[Figure: a real facility-scale power curve]

How the Forecasting Problem Changes by Time Scale
Minutes Ahead
- Phenomena: large eddies, turbulent mixing transitions
- Rapid and erratic evolution; very short lifetimes
- Mostly not observed by the current sensor network
- Forecasting tools: autoregressive trends; very difficult to beat a persistence forecast
- Need: very high-resolution 3-D data from remote sensing
Hours Ahead
- Phenomena: sea breezes, mountain-valley winds, thunderstorms
- Rapidly changing, short lifetimes
- Current sensors detect existence but not structure
- Tools: mix of autoregressive methods with offsite data and NWP; outperforms persistence by a modest amount
- Need: high-resolution 3-D data from remote sensing
Days Ahead
- Phenomena: lows and highs, frontal systems
- Slowly evolving, long lifetimes
- Well observed with the current sensor network
- Tools: NWP with statistical adjustments; much better than a persistence or climatology forecast
- Need: more data from data-sparse areas (e.g. oceans)
© 2009 AWS Truewind, LLC
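For the minutes-to-hours range, the two baseline tools mentioned above can be sketched as follows; the recent MW values and the simple linear-trend formulation are illustrative assumptions:

```python
import numpy as np

recent_mw = np.array([110, 118, 124, 131, 135])   # hypothetical last 5 hourly MW values

def persistence_forecast(history, steps=1):
    """Persistence: the next value(s) equal the most recent observation."""
    return np.full(steps, history[-1])

def trend_forecast(history, steps=1):
    """Fit a linear trend to the recent history and extrapolate it forward."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, deg=1)
    future_t = np.arange(len(history), len(history) + steps)
    return slope * future_t + intercept

print(persistence_forecast(recent_mw, steps=3))   # [135, 135, 135]
print(trend_forecast(recent_mw, steps=3))         # continues the recent upward trend
```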

Forecast Products
Deterministic Predictions
- MW production for a specific time interval (e.g. hourly)
- Typically the value that minimizes a performance metric (not necessarily the most probable value)
Probabilistic Predictions
- Confidence bands
- Probability of exceedance (POE) values
Event Forecasts
- Probability of defined events in specific time windows
- Most likely (?) values of event parameters (amplitude, duration, etc.)
- Example: large up or down ramps
Situational Awareness
- Identification of significant weather regimes (alerts)
- Geographic displays of wind and weather patterns to enable event tracking
© 2009 AWS Truewind, LLC
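One common way to derive a probability-of-exceedance product from an ensemble is sketched below; the ensemble values are invented, and the use of the 20th percentile as the POE-80 level is simply the standard definition (the value exceeded with 80% probability):

```python
import numpy as np

ensemble_mw = np.array([420, 515, 560, 480, 610, 450, 530, 495, 575, 505])

poe80 = np.percentile(ensemble_mw, 20)   # level exceeded by ~80% of the members
central = np.median(ensemble_mw)         # a "most likely"-style central value
print(poe80, central)
```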

ERCOT Forecast System
Current:
- Single NWP model run every 6 hrs
- No Model Output Statistics (MOS)
- WGR output model: mixed approach (some statistical using WGR data, some specified power curve)
Planned:
- 9 NWP model runs every 6 hrs
- 1 NWP model run every hr (RUC)
- Model Output Statistics (MOS)
- Statistically optimized ensemble
- Statistical power curve for all WGRs

ERCOT Forecast Products
- Hourly updated 1- to 48-hr-ahead forecasts in hourly increments
  - STWPF: ~ most likely value
  - WGRPP: 80% probability-of-exceedance value
  - For each WGR and the aggregate of all WGRs
- Delivery time: by 15 minutes past each hour
- Delivery vehicles:
  - xml files via web services to ERCOT
  - csv files via email to each QSE for each WGR for which they schedule
  - csv files and graphical displays on a web page for ERCOT access only

Data Issues
- Overview of Data Quality Issues
- Examples

Overview of WGR Data Issues
Uses of WGR Data
- Statistically adjust NWP output data
- Define the wind-power relationship
- Identify recent trends for very short-term (0-4 hr) forecasts
Impact of Data Quality Issues
- Prevent forecast performance from reaching its potential
- Degrade forecast performance
Types of Issues
- Curtailment
- Turbine availability
- Missing data
- Erroneous or locked data
- Unrepresentative met data

Met Data Status                  | # of WGRs
Useable data                     | 35
Very high degree of scatter      | 12
High degree of scatter           | 11
Data appears not to be in mph    | 8
No data or < 10 data points      | 2
Less than 25% of data received   |
Total                            | 70
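A rough sketch of how some of these issues can be screened automatically is shown below; the specific checks (missing values, runs of identical "locked" values, and a crude mph-vs-m/s plausibility test) and their thresholds are assumptions for illustration, not the checks actually applied to the ERCOT data:

```python
import numpy as np

speeds = np.array([12.0, 12.0, 12.0, 12.0, np.nan, 14.5, 15.2, 6.1, 6.8])  # hypothetical hourly data

# Missing data
missing = np.isnan(speeds)

# Locked data: the same value repeated over several consecutive hours
locked = np.zeros_like(speeds, dtype=bool)
run = 1
for i in range(1, len(speeds)):
    run = run + 1 if speeds[i] == speeds[i - 1] else 1
    if run >= 3:
        locked[i - run + 1:i + 1] = True

# Crude unit check: if the long-term mean is implausibly low for mph,
# the data may actually be in m/s (1 m/s = 2.237 mph)
mean_speed = np.nanmean(speeds)
maybe_metres_per_second = mean_speed < 9.0   # threshold is an assumption

print(missing, locked, maybe_metres_per_second)
```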

Data Quality Example #1: Useable Data ("The Good")
- Data available through most of the power curve
- Well-defined wind speed vs power production relationship
- Modest amount of scatter
- Only a few outlier points
[Figure: hourly average power production vs hourly average wind speed for an ERCOT WGR, non-curtailed hours, May 1 to June 14, 2009]

Data Quality Example #2: Useable But Limited Data ("The Not as Good")
- Generally good data, but limited amounts of data in the upper range of the power curve
  - Light winds
  - Curtailment during high wind periods
- Difficult to obtain an accurate wind speed vs power relationship for higher wind speeds
[Figure: hourly average power production vs hourly average wind speed for an ERCOT WGR, non-curtailed hours, May 1 to June 14, 2009]

Data Quality Example #3: Questionable Scaling of Met Data ("The Bad")
- Wind speeds do not appear to be in mph; perhaps m/s, but not clear
- Problem: the scaling method is not known, and thus the data is not useable
- In addition to the scaling issue, this example has a moderately high amount of scatter
[Figure: hourly average power production vs hourly average wind speed for an ERCOT WGR, non-curtailed hours, May 1 to June 14, 2009]

Data Quality Example #4: High Amount of Scatter ("Another Type of Bad")
- A high amount of scatter in the wind speed vs power production relationship is evident
- The data is not useable since it is not clear which data has issues
- Possible causes: turbine availability issues? erroneous or unrepresentative anemometer data?
[Figure: hourly average power production vs hourly average wind speed for an ERCOT WGR, non-curtailed hours, May 1 to June 14, 2009]

Data Quality Example #5: Extremely High Amount of Scatter ("The Ugly")
- Enormous amount of scatter in the wind speed vs power production relationship; not useable
- Difficult to define a WGR-scale power curve
- Possible causes: turbine availability issues? erroneous or unrepresentative anemometer data?
[Figure: hourly average power production vs hourly average wind speed for an ERCOT WGR, non-curtailed hours, May 1 to June 14, 2009]

Forecast Performance
- Issues
- ERCOT Results: Wind Speed and Power
- Case Examples

Amount and Diversity of Regional Aggregation Impacts Apparent Forecast Performance
Example: Alberta Wind Forecasting Pilot Project
- 1 year: May 2007 - April 2008
- 3 forecast providers
- 12 wind farms (7 existing + 5 future) divided into 4 geographic regions of 3 farms each
- Hourly 1- to 48-hr-ahead forecasts for farms, regions and system-wide production
Results
- Regional day-ahead forecast RMSE was 15-20% lower than for the individual farms
- System-wide day-ahead forecast RMSE was 40-45% lower than for the individual farms
© 2009 AWS Truewind, LLC
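A toy simulation of the aggregation effect is sketched below: each farm's forecast error contains a shared and a local component, and the system-wide RMSE, expressed as a percentage of capacity, comes out well below the average single-farm RMSE (all magnitudes are invented, not the Alberta numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
n_farms, n_hours = 12, 5000
farm_capacity = 100.0                     # MW each (assumption)

# Forecast errors with a shared (correlated) component and a local component
shared = rng.normal(0, 8, n_hours)                # MW, common weather-driven error
local = rng.normal(0, 12, (n_farms, n_hours))     # MW, farm-specific error
errors = shared + local                           # error for each farm, each hour

rmse_farm = np.sqrt((errors ** 2).mean(axis=1)).mean() / farm_capacity
rmse_system = np.sqrt((errors.sum(axis=0) ** 2).mean()) / (n_farms * farm_capacity)

print(f"average farm RMSE: {rmse_farm:.1%} of capacity")
print(f"system RMSE:       {rmse_system:.1%} of capacity")   # noticeably lower
```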

Impact of Aggregation on Performance Comparisons: Misconceptions
- Lack of consideration of the impact of the size and diversity of the generation resource leads to misconceptions about relative forecast performance
- Example: US visitors to the Spanish TSO "RED Electrica" concluded that forecast performance was "phenomenal"
- From the visit report: "SIPREOLICO provides detailed hourly forecasts up to 48 hours updated every 15 minutes. The accuracy of the forecast is phenomenal: The forecast root mean square error for the 48-hour-ahead forecast is below 5.5% of the installed wind generation capacity."
- The size and diversity of this aggregation is so great that there is a huge aggregation effect; when this is considered, the performance is typical
RED Electrica system characteristics: 14,877 MW installed; 575 wind parks; average of about 30 MW capacity per park; peak generation ~10,000 MW
© 2009 AWS Truewind, LLC

ERCOT Forecast Performance
- Wind Speed
- Power Production

System-Wide Wind Speed Forecasts
Parameter
- Average wind speed over all wind generation areas on the ERCOT system
- Provides a perspective on forecast performance that is independent of the power data issues (curtailment, availability, etc.)
Results
- Very low bias after January
- Low MAE and RMSE
  - MAE: ~1.2 m/s (15% of average)
  - RMSE: ~1.5 m/s (19% of average)
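For reference, the verification statistics quoted above follow the standard definitions sketched below; the wind speed arrays are made-up hourly values, not the ERCOT series:

```python
import numpy as np

forecast = np.array([7.8, 9.1, 6.5, 11.2, 8.4, 10.0])   # m/s
observed = np.array([7.2, 9.8, 6.9, 10.1, 8.9, 11.3])   # m/s

error = forecast - observed
bias = error.mean()                      # systematic over/under-forecast
mae = np.abs(error).mean()               # mean absolute error
rmse = np.sqrt((error ** 2).mean())      # root mean square error

print(bias, mae, rmse, mae / observed.mean())   # last value: MAE as a fraction of the average
```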

Day-Ahead Wind Speed Forecast MAE: Comparison Between Regions
Comparison periods
- ERCOT: 1/1/09 - 5/31/09
- Alberta: 5/1/07 - 4/30/08
- NYISO: 1/1/09 - 5/31/09
Results
- ERCOT sites have lower MAE than the Alberta sites but higher MAE than the NYISO sites
- ERCOT MAE considerations:
  - Based on 5 months with relatively high MAE; the annual MAE is probably lower
  - Adversely impacted by wind speed data issues
  - Early version of the ERCOT forecast system

Power Production Forecast Performance Evaluation Issues
Issue: it is difficult to determine the power production values to use in the forecast evaluation due to curtailment, turbine availability and data quality issues
AWST's Performance Evaluation Datasets
- QC1: locked and missing data removed
- QC2: QC1 plus removal of data during ERCOT-specified curtailment periods
- QC3: QC2 plus a power curve check; data rejected if |Prep - Ppc| > 30% of capacity (an attempt to eliminate data with significant unreported availability issues)
- Synthetic Power: best guess at power production for all hours
  - Reported power if deemed representative
  - Power estimated from the power curve if the wind speed is deemed reliable
  - Power estimated from the capacity factor of the highest-correlated nearby facility
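The QC3 power curve check can be expressed compactly as below, where Prep is the reported power and Ppc the power implied by the WGR power curve; the curve, plant capacity and sample data are hypothetical:

```python
import numpy as np

capacity = 160.0                                        # MW (assumed plant size)
pc_speed = np.array([0, 3, 5, 7, 9, 11, 13, 25])        # m/s
pc_power = np.array([0, 0, 15, 45, 90, 140, 160, 160])  # MW

reported_mw = np.array([20.0, 88.0, 150.0, 35.0])       # hypothetical hourly reports (Prep)
wind_ms = np.array([5.2, 9.1, 12.5, 10.4])              # matching wind speeds

p_pc = np.interp(wind_ms, pc_speed, pc_power)           # power-curve estimate (Ppc)
rejected = np.abs(reported_mw - p_pc) > 0.30 * capacity

print(rejected)   # hours flagged as having unreported availability or data issues
```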

System-Wide Power Forecasts: Impact of Data on Bias & MAPE
- Lack of consideration of curtailment periods (QC1) leads to much higher bias and MAPE values in some months (e.g. February)
- Considerable positive bias remains even with the QC3 data, despite a very low wind speed forecast bias

System-Wide Power Forecasts: Day-Ahead MAPE Comparison
Comparison specs
- ERCOT: ~8,000 MW; 24 hr ahead
- Alberta: 355 MW; 24 hr ahead
- CAISO: 815 MW; day-ahead (PIRP)
- NYISO: 688 MW; day-ahead
Results
- ERCOT MAPE is lower than that achieved by any forecaster in Alberta
- ERCOT MAPE is higher than that for the CAISO and NYISO aggregates

Forecast Performance: Case Examples
Examples of difficult cases from 2009

April 29, 2009 Case (forecast delivered 3:15 PM CDT April 28)
- Large overestimate of the power production from 1 AM to 2 PM
- Error caused by a poor prediction of the intensification of a storm to the north of Texas and the associated southerly winds
- Error most likely attributable to the large-scale weather prediction by the National Weather Service data and models

April 29, 2009 Case: Surface Weather Map, 6 AM
- The west-to-east pressure gradient is too strong, and therefore the forecast winds are too strong
- An ensemble with input from non-NWS (e.g. Canadian) NWP data might help in these types of cases
[Figure: NWP forecast of 70 m wind speed (m/s) for 6 AM, used for the 3 PM forecast on April 28, 2009]

May 11, 2009 Case (forecast delivered 3:15 PM CDT May 10)
- Large overestimate of the power production from 8 AM to 2 PM
- Error caused by slight errors in the placement of a slow-moving frontal zone across central Texas
- Much of the error was associated with higher-than-forecast winds in the Sweetwater area

May 11, 2009 Case (continued)
- A frontal zone with large horizontal variation in wind speeds and direction was located near the wind generation areas
- Small errors in frontal placement can lead to large errors in power forecasts
- An example of a typical high-uncertainty situation
- An ensemble forecast should help anticipate the uncertainty and hedge the "most likely" forecast

May 16, 2009 Case (forecast delivered 3:15 PM CDT May 15)
- Large overestimate of the power production from midnight to 5 AM as a cold front approached and passed through the region
- The NWP models overestimated the southerly wind speeds ahead of the front

May 16, 2009 Case: Surface Weather Map, 3 AM
- The position and timing of the front are very good
- However, the NWP model greatly overestimated the southerly wind speeds ahead of the front
[Figure: NWP forecast of 70 m wind speed (m/s) for 3 AM, used for the 3 PM forecast on May 15, 2009]

Road to Improved Forecasts
- Resolve Data Issues
- Ensembles
- Regime- and Event-Specific Methods

Anticipated Steps to Improve Forecast Performance
- Resolve WGR data issues
  - Obtain maximum information about curtailments and turbine availability
  - Obtain meteorological data from all WGRs
  - Resolve issues with the meteorological data for many sites
- Implement the full statistical ensemble system
  - Dependent on the quality of the WGR power and meteorological data
  - Ensembles will help most in situations with large uncertainty
- Take advantage of regime-based and event-based techniques
  - Regime-based MOS
  - Event-based MOS

Relative Role of NWP and MOS in Day-Ahead Forecast Performance: An Example
- The gap between the blue and red lines is a measure of the contribution of WGR data to forecast performance via MOS
- The raw NWP forecasts have an average MAE improvement over persistence of ~40%, and the MOS procedure increases that to ~60%
- The MOS improvement increment varies from week to week (probably a regime effect)
[Figure: weekly power production forecast MAEs for an individual WGR in the eastern US for an 8-week period in the fall of 2008; MAEs are scaled by the MAE of a persistence forecast for the entire 8-week period]
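The persistence-referenced improvement quoted above follows the usual skill-style calculation sketched below; the observation and forecast arrays are invented, so the printed fractions will not match the ~40% and ~60% figures from the slide:

```python
import numpy as np

observed = np.array([55, 60, 72, 80, 76, 64, 50, 45], dtype=float)   # MW
nwp_fcst = np.array([50, 58, 70, 85, 70, 60, 55, 48], dtype=float)   # raw NWP-based forecast
mos_fcst = np.array([53, 59, 71, 82, 74, 62, 52, 46], dtype=float)   # after MOS adjustment

persistence = np.roll(observed, 1)     # previous observed value used as the forecast
persistence[0] = observed[0]

def mae(f, o):
    return np.abs(f - o).mean()

mae_per = mae(persistence, observed)
improvement_nwp = 1.0 - mae(nwp_fcst, observed) / mae_per   # fractional MAE improvement over persistence
improvement_mos = 1.0 - mae(mos_fcst, observed) / mae_per
print(improvement_nwp, improvement_mos)
```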

Impact of Weather Regimes
Example: power production forecasts during AESO's Alberta Pilot Project
- Significant winter wind regimes in Alberta were identified for the 2007-08 season
- Forecast performance was analyzed by regime
- A shallow cold air (SCA) regime occurs when slowly moving cold air from the N or E undercuts a warmer air mass typically characterized by strong W or SW winds
- Power production forecast error was much larger for the SCA regime than for the non-SCA cases, primarily because the characteristics of the NWP forecast errors were quite different in SCA and non-SCA regimes
© 2007 AWS Truewind, LLC

Event-Specific Forecasting Example
- Large system-wide ramps on multiple time scales occurred in the 10 PM to midnight period on March 10, 2009
- Caused by a southward-propagating cold front
- Investigated as part of the development of ELRAS (ERCOT Large Ramp Alert System)

March 10-11 Case: Precursor Conditions
[Figure (top): measured wind speed at a north Texas WGR]
[Figure (right): NWP-produced map of 15-minute wind speed change at 7:15 PM CDT]

Use of an Event-Tracking Parameter
- Parameters: the distance to and the amplitude of the maximum positive wind speed change along a radial path from the forecast site
- Example: the parameters indicate a consistent and coherent approach of the feature
- The approach can be used with NWP model output data and/or offsite measured data
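A minimal sketch of such an event-tracking parameter is given below, assuming wind speed change values sampled along a radial path upstream of the forecast site; the data and the 10 km spacing are hypothetical, not the ELRAS implementation:

```python
import numpy as np

# Hypothetical 15-minute wind speed change (m/s) sampled every 10 km along the path
distance_km = np.arange(0, 210, 10)
speed_change = np.array([0.2, 0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 1.0, 2.5, 5.5,
                         6.8, 4.0, 2.0, 1.0, 0.5, 0.3, 0.2, 0.1, 0.1, 0.0, 0.0])

def track_event(distance_km, speed_change):
    """Return (distance to, amplitude of) the maximum positive wind speed change."""
    i = int(np.argmax(speed_change))
    return distance_km[i], speed_change[i]

dist, amp = track_event(distance_km, speed_change)
print(dist, amp)   # a ramp-producing feature ~100 km away with a 6.8 m/s speed jump
```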

Ramp events have different meteorological causes and thus the optimal parameters for tracking and predicting them are different

Summary
- State-of-the-art forecasts are produced from a combination of physics-based and statistical models
- ERCOT forecasts have been generated from an early version of the forecast system that has minimal reliance on WGR data
- The ERCOT forecast system will soon be expanded to include an ensemble of NWP models and MOS methods, which rely more heavily on WGR data
- Thus far, WGR and system data issues have significantly limited both the perceived and actual performance of the ERCOT forecasts
- ERCOT system-wide day-ahead forecasts have lower MAPE than in the Alberta Pilot Project but higher than in California and New York
- Cases with high system-wide error have often been found to be associated with large errors in government-run model predictions or with situations of high uncertainty (multi-model ensembles should help in these cases)
- A considerable amount of ERCOT-focused forecasting R&D is in progress to improve the forecasts beyond the expected gains from better WGR data and the expanded forecast system