
A primer on ensemble weather prediction and the use of probabilistic forecasts
Tom Hamill, NOAA Earth System Research Laboratory, Physical Sciences Division
Presentation to the 2011 Albany Utility Wind Integration Workshop

Uncertainty is inevitable, and “state dependent.” The Lorenz (1963) model: a toy dynamical system that illustrates the problem of “deterministic chaos” that we encounter in weather prediction models. Forecast uncertainty grows more quickly for some initial conditions than for others. (From Tim Palmer’s chapter in the 2006 book “Predictability of Weather and Climate.”)
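To make the idea concrete, here is a minimal Python sketch (not from the slides) of the Lorenz (1963) system: two initial conditions differing by one part in a million eventually diverge, and how quickly they do so depends on where on the attractor they start.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) equations."""
    x, y, z = state
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

def integrate(state, dt=0.01, nsteps=1500):
    """Advance the state with a simple fourth-order Runge-Kutta scheme."""
    for _ in range(nsteps):
        k1 = lorenz63(state)
        k2 = lorenz63(state + 0.5 * dt * k1)
        k3 = lorenz63(state + 0.5 * dt * k2)
        k4 = lorenz63(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

# Two nearly identical initial conditions: deterministic chaos makes the
# forecasts diverge, and the divergence rate is state dependent.
ic = np.array([1.0, 1.0, 1.0])
print(integrate(ic))
print(integrate(ic + 1e-6))  # visibly different after enough steps
```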

Amount of uncertainty depends on weather regime. (Example panels: high wind uncertainty when the timing and strength of a trough are in question; lower wind uncertainty when no major weather systems are around.)

Forecast uncertainty is also contributed by model imperfections: “parameterizations.” Much of the weather occurs at scales smaller than those resolved by the forecast model, so the model must treat, or “parameterize,” the effects of the sub-gridscale on the resolved scales. Problems: (1) there is no variability at scales smaller than the grid box in such a model; (2) parameterizations are approximations, and often not good ones.

“Ensemble prediction”: run the model many times from slightly different initial conditions, perhaps with different forecast models or with built-in stochastic effects to account for model uncertainty.

Desirable properties of probabilistic forecasts & common methods to evaluate them.
– Reliability/calibration: when you say X%, it happens X% of the time. (Calibration: the observed and the ensemble can be considered samples from the same probability distribution.)
– Specificity of the forecast, i.e., sharpness: deviations from the climatological distribution. We want forecasts as sharp as they can be, as long as they’re reliable.

Reliability diagrams (built with lots of samples)

Reliability diagrams. The curve tells you what the observed frequency was each time you forecast a given probability. This curve ought to lie along the y = x line. Here it shows that the ensemble-forecast system over-forecasts the probability of light rain. Ref: Wilks, Statistical Methods in the Atmospheric Sciences.

Reliability diagrams. The inset histogram tells you how frequently each probability was issued. Perfectly sharp: the frequency of usage populates only 0% and 100%. Ref: Wilks, Statistical Methods in the Atmospheric Sciences.

Reliability diagrams. BSS = Brier Skill Score: BSS = 1 − BS(forecast) / BS(climatology), where BS(·) is the Brier Score, which you can think of as the squared error of a probabilistic forecast. Perfect: BSS = 1.0; climatology: BSS = 0.0. Ref: Wilks, Statistical Methods in the Atmospheric Sciences.
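As a concrete illustration (not from the slides), here is a minimal Python sketch of the quantities behind these diagrams: the reliability curve, the frequency-of-usage histogram, and the Brier (skill) score. The function names and the ten-bin discretization are illustrative choices.

```python
import numpy as np

def brier_score(p, o):
    """Brier score: mean squared error of probability forecasts p
    against binary outcomes o (1 if the event occurred, else 0)."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return np.mean((p - o) ** 2)

def brier_skill_score(p, o):
    """BSS = 1 - BS/BS_climatology, with the climatological forecast
    taken as the observed base rate. 1.0 is perfect; 0.0 is no better
    than climatology."""
    o = np.asarray(o, float)
    clim = np.full_like(o, o.mean())
    return 1.0 - brier_score(p, o) / brier_score(clim, o)

def reliability_curve(p, o, nbins=10):
    """Observed relative frequency in each forecast-probability bin
    (the curve), plus how often each bin was used (the inset histogram)."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.clip(np.digitize(p, edges[1:-1]), 0, nbins - 1)
    obs_freq = np.array([o[idx == k].mean() if np.any(idx == k) else np.nan
                         for k in range(nbins)])
    usage = np.bincount(idx, minlength=nbins)
    return obs_freq, usage
```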

Sharpness. “Sharpness” measures the specificity of the probabilistic forecast. Given two reliable forecast systems, the one producing the sharper forecasts is preferable. It might be measured with the standard deviation of the ensemble about its mean. But we don’t want sharpness without reliability; that implies unrealistic confidence.

“Spread-error” relationships are important, too. (Figure: pdfs of increasing spread; the ensemble-mean error from a sample of a narrow pdf should on average be low, moderate for a medium-spread pdf, and large for a wide pdf.) Small-spread ensemble forecasts should have less ensemble-mean error than large-spread forecasts: in some sense, a conditional reliability dependent upon the amount of sharpness.
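A hedged sketch of how such a spread-error relationship might be checked, assuming arrays of ensemble forecasts and verifying observations are at hand; the function name and the quantile binning are illustrative choices, not from the slides.

```python
import numpy as np

def spread_error_relationship(ens, obs, nbins=5):
    """Bin cases by ensemble spread (std. dev. about the ensemble mean)
    and report the mean absolute ensemble-mean error in each bin. In a
    well-calibrated system, error should grow with spread.

    ens: (ncases, nmembers) ensemble forecasts
    obs: (ncases,) verifying observations
    """
    spread = ens.std(axis=1)              # sharpness of each forecast
    err = np.abs(ens.mean(axis=1) - obs)  # ensemble-mean error
    edges = np.quantile(spread, np.linspace(0.0, 1.0, nbins + 1))
    idx = np.clip(np.digitize(spread, edges[1:-1]), 0, nbins - 1)
    return np.array([err[idx == k].mean() for k in range(nbins)])
```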

General benefits from the use of ensembles: averaging many forecasts reduces error, and proper use of probabilistic information permits better decisions to be made based on risk tolerance.

Dangers of “ensemble averaging” (it smooths out meteorological features). (Figure: two panels of wind speed versus time, one showing the individual members crossing a decision threshold at different times, the other showing the ensemble average.) Here the ensemble tells you something useful: a wind ramp is coming, but its exact timing is uncertain. That information is lost if you boil the ensemble down to its average.
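A small synthetic illustration of this point (hypothetical numbers, not from the slides): every member contains a sharp wind ramp with an uncertain onset time, and averaging the members smears the ramp out.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 24.0, 0.25)  # forecast hours

# Each member ramps sharply from ~5 to ~15 m/s, but the onset time is
# shifted randomly by a few hours (timing uncertainty).
onsets = rng.normal(loc=12.0, scale=2.0, size=20)
members = np.array([5.0 + 10.0 / (1.0 + np.exp(-(t - t0))) for t0 in onsets])
mean = members.mean(axis=0)

# Individual members change rapidly; the ensemble mean ramps far more
# gradually, smoothing away the feature a decision-maker cares about.
print("max hourly change, one member:   ", np.diff(members[0]).max() / 0.25)
print("max hourly change, ensemble mean:", np.diff(mean).max() / 0.25)
```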

Two general methods of providing you with useful local probabilistic information:
– Dynamical downscaling: run high-resolution ensemble systems to provide local detail.
– Statistical downscaling: post-process a coarser-resolution model to fill in the local detail and the missing time scales.

Potential value of dynamical downscaling. (Panels: SREF P > 0.5″; 4-km SSEF P > 0.5″; verification.) An example from high-resolution ensembles run during the NSSL-SPC Hazardous Weather Test Bed, forecast initialized 20 May 2010. With warm-season precipitation, the coarse resolution and parameterized convection of the operational SREF are clearly inferior to the 4-km, resolved convection of the SSEF.

We still have a way to go to provide sharp, reliable forecasts directly from high-resolution ensembles. Case: the Arkansas floods. (Panels: SREF P > 2.0″; 4-km SSEF P > 2.0″; verification, radar QPE.) An example from the NSSL-SPC Hazardous Weather Test Bed, forecast initialized 10 June 2010. A less-than-30% probability of > 2 inches of rainfall from the SSEF, while better than the SREF, probably does not set off alarm bells in forecasters’ heads.

Statistical downscaling. Direct probabilistic forecasts from a global model may be unreliable, and perhaps of too coarse a time granularity for your purposes (e.g., winds only every 3 h). Assume you have:
– a long time series of wind measurements;
– a long time series of forecasts from a fixed model that hasn’t changed (“reforecasts”).
You can then correct for discrepancies between forecast and observed using past data, adjust today’s forecast, and quantify its uncertainty. This is a proven technique in “MOS”; what’s new here is the especially long time series of ensemble forecasts, which is helpful for making statistical adjustments at long leads and for rare events.

Potential value of statistical downscaling using “reforecasts.” Post-processing with a large training data set can permit small-scale detail to be inferred from large-scale, coarse model fields.

An example of a statistical correction technique using those reforecasts. For each pair (e.g., the red box), on the left are old forecasts that are somewhat similar to this day’s ensemble-mean forecast; the boxed data on the right, the analyzed precipitation for the same dates as the chosen analog forecasts, can be used to statistically adjust and downscale the forecast. Analog approaches like this may be particularly useful for hydrologic ensemble applications, where an ensemble of weather realizations is needed as input to a hydrologic ensemble streamflow system.
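A minimal sketch of the analog idea described above, assuming gridded reforecasts and matching fine-scale analyses are available; the RMS similarity measure, the array layout, and the choice of 50 analogs are illustrative assumptions, not the specific implementation behind the slide.

```python
import numpy as np

def analog_downscale(todays_fcst, past_fcsts, past_analyses, nanalogs=50):
    """Reforecast-based analog post-processing (sketch).

    todays_fcst:   (ny, nx) today's ensemble-mean forecast, coarse grid
    past_fcsts:    (ndates, ny, nx) reforecasts from the same frozen model
    past_analyses: (ndates, nyf, nxf) fine-scale analyses, same dates
    Returns the analyzed fields for the dates whose reforecasts most
    resemble today's forecast: a downscaled "ensemble" of realizations.
    """
    # RMS difference between today's forecast and every past forecast
    dist = np.sqrt(((past_fcsts - todays_fcst) ** 2).mean(axis=(1, 2)))
    best = np.argsort(dist)[:nanalogs]  # dates of the closest analogs
    return past_analyses[best]
```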

A next-generation reforecast. Model: the NCEP GFS ensemble that will become operational later in 2011. Reforecast: at 00Z, compute the full 10-member forecast, every day, for the last 30 years, out to 16 days. Continue to generate real-time forecasts with this model for the next ~5 years. Reforecasts computed by late 2011. More details in the supplementary slides.

Making reforecast data available to you. Store 130 TB (fast access) of an “important,” agreed-upon subset of the data.
– We will design software to serve this out to you in several manners (http, ftp, OPeNDAP, etc.).
Archive the full 00Z reforecasts and initial conditions, ~778 TB; DOE is expected to store this for us (slow access).

Expected fields in the “fast” archive: the mean and every member. For wind energy: 10-m and 80-m winds and 80-m wind power, 3-hourly out to 72 h, then 6-hourly thereafter. Lots of other data (details in the backup slides).

Conclusions. Ensembles may provide significant value-added information to you. I’m interested in talking with you more to understand how ensemble information (especially reforecasts) can be tailored to help with your decision making.

Backup slides

Expected fields we’ll save in the reforecast “fast” archive. Mandatory-level data:
– Geopotential height, temperature, u, v at 1000, 925, 850, 700, 500, 300, 250, 200, 100, 50, and 10 hPa.
– Specific humidity at 1000, 925, 850, 700, 500, 300, 250, and 200 hPa.
– PV (K m² kg⁻¹ s⁻¹) on the θ = 320 K surface.
– Wind components and potential temperature on the 2-PVU surface.

Fixed fields to save once:
– field capacity
– wilting point
– land-sea mask
– terrain height

Proposed single-level fields for the “fast” archive:
– Surface pressure (Pa)
– Sea-level pressure (Pa)
– Surface (2-m) temperature (K)
– Skin temperature (K)
– Maximum temperature since last storage time (K)
– Minimum temperature since last storage time (K)
– Soil temperature (0–10 cm; K)
– Volumetric soil moisture content (proportion, 0–10 cm)
– Total accumulated precipitation since beginning of integration (kg/m²)
– Precipitable water (kg/m²; vapor only, no condensate)
– Specific humidity at 2 m AGL (kg/kg; instantaneous)
– Water equivalent of accumulated snow depth (kg/m²)
– CAPE (J/kg)
– CIN (J/kg)
– Total cloud cover (%)
– 10-m u- and v-wind components (m/s)
– 80-m u- and v-wind components (m/s)
– Sunshine duration (min)
– Snow depth water equivalent (kg/m²)
– Runoff
– Solid precipitation
– Liquid precipitation
– Vertical velocity (850 hPa)
– Geopotential height of the surface
– Wind power at 80 m (= density × wind speed³)
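For the wind-power field, a small sketch of the definition given in the list above (density times the cube of the 80-m wind speed). The constant density of 1.225 kg/m³ is an illustrative assumption; note also that the conventional kinetic power flux through a unit area carries an extra factor of ½, which the field as described omits.

```python
import numpy as np

def wind_power(u80, v80, density=1.225):
    """Wind power as defined in the archive list: density * speed^3
    at 80 m. u80, v80: 80-m wind components (m/s); density: air
    density (kg/m^3). Returns W/m^2 per unit rotor-swept area."""
    speed = np.hypot(u80, v80)  # wind speed from the two components
    return density * speed ** 3

# Example: u = 8, v = 6 m/s gives a 10 m/s wind -> 1225 W/m^2.
print(wind_power(8.0, 6.0))
```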

Proposed fields for the “fast” archive. Fluxes (W/m²; averages since last archive time):
– sensible heat net flux at surface
– latent heat net flux at surface
– downward long-wave radiation flux at surface
– upward long-wave radiation flux at surface
– upward short-wave radiation flux at surface
– downward short-wave radiation flux at surface
– upward long-wave radiation flux at nominal top
– ground heat flux

Uncalibrated ensemble? Here the observed is outside of the range of the ensemble, which was sampled from the pdf shown. Is this a sign of a poor ensemble forecast?

Uncalibrated ensemble? Here the observed is outside of the range of the ensemble, which was sampled from the pdf shown. Is this a sign of a poor ensemble forecast? You just don’t know; it’s only one sample.

(Panels: four example cases in which the observed ranks 1st of 21, 14th of 21, 5th of 21, and 3rd of 21 among the ensemble members plus the observation.)

Rank histograms. With lots of samples from many situations, you can evaluate the characteristics of the ensemble.
– A flat histogram happens when the observed is indistinguishable from any other member of the ensemble; the ensemble, one hopes, is reliable.
– A sloped histogram happens when the observed too commonly is lower than the ensemble members.
– A U-shaped histogram happens when there are either some low and some high biases, or when the ensemble doesn’t spread out enough.
Ref: Hamill, MWR, March 2001.
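A minimal sketch of how a rank histogram can be tallied (ties between members and the observation are ignored for simplicity); the function and variable names are illustrative.

```python
import numpy as np

def rank_histogram(ens, obs):
    """Tally the rank of each observation within its ensemble.

    ens: (ncases, nmembers) ensemble forecasts
    obs: (ncases,) verifying observations
    Returns counts of ranks 1..nmembers+1. Flat suggests the observed
    is indistinguishable from the members; sloped suggests bias;
    U-shaped suggests under-dispersion or mixed biases.
    """
    nmem = ens.shape[1]
    ranks = 1 + (ens < obs[:, None]).sum(axis=1)  # members below the obs
    return np.bincount(ranks, minlength=nmem + 2)[1:]

# Toy check: an ensemble and observation drawn from the same
# distribution should produce a roughly flat histogram.
rng = np.random.default_rng(1)
print(rank_histogram(rng.normal(size=(100000, 20)), rng.normal(size=100000)))
```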