
Mt. Mansfield, VT

Aka… New England Days 4-7 Forecast Test
Paul A. Sisson, NOAA/NWS Weather Forecast Office, Burlington, Vermont (BTV)
Joseph Dellicarpini, NOAA/NWS Weather Forecast Office, Taunton, Massachusetts (BOX)
Michael Ekster, NOAA/NWS Weather Forecast Office, Gray, Maine (GYX)
Todd Foisy, NOAA/NWS Weather Forecast Office, Caribou, Maine (CAR)
David Radell and Jeff Waldstreicher, NOAA/NWS Eastern Region Headquarters, Bohemia, New York (ERH)

Acknowledgements
The real workers: Matt Belk (BOX), Conor Lahiff (BTV), Margaret Curtis (GYX), Roman Berdes (CAR)
The NWS Central Region Blender team
Tim Barker, SOO, BOI (BOIVerify)

Outline
- Background
- Motivation
- The Experiment
- Verification Results
- Summary
"The Blend is your friend"

Background
Many studies of forecast/model consensus are in general agreement that the consensus forecast tends to be the best forecast (Gyakum 1986; Fritsch et al. 2000; Roebber 2014; etc.).
Baars and Mass (2005), on consensus/weighted MOS: "competitive or superior to human forecasts at nearly all locations." "Human forecasts are most skillful compared to MOS during the first forecast day and for periods when temperatures differ greatly from climatology."

Motivation
- Work efficiently
- Improve accuracy
- Improve consistency
- Focus on what we do best (significant departures from climatology, and the short term)
Note: don't reinvent the wheel.
"The Blend is your friend"

Consistency Problems

The Days 4-7 Forecast Experiment
Oct-Mar 2014
- All offices start with the same model initializations and blends
- Forecaster surveys and verification
Questions:
- Does the blend outperform Gridded MOS?
- Is the forecast more consistent?
- Does the method allow forecasters to be more efficient, leaving time for more important tasks?

Nomenclature
CONS = Consensus. A consensus data set is calculated by combining/averaging a "list" of guidance data sources.
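
At heart, the consensus step is a pointwise average of the guidance grids. A minimal sketch in Python, with toy grids and a hypothetical `consensus` helper (this is an illustration, not the operational GFE/BOIVerify code):

```python
import numpy as np

def consensus(grids):
    """Equal-weight consensus: average a list of guidance grids
    (each a 2-D array on the same 2.5 km grid)."""
    stack = np.stack(grids)      # shape: (n_sources, ny, nx)
    return stack.mean(axis=0)    # simple unweighted mean at every point

# Toy example: three MaxT guidance grids for the same valid time
gfs   = np.array([[70.0, 72.0], [68.0, 71.0]])
ecmwf = np.array([[72.0, 73.0], [69.0, 70.0]])
cmc   = np.array([[71.0, 71.0], [70.0, 72.0]])
cons = consensus([gfs, ecmwf, cmc])
print(cons)  # each point is the mean of the three sources
```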

BC = Bias Correction. Bias corrections are applied to the individual guidance sources using the BOIVerify software. The BCCONS data set is generated by bias-correcting the individual components before forming the consensus. The bias correction uses the previous 14-day period.
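
In the same spirit, an additive bias correction removes the mean error accumulated over the trailing training window. A minimal sketch (hypothetical names and toy data; the operational BOIVerify scheme is more sophisticated):

```python
import numpy as np

def bias_correct(past_forecasts, past_observations, new_forecast):
    """Additive bias correction: subtract the mean forecast-minus-observed
    error over a trailing training window (here, the prior 14 days)."""
    fcst = np.asarray(past_forecasts, dtype=float)
    obs  = np.asarray(past_observations, dtype=float)
    bias = (fcst - obs).mean(axis=0)   # mean error over the window
    return new_forecast - bias         # remove the systematic error

# Toy example: a source that has run 2 degrees too warm for 14 days
past_fcst = [74.0] * 14
past_obs  = [72.0] * 14
corrected = bias_correct(past_fcst, past_obs, 75.0)
print(corrected)  # 73.0: the +2 degree warm bias is removed
```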

BLENDS
Blends are weighted combinations of the various data sets.

Forecast Databases

Name    | Description                                                  | Type
CMCnh   | Canadian Meteorological Center global model                  | Raw model output
GFS40   | NWS Global Forecast System (GFS, 40 km)                      | Raw model output
ECMWF   | European Centre Medium-Range Forecast model                  | Raw model output
SREF    | NWS Short Range Ensemble Forecast (mean)                     | Raw model output
NAM12   | NWS North American Model (NAM, 12 km)                        | Raw model output
NAMDNG5 | NWS NAM downscaled to 5 km grid                              | Raw model output
ADJMEX  | GFS 40 km adjusted with GFS extended MOS point forecasts     | Raw model background with MOS
ADJMEN  | GFS 40 km adjusted with GFS ensemble mean MOS point forecasts | Raw model background with MOS
ADJECE  | ECMWF adjusted with ECMWF MOS point forecasts                | Raw model background with MOS
ADJECM  | ECMWF adjusted with ECMWF ensemble mean MOS point forecasts  | Raw model background with MOS
MOSG25  | GFS Gridded Model Output Statistics (2.5 km)                 | GFS MOS

Note: all forecasts are mapped/downscaled to a 2.5 km grid.

Forecast Databases

Name      | Description                                             | Databases
CONSAll   | Consensus of raw models and MOS                         | CMCnh, GFS40, ECMWF, SREF, NAM12, NAMDNG5, MOSG25, ADJMEX, ADJMEN, ADJECE, ADJECM (equal weights)
BCCONSAll | Consensus of bias-corrected raw model and MOS databases | CMCnhBC, GFS40BC, ECMWFBC, SREFBC, NAM12BC, NAMDNG5BC, MOSG25BC, ADJMEXBC, ADJMENBC, ADJECEBC, ADJECMBC
HPCGuide  | Human forecast by the Weather Prediction Center         | Human-adjusted blend
Official  | Previous Weather Forecast Office forecast               | Human-adjusted blend

SuperBlend = Previous forecast + latest blends:
Official (25%) + HPCGuide (25%) + CONSAll (25%) + BCCONSAll (25%)
A 50/50 man/machine mix.
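
The SuperBlend weighting can be sketched directly from those percentages. A toy illustration with scalar stand-ins for the forecast grids (hypothetical names; not the operational code):

```python
import numpy as np

# SuperBlend weights from the slide: 50% human (Official + HPCGuide),
# 50% machine (CONSAll + BCCONSAll).
WEIGHTS = {"Official": 0.25, "HPCGuide": 0.25, "CONSAll": 0.25, "BCCONSAll": 0.25}

def superblend(sources):
    """Weighted blend of named forecast grids (or scalars, for this toy case)."""
    return sum(WEIGHTS[name] * np.asarray(grid, dtype=float)
               for name, grid in sources.items())

blended = superblend({
    "Official": 70.0, "HPCGuide": 72.0, "CONSAll": 71.0, "BCCONSAll": 71.0,
})
print(blended)  # 71.0
```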

Verification Results
Elements verified: MaxT, MinT, T, Td, Wind Speed, PoP.
The Real-Time Mesoscale Analysis, adjusted by observations, is used as truth for verification and for bias correction.
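
Verifying a forecast grid against an analysis reduces to a pointwise error statistic such as mean absolute error. A minimal sketch (toy arrays, hypothetical `mae` helper; not the BOIVerify implementation):

```python
import numpy as np

def mae(forecast, analysis):
    """Mean absolute error of a forecast grid against the verifying
    (obs-adjusted, RTMA-style) analysis grid."""
    f = np.asarray(forecast, dtype=float)
    a = np.asarray(analysis, dtype=float)
    return np.abs(f - a).mean()

fcst = np.array([68.0, 71.0, 74.0])
anal = np.array([70.0, 70.0, 73.0])
print(mae(fcst, anal))  # (2 + 1 + 1) / 3
```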

RTMA MaxT

Obs to ADJ RTMA MaxT

RESULTS


Day 5 MinT

Day 4 PoP Reliability
[Reliability diagram; annotations mark the under-forecast and over-forecast sides of the 1:1 line]
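
A reliability diagram like this is built by binning the PoP forecasts and comparing each bin's mean forecast probability with the observed relative frequency. A hedged sketch with toy data and a hypothetical `reliability` helper:

```python
import numpy as np

def reliability(prob_fcsts, outcomes, bins=np.linspace(0.0, 1.0, 11)):
    """For each probability bin, pair the mean forecast probability with the
    observed frequency. Observed frequency below forecast probability means
    over-forecasting; above it, under-forecasting."""
    p = np.asarray(prob_fcsts, dtype=float)
    o = np.asarray(outcomes, dtype=float)   # 1 if precip observed, else 0
    idx = np.digitize(p, bins[1:-1])        # assign each forecast to a bin
    rows = []
    for b in range(len(bins) - 1):
        mask = idx == b
        if mask.any():
            rows.append((p[mask].mean(), o[mask].mean(), int(mask.sum())))
    return rows  # (mean forecast prob, observed frequency, count) per bin

# Toy example: 40% PoP forecasts that verify only 25% of the time (over-forecast)
probs = [0.4] * 8
obs   = [1, 1, 0, 0, 0, 0, 0, 0]
for f, o, n in reliability(probs, obs):
    print(f, o, n)
```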

Survey Results
Compare before, during, and after the test.
Before: Gridded MOS method. After: SuperBlend method.

Impact of SuperBlend on Days 4-8 Forecast Process
[Survey ratings: MEAN 4.69; Mid-Test Survey MEAN 4.48; Pre-Test Survey (MOSGuide) MEAN 3.45]

Forecaster Modifications: What Doesn't Work Well
[Survey responses comparing SuperBlend and MOSGuide]

SuperBlend Performance: Regime Changes / Anomalous Conditions

Forecaster Overall Evaluation of Approach
[Survey ratings: MEAN 4.64; Mid-Test Survey MEAN 4.31; Pre-Test Survey (MOSGuide) MEAN 3.16]

Consistency, Mar 2013
[Map legend: consistency bins ranging from less than 80% up to 100%]

QPF

SnowAmt

Summary
- Blends provide a more accurate and consistent starting point.
- Forecaster surveys show confidence in the method.
- More time is freed for operations vs. grid preparation.

Caveat: "Stats are for losers." "The final score is for winners."
Bill Belichick, Head Coach, New England Patriots (Dec 14)

Or… stated another way
Caveat: stats are helpful, but clearly communicating accurate and timely weather information is the goal.

The End

Forecast Databases

Name       | Description                                     | Databases
CONSRaw    | Consensus of raw models                         | CMCnh, GFS40, ECMWF, SREF, NAM12, NAMDNG5 (equal weights)
BCCONSRaw  | Consensus of bias-corrected raw model databases | CMCnhBC, GFS40BC, ECMWFBC, SREFBC, NAM12BC, NAMDNG5BC (equal weights)
CONSMOS    | Consensus of MOS databases                      | MOSG25, ADJMEX, ADJMEN, ADJECE, ADJECM, EKDMOS, ADJMAV, ADJLAV, ADJMET (equal weights)
BCCONSMOS  | Consensus of bias-corrected MOS databases       | MOSG25BC, ADJMEXBC, ADJMENBC, ADJECEBC, ADJECMBC, EKDMOSBC, ADJMAVBC, ADJLAVBC, ADJMETBC (equal weights)
AllBlend   | 50/50 blend of Official and CONSAll             | Official, CONSAll
BCAllBlend | 50/50 blend of Official and BCCONSAll           | Official, BCCONSAll
RawBlend   | 50/50 blend of Official and CONSRaw             | Official, CONSRaw
BCRawBlend | 50/50 blend of Official and BCCONSRaw           | Official, BCCONSRaw

Name        | Description                                             | Databases
SuperBlend  | Previous forecast + latest blends                       | Official (25%), HPCGuide (25%), CONSAll (25%), BCCONSAll (25%)
CONSAll     | Consensus of raw models and MOS                         | CMCnh, GFS40, ECMWF, SREF, NAM12, NAMDNG5, MOSG25, ADJMEX, ADJMEN, ADJECE, ADJECM, HPCERP (equal weights)
BCCONSAll   | Consensus of bias-corrected raw model and MOS databases | CMCnhBC, GFS40BC, ECMWFBC, SREFBC, NAM12BC, NAMDNG5BC, MOSG25BC, ADJMEXBC, ADJMENBC, ADJECEBC, ADJECMBC
CONSRaw     | Consensus of raw models                                 | CMCnh, GFS40, ECMWF, SREF, NAM12, NAMDNG5 (equal weights)
BCCONSRaw   | Consensus of bias-corrected raw model databases         | CMCnhBC, GFS40BC, ECMWFBC, SREFBC, NAM12BC, NAMDNG5BC (equal weights)
CONSShort   | CONSRaw databases, local WRFs, and short-term MOS       | CMCnh, GFS40, ECMWF, SREF, NAM12, NAMDNG5, RUC13BC, HIRESWarwBC, HIRESWnmmBC, BTV4, BTV12, BTV6, ADJMAV, ADJMET, ADJECE, ADJLAV (equal weights)
BCCONSShort | Consensus of bias-corrected CONSShort databases         | CMCnhBC, GFS40BC, ECMWFBC, SREFBC, NAM12BC, NAMDNG5BC, RUC13BC, HIRESWarwBC, HIRESWnmmBC, BTV4BC, BTV12BC, BTV6BC, ADJMAVBC, ADJMETBC, ADJECEBC, ADJLAVBC (equal weights)
CONSMOS     | Consensus of MOS databases                              | MOSG25, ADJMEX, ADJMEN, ADJECE, ADJECM, EKDMOS, ADJMAV, ADJLAV, ADJMET (equal weights)
BCCONSMOS   | Consensus of bias-corrected MOS databases               | MOSG25BC, ADJMEXBC, ADJMENBC, ADJECEBC, ADJECMBC, EKDMOSBC, ADJMAVBC, ADJLAVBC, ADJMETBC (equal weights)
AllBlend    | 50/50 blend of Official and CONSAll                     | Official, CONSAll
BCAllBlend  | 50/50 blend of Official and BCCONSAll                   | Official, BCCONSAll
RawBlend    | 50/50 blend of Official and CONSRaw                     | Official, CONSRaw
BCRawBlend  | 50/50 blend of Official and BCCONSRaw                   | Official, BCCONSRaw
