MOS Performance

MOS significantly improves on the skill of model output. National Weather Service verification statistics have shown a narrowing gap between human and MOS forecasts.

Cool Season Min Temp – 12 UTC Cycle, Average Over 80 US Stations

Prob. of Precip. – Cool Season (0000/1200 UTC Cycles Combined)

MOS Won the Department Forecast Contest in 2003 For the First Time!

Average or Composite MOS

There has been some evidence that an average or consensus MOS is even more skillful than individual MOS output. Vislocky and Fritsch (1997) found that an average of two or more MOS forecasts (CMOS) outperformed the individual MOS forecasts and many human forecasters in a forecasting competition.

Some Questions

How does current MOS performance, driven by far superior models, compare with that of NWS forecasters around the country?
How skillful is a composite MOS, particularly if one weights the members by past performance?
How does relative human/MOS performance vary by forecast projection, region, large one-day variations, or when conditions depart greatly from climatology?
Considering the results, what should be the role of human forecasters?

This Study

August 2003 – August 2004 (12 months). 29 stations, all at major NWS Weather Forecast Office (WFO) sites. Evaluated MOS predictions of maximum and minimum temperature, and probability of precipitation (POP).

National Weather Service locations used in the study.

Forecasts Evaluated

NWS: Forecasts by real, live humans
EMOS: Eta MOS
NMOS: NGM MOS
GMOS: GFS MOS
CMOS: Average of the above three MOSs
WMOS: Weighted MOS, with each member weighted by its performance during a preceding training period (length varying by station); a sketch of this weighting follows below
CMOS-GE: A simple average of the two best MOS forecasts, GMOS and EMOS
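To make the weighting concrete, here is a minimal Python sketch of a performance-weighted consensus. The inverse-MAE weighting and the function names are illustrative assumptions; the study does not state its exact weighting formula.

```python
# Hypothetical sketch of the WMOS idea: weight each MOS member by its
# accuracy over a recent training window. Inverse-MAE weights are an
# assumption for illustration, not the study's actual scheme.
import numpy as np

def weighted_mos(train_fcsts, train_obs, today_fcsts):
    """train_fcsts: (n_days, n_members) member forecasts in the training window
    train_obs:   (n_days,) verifying observations
    today_fcsts: (n_members,) current member forecasts"""
    mae = np.abs(train_fcsts - train_obs[:, None]).mean(axis=0)
    weights = (1.0 / mae) / np.sum(1.0 / mae)  # more accurate members count more
    return float(np.dot(weights, today_fcsts))

# CMOS is the equal-weight special case: np.mean(today_fcsts)
```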

The Approach: Give the NWS the Advantage!

The 08-10Z-issued NWS forecast is matched against the previous 00Z forecast from the models/MOS.
–The NWS has the 00Z model data available, plus the added advantage of watching conditions develop since 00Z.
–The models of course can't look at the NWS, but the NWS looks at the models.
NWS forecasts go out 48 hours (the models out 60), so the analysis includes:
–Two maximum temperatures (MAX-T),
–Two minimum temperatures (MIN-T), and
–Four 12-hr POP forecasts.

Temperature Comparisons

Temperature

MAE (°F) for the seven forecast types for all stations, all time periods, 1 August 2003 – 1 August 2004.
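For reference, the MAE used throughout these comparisons is the mean absolute difference between forecast and observed values:

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\lvert f_i - o_i\rvert$$

where $f_i$ is the forecast, $o_i$ the verifying observation, and $N$ the number of forecast-observation pairs.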

Large One-Day Temperature Changes

MAE (°F) for each forecast type during periods of large temperature change (10 °F over 24 hr), 1 August 2003 – 1 August 2004. Includes data for all stations.

MAE (°F) for each forecast type during periods of large departure (20 °F) from daily climatological values, 1 August 2003 – 1 August 2004.

Number of days each forecast is the most accurate, all stations. In (a), ties are counted only when the most accurate temperatures are exactly equivalent. In (b), ties are cases when the most accurate temperatures are within 2 °F of each other (the looser tie definition).

Number of days each forecast is the least accurate, all stations. In (a), ties are counted only when the least accurate temperatures are exactly equivalent. In (b), ties are cases when the least accurate temperatures are within 2 °F of each other (the looser tie definition).

Time series of MAE of MAX-T for period one for all stations, 1 August 2003 – 1 August 2004. The mean temperature over all stations is shown with a dotted line. 3-day smoothing is performed on the data. Note that the time series are highly correlated.

Time series of bias in MAX-T for period one for all stations, 1 August 2003 – 1 August 2004. Mean temperature over all stations is shown with a dotted line. 3-day smoothing is performed on the data. Note the cold spell.

MAE for all stations, 1 August 2003 – 1 August 2004, sorted by geographic region. MOS seems to have the most problems at high-elevation stations.

Bias for all stations, 1 August 2003 – 1 August 2004, sorted by geographic region.

Precipitation Comparisons

Brier Scores for Precipitation for all stations for the entire study period.
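The Brier score used for these comparisons is the standard mean squared error of the probability forecasts (lower is better):

$$\mathrm{BS} = \frac{1}{N}\sum_{i=1}^{N}\left(p_i - o_i\right)^2$$

where $p_i$ is the forecast probability of precipitation and $o_i$ is 1 if precipitation occurred and 0 otherwise.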

Brier Score for all stations, 1 August 2003 – 1 August 2004. 3-day smoothing is performed on the data.

Precipitation

Brier Score for all stations, 1 August 2003 – 1 August 2004, sorted by geographic region.

Reliability diagrams for period 1 (a), period 2 (b), period 3 (c) and period 4 (d).
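A reliability diagram compares binned forecast probabilities against observed frequencies. Below is a minimal sketch of the binning step, with ten equal-width bins as an illustrative choice.

```python
# Sketch of reliability-diagram binning: group forecasts by probability
# bin, then pair each bin's mean forecast probability with the observed
# precipitation frequency in that bin.
import numpy as np

def reliability_points(probs, outcomes, n_bins=10):
    """probs: forecast PoPs in [0, 1]; outcomes: 1 if precip observed, else 0."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)
    points = []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            points.append((probs[mask].mean(), outcomes[mask].mean()))
    return points  # points on the 1:1 line indicate perfect reliability
```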

NWS Main MOS site:

Ensemble MOS

Ensemble MOS forecasts are based on ensemble runs of the GFS model included in the 0000 UTC ensemble suite each day. These runs include the operational GFS, a control version of the GFS (run at lower resolution), and 10 pairs (positive and negative) of bred-perturbation runs (20 members). The operational GFS MOS prediction equations are applied to the output from each of the ensemble runs to produce separate bulletins in the same format as the operational message.
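A sketch of the idea: the same linear MOS equation is applied unchanged to each member's output, producing one MOS forecast per run. The function names and coefficients here are placeholders, not MDL's actual equations.

```python
# Illustrative sketch: one operational MOS regression equation applied to
# every ensemble member's predictor vector, one MOS forecast per run.
import numpy as np

def mos_equation(predictors, coeffs, intercept):
    """Linear MOS regression applied to one model run's predictors."""
    return intercept + float(np.dot(coeffs, predictors))

def ensemble_mos(member_predictors, coeffs, intercept):
    """member_predictors: one predictor vector per run (operational GFS,
    low-resolution control, and 20 bred-perturbation members)."""
    return [mos_equation(p, coeffs, intercept) for p in member_predictors]
```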

Gridded MOS

The NWS needs MOS on a grid for many reasons, including for use in its IFPS analysis/forecasting system. The problem is that MOS is available only at station locations. A recent project creates Gridded MOS: it takes MOS at individual stations and spreads it out based on proximity and height differences, and applies a topographic correction based on a reasonable lapse rate.
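A minimal sketch of that spreading step, assuming inverse-distance-squared weights and a standard-atmosphere lapse rate of 6.5 °C per km; the operational scheme's actual weights and corrections may differ.

```python
# Illustrative gridded-MOS spreading: carry station MOS temperatures to a
# grid point with inverse-distance weights, after adjusting each station
# value to the grid point's elevation with an assumed constant lapse rate.
import numpy as np

LAPSE_RATE = 0.0065  # deg C per meter (standard-atmosphere assumption)

def grid_temperature(grid_elev_m, stn_temps_c, stn_elevs_m, stn_dists_km):
    stn_temps_c = np.asarray(stn_temps_c, dtype=float)
    adjusted = stn_temps_c - LAPSE_RATE * (grid_elev_m - np.asarray(stn_elevs_m))
    w = 1.0 / np.maximum(np.asarray(stn_dists_km, dtype=float), 0.1) ** 2
    return float(np.sum(w * adjusted) / np.sum(w))
```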

Gridded MOS Site

Current “Operational” Gridded MOS

Grid-Based Model Bias Removal

Model biases are a reality. We need to get rid of them.

Grid-Based Bias Removal

In the past, the NWS has attempted to remove these biases only at observation locations (MOS, Perfect Prog), with the recent exception of gridded MOS. Removal of systematic model bias on forecast grids is needed. Why?
–All models have significant systematic biases.
–The NWS and others want to distribute graphical forecasts on a grid (IFPS).
–People and applications need forecasts everywhere, not only at ASOS sites.
–It is an important post-processing step for ensembles.

A Potential Solution: Obs-Based Grid-Based Bias Removal

Base the bias removal on observations, not analyses, using observation-site land-use category, elevation, and proximity. Land use and elevation are the key parameters that control physical biases.

Spatial differences in bias

The Method

Calculate model biases at observation locations by interpolating model forecasts to observation sites. Identify a land use, elevation, and lat-lon for each observation site. Calculate biases at these stations hourly; this yields a database of biases.
For every forecast hour, at every forecast grid point, search for nearby stations of similar land use and elevation for which the previous forecast value is close to that being forecast at the grid point in question.
–E.g., if the forecast temperature was 60, only use biases for nearby stations of similar land use/elevation associated with similar forecast temperatures.
Collect a sufficient number of these (using the closest ones first) to average out local effects (roughly a half dozen). Average the biases for these sites and apply the bias correction to the forecast. A minimal sketch follows below.
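Here is a minimal sketch of this procedure. The elevation and forecast-value tolerances are illustrative assumptions; "six stations" follows the slide's "roughly a half dozen", and bias is assumed to be forecast minus observation.

```python
# Sketch of the station-matching bias correction: find nearby stations of
# similar land use, elevation, and prior forecast value, average their
# biases, and subtract that mean bias from the grid-point forecast.
import numpy as np

def correct_forecast(fcst, grid_elev, grid_landuse, stations):
    """stations: iterable of dicts with keys
    'bias', 'elev', 'landuse', 'dist', 'prior_fcst'."""
    similar = [s for s in stations
               if s['landuse'] == grid_landuse
               and abs(s['elev'] - grid_elev) < 200.0   # meters (assumed)
               and abs(s['prior_fcst'] - fcst) < 5.0]   # same forecast regime
    similar.sort(key=lambda s: s['dist'])               # closest ones first
    nearby = similar[:6]                                # roughly a half dozen
    if not nearby:
        return fcst                                     # no correction possible
    mean_bias = float(np.mean([s['bias'] for s in nearby]))
    return fcst - mean_bias  # bias = forecast minus observation (assumed)
```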

Raw 12-h Forecast / Bias-Corrected Forecast

Salt Lake City

Bozeman

The End