Fly - Fight - Win
25th Operational Weather Squadron
University of Arizona 1.8km WRF Verification
2Lt Erik Neemann, Weather Operations Officer
30 Apr 08



Purpose
- Analyze data from the University of Arizona 1.8km WRF model for use in the 25 OWS forecast process
- Provide feedback for use in model improvements
- Study included forecasts for winds, temperatures, and stability (surface-based CAPE)
- Additional purpose: derive a tool to approximate wind gust speed from model sustained winds

Methodology
- Compared actual observations to 1.8km WRF forecasts (GFS and NAM output from 12z model runs)
- Three 25 OWS Arizona forecast locations used:
  - Davis-Monthan AFB (KDMA)
  - Ft. Huachuca (KFHU)
  - Luke AFB (KLUF)

Winds
- Model wind forecasts compared to 18z and 00z observations only
- Data were discarded if observed gusts or sustained winds were less than 15 knots
- If gusts were reported, gusts were used; if no gusts were reported, sustained winds were used
- Results cover 13 Dec 07 to 30 Apr 08: KDMA 38 obs, KFHU 58 obs, KLUF 26 obs

Winds
- Speed error determined by the average difference between observed and forecast speed
- Ratio of observed speed to forecast speed used to determine a "gust coefficient"
- Direction error determined by the absolute value of the difference between observed and forecast wind direction
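These three metrics can be sketched in code. This is a minimal illustration, not the squadron's actual verification script; function names and sample values are assumptions, and the only departure from the slide is that the direction difference is wrapped so it never exceeds 180 degrees.

```python
# Minimal sketch of the three wind-verification metrics described above.
# Function names and sample values are illustrative assumptions.

def speed_error(obs_kt, fcst_kt):
    """Average signed difference between observed and forecast speed (knots)."""
    return sum(o - f for o, f in zip(obs_kt, fcst_kt)) / len(obs_kt)

def gust_coefficient(obs_kt, fcst_kt):
    """Ratio of mean observed speed to mean forecast speed."""
    return sum(obs_kt) / sum(fcst_kt)

def direction_error(obs_deg, fcst_deg):
    """Mean absolute direction difference, wrapped to at most 180 degrees
    (the wrap is an added refinement; the slide states a plain absolute difference)."""
    diffs = [abs(o - f) % 360 for o, f in zip(obs_deg, fcst_deg)]
    return sum(min(d, 360 - d) for d in diffs) / len(diffs)
```

With observed speeds of 20 and 30 knots against forecasts of 10 and 15, `speed_error` returns 12.5 knots and `gust_coefficient` returns 2.0, illustrating the kind of roughly 2:1 underestimation reported in this study.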

Wind Speed (knots) [chart]

Wind Speed Trends
- Model forecasts of wind speed were best at Davis-Monthan and worst at Luke
- On average across all locations, speeds were underestimated by about 10.5 knots
- Observed-to-forecast ratios were around 2:1 at both DM and Ft. Huachuca, while Luke was closer to 4:1
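A gust-approximation tool built from these ratios might look like the following sketch. The rounded per-site coefficients reflect the ratios above, but the dictionary and function are hypothetical illustrations, not an operational product.

```python
# Rounded observed-to-forecast ratios from this study (illustrative assumptions).
GUST_COEFF = {"KDMA": 2.0, "KFHU": 2.0, "KLUF": 4.0}

def estimate_gust_kt(station_icao, forecast_sustained_kt):
    """Scale the model's forecast sustained wind by the site's gust coefficient."""
    return GUST_COEFF[station_icao] * forecast_sustained_kt
```

Given the poor wind-speed performance at Luke, the KLUF coefficient should be treated with far less confidence than the other two.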

Wind Direction (degrees) [chart]

Wind Direction Trends
- Forecasts were most accurate at Ft. Huachuca and worst at Luke AFB
- Overall, the models did fairly well at all three locations, with errors generally less than 35 degrees

Temperature
- Compared model Max/Min temperature forecasts to observed daily Max and Min
- Cases limited to days when both GFS and NAM model runs were available, 13 Dec 07 to 30 Apr 08 (78 days)
- Absolute value of error used to examine accuracy
- Average error used to determine potential model bias
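The two temperature measures can be sketched as follows. This is an illustrative sketch with assumed names and invented sample values, not the study's code; here error is taken as forecast minus observed, so a positive average error indicates a warm bias.

```python
# Minimal sketch of the two temperature-verification measures described above.

def mean_absolute_error(obs_c, fcst_c):
    """Absolute value of error: overall accuracy, in degrees Celsius."""
    return sum(abs(f - o) for o, f in zip(obs_c, fcst_c)) / len(obs_c)

def mean_error(obs_c, fcst_c):
    """Average signed error (forecast - observed): positive suggests a warm bias,
    negative a cold bias."""
    return sum(f - o for o, f in zip(obs_c, fcst_c)) / len(obs_c)
```

Note that a near-zero mean error can coexist with a sizable absolute error when warm and cold misses cancel, which is why the study tracks both.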

Min Temperature (Celsius) [chart]

Min Temperature Trends
- Min temperature forecasts were best at DM and worst at Luke
- GFS more accurate at DM and FHU; both models about the same at Luke
- Cold biases at DM and FHU; strong warm bias at Luke
- Absolute error for all locations was about 1.5 °C

Max Temperature (Celsius) [chart]

Max Temperature Trends
- Max temperature forecasts were best at Luke and DM, while performing poorly at Ft. Huachuca
- Both models had similar accuracy
- Warm biases at DM and Luke; strong cold bias at FHU
- Absolute error for all locations was about 1.5 °C

Stability
- Stability results inconclusive due to the scarcity of positive CAPE at any location during the period
- A more robust dataset is expected during the summer months

Conclusion
- The NAM-driven model forecast wind direction well for all locations
- Gusts estimated from model wind speeds may be usable at DM and Ft. Huachuca with a corrective adjustment, but the approach performed poorly at Luke
- Max temps most reliable at DM and Luke using the NAM model