Alex Gallagher and Dr. Robert Fovell


Understanding Differences in New York State Mesonet (NYSM) and ASOS Wind Observations
Alex Gallagher and Dr. Robert Fovell
SUNY Albany DAES
NROW XIX, 11/8/2018

ASOS vs. NYSM Comparison

Spec                 ASOS 1-min               NYSM
Anemometer type      Sonic                    Sonic
Mounting height      10 m                     10 m
Averaging interval   2 min                    5 min
Sampling interval    3 s gust, 5 s sustained  3 s
Number of stations   899 CONUS, 36 NY         125
Typical location     Airports                 Varies

CONUS ASOS Locations: 800+ Stations

NY ASOS Locations: 36 Stations

NYSM Locations: 125 Stations

Big Weather Web (BWW) Ensemble
CONUS+ domain at 20 km
WRF v3.7.1, initialized with 00Z GFS (or GEFS)
47 ensemble members (13 from UAlbany), running since January 2016
Control physics: MYJ PBL, Noah LSM, Kain-Fritsch cumulus, RRTMG radiation, Thompson microphysics
Model evaluation with MET on 3-hourly outputs

ASOS Network-Averaged 10 m Winds: BWW Control Run
ASOS 1-min data, 821 stations; 7 months of 84 h forecasts, February–August 2018
Average total bias +0.27 m/s; 00Z bias +0.07 m/s; 12Z bias +0.52 m/s
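The bias numbers above are simple network-averaged forecast-minus-observation means. A minimal sketch of that computation (hypothetical values, not the actual MET output; the function name is illustrative):

```python
from statistics import mean

def wind_bias(forecasts, observations):
    """Network-averaged wind speed bias: mean(forecast - observation), in m/s.
    Positive values mean the model over-forecasts the wind."""
    return mean(f - o for f, o in zip(forecasts, observations))

# Hypothetical station-averaged 10 m wind speeds (m/s) at one forecast hour
fcst = [4.1, 3.6, 5.0, 2.9]
obs = [3.8, 3.5, 4.6, 2.7]
print(round(wind_bias(fcst, obs), 2))  # 0.25
```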

36 NYS ASOS Stations: 7 Month Avg
ASOS NY, 36 stations. Forecast bias grows gradually with lead time but remains comparable to the CONUS results.

124 NYSM Stations: 7 Month Avg
Observed NYSM winds are weaker than ASOS winds at all forecast hours. NYSM average total bias +1.49 m/s; 00Z bias +1.49 m/s; 12Z bias +1.47 m/s. The forecasts themselves are comparable between networks, so the bias reflects the observations.

Hypothesis 1: Siting/Obstruction
NYSM stations are sited in locations where obstructions are more prominent, causing wind speeds to be systematically lower than at ASOS. These obstructions have a greater impact on mean wind speeds than on gusts. Obstructions in NYS have a seasonal cycle, so the degree to which wind speeds are slowed will as well. Site obstruction is revealed by network-averaged gust factors (GFs):

GF = Gust / Mean Wind
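The GF definition above can be sketched directly from a window of wind samples (synthetic values for illustration; not mesonet code):

```python
def gust_factor(samples):
    """Gust factor from a window of wind-speed samples (e.g. 3 s values):
    GF = max of the samples (gust) / mean of the samples (sustained wind)."""
    gust = max(samples)
    sustained = sum(samples) / len(samples)
    return gust / sustained

# Hypothetical 3 s wind samples over one averaging window (m/s)
window = [3.0, 4.0, 5.0, 4.0, 3.0, 6.0]
print(round(gust_factor(window), 2))  # 1.44
```

A sheltered site slows the mean wind more than the brief gusts that mix down from aloft, so its GF is larger; that is why GF serves as an exposure proxy here.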

Sustained Wind Speed Comparison, July 2017
Sustained: slope = 0.701, R² = 0.751. Gust: slope = 0.931, R² = 0.764.
NYSM sustained wind speeds are ≈30% slower than ASOS, while NYSM and ASOS gusts are nearly one to one.

Sustained Wind Speed Comparison, January 2018
Sustained: slope = 0.788, R² = 0.797. Gust: slope = 1.034, R² = 0.823.
Obstructions are less effective in winter (no leaves): NYSM sustained winds are ≈21% slower than ASOS. Gusts remain highly comparable.

Creating a Direct Comparison
The difference in averaging interval (2 min for ASOS vs. 5 min for NYSM) can be addressed by constructing observations in the style of ASOS 1-min data from the raw 3 s samples recorded at the mesonet stations.
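One way to sketch this conversion (assuming non-overlapping 2 min windows of 3 s samples, with the gust taken as the peak sample in each window; this is an illustration, not the actual processing code):

```python
def asos_style(samples_3s, window_s=120, sample_s=3):
    """Collapse raw 3 s wind samples into ASOS-style 2 min sustained
    winds and per-window peak gusts."""
    n = window_s // sample_s  # samples per 2 min window (40)
    sustained, gusts = [], []
    for i in range(0, len(samples_3s) - n + 1, n):
        window = samples_3s[i:i + n]
        sustained.append(sum(window) / n)  # 2 min mean wind
        gusts.append(max(window))          # peak sample in the window
    return sustained, gusts

# Hypothetical 4 min of 3 s samples: 80 values alternating 3 and 5 m/s
raw = [3.0, 5.0] * 40
means, peaks = asos_style(raw)
print(means, peaks)  # [4.0, 4.0] [5.0, 5.0]
```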

Sustained Wind Speed Comparison, July 2017 (ASOS Style)
Sustained: slope = 0.704, R² = 0.855. Gust: slope = 0.836, R² = 0.871.
Sustained wind speeds are still ≈30% slower and largely unaffected by modifying the averaging interval; NYSM gust speeds are slightly decreased.

Sustained Wind Speed Comparison, January 2018 (ASOS Style)
Sustained: slope = 0.789, R² = 0.826. Gust: slope = 0.936, R² = 0.846.
Sustained winds are still ≈21% slower. Sustained wind: decreasing the number of samples has no inherent tendency to raise or lower the mean. Gust: a smaller number of samples reduces the variance of the sample set and the likelihood of a large maximum.
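The sampling argument above can be illustrated with a small Monte Carlo sketch on synthetic wind samples (not mesonet data): subsampling leaves the expected mean essentially unchanged, while the maximum of fewer samples tends to be smaller.

```python
import random

random.seed(1)
trials = 2000
mean_diffs, max_drop = [], []
for _ in range(trials):
    full = [random.gauss(5.0, 1.0) for _ in range(100)]  # "many" samples
    sub = full[::4]                                      # keep every 4th sample
    mean_diffs.append(sum(sub) / len(sub) - sum(full) / len(full))
    max_drop.append(max(full) - max(sub))

avg_mean_diff = sum(mean_diffs) / trials
avg_max_drop = sum(max_drop) / trials
print(abs(avg_mean_diff) < 0.1)  # True: subsampling leaves the mean unbiased
print(avg_max_drop > 0)          # True: fewer samples -> smaller expected max
```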

Seasonal Comparison
[Monthly NYSM/ASOS speed ratio time series, August 2017–April 2018]
NYSM wind speeds are most comparable to ASOS between October and February, when major obstructions (trees) are expected to be least effective. Although gust and mean wind ratios follow the same seasonal pattern, gust ratios remain closer to unity throughout.

Gust Factor Comparison

Month          ASOS   NYSM (ASOS style)   NYSM
July 2017      1.30   1.54                1.72
January 2018   1.29   1.53                1.69

Altering the averaging interval to match ASOS eliminates all significant differences between the networks except station siting. NYSM GFs become smaller and closer to ASOS values after the change but remain significantly larger, reaffirming that the GF carries information about site exposure.

ASOS KBTV: GF = 1.27 [site aerial view; 100 m and 200 m rings]

NYSM COPA: GF = 1.72 [site aerial view; 100 m and 200 m rings]

Sustained Wind Speed Comparison, July 2017 (Best NYSM)
Sustained: slope = 0.765, R² = 0.824. Gust: slope = 0.883, R² = 0.839.
Only the best-exposed NYSM stations, according to WMO siting standards, are used for this comparison. NYSM winds are slightly faster (the sustained deficit improves from 30% to 24%), but significant differences remain.

Sustained Wind Speed Comparison, January 2018 (Best NYSM)
Sustained: slope = 0.845, R² = 0.713. Gust: slope = 0.977, R² = 0.741.
NYSM sustained winds improve from 21% to 15% slower than ASOS. Even the best-exposed NYSM stations are systematically slower.

Correcting for Exposure
We can correct NYSM observations toward ASOS-like exposure by leveraging the network-average GFs. Gusts are less affected by obstructions than mean winds, and gusts between the two networks are largely comparable, especially in the unmodified NYSM observations.

NYSM_corrected = NYSM_orig × (GF_NYSM / GF_ASOS)

Note that GF_ASOS / GF_NYSM = 1.3 / 1.7 = 0.765, which falls within the observed sustained-wind slopes of 0.701–0.788.
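As a sketch, using the network-average GFs from the table above (the function name and input value are illustrative):

```python
def correct_for_exposure(nysm_wind, gf_nysm=1.7, gf_asos=1.3):
    """Scale an NYSM sustained wind up by the ratio of network-average
    gust factors: NYSM_corrected = NYSM_orig * (GF_NYSM / GF_ASOS)."""
    return nysm_wind * gf_nysm / gf_asos

# A hypothetical 3.9 m/s NYSM observation, raised toward its ASOS-equivalent value
print(round(correct_for_exposure(3.9), 2))  # 5.1
```

Because GF_NYSM / GF_ASOS ≈ 1.31, the correction undoes roughly the 0.70–0.79 sustained-wind slope seen in the network comparisons.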

July 2017 (Adj. NYSM)
Original vs. adjusted sustained wind comparison: after correction, NYSM winds are much closer to ASOS, going from 30% to 8% slower. Remaining differences are likely due to differences in wind climatology or larger-scale influences (topography).

January 2018 (Adj. NYSM)
Original vs. adjusted sustained wind comparison: the correction makes winds comparable across seasons. It adjusts NYSM winds to what they would theoretically have been had the sites been as well exposed to wind as ASOS sites.

January 2018 (Adj. NYSM)
The GF correction method does a decent job of raising NYSM observed wind speeds: the average bias falls from +1.49 m/s to +0.87 m/s. The remaining difference between forecasts and observations is likely due to model shortcomings and requires further investigation.

Concluding Remarks
Forecasts shown to be skillful against ASOS are largely positively biased when verified against NYSM.
Station siting is the most significant difference between the two networks: network-averaged sustained wind speeds are 25–30% slower at NYSM than at ASOS, while gusts are mostly similar.
Differences between the networks remain substantial even after changing the NYSM averaging interval to match ASOS, and even the best-sited/exposed NYSM stations report systematically slower winds than ASOS.
Using the ratio of the network-average GFs, NYSM mean wind observations can be corrected for exposure so that they more closely resemble those of ASOS.