Automated Quality Control of Meteorological Observations for MM5 Model Verification
Jeff Baars and Cliff Mass
University of Washington, Seattle, Washington

Why Perform Quality Control (QC) on Observations? The integrity and value of all verification activities depend on the quality of the data. Similarly, as we move to mesoscale initialization for the MM5/WRF, quality control of the data is critical. Future bias correction will depend on the quality of the observations. Currently, only a simple range check is performed on incoming obs.

And why do automated QC? We can’t possibly do manual QC on all of the obs we ingest. No automated QC program will be perfect: some bad values may slip through, and some good values may be flagged as well.

Description of the Obs There are many unknowns about individual networks, and documentation for them is generally lacking. Networks vary in quality: instrumentation varies, as does the accuracy of measurements. Siting varies between networks, including instrument height, surroundings, etc. Calibration standards also vary between networks; how many networks perform annual calibration?

The Flags No data are thrown out; values are only flagged. Each test assigns a flag, which can be “pass,” “suspect,” “warning,” or “failure.” Only a range-check failure can produce a “failure” flag.
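This flag scheme maps naturally to a small severity lookup. A minimal sketch in Python follows; the names and structure are assumptions for illustration, not the actual UW code.

```python
# Hypothetical representation of the per-test flags described on this slide.
SEVERITY = {"pass": 0, "suspect": 1, "warning": 2, "failure": 3}

def worst_flag(test_flags):
    """Return the most severe flag across all tests; obs are never deleted, only flagged."""
    return max(test_flags.values(), key=SEVERITY.get, default="pass")

# Example: only the range check may assign "failure"; the other tests top out at "warning".
flags = {"range": "pass", "step": "warning", "persistence": "pass", "spatial": "suspect"}
print(worst_flag(flags))  # -> warning
```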

The Four QC Tests
- Modeled after the Oklahoma Mesonet automated QC procedures (Shafer et al. 2000, J. Atmos. Oceanic Technol.), with some differences reflecting differences between the Northwest and Oklahoma (like lots of topography!).
- Performed on 6 variables: temperature, relative humidity, wind speed and direction, sea level pressure, and 6- and 24-hr precipitation.
- Range check: the simplest of them all; just a sanity check on the value, similar to what we’re already doing (a sketch follows this list).
- Step check: looks for unreasonably large jumps between hourly time steps in the data.
- Persistence check: looks for a “flat-lined” time series.
- Spatial check: checks the value in question against those of surrounding stations. The trickiest one to get right!
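A minimal sketch of the range check, the first of the four tests; the limits below are illustrative placeholders, not the operational thresholds used at UW.

```python
# Illustrative range (sanity) check; limits are placeholders chosen for plausibility.
RANGE_LIMITS = {
    "temperature": (200.0, 330.0),   # K
    "rh":          (0.0, 100.0),     # %
    "wind_speed":  (0.0, 75.0),      # m/s
    "wind_dir":    (0.0, 360.0),     # degrees
    "slp":         (870.0, 1090.0),  # mb
}

def range_check(var, value):
    """Return 'failure' if the value falls outside its physically plausible range."""
    lo, hi = RANGE_LIMITS[var]
    return "pass" if lo <= value <= hi else "failure"

print(range_check("temperature", 355.0))  # -> failure
```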

Step Test: thresholds on the change between successive obs that trigger a flag

Variable                  “Suspect”   “Warning”
Temperature (K)               10          15
Relative Humidity (%)         60          80
Sea Level Pressure (mb)       10          15
Wind Speed (m/s)              40          50
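A sketch of the step test using the suspect/warning thresholds from the table above; the variable keys and function name are illustrative.

```python
# Step test: flag unreasonably large jumps between successive hourly obs.
STEP_THRESHOLDS = {              # (suspect, warning) change between successive obs
    "temperature": (10.0, 15.0),    # K
    "rh":          (60.0, 80.0),    # %
    "slp":         (10.0, 15.0),    # mb
    "wind_speed":  (40.0, 50.0),    # m/s
}

def step_check(var, previous, current):
    """Compare the hour-to-hour change against the suspect and warning thresholds."""
    suspect, warning = STEP_THRESHOLDS[var]
    jump = abs(current - previous)
    if jump >= warning:
        return "warning"
    if jump >= suspect:
        return "suspect"
    return "pass"

print(step_check("temperature", 281.0, 294.0))  # 13 K jump -> suspect
```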

Persistence Test Looks at 24 hours (18 hours for temperature) of consecutive data for a variable. Calculates the standard deviation over that window; if it is below a set threshold, the entire day is flagged. Also checks the largest delta between successive obs; if it is below a set threshold, the entire day is flagged.
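A sketch of the persistence (“flat-line”) test; the standard-deviation and largest-delta thresholds and the flag level assigned are assumptions for illustration.

```python
import statistics

def persistence_check(values, min_std=0.1, min_max_delta=0.1):
    """Flag a whole day (24 h, or 18 h for temperature) whose obs barely vary."""
    if len(values) < 2:
        return "pass"                       # not enough data to judge
    std = statistics.pstdev(values)
    max_delta = max(abs(b - a) for a, b in zip(values, values[1:]))
    if std < min_std or max_delta < min_max_delta:
        return "suspect"                    # entire day flagged (level assumed here)
    return "pass"

print(persistence_check([1012.3] * 24))     # flat-lined pressure trace -> suspect
```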

Spatial Test Multiple spatial tests were tried; we settled on the simplest. For each station, looks at all stations within 100 km horizontal distance and within an elevation band (which varies by variable). A minimum of 5 stations is required for the test to be performed. If no station has a value reasonably close to the value in question, the value is flagged. If not enough stations are found within the 100-km radius, the test is repeated with a 150-km radius and eased constraints.
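A sketch of the spatial test as described above. The closeness tolerance, the elevation band, and exactly how the 150-km retry eases the constraints are assumptions; the 100-km/150-km radii and the 5-station minimum come from the slide.

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (haversine), in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def spatial_check(station, neighbors, radius_km=100.0, elev_band_m=500.0, tol=5.0):
    """Flag the value if no nearby station at a similar elevation is reasonably close to it."""
    nearby = [n for n in neighbors
              if distance_km(station["lat"], station["lon"], n["lat"], n["lon"]) <= radius_km
              and abs(station["elev"] - n["elev"]) <= elev_band_m]
    if len(nearby) < 5:
        if radius_km < 150.0:               # retry once with the eased 150-km constraints
            return spatial_check(station, neighbors, radius_km=150.0,
                                 elev_band_m=2 * elev_band_m, tol=2 * tol)
        return "pass"                        # still too few neighbors; test not performed
    if any(abs(station["value"] - n["value"]) <= tol for n in nearby):
        return "pass"
    return "suspect"
```

Each station here is a dict with lat, lon, elev, and value keys; these field names are hypothetical.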

var     n flagged   percent flagged (n/n_all_flagged)
dir
pcp
pcp
rh
slp
spd
temp

test          n flagged   percent flagged (n/n_all_flagged)
persistence
range
spatial
step

network   n flagged   percent flagged (n/n_all_flagged)
AV
AW
CC
CM
CU
DT
EC
GS
HN
RW
SA
SN
UW

Future Work Further testing. Implement QC procedures.

Extras

network   test          n flagged   percent flagged (n/n_flagged_this_network)
AW        persistence
AW        range              2          0.00
AW        spatial
AW        step
RW        persistence
RW        range
RW        spatial
RW        step
SA        persistence
SA        range
SA        spatial
SA        step
HN        persistence
HN        range
HN        spatial
HN        step
EC        persistence
EC        range
EC        spatial
EC        step
DT        persistence
DT        range
DT        spatial
DT        step              71          0.24