Objective Analyses & Data Representativeness


Objective Analyses & Data Representativeness
John Horel, Department of Meteorology, University of Utah (john.horel@utah.edu)

Acknowledgements: Dan Tyndall & Dave Whiteman (Univ. of Utah), Dave Myrick (WRH/SSD)

References:
Kalnay, E., 2003: Atmospheric Modeling, Data Assimilation and Predictability. Cambridge University Press.
Hunt, E., J. Basara, and C. Morgan, 2007: Significant inversions and rapid in situ cooling at a well-sited Oklahoma Mesonet station. J. Appl. Meteor. Climatol., 46, 353-367.
Lockhart, T., 2003: Challenges of measurements. Handbook of Weather, Climate and Water. Wiley & Sons, 695-710.
Myrick, D., and J. Horel, 2006: Verification over the western United States of surface temperature forecasts from the National Digital Forecast Database. Wea. Forecasting, 21, 869-892.
Whiteman, C. D., and coauthors, 2008: METCRAX 2006 – Meteorological experiments in Arizona's Meteor Crater. Bull. Amer. Meteor. Soc., in press.

Discussion Points
Why are analyses needed? Application driven: data assimilation for NWP (forecasting) vs. objective analysis (specifying the present or past).
What are the goals of the analysis?
- Define microclimates? Requires attention to the details of geospatial information (e.g., limit terrain smoothing).
- Resolve mesoscale/synoptic-scale weather features? Requires a good prediction from the previous analysis.
How is analysis quality determined? What is truth? Why not rely on observations alone to verify model guidance?

How Well Can We Observe, Analyze, and Forecast Conditions Near the Surface?
Forecasters clearly recognize that large variations in surface temperature, wind, moisture, and precipitation exist over short distances: in regions of complex terrain, when there is little lateral/vertical mixing, and due to convective precipitation.
To what extent can you rely on surface observations to define conditions within a 2.5 x 2.5 or 5 x 5 km² grid box? Do we have enough observations to do so?
We need to recognize the errors inherent in observations and use that error information for analyses, forecast preparation, and verification.

One Motivation: Viewing the Atmosphere in Terms of Grids vs. Points
Example at an ASOS station: forecast high = 10°C, actual high = 12°C, error = -2°C (too cold). What about away from ASOS stations? We need an analysis of observations.

Objective Analysis
A map of a meteorological field. Relies on: observations and a background field. Used for: initialization of a model forecast, situational awareness, and a verification grid.

Objective Analyses in the U.S.: hand-drawn analyses, model initialization panels, LAPS, MSAS, MatchObsAll, NCEP Reanalysis, NARR, RTMA, AOR (future).

ABC’s of Objective Analysis In the simplest of terms: Analysis Value = Background Value + Observation Correction

Analyses
Analysis value = Background value + Observation correction
An analysis is more than spatial interpolation. A good analysis requires:
- a good background field supplied by a model forecast
- observations with sufficient density to resolve critical weather and climate features
- information on the error characteristics of the observations and the background field
- appropriate techniques to translate background values to the observed quantities and locations (termed "forward operators")
There are significant differences between analyses intended to generate forecasts and analyses intended to describe current (or past) conditions.
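In symbols, a minimal sketch of this statement (generic notation in the spirit of Kalnay 2003, not the formulation of any particular operational system):

$$\mathbf{x}_a = \mathbf{x}_b + \mathbf{W}\,\bigl[\mathbf{y}_o - H(\mathbf{x}_b)\bigr]$$

where x_a is the analysis, x_b the background, y_o the observations, H the forward operator that translates background values to the observed quantities and locations, and W the weights determined from the background and observation error statistics.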

Background Values
May be obtained from: climatology, an objective analysis at a coarser resolution, a short-term forecast, or the analysis from the previous hour. Most objective analysis systems account for background errors, but approaches vary.

Do we have enough observations? ASOS Reports

When the weather gets interesting, mesoscale observations are critical

Some of the National & Regional Mesonet Data Collection Efforts

Are These Enough?

Observations
Observations are not perfect: gross errors, local siting errors, instrument errors, and representativeness errors all occur. Most objective analysis schemes take into account that observations contain errors, but approaches vary.

Incorporating Errors
Basic example: let σb² be the background error variance and σo² the observation error variance. The analysis weights the background and the observation according to these error variances, so the analysis won't always match observations.
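For a single variable at one point, the standard textbook form of this weighting (e.g., Kalnay 2003; the symbols here are illustrative, not those of a specific system) is

$$T_a = T_b + \frac{\sigma_b^2}{\sigma_b^2 + \sigma_o^2}\,(T_o - T_b),$$

so the analysis T_a falls between the background T_b and the observation T_o. Only if the observation error variance σo² were zero would the analysis reproduce the observation exactly.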

Objective Analysis Approaches (from simple to complex): Successive Corrections, Optimal Interpolation, Variational (3DVar, 4DVar), and Kalman or Ensemble Filters. Kalnay (2003), Chapter 5, provides a good overview of the different schemes.
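To make the simplest of these approaches concrete, below is a minimal sketch of a single successive-corrections (Cressman-style) pass on a 1-D grid. The grid spacing, station values, and radius of influence are invented for illustration; a real scheme would use multiple passes with decreasing radii and would account for observation errors.

```python
import numpy as np

def cressman_pass(grid_x, background, obs_x, obs_val, radius):
    """One successive-correction pass: nudge each grid point toward nearby
    observation increments using Cressman distance weights."""
    # Innovation = observation minus the background interpolated to each station
    innov = obs_val - np.interp(obs_x, grid_x, background)
    analysis = background.copy()
    for i, x in enumerate(grid_x):
        d2 = (obs_x - x) ** 2
        w = np.where(d2 < radius**2, (radius**2 - d2) / (radius**2 + d2), 0.0)
        if w.sum() > 0:
            analysis[i] += np.sum(w * innov) / np.sum(w)
    return analysis

# Hypothetical 1-D example: 5 km grid spacing, uniform 10 degC background, two stations
grid_x = np.arange(0.0, 100.0, 5.0)        # grid point locations (km)
background = np.full(grid_x.size, 10.0)    # background temperatures (degC)
obs_x = np.array([22.0, 63.0])             # station locations (km)
obs_t = np.array([12.5, 8.0])              # observed temperatures (degC)
print(cressman_pass(grid_x, background, obs_x, obs_t, radius=20.0))
```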

What are appropriate analysis gridpoint values? Complications include the inequitable distribution of observations and differences between the elevations of the analysis gridpoints and the observations.

Potential for Confusion
Analysis systems like MatchObsAll suggest that the analysis should exactly match every observation. Objective analysis values usually don't match surface observations: analysis schemes are intended to develop the "best fit" to the differences between the observations and the background, taking into account observational and background errors, when evaluated over a large sample of cases.

Observations vs. Truth? ("A Few Good Men")
Truth is unknown and depends on the application, e.g., the "expected value for a 5 x 5 km² area". Assumption: the average of many unbiased observations should be the same as the expected value of the truth. However, accurate observations may be biased or unrepresentative due to siting or other factors.

Goals for Objective Analysis
Minimize the departure of analysis values at grid squares (e.g., 5 x 5 km²) from the corresponding "truth" when averaged over the entire grid and over a large sample of analyses. Truth is unknown, but statistics about the truth are assumed.
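Loosely in symbols (an illustrative sketch of the goal, not the formal cost function of any particular scheme):

$$\min \; E\!\left[\frac{1}{N}\sum_{i=1}^{N}\bigl(x_{a,i}-x_{t,i}\bigr)^{2}\right]$$

where x_{a,i} is the analysis value in grid square i, x_{t,i} is the (unknown) truth there, N is the number of grid squares, and the expectation is taken over a large sample of analyses.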

Recognizing Observational Uncertainty
Observations vs. the truth: how well do we know the current state of the atmosphere? "All that is labeled data is NOT gold!" (Lockhart 2003). Effective use of analyses can expand the utility of observations.

Getting a Handle on Siting Issues & Observational Errors
- Metadata errors
- Instrument errors (exposure, maintenance, sampling)
- Local siting errors (e.g., an artificial heat source, overhanging vegetation, or an observation at variable height above ground due to snowpack)
- "Errors of representativeness": correct observations that are capturing phenomena that are not representative of the surroundings on a broader scale (e.g., observations in vegetation-free valleys and basins surrounded by forested mountains)

Are All Observations Equally Good?
- Why was the sensor installed? Observing needs and sampling strategies vary (air quality, fire weather, road weather).
- Station siting results from pragmatic tradeoffs: power, communication, obstacles, access.
- Use common sense and experience: wind at a sensor in the base of a mountain pass will likely blow from only two directions; errors depend upon conditions (e.g., temperature spikes are common with calm winds).
- Pay attention to metadata.
- Monitor quality control information: basic consistency checks and comparison to other stations (a simple buddy-check sketch follows below).
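One of the basic consistency checks above can be sketched as a simple "buddy check": flag any observation whose departure from the mean of its neighbors is much larger than their spread. The station coordinates, radius, and threshold below are arbitrary illustrations, not values from any operational quality-control system.

```python
import numpy as np

def buddy_check(x_km, y_km, values, radius_km=30.0, max_sigma=3.0):
    """Flag observations that differ from nearby 'buddies' by more than
    max_sigma standard deviations of the neighboring values."""
    flags = np.zeros(values.size, dtype=bool)
    for i in range(values.size):
        dist = np.hypot(x_km - x_km[i], y_km - y_km[i])
        buddies = (dist < radius_km) & (dist > 0)
        if buddies.sum() >= 3:  # need a few neighbors to make a judgment
            mean, std = values[buddies].mean(), values[buddies].std()
            if std > 0 and abs(values[i] - mean) > max_sigma * std:
                flags[i] = True
    return flags

# Hypothetical station coordinates (km) and temperatures (degC)
x = np.array([0.0, 5.0, 8.0, 12.0, 15.0])
y = np.array([0.0, 3.0, 7.0, 2.0, 9.0])
t = np.array([10.2, 10.8, 25.0, 9.9, 10.5])  # the third value looks suspicious
print(buddy_check(x, y, t))                  # [False False  True False False]
```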

Representativeness Errors
Observations may be accurate, but the phenomena they are measuring may not be resolvable on the scale of the analysis. This is interpreted as an error of the observation, not of the analysis. It is a common problem over complex terrain, is also common when strong inversions are present, and can happen anywhere. (Figure: sub-5 km terrain variability (m); Myrick and Horel, WAF 2006)

Representativeness errors are to be expected in mountains. Example: the Alta Ski Area within ~5 x 5 km and ~2.5 x 2.5 km grid boxes.

Alta Ski Area: photos looking up the mountain and up Little Cottonwood Canyon.

Alta Ski Area, 2100 UTC 17 July 2007: nearby observations of 18°C, 22°C, and 25°C.

Rush Valley, UT example, 1800 UTC 13 January 2004: +9°C in the Tooele Valley and -1°C in Rush Valley. Is either observation representative of the conditions across the box? (Map: www.topozone.com)

Rush Valley, UT example, 1800 UTC 13 January 2004 (+9°C vs. -1°C): photos looking north from Rush Valley and across the Stockton Bar.

METCRAX Field Experiment (Whiteman et al. 2008), October 2006. Goal: study the evolution of the nocturnal boundary layer and slope flows.

Examine observational error using data collected during the METCRAX field program:
- Lines of temperature data loggers in the Arizona Meteor Crater
- High temporal (5 min) and spatial (~50-100 m vertical) resolution; ~5000-30000 observations
- Potential to distinguish between measurement (instrumental) error and representativeness error (errors arising from subgrid-scale variations)
- Assess the dependence of observational error on location within the grid square

56 temperature sensors within a 2.5 x 2.5 km² grid box

So, let's make some assumptions about "truth":
(1) Truth is one of the 56 observations. Which one? The bottom of the crater? A sidewall? Outside the crater?
(2) Truth is the average of the observations outside the crater.
(3) Truth is the average of all 56 observations.
What are the observational errors as a function of these assumptions about truth?

Truth = one point near the crater bottom: RMSE = 1.9°C

Truth = one point outside the crater: RMSE = 1.3°C

Truth = the average outside the crater: RMSE = 1.1°C

Truth = the average of all sensors: RMSE = 1.1°C
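A minimal sketch of how such numbers could be computed, assuming the logger temperatures for one observation time are held in an array and a boolean mask marks the sensors outside the crater (the array contents below are synthetic stand-ins, not METCRAX data):

```python
import numpy as np

def rmse(obs, truth):
    """Root-mean-square departure of the observations from an assumed 'truth'."""
    return np.sqrt(np.mean((obs - truth) ** 2))

# Synthetic stand-ins for one 5-minute sample of the 56 logger temperatures (degC)
temps = np.random.default_rng(0).normal(loc=15.0, scale=1.5, size=56)
outside = np.zeros(56, dtype=bool)
outside[:20] = True                       # pretend the first 20 sensors sit outside the crater

print(rmse(temps, temps[5]))              # truth = a single sensor (e.g., near the crater bottom)
print(rmse(temps, temps[outside].mean())) # truth = the average of the sensors outside the crater
print(rmse(temps, temps.mean()))          # truth = the average of all 56 sensors
```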

Key Points
METCRAX is a highly idealized example:
- Identical, calibrated equipment was used; measurement error is usually higher due to variations in equipment, lack of calibration, and instrument bias.
- There is "only" 150 m variation in elevation, no vegetation, and no buildings; much larger terrain, vegetation, and land-use gradients exist in nearly all 2.5 x 2.5 km² grid boxes.
Representativeness errors can be large and vary as a function of the synoptic regime. Temperature variations within a grid box tend to be more consistent in space and time; however, the extremes (max/min) are not. Wind and precipitation variations within grid boxes are larger.

Hypothetical Examples Which observations would you trust to provide useful information for an analysis?

[Hypothetical 5 km grid box: early morning, high pressure, light winds; terrain elevations of roughly 6000-10000 ft with several observations plotted.] The observation is probably representative of conditions across most of the grid box.

[Hypothetical 5 km grid box: early morning, high pressure, light winds; terrain elevations of roughly 6000-10000 ft.] It is likely that the ob is only representative of a small sliver of the grid box.

[Hypothetical 5 km grid box: early morning, high pressure, light winds; little terrain variation (elevations near 7500-8000 ft).] In this case, the ob would fail a buddy check, as there is little terrain variation in the area.

[Hypothetical 5 km grid box: early morning, pre-frontal, well mixed; terrain elevations of roughly 6000-10000 ft.] The synoptic situation is key in this example: the pre-frontal gusty winds would probably scour out the inversion, so the ob is likely experiencing an instrument error.

Back to the Real World… Which observations would you trust to provide useful information for an analysis?

Summary
There is a need for balance: neither models nor observations can independently define weather and weather processes effectively. An analysis combines the background supplied by an NWP model (which provides spatial and temporal continuity) with observations (which provide specificity).

Recognition of Sources of Errors
NWP model errors (inaccurate initial conditions, incomplete physics, smoothed terrain) contribute to analysis errors.

Recognition of Sources of Errors
Observational errors (instrumental and representativeness errors) also contribute to analysis errors.