Objective Analyses & Data Representativeness
John Horel, Department of Meteorology, University of Utah
Acknowledgements & References
Acknowledgements: Dan Tyndall & Dave Whiteman (Univ. of Utah); Dave Myrick (WRH/SSD)
References:
- Kalnay, E., 2003: Atmospheric Modeling, Data Assimilation and Predictability. Cambridge University Press.
- Hunt, E., J. Basara, and C. Morgan, 2007: Significant inversions and rapid in situ cooling at a well-sited Oklahoma mesonet station. J. Appl. Meteor. Climatol., 46.
- Lockhart, T., 2003: Challenges of measurements. Handbook of Weather, Climate and Water. Wiley & Sons.
- Myrick, D., and J. Horel, 2006: Verification over the western United States of surface temperature forecasts from the National Digital Forecast Database. Wea. Forecasting, 21.
- Whiteman, C. D., and coauthors, 2008: METCRAX 2006 – Meteorological experiments in Arizona's Meteor Crater. Bull. Amer. Meteor. Soc., in press.
Discussion Points
- Why are analyses needed? Application driven: data assimilation for NWP (forecasting) vs. objective analysis (specifying the present or past)
- What are the goals of the analysis?
  - Define microclimates? Requires attention to details of geospatial information (e.g., limit terrain smoothing)
  - Resolve mesoscale/synoptic-scale weather features? Requires a good prediction from the previous analysis
- How is analysis quality determined? What is truth?
- Why not rely on observations alone to verify model guidance?
How Well Can We Observe, Analyze, and Forecast Conditions Near the Surface?
- Forecasters clearly recognize that large variations in surface temperature, wind, moisture, and precipitation exist over short distances: in regions of complex terrain, when there is little lateral/vertical mixing, or due to convective precipitation
- To what extent can you rely on surface observations to define conditions within a 2.5 x 2.5 or 5 x 5 km2 grid box? Do we have enough observations to do so?
- Need to recognize the errors inherent in observations and use that error information for analyses, forecast preparation, & verification
One Motivation: Viewing the atmosphere in terms of grids vs. points
At an ASOS station: forecast high = 10°C, actual high = 12°C, error = -2°C (too cold). What about away from ASOS stations? We need an analysis of observations.
Objective Analysis
A map of a meteorological field.
Relies on:
- observations
- a background field
Used for:
- initialization of a model forecast
- situational awareness
- a verification grid
Objective Analyses in the U.S.
- Hand-drawn analyses
- Model initialization panels
- LAPS
- MSAS
- MatchObsAll
- NCEP Reanalysis
- NARR
- RTMA
- AOR (future)
ABC’s of Objective Analysis
In the simplest of terms: Analysis Value = Background Value + Observation Correction
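In standard data assimilation notation (e.g., Kalnay 2003), this reads as the analysis equation below, where $\mathbf{x}_a$ is the analysis, $\mathbf{x}_b$ the background, $\mathbf{y}$ the vector of observations, $H$ the forward operator that maps the background to the observation locations, and $\mathbf{W}$ the weight applied to the observation increments:

$$
\mathbf{x}_a = \mathbf{x}_b + \mathbf{W}\left[\mathbf{y} - H(\mathbf{x}_b)\right]
$$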
Analyses
Analysis value = Background value + Observation correction
An analysis is more than spatial interpolation. A good analysis requires:
- a good background field supplied by a model forecast
- observations with sufficient density to resolve critical weather and climate features
- information on the error characteristics of the observations and the background field
- appropriate techniques to translate background values to observation locations (termed “forward operators”; see the sketch below)
There are significant differences between analyses intended to generate forecasts and analyses intended to describe current (or past) conditions.
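As an illustration, here is a minimal Python sketch of a forward operator for surface temperature. The nearest-gridpoint lookup and the constant 6.5 °C/km lapse-rate adjustment are simplifying assumptions for this sketch, not the method of any particular analysis system:

```python
import numpy as np

STANDARD_LAPSE_RATE = 0.0065  # degC per meter; illustrative assumption

def forward_operator_temperature(grid_temp, grid_elev, i, j, obs_elev):
    """Map a background temperature to an observation location.

    Takes the nearest-gridpoint background value and adjusts it from the
    model terrain elevation to the station elevation with a constant
    lapse rate. Real systems use more elaborate horizontal interpolation
    and vertical adjustments.
    """
    background_temp = grid_temp[i, j]       # background value at the gridpoint
    elev_diff = obs_elev - grid_elev[i, j]  # station minus model terrain height
    return background_temp - STANDARD_LAPSE_RATE * elev_diff

# Example: a station 300 m below the smoothed model terrain
grid_temp = np.array([[10.0]])    # degC
grid_elev = np.array([[2000.0]])  # m
print(forward_operator_temperature(grid_temp, grid_elev, 0, 0, 1700.0))  # 11.95
```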
Background Values
Obtained from, for example:
- climatology
- an objective analysis at a coarser resolution
- a short-term forecast
- the analysis from the previous hour
Most objective analysis systems account for background errors, but approaches vary.
Do we have enough observations? ASOS Reports
When the weather gets interesting, mesoscale observations are critical
Some of the National & Regional Mesonet Data Collection Efforts
Are These Enough?
Observations
Observations are not perfect:
- gross errors
- local siting errors
- instrument errors
- representativeness errors
Most objective analysis schemes take into account that observations contain errors, but approaches vary.
Incorporating Errors
Basic example for a single location, where $\sigma_b^2$ is the background error variance and $\sigma_o^2$ is the observation error variance. The weight given to the observation increment is the standard scalar result (Kalnay 2003):

$$
W = \frac{\sigma_b^2}{\sigma_b^2 + \sigma_o^2}
$$

So the analysis won’t always match the observations: an observation pulls the analysis away from the background only to the extent that the background error exceeds the observation error.
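A minimal Python sketch of this single-point blend (the variance values below are illustrative assumptions):

```python
def analyze_point(background, observation, var_b, var_o):
    """Blend a background value and an observation using their error variances.

    The weight on the observation increment is var_b / (var_b + var_o),
    the standard scalar optimal-interpolation result.
    """
    weight = var_b / (var_b + var_o)
    return background + weight * (observation - background)

# A 10 degC background with error variance 1.0 degC^2 and a 12 degC
# observation with error variance 0.5 degC^2: the analysis is drawn
# two-thirds of the way toward the observation.
print(analyze_point(10.0, 12.0, var_b=1.0, var_o=0.5))  # 11.33...
```

Only when $\sigma_o^2 = 0$ does the analysis reproduce the observation exactly.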
Objective Analysis Approaches
From simple to complex:
- Successive Corrections
- Optimal Interpolation
- Variational (3DVar, 4DVar)
- Kalman or Ensemble Filters
Kalnay (2003), Chapter 5, is a good overview of the different schemes. A sketch of the simplest approach follows.
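As a concrete illustration of the simplest scheme, here is a minimal sketch of a single successive-corrections (Cressman) pass; the grid, station values, and influence radius are illustrative assumptions:

```python
import numpy as np

def cressman_pass(gx, gy, background, ox, oy, ovals, radius):
    """One successive-corrections pass over a 2-D grid.

    gx, gy     : 2-D arrays of gridpoint coordinates (km)
    background : 2-D float array of background values at the gridpoints
    ox, oy     : 1-D arrays of observation coordinates (km)
    ovals      : 1-D array of observed values
    radius     : influence radius R (km)
    """
    # Observation increments: obs minus background at the nearest gridpoint
    increments = []
    for x, y, v in zip(ox, oy, ovals):
        i, j = np.unravel_index(np.argmin((gx - x)**2 + (gy - y)**2), gx.shape)
        increments.append(v - background[i, j])
    # Spread the increments with the Cressman weight
    # w = (R^2 - r^2) / (R^2 + r^2) for r < R, else 0
    num = np.zeros_like(background)
    den = np.zeros_like(background)
    for x, y, inc in zip(ox, oy, increments):
        r2 = (gx - x)**2 + (gy - y)**2
        w = np.where(r2 < radius**2, (radius**2 - r2) / (radius**2 + r2), 0.0)
        num += w * inc
        den += w
    correction = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return background + correction

# A single 14 degC observation in the middle of a uniform 10 degC background
gx, gy = np.meshgrid(np.arange(0.0, 50.0, 5.0), np.arange(0.0, 50.0, 5.0))
analysis = cressman_pass(gx, gy, np.full(gx.shape, 10.0),
                         ox=np.array([25.0]), oy=np.array([25.0]),
                         ovals=np.array([14.0]), radius=15.0)
```

Successive passes with shrinking radii recover progressively smaller scales.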
What are appropriate analysis gridpoint values?
Complications include:
- the inequitable distribution of observations
- differences between the elevations of the analysis gridpoints and the observations
Potential for Confusion
Analysis systems like MatchObsAll suggest that the analysis should exactly match every observation. In fact, objective analysis values usually don’t match surface observations: analysis schemes are intended to develop the “best fit” to the differences between the observations and the background, taking into account observational and background errors, when evaluated over a large sample of cases.
Observations vs. Truth? A Few Good Men
Truth is unknown and depends on the application, e.g., the “expected value for a 5 x 5 km2 area.”
Assumption: the average of many unbiased observations should equal the expected value of truth.
However, accurate observations may still be biased or unrepresentative due to siting or other factors.
Goals for Objective Analysis
Minimize the departure of the analysis values at grid squares (e.g., 5 x 5 km2) from the corresponding “truth,” when averaged over the entire grid and over a large sample of analyses. Truth is unknown, but statistics about truth are assumed.
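Stated formally (the notation here is ours, a standard least-squares formulation): with $x_a(i)$ the analysis and $x_t(i)$ the unknown truth at grid square $i$, the analysis should minimize the expected mean-squared departure

$$
J = E\!\left[\frac{1}{N}\sum_{i=1}^{N}\bigl(x_a(i) - x_t(i)\bigr)^2\right],
$$

where the expectation is taken over a large sample of analyses and assumed error statistics stand in for the unknown truth.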
Recognizing Observational Uncertainty
Observations vs. the truth: how well do we know the current state of the atmosphere?
“All that is labeled data is NOT gold!” (Lockhart 2003)
Effective use of analyses can expand the utility of observations.
Getting a Handle on Siting Issues & Observational Errors
- Metadata errors
- Instrument errors (exposure, maintenance, sampling)
- Local siting errors (e.g., an artificial heat source, overhanging vegetation, or an observation at variable height above ground due to snowpack)
- “Errors of representativeness”: correct observations that capture phenomena not representative of the surroundings on a broader scale (e.g., observations in vegetation-free valleys and basins surrounded by forested mountains)
Are All Observations Equally Good?
- Why was the sensor installed? Observing needs and sampling strategies vary (air quality, fire weather, road weather)
- Station siting results from pragmatic tradeoffs: power, communication, obstacles, access
- Use common sense and experience:
  - wind at a sensor in the base of a mountain pass will likely blow from only two directions
  - errors depend upon conditions (e.g., temperature spikes are common with calm winds)
- Pay attention to metadata
- Monitor quality control information:
  - basic consistency checks
  - comparison to other stations (see the buddy-check sketch below)
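A minimal sketch of the kind of buddy check meant by “comparison to other stations”; the search radius, tolerance, and station values are illustrative assumptions, not any network’s operational settings:

```python
import numpy as np

def buddy_check(values, x, y, radius_km=50.0, max_diff=5.0):
    """Flag observations that differ too much from the mean of nearby stations.

    values : 1-D array of observed values (e.g., temperature in degC)
    x, y   : 1-D arrays of station coordinates (km)
    Returns a boolean array where True marks a suspect observation.
    """
    suspect = np.zeros(len(values), dtype=bool)
    for i in range(len(values)):
        dist = np.hypot(x - x[i], y - y[i])
        buddies = (dist > 0) & (dist <= radius_km)
        if buddies.any():
            suspect[i] = abs(values[i] - values[buddies].mean()) > max_diff
    return suspect

# Three stations near 10 degC plus one outlier reporting 19 degC
temps = np.array([10.0, 11.0, 9.5, 19.0])
x = np.array([0.0, 10.0, 20.0, 15.0])
y = np.array([0.0, 5.0, 0.0, 10.0])
print(buddy_check(temps, x, y))  # [False False False  True]
```

In mountainous terrain a check like this must be applied with care, since large differences between nearby stations can be real, as the examples later in this presentation show.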
Representativeness Errors
Observations may be accurate, but the phenomena they are measuring may not be resolvable on the scale of the analysis. This is interpreted as an error of the observation, not of the analysis.
- A common problem over complex terrain
- Also common when strong inversions are present
- Can happen anywhere
[Figure: sub-5 km terrain variability (m); Myrick and Horel, WAF 2006]
Representativeness errors are to be expected in mountains: Alta Ski Area
[Figure: Alta Ski Area shown within ~5 km and ~2.5 km grid boxes]
Alta Ski Area
[Photos: looking up the mountain; looking up Little Cottonwood Canyon]
Alta Ski Area, 2100 UTC 17 July 2007
[Figure: nearby stations reporting 18°C, 22°C, and 25°C]
Rush Valley, UT Example: 1800 UTC 13 January 2004
[Figure: Tooele Valley observation (+9°C) and Rush Valley observation (-1°C) within one grid box]
Is either observation representative of the conditions across the box?
Rush Valley, UT Example: 1800 UTC 13 January 2004
[Photos: looking north from Rush Valley (+9°C); looking across Stockton Bar (-1°C)]
METCRAX Field Experiment
Whiteman et al. (2008); October 2006. Goal: study the evolution of the nocturnal boundary layer and slope flows.
Examine observational error using data collected during the METCRAX field program
- Lines of temperature data loggers in the Arizona Meteor Crater
- High temporal (5 min) and spatial (~ m vertical) resolution; ~ observations
- Potential to distinguish between:
  - measurement (instrumental) error
  - representativeness error: errors arising from subgrid-scale variations
- Assess the dependence of observational error on location within the grid square
56 temperature sensors within a 2.5 x 2.5 km2 grid box
So, let’s make some assumptions about “truth”…
(1) Truth is one of the 56 observations. Which one? The bottom of the crater? A sidewall? Outside the crater?
(2) Truth is the average of the observations outside the crater.
(3) Truth is the average of all 56 observations.
What are the observational errors as a function of these assumptions about truth? A sketch of the computation follows.
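A minimal sketch of the RMSE computation these assumptions imply; the sensor values below are random placeholders, not the METCRAX measurements:

```python
import numpy as np

def rmse_vs_truth(temps, truth):
    """Root-mean-square departure of all sensors from an assumed truth."""
    return np.sqrt(np.mean((temps - truth) ** 2))

# Placeholder data: 56 sensor temperatures (degC) and a mask marking the
# sensors located outside the crater (both are illustrative assumptions).
rng = np.random.default_rng(0)
temps = rng.normal(loc=20.0, scale=1.5, size=56)
outside = np.zeros(56, dtype=bool)
outside[:20] = True

# (1) Truth = a single sensor, e.g., one near the crater bottom
print(rmse_vs_truth(temps, temps[0]))
# (2) Truth = the average of the sensors outside the crater
print(rmse_vs_truth(temps, temps[outside].mean()))
# (3) Truth = the average of all 56 sensors
print(rmse_vs_truth(temps, temps.mean()))
```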
Truth = 1 point near the crater bottom: RMSE = 1.9°C
Truth = 1 point outside the crater: RMSE = 1.3°C
Truth = the average outside the crater: RMSE = 1.1°C
Truth = the average of all sensors: RMSE = 1.1°C
Key Points
METCRAX is a highly idealized example:
- Identical, calibrated equipment was used; measurement error is usually higher due to variations in equipment, lack of calibration, and instrument bias
- “Only” 150 m variation in elevation, no vegetation, no buildings… much larger terrain, vegetation, and land-use gradients exist in nearly all 2.5 x 2.5 km2 grid boxes
- Representativeness errors can be large and vary as a function of synoptic regime
- Temperature variations within a grid box tend to be more consistent in space & time; however, the extremes (max/min) are not
- Wind and precipitation variations within grid boxes are larger
Hypothetical Examples: Which observations would you trust to provide useful information for an analysis?
5 km early morning high pressure light winds
9000 10000 8000 29 7000 6000 early morning high pressure light winds 33 10 9000 6000 8000 25 7000 8000 The observation is probably representative of conditions across most of the grid box 5 km
42
5 km early morning high pressure light winds
9000 10000 8000 29 7000 early morning high pressure light winds 6000 33 10 7000 9000 8000 25 8000 It is likely that the ob is only representative of a small sliver of the grid box 5 km
43
5 km early morning high pressure light winds
8000 29 early morning high pressure light winds 33 10 25 In this case, the ob would fail a buddy check as there is little terrain variation in the area. 7500 5 km
44
5 km early morning pre-frontal well mixed
9000 10000 8000 29 7000 early morning pre-frontal well mixed 6000 33 10 7000 9000 The synoptic situation is key in this example. The pre-frontal gusty winds would probably scour out the inversion. The ob is likely experiencing an instrument error. 8000 25 8000 5 km
45
Back to the Real World… Which observations would you trust to provide useful information for an analysis?
Summary
Need for balance: neither models nor observations alone can effectively define weather and weather processes. The analysis balances the spatial & temporal continuity of the background supplied by the NWP model against the specificity of the observations.
Recognition of Sources of Errors
NWP model errors lead to analysis errors:
- inaccurate initial conditions (ICs)
- incomplete physics
- smoothed terrain
Observational errors also lead to analysis errors:
- instrumental errors
- representativeness errors