REGIONAL AND LOCAL-SCALE EVALUATION OF 2002 MM5 METEOROLOGICAL FIELDS FOR VARIOUS AIR QUALITY MODELING APPLICATIONS
Pat Dolwick*, U.S. EPA, RTP, NC, USA
Rob Gilliam, NOAA, RTP, NC, USA
Lara Reynolds and Allan Huffman, CSC, RTP, NC, USA
6th Annual CMAS Conference, Chapel Hill, NC, October 1-3, 2007
Meteorological Model Evaluation Principles
Evaluation goals:
- Move toward an understanding of how bias/error in the meteorological data impacts the resultant AQ modeling, and away from an "as is" acceptance of the meteorological modeling data.
- Assess model performance at the scales over which the meteorological data will ultimately be used:
  - National/regional: CMAQ or other grid modeling analyses
  - Local: AERMOD or other plume modeling analyses
Two specific objectives within the broader goals:
- Operational: determine whether the meteorological model output fields represent a reasonable approximation of the actual meteorology that occurred.
- Phenomenological: identify and quantify the existing biases and errors in the meteorological predictions, in order to allow a downstream assessment of how the AQ modeling results are affected by those issues.
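As a concrete illustration of the operational metrics behind these goals, the bias and error statistics commonly reported for surface meteorology (mean bias, mean absolute gross error, RMSE) can be sketched as below. The function names and the sample temperature pairs are illustrative only, not taken from the AMET package.

```python
import math

def mean_bias(model, obs):
    """Mean bias (model minus observed): systematic over/underprediction."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def mean_abs_error(model, obs):
    """Mean absolute gross error: typical magnitude of the mismatch."""
    return sum(abs(m - o) for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    """Root-mean-square error: like MAE, but penalizes large misses more."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

# Hypothetical paired 2 m temperatures (deg C): model vs. observed
t_model = [1.0, -2.5, 0.5, 3.0]
t_obs = [3.0, 0.0, 1.5, 4.0]
print(mean_bias(t_model, t_obs))  # a negative value indicates a cold bias
```

Bias answers the "systematic offset" question (e.g., the winter cold bias discussed later), while MAE/RMSE capture overall error magnitude even when positive and negative misses cancel.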
EPA 2002 MM5 Model Configuration
36 & 12 km modeling:
- 36 km: MM5 v (version garbled in source) with the land-surface modifications that were added in v3.6.3
- 12 km: MM5 v3.7.2
- Both domains contained 34 vertical layers, with a ~38 m surface layer and a 100 mb model top.
- Both sets of model runs were conducted in 5.5-day segments with 12 hours of overlap for spin-up purposes.
- Analysis nudging was applied outside of the PBL for temperature and water vapor mixing ratio, and at all locations for the wind components, using relatively weak nudging coefficients.
- The Atmospheric Model Evaluation Tool (AMET) was used to conduct the evaluation analyses, as described by Gilliam et al. (2005).
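The analysis nudging referred to above is a Newtonian relaxation of the model state toward gridded analysis values. A minimal single-variable sketch follows; the nudging coefficient, time step, and temperature values are illustrative assumptions, not the actual EPA configuration.

```python
def nudge_step(phi, phi_analysis, g=3.0e-4, dt=60.0, physics_tendency=0.0):
    """Advance phi one time step while relaxing it toward the analysis value.

    Implements d(phi)/dt = physics_tendency + g * (phi_analysis - phi),
    where g (s^-1) is the nudging coefficient: small g = weak nudging.
    """
    return phi + dt * (physics_tendency + g * (phi_analysis - phi))

phi = 280.0      # model temperature (K), illustrative
phi_an = 282.0   # analysis temperature (K), illustrative
for _ in range(100):  # repeated weak nudging pulls phi toward the analysis
    phi = nudge_step(phi, phi_an)
```

A relatively weak coefficient, as used here, keeps the model close to the analyses at large scales without suppressing the model's own finer-scale structure.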
Operational evaluation – national/regional 12km Eastern US statistics
Operational evaluation - precipitation 12km Eastern US statistics
Panels: best case, May 2002; worst case, Oct 2002. Note: color scales differ between the two months.
Operational evaluation – sample local 12 km results in Birmingham AL & Detroit MI
Panels: temperature, water vapor mixing ratio, wind speed, and wind direction.
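One practical detail for these local comparisons: wind-direction errors must be computed on a circle, otherwise a modeled 350 degrees against an observed 10 degrees would register as a 340-degree miss instead of 20 degrees. A minimal sketch (the function name is illustrative, not from AMET):

```python
def wind_dir_error(model_deg, obs_deg):
    """Signed wind-direction difference in degrees, wrapped to (-180, 180]."""
    d = (model_deg - obs_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

# Modeled 350 deg vs. observed 10 deg: only a 20 deg miss, not 340 deg.
print(wind_dir_error(350.0, 10.0))
```

Averaging these wrapped, signed differences gives a meaningful wind-direction bias; averaging raw compass values would not.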
Operational evaluation – sample local 12 km results in Birmingham AL & Detroit MI
Panels: Birmingham, AL (Q3: Jul-Sep) and Detroit, MI (Q1: Jan-Mar).
Phenomenological evaluation – national/regional Assessment of cold bias by time of day
Panels: winter and summer. Observations:
- The wintertime cold bias is strongest at night.
- In summer, modeled overnight temperatures decrease at a slower rate than observed; as the nocturnal layer is mixed out, the slight warm bias rapidly gives way to a small cool bias.
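A time-of-day breakdown like the one described here can be produced by binning paired model/observation values by hour and averaging the bias within each bin. A minimal sketch, with hypothetical inputs:

```python
from collections import defaultdict

def diurnal_bias(hours, model, obs):
    """Mean model-minus-observed bias for each hour of day (0-23)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for h, m, o in zip(hours, model, obs):
        sums[h % 24] += m - o
        counts[h % 24] += 1
    return {h: sums[h] / counts[h] for h in sorted(sums)}

# Hypothetical temperature pairs tagged with observation hour
print(diurnal_bias(hours=[0, 0, 12], model=[1.0, 2.0, 5.0], obs=[3.0, 2.0, 4.0]))
```

Plotting the resulting hourly biases for winter and summer separately is what reveals diurnal structure such as a nighttime-dominated cold bias.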
Phenomenological evaluation – national/regional Seasonal averages of performance aloft: key sites
Panels: spring and fall. Observations:
- On average, potential temperature, RH, and wind vectors are well captured within the PBL.
- Differences are generally greatest in the lowest 1 km.
Meteorological Model Evaluation: Conclusions (1)
- Both sets of 2002 MM5 meteorological model output fields (36 & 12 km) represent a reasonable approximation of the actual meteorology that occurred during the modeling period at the national level. These input meteorological data are expected to be appropriate for use in regional and national air quality modeling simulations.
- For local-scale analyses, it is recommended that a detailed, area-specific evaluation be completed before the data are used in a local application.
- The most troublesome aspect of meteorological model performance is the cold bias in surface temperatures during the winter of 2002, especially in January. Across the two MM5 simulations, the January cold bias typically averaged around 2-3 deg C. The effect is largest overnight, which results in a tendency to overestimate stability in the lowest layers. These artifacts from the meteorological modeling have had a significant impact on the air quality results.
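To illustrate why an overnight surface cold bias tends to overestimate low-level stability: a surface that is too cold relative to the air above steepens the modeled vertical potential-temperature gradient, making the layer appear more stable than observed. A toy calculation (all values hypothetical):

```python
def pot_temp_gradient(theta_sfc, theta_top, dz):
    """Vertical potential-temperature gradient (K/m); positive implies a stable layer."""
    return (theta_top - theta_sfc) / dz

# Observed-like nocturnal layer: weakly stable
observed = pot_temp_gradient(theta_sfc=278.0, theta_top=279.0, dz=100.0)

# Same layer with a 2.5 K surface cold bias: apparently much more stable
cold_biased = pot_temp_gradient(theta_sfc=278.0 - 2.5, theta_top=279.0, dz=100.0)
```

A more stable modeled surface layer suppresses vertical mixing in the downstream AQ model, which is one route by which the January cold bias can distort air quality results.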
Meteorological Model Evaluation: Conclusions (2)
This summary presentation represents only a small subset of the evaluation analyses completed. The 2002 MM5 model evaluation is not complete: we would like to do further analysis of cloud coverage and planetary boundary layer heights, and to assess model performance as a function of meteorological regime.