REGIONAL AND LOCAL-SCALE EVALUATION OF MM5 METEOROLOGICAL FIELDS FOR VARIOUS AIR QUALITY MODELING APPLICATIONS

1 Jason Brewer, 2,* Pat Dolwick, and 3,* Rob Gilliam

1 Department of Marine, Earth, and Atmospheric Sciences, North Carolina State University, Raleigh, North Carolina
2 Air Quality Modeling Group, Office of Air Quality Planning and Standards (OAQPS), USEPA, Research Triangle Park, North Carolina
3 Atmospheric Modeling Division, National Exposure Research Laboratory (NERL), USEPA, Research Triangle Park, North Carolina
* On assignment from the Air Resources Laboratory, NOAA

1. Introduction / Background

Prognostic meteorological models are often used in a retrospective mode to provide inputs to air quality models used for environmental planning. These inputs govern the advection, diffusion, chemical transformation, and eventual deposition of pollutants within regional air quality models such as the Community Multiscale Air Quality (CMAQ) modeling system [1], and are being investigated for use in local-scale assessments such as AERMOD [2]. The air quality models have consistently been subjected to rigorous performance assessment, but in many cases the meteorological inputs to these models are accepted as is, even though this component of the modeling arguably contains more uncertainty, which could significantly affect the results of the analysis [3].

Before initiating the air quality simulations, it is important to identify the biases and errors associated with the meteorological modeling. The goal of the meteorological evaluation [4] is to move toward an understanding of how the bias and error of the meteorological input data affect the resultant air quality modeling. Typically, there are two specific objectives: 1) determine whether the meteorological model output fields represent a reasonable approximation of the actual meteorology that occurred during the modeling period (the "operational" evaluation), and 2) identify and quantify how the existing biases and errors in the meteorological predictions may affect the air quality modeling results (the "phenomenological" evaluation).

This analysis examines the performance of the Penn State University / National Center for Atmospheric Research mesoscale model [5], known as MM5, for two separate years (2001 and 2002) at two model resolutions (36 and 12 km). The model evaluation is summarized for the entire domain, for individual subregions within the domain, and at certain individual sites, to assess the suitability of the data to drive regional-scale photochemical models (e.g., CMAQ) versus local-scale dispersion models (e.g., AERMOD). The operational evaluation includes statistical comparisons of model/observed pairs (e.g., bias, index of agreement, root mean square error) for multiple meteorological parameters (e.g., temperature, water vapor mixing ratio, winds); a minimal illustration of these statistics follows panel 3 below. The phenomenological evaluation is based on existing air quality conceptual models and assesses performance for varied phenomena such as trajectories, low-level jets, frontal passages, and air mass residence time, using a different universe of statistics such as false alarm rates and probabilities of detection. This poster shows only a small subset of the completed analyses on which the conclusions are based.

3. Sample Regional-Scale Operational Evaluation (2002 12km MM5)

[Figure panel: domain-wide and subregional model/observation statistics for the 2002 12 km MM5; graphics not reproduced.]
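For readers unfamiliar with the operational metrics referenced above, the following is a minimal Python sketch, not the AMET software used in this study, of how mean bias, root mean square error, and Willmott's index of agreement are commonly computed from matched model/observation pairs; all function and variable names are illustrative.

```python
# Minimal sketch (not the AMET implementation) of the operational statistics
# named in section 1, computed over matched model/observation pairs.
import numpy as np

def operational_stats(model: np.ndarray, obs: np.ndarray) -> dict:
    """Mean bias, root mean square error, and Willmott's index of agreement."""
    diff = model - obs
    bias = diff.mean()
    rmse = np.sqrt((diff ** 2).mean())
    # Index of agreement (Willmott, 1981 form): 1 = perfect agreement.
    denom = ((np.abs(model - obs.mean()) + np.abs(obs - obs.mean())) ** 2).sum()
    ioa = 1.0 - (diff ** 2).sum() / denom
    return {"bias": bias, "rmse": rmse, "ioa": ioa}

# Example: five hourly 2-m temperature (K) pairs for one hypothetical site.
model = np.array([271.2, 272.0, 273.1, 275.4, 277.0])
obs = np.array([272.5, 273.4, 274.0, 275.9, 277.2])
print(operational_stats(model, obs))  # negative bias -> model too cold
```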
2. 2001 and 2002 MM5 Configuration

Model version:
- 2001 (36 km): 3.6.1
- 2001 (12 km): 3.6.3 (with minor fixes to KF2 and Reisner 2)
- 2002 (36 km): 3.6.3
- 2002 (12 km): 3.7.2

Domain size:
- 2001/2002 (36 km): 165 * 129 * 34
- 2001/2002 (12 km): 290 * 251 * 34

Major physics options:
- Radiation: RRTM long-wave radiation
- Cumulus parameterization: Kain-Fritsch 1 (2001 36 km only), Kain-Fritsch 2 (all others)
- Microphysics: Reisner 2 (2001), Reisner 1 (2002)
- Land surface model / PBL scheme: Pleim-Xiu / Asymmetric Convective Method (ACM1)

Analysis nudging coefficients (12 km):
- Winds: 1.0E-4 aloft; 1.0E-4 at the surface
- Temperature: 1.0E-4 aloft; N/A at the surface
- Moisture: 1.0E-5 aloft; N/A at the surface

Run durations: 5.5-day individual runs, within 7 two-month simulations
Evaluation software: Atmospheric Model Evaluation Tool (AMET)

7. Conclusions

- All four sets of meteorological model output fields represent a reasonable approximation of the actual meteorology that occurred during the modeling period (see panel 3). Qualitative comparisons of synoptic patterns (not shown) indicate the model captures large-scale features such as high pressure domes and upper-level troughs.
- The most troublesome aspect of meteorological model performance is the surface temperature "cold bias" during the winter, especially January. Across the four MM5 simulations, the January cold bias typically averaged 2-3 deg C, and the effect is largest overnight (panel 5d). The resultant tendency is to overestimate stability in the lowest layers, which could significantly affect the air quality results because pollutants emitted at the surface may not be properly mixed.
- Generally, bias/error does not appear to be a function of region; however, individual model/observation comparisons in space and time can show large deviations. Caution should be exercised when using these meteorological data for air quality modeling in the Rocky Mountain region, where the model errors/biases are much larger than in the other regions analyzed (see panel 3).
- Care will have to be exercised when using these MM5 results on the local scale. When averaged regionally, there is little to no bias in wind directions, but as shown in panel 4, local variances can be considerably higher (see the sketch after these conclusions). Users of Gaussian-plume-based models should scrutinize the MM5 performance closely over their areas of interest.
- The model is generally unbiased for precipitation at the large scale (panel 5a), though the 2001 results appear to match the observations better than the 2002 results, perhaps indicating that use of the Reisner 2 microphysics scheme was justified.
- The "key site" analysis shown in panel 6 examined MM5 performance over a specific ozone event in the Ohio Valley. These evaluations can be time-consuming but are important for identifying appropriate modeling episodes.
- This evaluation is not entirely complete. We would like to do more analysis on cloud coverage, PBL heights, model performance as a function of meteorological regime (clusters), and wind field comparisons against trajectory models.
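To illustrate the wind direction point above (a regionally averaged bias near zero can coexist with large local errors), the hypothetical sketch below applies the standard circular wrap of direction differences to [-180, 180) degrees before averaging. It is not the evaluation code used for this study; all names are illustrative.

```python
# Illustrative sketch: wind direction differences must be wrapped to
# [-180, 180) degrees before averaging, and a mean bias near zero can
# still hide large local errors.
import numpy as np

def wd_error(model_deg: np.ndarray, obs_deg: np.ndarray) -> np.ndarray:
    """Signed model-minus-observed wind direction error, wrapped to [-180, 180)."""
    return (model_deg - obs_deg + 180.0) % 360.0 - 180.0

# Two hypothetical sites with equal and opposite 40-degree errors: the
# regional mean bias is zero even though neither site is well simulated.
model = np.array([30.0, 350.0])
obs = np.array([350.0, 30.0])
err = wd_error(model, obs)
print(err)                 # [ 40. -40.]
print(err.mean())          # 0.0  (regional average hides the local errors)
print(np.abs(err).mean())  # 40.0 (gross error reveals them)
```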
Note: All four of these data sets are available. If you are interested in acquiring the data, please e-mail Pat Dolwick (dolwick.pat@epa.gov). The transfer process requires the user to provide USB drives.

Acknowledgements / References

The MM5 runs evaluated as part of this study were completed by Alpine Geophysics (2001 simulations) and Computer Sciences Corporation (2002 simulations). The authors would like to thank Dennis McNally, Lara Reynolds, and Allan Huffman for the effort they put into completing the meteorological modeling.

[1] Byun, D.W., and K.L. Schere, 2006: Review of the governing equations, computational algorithms, and other components of the Models-3 Community Multiscale Air Quality (CMAQ) modeling system. Applied Mechanics Reviews, 59(2), 51-77.
[2] U.S. Environmental Protection Agency, 2004: User's Guide for the AMS/EPA Regulatory Model AERMOD. EPA-454/B-03-001, September 2004.
[3] Tesche, T.W., D.E. McNally, and C. Tremback, 2002: Operational evaluation of the MM5 meteorological model over the continental United States: Protocol for annual and episodic evaluation. Submitted to USEPA as part of Task Order 4TCG-68027015, July 2002.
[4] U.S. Environmental Protection Agency, 2006: Guidance on the Use of Models and Other Analyses for Demonstrating Attainment of Air Quality Goals for Ozone, PM2.5, and Regional Haze. Draft 3.2, September 2006.
[5] Grell, G.A., J. Dudhia, and D.R. Stauffer, 1994: A description of the fifth-generation Penn State/NCAR Mesoscale Model (MM5). NCAR/TN-398+STR, 138 pp.

4. Sample Local-Scale Operational Evaluation Results (3 locations: Birmingham, Detroit, and Seattle)

[Figure panels: comparison of MM5 predictions vs. NWS observations for temperature, moisture, wind speed, and wind direction across the 2001 36 km, 2001 12 km, 2002 36 km, and 2002 12 km runs; quarterly temperature bias (K) vs. error (K) at Detroit (DET), Birmingham (BHM), and Seattle (SEA) for the 2002 36 km MM5. Graphics not reproduced.]

5. Sample Phenomenological Evaluation Results (2002 12km MM5)

a) Observed vs. modeled precipitation (May 2002)
b) Seasonally averaged vertical profiles, winter and summer: model vs. obs (GSO)
c) Detailed temperature performance
d) Diurnal temperature performance
[Figure panels; graphics not reproduced.]

6. "Key Site" Evaluation Results

Northern Indiana – August 3, 2002 (12 km MM5)
Cincinnati, OH – Summer 2002 (12 km MM5)
[Figure panels; graphics not reproduced.]

Note: 2-meter temperature (T), mixing ratio (Q), wind speed (WS), and wind direction (WD) are in units of K, g/kg, m/s, and degrees, respectively.
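The categorical statistics named in the phenomenological evaluation (panel 5 and section 1) can be summarized as follows. This is an illustrative Python sketch, with hypothetical names and data, of probability of detection (POD) and false alarm ratio (FAR) for a yes/no event such as a frontal passage or precipitation occurrence; it is not the study's actual code.

```python
# Illustrative sketch of the categorical statistics used in a
# phenomenological evaluation: POD and FAR from a 2x2 contingency table
# built from boolean model/observed event series.
import numpy as np

def pod_far(model_event: np.ndarray, obs_event: np.ndarray) -> tuple:
    """Probability of detection and false alarm ratio for a yes/no event."""
    hits = np.sum(model_event & obs_event)
    misses = np.sum(~model_event & obs_event)
    false_alarms = np.sum(model_event & ~obs_event)
    pod = hits / (hits + misses) if (hits + misses) else np.nan
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else np.nan
    return pod, far

# Example: daily precipitation occurrence over ten hypothetical days.
obs = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1], dtype=bool)
model = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 1], dtype=bool)
print(pod_far(model, obs))  # POD = 4/5 = 0.8, FAR = 2/6 = 0.33
```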