Five-year Progress in the Performance of Air Quality Forecast Models: Analysis on Categorical Statistics for the National Air Quality Forecast Capacity.


Five-year Progress in the Performance of Air Quality Forecast Models: Analysis on Categorical Statistics for the National Air Quality Forecast Capacity (NAQFC)
Daiwen Kang 1, Rohit Mathur 2, Brian Eder 2, Kenneth Schere 2, and S. Trivikrama Rao 2
1 Computer Science Corporation; 2 Atmospheric Modeling and Analysis Division, NERL/U.S. EPA
8th Annual CMAS Conference, Chapel Hill, NC, October 19-21, 2009

Motivations
- Assess the progress in performance improvements for categorical metrics of the NAQFC system for O3 forecasts over the past 5 years
- Identify categorical metrics that best characterize AQF performance for categorical forecasts
- Assess AQI-based categorical performance
- Propose guidelines for AQF categorical evaluations based on the analysis of KF bias-adjusted forecasts and human forecasts

Traditional Categorical Metrics
A 2x2 contingency table of forecast versus observed exceedances (following the usual convention):

                            Observed: Yes    Observed: No
  Forecast Exceedance: Yes       a                b
  Forecast Exceedance: No        c                d

where a = hits, b = false alarms, c = misses, and d = correct non-exceedances.
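The standard exceedance-based scores used in the later slides (Accuracy, Bias, Hit rate, False Alarm Ratio, Critical Success Index) all follow from the four contingency-table counts. A minimal sketch, assuming the usual convention that a = hits, b = false alarms, c = misses, d = correct non-exceedances (function and key names are this sketch's, not the authors'):

```python
def categorical_stats(a, b, c, d):
    """Exceedance-based categorical metrics from a 2x2 contingency table.

    a: forecast and observed exceedance (hits)
    b: forecast exceedance, no observed exceedance (false alarms)
    c: observed exceedance, no forecast exceedance (misses)
    d: neither forecast nor observed exceedance (correct non-exceedances)
    """
    n = a + b + c + d
    return {
        "A":   100.0 * (a + d) / n,      # accuracy
        "B":   (a + b) / (a + c),        # bias (>1 means over-forecasting)
        "H":   100.0 * a / (a + c),      # hit rate
        "FAR": 100.0 * b / (a + b),      # false alarm ratio
        "CSI": 100.0 * a / (a + b + c),  # critical success index
    }
```

With illustrative counts dominated by correct non-exceedances (e.g., a=20, b=60, c=10, d=910), accuracy exceeds 90% while FAR is high and CSI low, which mirrors the behavior described on the next slides.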

AQI Definition and Categories

  AQI Value    Level of Health Concern           Color
  0 to 50      Good                              Green (1)
  51 to 100    Moderate                          Yellow (2)
  101 to 150   Unhealthy for Sensitive Groups    Orange (3)
  151 to 200   Unhealthy                         Red (4)
  201 to 300   Very Unhealthy                    Purple (5)
  301 to 500   Hazardous                         Maroon (6)

The index is obtained by linear interpolation between breakpoints:

  I_p = (I_Hi − I_Lo) / (BP_Hi − BP_Lo) × (C_p − BP_Lo) + I_Lo

where:
  I_p = the index for pollutant p (O3 in this case)
  C_p = the rounded concentration of pollutant p
  BP_Hi = the breakpoint that is ≥ C_p
  BP_Lo = the breakpoint that is ≤ C_p
  I_Hi = the AQI value corresponding to BP_Hi
  I_Lo = the AQI value corresponding to BP_Lo
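The piecewise-linear interpolation above can be sketched in a few lines. Note the breakpoint tuples below are illustrative placeholders, not the official EPA O3 breakpoint table:

```python
def aqi(conc, breakpoints):
    """EPA-style AQI interpolation:
    I_p = (I_Hi - I_Lo) / (BP_Hi - BP_Lo) * (C_p - BP_Lo) + I_Lo

    breakpoints: list of (BP_Lo, BP_Hi, I_Lo, I_Hi) tuples.
    """
    for bp_lo, bp_hi, i_lo, i_hi in breakpoints:
        if bp_lo <= conc <= bp_hi:
            # linear interpolation within the matching breakpoint segment
            return round((i_hi - i_lo) / (bp_hi - bp_lo) * (conc - bp_lo) + i_lo)
    raise ValueError("concentration outside breakpoint table")

# Illustrative (NOT official) 8-h O3 breakpoints in ppb:
O3_BREAKPOINTS = [(0, 54, 0, 50), (55, 70, 51, 100), (71, 85, 101, 150)]
```

For example, with these illustrative breakpoints a concentration of 60 ppb maps to AQI 67 (Moderate / Yellow).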

AQI-based Metrics Definition
For each AQI category i (1 through 5, i.e., green, yellow, orange, red, purple), let O_i and F_i denote the number of observed and forecast instances in the ith category, C_i the number of correctly forecast instances in the ith category, and N the total number of records. The category-based hit rate and critical success index then take the usual forms cH_i = C_i / O_i × 100% and cCSI_i = C_i / (O_i + F_i − C_i) × 100%, with overall counterparts oH = (Σ_i C_i) / N × 100% and oCSI obtained by aggregating the counts over all categories.
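These AQI-based scores can be computed directly from per-record category pairs. A minimal sketch, assuming the per-category hit rate and CSI follow the standard forms applied category by category (the symbol and function names are this sketch's assumptions, since the slide's equation images were not preserved):

```python
from collections import Counter

def aqi_categorical_stats(obs_cat, fcst_cat):
    """AQI-based categorical metrics from per-record category labels (1..5).

    Returns per-category hit rates (cH), per-category critical success
    indexes (cCSI), and the overall hit rate (oH), all in percent.
    """
    n = len(obs_cat)
    obs_n = Counter(obs_cat)    # O_i: observed records per category
    fcst_n = Counter(fcst_cat)  # F_i: forecast records per category
    # C_i: records whose forecast category matches the observed category
    correct = Counter(o for o, f in zip(obs_cat, fcst_cat) if o == f)
    cH = {i: 100.0 * correct[i] / obs_n[i] for i in obs_n}
    cCSI = {i: 100.0 * correct[i] / (obs_n[i] + fcst_n[i] - correct[i])
            for i in set(obs_n) | set(fcst_n)}
    oH = 100.0 * sum(correct.values()) / n
    return cH, cCSI, oH
```

`Counter` conveniently returns 0 for categories with no correct forecasts, so sparse categories need no special handling.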

Categorical Stats over 3x domain (1)
(Charts: Accuracy (A); Bias (B))
The accuracy is always high (>90%) because correctly forecast non-exceedance points dominate. The bias indicates that the model has consistently overestimated exceedances through the years.

Categorical Stats over 3x domain (2)
(Charts: eFAR; eH)
False alarm ratios are quite high across all the years, ranging from 70 to 90% on average. Mean hit rates are generally greater than 40%, except in 2006, when the meteorological driver underwent a major transition from Eta to WRF.

Categorical Stats over 3x domain (3)
Critical Success Index (CSI)
The critical success index reflects the combination of false alarm ratio and hit rate. A forecast system can have both high FAR and high H, or both low FAR and low H, either of which results in low CSI. High CSI values indicate moderate FAR together with reasonable H.

Metropolitan Statistical Area (MSA)
Local forecasters generally forecast the maximum AQI value that they expect to occur anywhere within an MSA, and then verify this forecast against the maximum monitored value within that area. For example, the Charlotte MSA comprises 8 counties (7 in NC, 1 in SC), contains 8 AQS monitors (7 in NC, 1 in SC), and is represented by 103 12-km grid cells in the NAQFC. (Maps: O3 and AQI.)
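The MSA-level verification step described above (pair the daily maximum forecast with the daily maximum observation across all sites in the MSA) can be sketched as follows; the record layout and function name are illustrative assumptions:

```python
def msa_daily_max(records):
    """Collapse site-level records to one MSA-level pair per day.

    records: iterable of (day, site, obs, fcst) tuples for one MSA.
    Returns {day: (max_obs, max_fcst)} -- the maximum observed and
    maximum forecast value across all sites for each day.
    """
    daily = {}
    for day, _site, obs, fcst in records:
        mo, mf = daily.get(day, (float("-inf"), float("-inf")))
        daily[day] = (max(mo, obs), max(mf, fcst))
    return daily
```

Note that the observed and forecast maxima for a given day may come from different sites, which is exactly how local forecasts are verified.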

MSAs used in this research: Atlanta, Charlotte, Dallas, Houston, and Washington, DC.

Kalman Filter Bias-adjustment
A Kalman filter (KF) was used to bias-adjust the raw model forecasts over the continental U.S. domain during summer seasons, at all locations where AIRNow monitoring data were available. The categorical performance of both the raw model and the KF forecasts was assessed over:
1. all sites (paired observation-model grid cell) within the domain,
2. sites within all MSAs, and
3. MSA values (the maximum value over all sites within the MSA for each day).
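KF bias adjustment of this kind is typically implemented as a recursive estimate of the forecast bias, updated each day from the previous day's forecast error. The sketch below is a minimal scalar version under that assumption; the variance ratio, initial values, and function name are illustrative, not the authors' exact configuration:

```python
def kf_bias_adjust(obs, fcst, sigma_ratio=0.1):
    """Recursive (Kalman-filter) bias correction of a forecast series.

    Each day's forecast is corrected with the bias estimated from
    previous days only, so there is no lookahead. sigma_ratio is the
    assumed ratio of process noise to error variance (illustrative).
    """
    bias, p = 0.0, 1.0  # initial bias estimate and its error variance
    adjusted = []
    for o, f in zip(obs, fcst):
        adjusted.append(f - bias)  # apply the prior days' bias estimate
        kg = (p + sigma_ratio) / (p + sigma_ratio + 1.0)  # Kalman gain
        bias += kg * ((f - o) - bias)        # update with today's error
        p = (1.0 - kg) * (p + sigma_ratio)   # update error variance
    return adjusted
```

With a constant systematic bias (the situation the slides describe for 2006), the running estimate converges to the true bias within a few weeks of daily updates, so the adjusted forecasts approach the observations.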

NAQFC Categorical Performance vs. Human Forecast
(Charts: Exceedance Hit Rate and Exceedance False Alarm Rate, Human vs. NAQFC)
Because the NAQFC is positively biased, it tends to capture a higher percentage of exceedances (higher hit rate), but this also results in higher false alarm ratios. The critical success index results were mixed over MSAs, but on average the NAQFC performed better than the human forecasts.

cH for the raw model and KF forecasts at all sites and MSAs
Domain All Sites: all AIRNow sites within the domain are included in the calculation.
MSA All Sites: all AIRNow sites located in one of the MSAs listed earlier.
MSA: the maximum values from both the AIRNow sites and the model forecasts within each MSA are used to generate the statistics.

cCSI for the raw model and KF forecasts at all sites and MSAs

eH for the raw model and KF forecasts at all sites and MSAs
Hit rates increase significantly when evaluated over MSAs rather than over individual sites. The KF bias-adjusted forecasts improved the hit rate, especially when the raw model suffered from large systematic biases, as in 2006.

eFAR for the raw model and KF forecasts at all sites and MSAs
False alarm ratios are significantly lower when evaluated over MSAs than over individual sites. The KF bias-adjusted forecasts significantly reduced FAR in all situations across all years.

eCSI for the raw model and KF forecasts at all sites and MSAs
eCSI values almost doubled when evaluated over MSAs relative to individual sites. The KF bias-adjusted forecasts had larger eCSI values than the raw model forecasts, especially when evaluated over individual sites.

oH for the raw model and KF forecasts at all sites and MSAs
The overall hit rates were consistent and stable, improving slowly over the years for both the KF and raw model forecasts. The KF forecasts always had larger oH values than the raw model. oH values decreased when evaluated over MSAs (though they remained above 50%) because of overestimation at low AQI values, relative to evaluation over individual sites.

oCSI for the raw model and KF forecasts at all sites and MSAs
The overall critical success index (oCSI) is quite consistent and increases over the years. The oCSI values are lower when evaluated over MSAs than over individual sites because the MSA values are the maxima of all sites within the MSA, which lowers the hit rate for low AQI values (low AQIs are overestimated).

Minimum values of H and CSI during the years over the continental US domain and MSAs
(Table: minimum eH, oH, eCSI, and oCSI values (%) for the raw model and KF forecasts, over all sites and over MSAs.)
(1) MSA-based analysis provides a more objective assessment of the practical use of the guidance, consistent with the way local forecasts are typically developed. (2) Bias-adjustment further improves the predictive skill of the system, thereby improving the utility of the forecast products.

Guidelines for AQF models

  Stats Type   eH (%)   oH (%)   eCSI (%)   oCSI (%)
  All Sites      30       50        20         50
  MSA            50       50        30         30

These guideline values lie between the (rounded) minimum values of the raw model and of the KF-adjusted forecasts. They serve (1) as targets for what raw models can realistically achieve through model improvements in the short term, and (2) as a reference level that any AQF model should meet when combined with KF adjustment.

Conclusions
Comparisons indicate that the NAQFC performed at least as well as, if not better than, the human forecasts over MSAs. The categorical performance of the NAQFC was consistent and stable from 2005 to 2008, with the exception of 2006, when the model underwent significant changes that degraded categorical performance. Kalman filter bias-adjustment improved almost all categorical statistics, especially when the raw model was systematically biased, as in 2006.

Conclusions
Hit Rate (H), False Alarm Ratio (FAR), and Critical Success Index (CSI) are the three most appropriate metrics for gauging the categorical performance of an AQF model; CSI is the most informative of the three because it reflects the combination of H and FAR. The AQI-based H and CSI over all sites and MSAs are good indicators of overall performance for categorical forecasts. Based on the analysis in this study, the following guidelines are proposed: eH ≥ 30%, eCSI ≥ 20%, and oH and oCSI ≥ 50% for all sites; eH and oH ≥ 50%, and eCSI and oCSI ≥ 30% for MSAs.

Acknowledgements
The authors would like to thank the NOAA/EPA air quality forecast program and EPA's AIRNow program for providing forecast and observed O3 data. Thanks also to Scott Jackson for providing the human forecast data.
Disclaimer
The United States Environmental Protection Agency, through its Office of Research and Development, funded and managed the research described here. It has been subjected to the Agency's administrative review and approved for presentation.