Center for Environmental Research and Technology/Air Quality Modeling, University of California at Riverside: Model Performance Metrics, Ambient Data Sets and Evaluation Tools


Center for Environmental Research and Technology/Air Quality Modeling, University of California at Riverside
Model Performance Metrics, Ambient Data Sets and Evaluation Tools
USEPA PM Model Evaluation Workshop, RTP, NC, February 9-10, 2004
Gail Tonnesen, Chao-Jung Chien, Bo Wang, Youjun Qin, Zion Wang, Tiegang Cao

Acknowledgments
Funding from the Western Regional Air Partnership Modeling Forum and VISTAS.
Assistance from EPA and others in gaining access to ambient data.
12-km plots and analysis from Jim Boylan at the State of Georgia.

Outline
- UCR model evaluation software: problems we had to solve
- Choice of metrics for clean conditions
- Judging performance for high-resolution nested domains

Motivation
Needed to evaluate model performance for WRAP annual regional haze modeling:
- Required a very large number of sites and days
- For several different ambient monitoring networks
Evaluation would be repeated many times:
- Many iterations on the "base case"
- Several model sensitivity/diagnostic cases to evaluate
Limited time and resources were available to complete the evaluation.

Solution
Develop model evaluation software to:
- Compute 17 statistical metrics for model evaluation
- Generate graphical plots in a variety of formats:
  - Scatter plots: all sites for one month; all sites for the full year; one site for all days; one day for all sites
  - Time series for each site
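For illustration, a minimal sketch (not the UCR software itself) of one of the plot types listed above: a scatter plot of paired model and observed values for all sites in one month, with 1:1, 2:1, and 1:2 reference lines. The input file and column names ("date", "obs", "model") are hypothetical.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical paired model/observation file for one species and one month.
pairs = pd.read_csv("paired_so4_jan.csv", parse_dates=["date"])

fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(pairs["obs"], pairs["model"], s=12, alpha=0.6)
lim = 1.05 * max(pairs["obs"].max(), pairs["model"].max())
ax.plot([0, lim], [0, lim], "k--", lw=1)        # 1:1 line
ax.plot([0, lim], [0, 2 * lim], "k:", lw=1)     # 2:1 line
ax.plot([0, lim], [0, 0.5 * lim], "k:", lw=1)   # 1:2 line
ax.set(xlim=(0, lim), ylim=(0, lim),
       xlabel="Observed SO4 (ug/m3)", ylabel="Modeled SO4 (ug/m3)",
       title="All sites, one month")
fig.savefig("scatter_so4_jan.png", dpi=150)
```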

Ambient Monitoring Networks
- IMPROVE (The Interagency Monitoring of Protected Visual Environments)
- CASTNET (Clean Air Status and Trends Network)
- EPA's AQS (Air Quality System) database
- EPA's STN (Speciation Trends Network)
- NADP (National Atmospheric Deposition Program)
- SEARCH daily and hourly data
- PAMS (Photochemical Assessment Monitoring Stations)
- PM Supersites

Number of Sites Evaluated by Network

Overlap Among Monitoring Networks
(Diagram of the overlap among networks: EPA PM sites and other monitoring stations from state and local agencies, measuring combinations of PM2.5, PM10, O3, SO2, NOx, CO, Pb, VOCs, speciated PM2.5, visibility, HNO3, NO3, and SO4.)

Species Mapping
Specify how to compare model with data for each network.
Unique species mapping for each air quality model.

Model vs. Obs. Species Mapping Table

SO4 | IMPROVE: SO4 | SEARCH: PCM1_SO4 | STN: M_SO4 | CMAQ: ASO4J + ASO4I
NO3 | IMPROVE: NO3 | SEARCH: PCM1_NO3 | STN: M_NO3 | CMAQ: ANO3J + ANO3I
NH4 | IMPROVE: 0.375*SO4 + 0.29*NO3 | SEARCH: PCM1_NH4 | STN: M_NH4 | CMAQ: ANH4J + ANH4I
OC | IMPROVE: 1.4*(OC1 + OC2 + OC3 + OC4 + OP) | SEARCH: 1.4*PCM3_OC + 1.4*SAF*BackupPCM3_OC | STN: OCM_adj | CMAQ: AORGAJ + AORGAI + AORGPAJ + AORGPAI + AORGBJ + AORGBI
EC | IMPROVE: EC1 + EC2 + EC3 - OP | SEARCH: PCM3_EC | STN: EC_NIOSH | CMAQ: AECJ + AECI
SOIL | IMPROVE: 2.2*Al + 2.49*Si + 1.63*Ca + 2.42*Fe + 1.94*Ti | SEARCH: PM25_MajorMetalOxides | STN: Crustal | CMAQ: A25I + A25J
CM | IMPROVE: MT - FM | CMAQ: ACORS + ASEAS + ASOIL
PM25 | IMPROVE: FM | SEARCH: TEOM_Mass | STN: pm2_5frm or pm2_5mass | CMAQ: ASO4J + ASO4I + ANO3J + ANO3I + ANH4J + ANH4I + AORGAJ + AORGAI + AORGPAJ + AORGPAI + AORGBJ + AORGBI + AECJ + AECI + A25J + A25I
PM10 | IMPROVE: MT | CMAQ: ASO4J + ASO4I + ANO3J + ANO3I + ANH4J + ANH4I + AORGAJ + AORGAI + AORGPAJ + AORGPAI + AORGBJ + AORGBI + AECJ + AECI + A25J + A25I + ACORS + ASEAS + ASOIL
Bext_Recon (1/Mm) | IMPROVE: 10 + 3*f(RH)*(1.375*SO4 + 1.29*NO3) + 4*OC + 10*EC + SOIL + 0.6*CM | CMAQ: 10 + 3*f(RH)*[1.375*(ASO4J + ASO4I) + 1.29*(ANO3J + ANO3I)] + 4*1.4*(AORGAJ + AORGAI + AORGPAJ + AORGPAI + AORGBJ + AORGBI) + 10*(AECJ + AECI) + 1*(A25J + A25I) + 0.6*(ACORS + ASEAS + ASOIL)

Gaseous compounds, wet deposition, and others

O3, ppmv | AQS: O3 | CMAQ: O3
CO, ppmv | AQS: CO | CMAQ: CO
NO2, ppmv | AQS: NO2 | CMAQ: NO2
SO2, ppmv | AQS: SO2 | CMAQ: SO2
SO2, ug/m3 | CASTNET: Total_SO2 | CMAQ: (...)*DENS*SO2
HNO3, ug/m3 | CASTNET: NHNO3 | CMAQ: (...)*DENS*HNO3
Total_NO3, ug/m3 | CASTNET: Total_NO3 | CMAQ: ANO3J + ANO3I + 2211.5*DENS*HNO3
SO4_wdep, kg/ha | NADP: WSO4 | CMAQ: ASO4J + ASO4I (from WDEP1)
NO3_wdep, kg/ha | NADP: WNO3 | CMAQ: ANO3J + ANO3I (from WDEP1)
NH4_wdep, kg/ha | NADP: WNH4 | CMAQ: ANH4J + ANH4I (from WDEP1)
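As a rough illustration of how a mapping table like the ones above is applied, the sketch below sums the CMAQ modal species (with optional coefficients) that correspond to each observed compound. The `get_cmaq_value` accessor is a hypothetical stand-in for extracting a model species at a monitor's grid cell and sampling time; only a few rows of the table are shown.

```python
# Observed compound -> [(coefficient, CMAQ species), ...], taken from the table above.
CMAQ_MAP = {
    "SO4": [(1.0, "ASO4J"), (1.0, "ASO4I")],
    "NO3": [(1.0, "ANO3J"), (1.0, "ANO3I")],
    "EC":  [(1.0, "AECJ"),  (1.0, "AECI")],
}

def map_species(compound, get_cmaq_value):
    """Combine CMAQ model species into a value comparable to one observed compound."""
    return sum(coef * get_cmaq_value(spec) for coef, spec in CMAQ_MAP[compound])

# Example with dummy grid-cell concentrations (ug/m3):
cell = {"ASO4J": 2.1, "ASO4I": 0.2, "ANO3J": 0.6, "ANO3I": 0.05, "AECJ": 0.3, "AECI": 0.04}
print(map_species("SO4", cell.get))   # 2.3
```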

Recommended Performance Metrics?
No EPA guidance available for PM.
Everyone has their personal favorite metric.
Several metrics are not symmetric about zero, so over-predictions are exaggerated compared to under-predictions.
Is the coefficient of determination (R2) a useful metric?

Statistical measures used in model performance evaluation

Accuracy of unpaired peak (Au): Au = 100% * (Pu_peak - O_peak) / O_peak, where O_peak is the peak observation and Pu_peak is the unpaired peak prediction within 2 grid cells of the peak observation site.
Accuracy of paired peak (Ap): Ap = 100% * (P_peak - O_peak) / O_peak, where P_peak is the peak prediction paired in time and space.
Coefficient of determination: R2 = [sum_i (P_i - Pbar)*(O_i - Obar)]^2 / [sum_i (P_i - Pbar)^2 * sum_i (O_i - Obar)^2], where P_i is the prediction at time and location i, O_i is the observation at time and location i, and Pbar and Obar are the arithmetic averages of P_i and O_i, i = 1, 2, ..., N.
Normalized Mean Error (NME): NME = 100% * [sum_i |P_i - O_i|] / [sum_i O_i].
Root Mean Square Error (RMSE): RMSE = sqrt[(1/N) * sum_i (P_i - O_i)^2].
Mean Absolute Gross Error (MAGE): MAGE = (1/N) * sum_i |P_i - O_i|.

Center for Environmental Research and Technology/Air Quality Modeling University of California at Riverside MeasureMathematical ExpressionNotation Fractional Gross Error (FE) Reported as % Mean Normalized Gross Error (MNGE) Reported as % Mean Bias (MB) Mean Normalized Bias (MNB) Reported as % Mean Fractionalized Bias (Fractional Bias, MFB) Reported as % Normalized Mean Bias (NMB) Reported as % Bias Factor (BF) Bias Factor = 1 + MNB; Reported as ratio notation (prediction : observation)

Most Used Metrics
- Mean Normalized Bias (MNB): from -100% to +infinity
- Normalized Mean Bias (NMB): from -100% to +infinity
- Fractional Bias (FB): from -200% to +200%
- Fractional Error (FE): from 0% to +200%
- Bias Factor (Knipping ratio) is MNB + 1, reported as a ratio, for example: 4:1 for over-prediction, 1:4 for under-prediction
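A two-point worked example (hypothetical values) of why the normalized metrics are asymmetric about zero while the fractional metrics are bounded and symmetric:

```python
# One observation of 1.0 ug/m3, over-predicted by 4x and under-predicted by 4x.
obs, over, under = 1.0, 4.0, 0.25

def mnb(p, o): return 100 * (p - o) / o            # normalized bias for a single pair (%)
def fb(p, o): return 100 * 2 * (p - o) / (p + o)   # fractional bias for a single pair (%)

print(mnb(over, obs), mnb(under, obs))  # +300 vs. -75: asymmetric about zero
print(fb(over, obs), fb(under, obs))    # +120 vs. -120: symmetric, bounded by +/-200
print(1 + mnb(over, obs) / 100, 1 + mnb(under, obs) / 100)  # bias factors 4.0 (4:1) and 0.25 (1:4)
```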

UCR Java-based AQM Evaluation Tools

SAPRC99 vs. CB4: NO3 (IMPROVE). Table of FE% and FB% cross comparisons for SAPRC99, CB4, and CB4-2002.

SAPRC99 vs. CB4: SO4 (IMPROVE). Table of FE% and FB% cross comparisons for SAPRC99, CB4, and CB4-2002.

Time series plot for CMAQ vs. CAMx at SEARCH site – JST (Jefferson St.)

Table footnotes: (1) with 60 ppb ambient cutoff; (2) using 3*elemental sulfur; (3) no data available in WRAP domain; (4) measurements available at 3 sites.

Viewing Spatial Patterns
Problem: model performance metrics and time-series plots do not identify cases where the model is "off by one grid cell".
Solution: process the ambient data into I/O API format so that the data can be compared to the model using PAVE.
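A minimal sketch of the gridding step described above, under assumptions that are not from the original talk: it maps monitor locations onto a Lambert conformal model grid and writes the result to a plain netCDF file. A real Models-3 I/O API file would also need the TFLAG variable and the standard I/O API global attributes, and the grid and projection parameters shown here are illustrative, not the actual WRAP domain definition.

```python
import numpy as np
import pyproj
from netCDF4 import Dataset

NCOLS, NROWS, DX = 95, 83, 36000.0          # hypothetical 36-km grid dimensions
XORIG, YORIG = -1710000.0, -1494000.0       # hypothetical grid origin (m)
lcc = pyproj.Proj(proj="lcc", lat_1=33.0, lat_2=45.0, lat_0=40.0, lon_0=-97.0)

grid = np.full((NROWS, NCOLS), -999.0, dtype=np.float32)
monitors = [(-110.5, 40.1, 0.8), (-105.2, 39.7, 1.3)]   # (lon, lat, SO4 obs) examples
for lon, lat, value in monitors:
    x, y = lcc(lon, lat)                    # project lon/lat to grid coordinates (m)
    col = int((x - XORIG) // DX)
    row = int((y - YORIG) // DX)
    if 0 <= row < NROWS and 0 <= col < NCOLS:
        grid[row, col] = value              # drop the observation into its grid cell

with Dataset("obs_so4_grid.nc", "w") as nc:
    nc.createDimension("ROW", NROWS)
    nc.createDimension("COL", NCOLS)
    var = nc.createVariable("SO4_OBS", "f4", ("ROW", "COL"), fill_value=-999.0)
    var[:] = grid
```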

IMPROVE SO4, Jan 5

IMPROVE SO4, June 10

IMPROVE NO3, Jan 5

IMPROVE NO3, July 1

IMPROVE SOA, Jan 5

IMPROVE SOA, June 25

Spatially Weighted Metrics
PAVE plots qualitatively indicate error relative to spatial patterns, but do we also need to quantify this?
- A wind error of 30 degrees can cause the model to miss a peak by one or more grid cells.
- Interpolate the model using surrounding grid cells?
- Use the average of adjacent grid cells?
- Within what distance?
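One possible way to quantify this, sketched below under the assumption of a simple +/-1-cell search radius (an illustration, not an approach endorsed by the talk): compare each observation to its own grid cell, to the mean of the surrounding cells, and to the best-matching cell in the neighborhood.

```python
import numpy as np

def neighborhood_compare(model2d, row, col, obs, radius=1):
    """Candidate model values to pair with one observation at grid cell (row, col)."""
    r0, r1 = max(row - radius, 0), min(row + radius + 1, model2d.shape[0])
    c0, c1 = max(col - radius, 0), min(col + radius + 1, model2d.shape[1])
    window = model2d[r0:r1, c0:c1]
    return {
        "cell": float(model2d[row, col]),                            # value in the monitor's own cell
        "mean": float(window.mean()),                                # average of adjacent cells
        "best": float(window.flat[np.abs(window - obs).argmin()]),   # closest value within the window
    }

field = np.array([[1.0, 2.0, 3.0],
                  [2.0, 5.0, 2.0],
                  [1.0, 2.0, 1.5]])
print(neighborhood_compare(field, 1, 1, obs=2.2))  # {'cell': 5.0, 'mean': 2.17, 'best': 2.0}
```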

Judging Model Performance
Many plots and metrics, but what is the bottom line?
Need to stratify the data for model evaluation:
- Evaluate seasonal performance.
- Group by related types of sites.
- Judge the model for each site or for similar groups of sites.
- How best to group or stratify sites?
Want to avoid wasting time analyzing plots and metrics that are not useful.
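A sketch of the stratification idea using pandas, with hypothetical input file and column names ("date", "obs", "model", "site_group"): compute fractional bias by season, and by season and site group.

```python
import pandas as pd

def fractional_bias(df):
    """Mean fractional bias (%) for one group of paired values."""
    return 100 * (2 * (df["model"] - df["obs"]) / (df["model"] + df["obs"])).mean()

pairs = pd.read_csv("paired_values.csv", parse_dates=["date"])   # hypothetical input
pairs["season"] = pairs["date"].dt.month.map(
    {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM", 5: "MAM",
     6: "JJA", 7: "JJA", 8: "JJA", 9: "SON", 10: "SON", 11: "SON"})

fb_by_season = pairs.groupby("season").apply(fractional_bias)
fb_by_group = pairs.groupby(["season", "site_group"]).apply(fractional_bias)
print(fb_by_season, fb_by_group, sep="\n")
```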

12km vs. 36km, Winter SO4 (FB%: 36km-35, 12km-39)

12km vs. 36km, Winter NO3 (FB%: 36km-34, 12km-13)

Recommended Evaluation for Nests
Comparing performance metrics is not enough:
- Performance metrics show a mixed response.
- It is possible for the better model to have poorer metrics.
Diagnostic analysis is needed to compare the nested-grid model to the coarse-grid model.

Example Diagnostic Analysis
Some sites had worse metrics for the 12 km grid.
Analysis by Jim Boylan comparing differences in the 12 km and 36 km results showed major effects from:
- Regional precipitation
- Regional transport (wind speed and direction)
- Plume definition
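As an illustration of the kind of difference fields shown in the next slides, here is a sketch (assuming the 12-km grid is exactly nested in the 36-km grid with a 3:1 ratio and aligned corners; array names and sizes are placeholders) that projects the coarse field onto the fine grid and differences the two:

```python
import numpy as np

def coarse_minus_fine(coarse2d, fine2d, ratio=3):
    """Replicate each coarse cell onto the ratio x ratio fine cells it covers, then subtract."""
    coarse_on_fine = np.repeat(np.repeat(coarse2d, ratio, axis=0), ratio, axis=1)
    return coarse_on_fine - fine2d

# Example with random placeholder fields (83x95 coarse window -> 249x285 fine window):
rng = np.random.default_rng(1)
so4_36km = rng.uniform(0.5, 3.0, (83, 95))
so4_12km = rng.uniform(0.5, 3.0, (83 * 3, 95 * 3))
delta = coarse_minus_fine(so4_36km, so4_12km)
print(delta.shape)   # (249, 285)
```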

Sulfate Change (36 km – 12 km)

Wet Sulfate on July 9 at 01:00: 36 km Grid vs. 12 km Grid

Regional Transport (Wind Speed)

Sulfate on July 9 at 05:00: 36 km Grid vs. 12 km Grid

Sulfate on July 9 at 06:00: 36 km Grid vs. 12 km Grid

Sulfate on July 9 at 07:00: 36 km Grid vs. 12 km Grid

Sulfate on July 9 at 08:00: 36 km Grid vs. 12 km Grid

Plume Definition and Artificial Diffusion

Sulfate on July 10 at 00:00: 36 km Grid vs. 12 km Grid

Sulfate on July 10 at 06:00: 36 km Grid vs. 12 km Grid

Sulfate on July 10 at 09:00: 36 km Grid vs. 12 km Grid

Sulfate on July 10 at 12:00: 36 km Grid vs. 12 km Grid

Sulfate on July 10 at 16:00: 36 km Grid vs. 12 km Grid

Sulfate on July 10 at 21:00: 36 km Grid vs. 12 km Grid

Sulfate on July 11 at 00:00: 36 km Grid vs. 12 km Grid

Sulfate Change (36 km – 12 km)

Nested Grid Recommendations
Diagnostic evaluation is needed to judge nested-grid performance.
The coarse grid might have compensating errors that produce better performance metrics.
Diagnostic evaluation is resource intensive.
Should we just assume that higher resolution implies better physics?

Conclusions – Key Issues
Air quality models should include a model evaluation module that produces performance plots and metrics.
Recommend the bias factor as the best metric for haze.
Much more work is needed to address error relative to spatial patterns.
If different models have similar error, use the model with the best science (even if it is more computationally expensive).

Additional Work on Evaluation Tools
Need to adapt the evaluation software for PAMS and the PM Supersites.
Develop a GUI to facilitate viewing of plots, including open-source tools for spatial animations.
Develop software to produce more useful plots, e.g., contour plots of bias and error.
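A small sketch of the last item above: a filled-contour map of a gridded bias field with matplotlib. The bias array here is a random placeholder rather than real model-minus-observation values.

```python
import numpy as np
import matplotlib.pyplot as plt

bias2d = np.random.default_rng(0).normal(0.0, 1.0, (83, 95))   # placeholder bias field on the model grid
fig, ax = plt.subplots()
im = ax.contourf(bias2d, levels=np.linspace(-3, 3, 13), cmap="RdBu_r", extend="both")
fig.colorbar(im, ax=ax, label="SO4 bias (ug/m3)")
ax.set(title="Gridded model bias", xlabel="column", ylabel="row")
fig.savefig("bias_contour.png", dpi=150)
```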