Modeling spatially-dependent, non-stationary bias in GEOS-CHEM

Modeling spatially-dependent, non-stationary bias in GEOS-CHEM
Halil Cakir (1), Benjamin Wells (1), Pat Dolwick (1), Joseph Pinto (2), Lin Zhang (3)
(1) US EPA, OAQPS, RTP, NC 27711; (2) US EPA, NCEA, RTP, NC 27711; (3) Harvard University, Div. of Appl. Sci. and Eng., Cambridge, MA 02138

Model Evaluation
Historically, model performance has been evaluated using domain-wide metrics, such as the mean fractional bias, or by comparing time series at individual monitoring sites. Useful information can also be obtained by examining spatial patterns across model domains:
- spatial extent of areas of agreement or disagreement
- nature of the inputs and physical/chemical processes that could affect results
The question now is which indexes would be most useful.

Synthetic Data Set and the Index Evaluation
- Indexes for Model Evaluation
- Synthetic Data Set and the Index Evaluation

Indexes for Model Evaluation
- Coefficient of Divergence (COD) (Wongphatarakul, Friedlander, and Pinto, 1998)
- Q-Index (Wang and Bovik, 2002)
- Index of Agreement (IOA) (Willmott et al., 1981, 1985, and 2011)
- M (Watterson, 1996)
- Index of Model Integrity (IMI)

Indexes for Model Evaluation
Coefficient of Divergence (COD) (Wongphatarakul, Friedlander, and Pinto, 1998), where the predicted values (P) of the model are compared with the observed values (O) measured by the monitoring network for each of the i = 1, ..., n samples.
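For reference, a sketch of the standard COD definition from Wongphatarakul et al. (1998), assumed to be the form shown in the slide's equation:

\[ \mathrm{COD} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{P_i - O_i}{P_i + O_i}\right)^{2}} \]

COD approaches 0 when the two series are identical and approaches 1 as they diverge.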

Indexes for Model Evaluation
Q-Index (Wang and Bovik, 2002): can be rewritten as a product of three components, where σ_xy is the covariance between the monitor and modeled signals (or readings), μ_x and μ_y are their means, and σ²_x and σ²_y their variances. The dynamic range of Q is [-1, 1].
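The standard form of the Q index (Wang and Bovik, 2002), assumed to match the slide's equation, is:

\[ Q = \frac{\sigma_{xy}}{\sigma_x \sigma_y} \cdot \frac{2\,\mu_x \mu_y}{\mu_x^{2} + \mu_y^{2}} \cdot \frac{2\,\sigma_x \sigma_y}{\sigma_x^{2} + \sigma_y^{2}} \]

The three factors measure, respectively, the loss of correlation, the mean (luminance) distortion, and the variance (contrast) distortion between the two signals.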

Indexes for Model Evaluation Index of Agreement (IOA) (Willmott et al., 1981, 1985, and 2011)
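For reference, the original (1981) index of agreement and the refined (2011) version are usually written as follows (assumed forms, with P the modeled and O the observed values):

\[ d = 1 - \frac{\sum_i (P_i - O_i)^{2}}{\sum_i \left(|P_i - \bar{O}| + |O_i - \bar{O}|\right)^{2}} \]

\[ d_r = \begin{cases} 1 - \dfrac{\sum_i |P_i - O_i|}{2\sum_i |O_i - \bar{O}|}, & \text{if } \sum_i |P_i - O_i| \le 2\sum_i |O_i - \bar{O}| \\[1ex] \dfrac{2\sum_i |O_i - \bar{O}|}{\sum_i |P_i - O_i|} - 1, & \text{otherwise.} \end{cases} \]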

Indexes for Model Evaluation M (Watterson, 1996)
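Watterson's (1996) non-dimensional measure of model skill is commonly given as (assumed form):

\[ M = \frac{2}{\pi}\arcsin\!\left(1 - \frac{\mathrm{MSE}}{\sigma_P^{2} + \sigma_O^{2} + (\bar{P} - \bar{O})^{2}}\right) \]

where MSE is the mean squared error between the modeled and observed series.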

Indexes for Model Evaluation Index of Model Integrity (IMI)

Synthetic Data Set
A synthetic data set was created to evaluate the performance of the various indexes, using 2006 hourly ozone readings as the basis. First, +1 and +5 ppb were added to the monitor readings to create mock models with a constant additive bias over the entire temporal profile. Second, a seasonal bias (±1 and ±5 ppb) was added in a way that makes the yearly average bias zero. Third, completely random values (ranging between -2 and +2 ppb, and between -10 and +10 ppb) were added to create two additional synthetic models. Finally, a multiplicative bias (5% and 25%) was applied to the monitor readings for the final synthetic sets.
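A minimal Python sketch of this construction (not the authors' code; names such as make_synthetic_sets and obs are hypothetical, with obs standing for a 1-D array of the 2006 hourly ozone readings from one monitor):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic_sets(obs, hours_per_season=2190):
    """Build the synthetic 'model' series described on the slide."""
    sets = {}

    # 1) Constant additive bias over the whole year (+1 and +5 ppb).
    for b in (1.0, 5.0):
        sets[f"additive_+{b:g}ppb"] = obs + b

    # 2) Seasonal bias that averages to zero over the year:
    #    two seasons at +b ppb, two seasons at -b ppb.
    for b in (1.0, 5.0):
        season = (np.arange(obs.size) // hours_per_season) % 4
        sign = np.where(np.isin(season, (0, 2)), 1.0, -1.0)
        sets[f"seasonal_+/-{b:g}ppb"] = obs + sign * b

    # 3) Uniform random noise in [-r, +r]; yearly bias is approximately zero.
    for r in (2.0, 10.0):
        sets[f"random_+/-{r:g}ppb"] = obs + rng.uniform(-r, r, size=obs.size)

    # 4) Multiplicative bias (5% and 25%).
    for f in (0.05, 0.25):
        sets[f"multiplicative_{int(f * 100)}pct"] = obs * (1.0 + f)

    return sets
```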

Synthetic Data Set Actual 2006 hourly ozone readings from the CASTNET monitor ABT147

Synthetic Data Set
+1 and +5 ppb were added to the actual 2006 hourly ozone readings from the CASTNET monitor ABT147 to create the first set of synthetic model estimates. The graph above shows only the first 100 hours of readings from the monitor and from the synthetic model with the +5 ppb bias.

Synthetic Data Set
Synthetic data set with ±5 ppb seasonal bias but 0 ppb yearly bias. The graph above shows a 100-hour cross-section in which the negative seasonal bias changes to a positive seasonal bias halfway through. Of the four seasons, two had a +5 ppb bias and the other two a -5 ppb bias, making the yearly bias equal to zero.

Synthetic Data Set Synthetic data set with 25% multiplicative bias.

Synthetic Data Set
Synthetic data set created by adding random values ranging between -10 and +10 ppb to the monitor readings. (Note that the yearly bias is approximately zero for this data set as well.)

Spatial patterns GEOS-Chem evaluation

Hourly Stats 2006
Overall Mean Bias: -3.3 ppb
Overall Obs. Mean: 33.8 ppb
Overall GEOS-CHEM Mean: 37.1 ppb

Hourly Stats: map panels for 2006, 2007, 2008, and all years (2006 to 2008)

Hourly Stats

Daily 8-Hour Max. Stats

Daily 8-Hour Max. Stats

Summary
- Large-scale pattern metrics for model evaluation were examined; the largest areas of disagreement are in the Southeast, the Mid-Atlantic, and the Northwest.
- The analysis shows inter-annual variability in model performance.
- IMI and COD appear to be the most useful for providing information about model agreement and disagreement.
- Pearson R, the Q index, and IOA are not especially sensitive to changes in model performance.
- Next steps will involve examining the causes of the large-scale patterns.

Questions

Index performances
Data sets (columns): Observation (Site ABT147); Additive Bias +1 ppb, +5 ppb; Seasonal Bias -/+1 ppb, -/+5 ppb; Multiplicative Bias 5%, 25%; Random Noise +/-2, +/-10
MEAN: 31.223, 32.223, 36.223, 31.224, 31.244, 32.801, 39.154, 31.199, 31.203
STD DEV: 13.439, 13.405, 13.964, 14.117, 16.810, 13.475, 14.539
BIAS: 1.000, 5.000, 0.001, 0.021, 1.578, 7.931, -0.023, -0.019
COVARIANCE: 180.575, 179.620, 175.328, 189.651, 225.842, 180.323, 179.284
CORRELATION: 0.997, 0.934, 0.996, 0.918
Q INDEX: 0.999, 0.989, 0.951, 0.915
1 - Coefficient of Divergence (COD): 0.940, 0.878, 0.927, 0.848, 0.975, 0.887, 0.935, 0.836
Index of Agreement (IOA), old: 0.965, 0.966, 0.919, 0.998, 0.956
Index of Agreement (IOA), new: 0.757, 0.952, 0.758, 0.923, 0.615
M INDEX: 0.953, 0.770, 0.767, 0.920, 0.657, 0.942, 0.735
1 - Relative RMSE: 0.968, 0.840, 0.944, 0.724, 0.961, 0.815
Index of Model Integrity (IMI): 0.785, 0.957, 0.748
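As a rough illustration of how several of the indexes in the table can be computed from a modeled series p and an observed series o (using the published definitions cited earlier; this is a sketch, not the authors' implementation, and the IMI formula is not reproduced here):

```python
import numpy as np

def cod(p, o):
    # Coefficient of Divergence (Wongphatarakul et al., 1998)
    return np.sqrt(np.mean(((p - o) / (p + o)) ** 2))

def q_index(p, o):
    # Universal quality index Q (Wang and Bovik, 2002)
    sxy = np.cov(p, o, bias=True)[0, 1]
    sx, sy = p.std(), o.std()
    mx, my = p.mean(), o.mean()
    return (sxy / (sx * sy)) * (2 * mx * my / (mx**2 + my**2)) * (2 * sx * sy / (sx**2 + sy**2))

def ioa_old(p, o):
    # Original index of agreement (Willmott, 1981)
    om = o.mean()
    return 1 - np.sum((p - o) ** 2) / np.sum((np.abs(p - om) + np.abs(o - om)) ** 2)

def watterson_m(p, o):
    # Watterson's M (1996)
    mse = np.mean((p - o) ** 2)
    return (2 / np.pi) * np.arcsin(1 - mse / (p.var() + o.var() + (p.mean() - o.mean()) ** 2))

# Example: q_index(obs + 5.0, obs) for the +5 ppb additive-bias case.
```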

Hourly Stats

Daily 8-Hour Max. Stats

Map panels for 2006, 2007, 2008, and all years (2006 to 2008)