Best practices to evaluate land change models
Robert Gilmore Pontius Jr, Clark University, USA

Researchers should consider:
1. Uncertainties that can derive from questionable data quality, unclear boundary conditions, inappropriate model structure, and non-stationary processes.
2. Sensitivity analysis that examines the variation in model output due to specific amounts of variation in model input, parameter values, or structure.
3. Pattern validation that compares model outputs with observed outcomes, showing misses, hits, wrong hits, false alarms, and correct rejections.
4. Structural validation that considers the consistency between real-world processes and the processes that the model portrays.
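The five pattern-validation categories above can be counted by overlaying three maps: the reference initial map, the reference final map, and the simulated final map. The following is a minimal sketch of that comparison; the function name and the toy category codes are hypothetical, not from the presentation.

```python
import numpy as np

def validation_components(ref_initial, ref_final, sim_final):
    """Count the five pattern-validation categories by comparing
    observed change (reference initial vs reference final) with
    simulated change (reference initial vs simulated final).
    Inputs are arrays of categorical land-cover codes."""
    ref_initial = np.asarray(ref_initial)
    ref_final = np.asarray(ref_final)
    sim_final = np.asarray(sim_final)
    change_obs = ref_initial != ref_final  # reference says the pixel changed
    change_sim = ref_initial != sim_final  # model says the pixel changed
    # Observed change that the model missed (simulated persistence)
    misses = np.sum(change_obs & ~change_sim)
    # Observed change simulated as change to the correct category
    hits = np.sum(change_obs & change_sim & (ref_final == sim_final))
    # Observed change simulated as change, but to the wrong category
    wrong_hits = np.sum(change_obs & change_sim & (ref_final != sim_final))
    # Observed persistence simulated as change
    false_alarms = np.sum(~change_obs & change_sim)
    # Observed persistence simulated as persistence
    correct_rejections = np.sum(~change_obs & ~change_sim)
    return (int(misses), int(hits), int(wrong_hits),
            int(false_alarms), int(correct_rejections))
```

The five counts sum to the number of pixels in the study area, so each can also be reported as a percentage of the extent.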

Three Base Maps

Map of Accuracy at 32 m Resolution

Bar of Accuracy at 32 m Resolution [figure legend: Simulated Change, Reference Change]

Figure of Merit is 21% [figure labels the Numerator and Denominator of the ratio]
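The Figure of Merit is the ratio of correctly simulated change to the union of observed and simulated change. A minimal sketch for binary change maps follows; the function name and the example data are hypothetical (with multi-category maps, the denominator also includes wrong hits).

```python
import numpy as np

def figure_of_merit(reference_change, simulated_change):
    """Figure of Merit as a percentage for binary change maps,
    where True marks a pixel that changed.
    Numerator: hits. Denominator: misses + hits + false alarms."""
    ref = np.asarray(reference_change, dtype=bool)
    sim = np.asarray(simulated_change, dtype=bool)
    hits = np.sum(ref & sim)           # change simulated as change
    misses = np.sum(ref & ~sim)        # change simulated as persistence
    false_alarms = np.sum(~ref & sim)  # persistence simulated as change
    return 100.0 * hits / (misses + hits + false_alarms)
```

Correct rejections are deliberately excluded from the denominator, so the measure is not inflated by the usually large area of correctly simulated persistence.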

Pontius and Si, Figure 1: We want to know whether the black patches of gain of disturbance in the upper image tend to occur on the higher-ranking, darker pixels in the lower image. Red is excluded because red is not a candidate for gain of disturbance.

Pontius and Si, Figure 2: The TOC plot shows enough information to produce all four entries of the 2-by-2 contingency matrix for each point, and it avoids ratios. The ROC plot fails to show sufficient information to produce the entries of the 2-by-2 matrix.

Pontius and Si, Figure 3: The TOC plot is a transformation of the ROC, but the TOC carries one additional piece of information: the reference prevalence. So the TOC has all the properties of the ROC and more; in particular, the TOC can yield the Area Under the ROC Curve, which is a popular measure. If you have an ROC from a clinical trial, then an estimate of the prevalence allows you to construct an estimate of the TOC.
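The conversion described above can be sketched directly: an ROC point gives only two rates, but combined with the prevalence and the extent it fixes the whole 2-by-2 matrix, which is what a TOC point displays. The function names and the example rates below are hypothetical.

```python
def roc_to_toc(fpr, tpr, prevalence, extent=1.0):
    """Map one ROC point (false positive rate, true positive rate)
    to one TOC point, given the reference prevalence and the extent.
    TOC x-axis: hits + false alarms; TOC y-axis: hits."""
    hits = tpr * prevalence * extent
    false_alarms = fpr * (1.0 - prevalence) * extent
    return hits + false_alarms, hits

def contingency_from_toc(x, y, prevalence, extent=1.0):
    """Recover the full 2-by-2 matrix from a single TOC point,
    which is impossible from an ROC point alone."""
    hits = y
    false_alarms = x - y
    misses = prevalence * extent - hits
    correct_rejections = (1.0 - prevalence) * extent - false_alarms
    return hits, misses, false_alarms, correct_rejections
```

For example, an ROC point with a false positive rate of 0.5 and a true positive rate of 0.8, at a prevalence of 0.3 over an extent of 100 pixels, maps to the TOC point (59, 24), from which all four matrix entries follow.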