Modelling in Physical Geography Martin Mergili, University of Vienna

Presentation transcript:

Modelling in Physical Geography: Model validation. Martin Mergili, University of Vienna

Model validation
With GIS, it is easy to impress people with nice colourful maps produced with computer models. However, we have learned that model parameters are uncertain and that process understanding is often limited. These uncertainties are reflected in the model results. Model results therefore have to be evaluated against observations to test their validity.

Landslide inventories
In the field of landslide modelling, landslide inventories are most useful for model validation. Landslide inventories are freely available for some countries and regions, but good inventories take a lot of work to prepare. Release and deposition areas should be represented by separate polygons. The availability of a landslide inventory is a key requirement for landslide modelling efforts.

Model development and validation areas
If the landslide inventory is used for model development (statistical models), it is always necessary to clearly separate the model development area from the model validation area. In such a case, many model runs with different development and validation areas may be performed in order to achieve robust results (a minimal sketch of such repeated splitting is shown below). This separation is not necessary for physically based models, so the entire study area may be used for model validation.
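The following is a minimal Python sketch of such repeated random splitting of an inventory into development and validation subsets; the function and variable names are illustrative assumptions, not part of the course material.

```python
# Minimal sketch: repeated random splits of a landslide inventory into
# model development and validation subsets. All names are illustrative.
import random

def split_inventory(polygon_ids, dev_fraction=0.7, n_runs=10, seed=1):
    """Yield (development, validation) ID lists for n_runs random splits."""
    rng = random.Random(seed)
    for _ in range(n_runs):
        ids = list(polygon_ids)
        rng.shuffle(ids)
        cut = int(len(ids) * dev_fraction)
        yield ids[:cut], ids[cut:]

# Example: ten random 70/30 splits of 200 inventory polygons
for dev_ids, val_ids in split_inventory(range(200)):
    pass  # fit the statistical model on dev_ids, evaluate on val_ids
```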

Simple overlay
This only works with binary results: the modelled landslide area (e.g. cells with factor of safety FOS < 1) is overlaid on the observed landslide area, so that each cell falls into one of four classes:
TP = true positive prediction (modelled and observed)
FP = false positive prediction (modelled, but not observed)
FN = false negative prediction (observed, but not modelled)
TN = true negative prediction (neither modelled nor observed)
True positive rate: rTP = TP / (TP + FN)
False positive rate: rFP = FP / (FP + TN)
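A minimal Python sketch of this overlay, assuming the observed and modelled landslide areas are available as binary raster masks (the function name and example data are illustrative):

```python
# Minimal sketch: confusion-matrix counts and rates from two binary masks.
import numpy as np

def overlay_rates(observed, modelled):
    """Return (rTP, rFP) from binary observed/modelled landslide masks."""
    obs = np.asarray(observed, dtype=bool)
    mod = np.asarray(modelled, dtype=bool)
    tp = np.sum(mod & obs)     # modelled and observed landslide
    fp = np.sum(mod & ~obs)    # modelled, but no observed landslide
    fn = np.sum(~mod & obs)    # observed landslide missed by the model
    tn = np.sum(~mod & ~obs)   # correctly predicted stable terrain
    r_tp = float(tp) / (tp + fn)   # true positive rate
    r_fp = float(fp) / (fp + tn)   # false positive rate
    return r_tp, r_fp

# Tiny illustrative 2 x 2 masks: rTP = 1.0, rFP = 0.33
print(overlay_rates([[1, 0], [0, 0]], [[1, 1], [0, 0]]))
```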

ROC Plot
For a probabilistic result, the modelled landslide probability is reclassified into binary maps at a series of probability cut-offs, and each binary map is overlaid on the observed landslide area. Each cut-off yields one point in a plot of the true positive rate rTP = TP / (TP + FN) against the false positive rate rFP = FP / (FP + TN). The area under the resulting curve, AUCROC, ranges from 0 to 1 and serves as an indicator for the prediction quality; AUCROC = 0.5 corresponds to a random prediction.
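A minimal sketch of this threshold sweep in Python, building on the overlay above; the cut-off spacing and all names are assumptions:

```python
# Minimal sketch: ROC points and AUCROC from a probability raster and a
# binary inventory mask, sweeping the probability cut-off from 1 to 0.
import numpy as np

def roc_curve(prob, observed, n_cutoffs=101):
    """Return (rFP array, rTP array, AUCROC) for a probability map."""
    prob = np.asarray(prob, dtype=float).ravel()
    obs = np.asarray(observed, dtype=bool).ravel()
    r_tp, r_fp = [], []
    for t in np.linspace(1.0, 0.0, n_cutoffs):
        mod = prob >= t                  # binary map at this cut-off
        tp = np.sum(mod & obs)
        fp = np.sum(mod & ~obs)
        fn = np.sum(~mod & obs)
        tn = np.sum(~mod & ~obs)
        r_tp.append(float(tp) / max(tp + fn, 1))
        r_fp.append(float(fp) / max(fp + tn, 1))
    r_tp, r_fp = np.array(r_tp), np.array(r_fp)
    # Trapezoidal integration of rTP over rFP gives AUCROC
    auc = float(np.sum(np.diff(r_fp) * (r_tp[:-1] + r_tp[1:]) / 2.0))
    return r_fp, r_tp, auc
```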

Instructions
We will use a ROC plot to evaluate our computed slope failure probability map against the landslide inventory for the Sant'Anna test area. The ROC plot will be created with the statistical software R; the corresponding R script will be provided. This script will be called directly from the Python script, so that the validation procedure is included in the work flow of our model application (a sketch of such a call is shown below). We will run our model application with varying ranges of the input parameters in order to explore how sensitive the prediction quality is to these variations.
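A hedged sketch of how an R script can be invoked from Python; the script name "roc_plot.R" and the raster file names are illustrative assumptions, not the names used in the provided course scripts:

```python
# Sketch: calling an R-based ROC evaluation from the Python work flow.
# "roc_plot.R" and the file names are assumptions, not the course files.
import subprocess

def run_roc_validation(probability_raster, inventory_raster,
                       r_script="roc_plot.R"):
    """Invoke Rscript with the two input files as command-line arguments."""
    subprocess.check_call(["Rscript", r_script,
                           probability_raster, inventory_raster])

run_roc_validation("failure_probability.tif", "landslide_inventory.tif")
```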

Instructions
Python 2.7 documentation: https://docs.python.org/2.7/
ArcPy documentation: https://desktop.arcgis.com/en/desktop/latest/analyze/arcpy/a-quick-tour-of-arcpy.htm
R documentation: https://www.r-project.org/

Have fun!
martin.mergili@univie.ac.at
http://www.mergili.at