1
On the Interpolation Algorithm Ranking
10th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, 10–13 July 2012, Florianópolis, SC, Brazil.
On the Interpolation Algorithm Ranking
Carlos López-Vázquez, LatinGEO Lab, SGM + Universidad ORT del Uruguay
2
What is algorithm ranking?
There exist many interpolation algorithms. Which is the best?
- Is there a general answer?
- Is there an answer for my particular dataset?
- How should the better-than relation between two given methods be defined?
- How confident should I be in such an answer?
3
What has been done?
- Many papers so far
- Permanent interest in the topic
What does a typical paper look like?
- Take a dataset as an example: N points sampled somewhere
- Subdivide the N points into two sets, a training set {A} and a test set {B}, with A∩B = Ø and N = #{A} + #{B}
- Repeat for all available algorithms:
  - Define the interpolant using {A}; blindly interpolate at the locations of {B}
  - Compare the known values at {B} with the interpolated ones
- Compare how? Typically through RMSE/MAD
- Better-than is equivalent to lower RMSE (see the sketch below)
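A minimal sketch of this holdout protocol, assuming a synthetic dataset and using scipy.interpolate.griddata variants as stand-ins for real interpolation algorithms (the dataset and method names here are illustrative, not from the presentation):

```python
# A sketch of the holdout protocol: split N points into {A} and {B},
# fit each interpolant on {A}, predict at {B}, rank by RMSE.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Hypothetical dataset: N scattered points with values z = f(x, y).
N = 500
xy = rng.uniform(0.0, 1.0, size=(N, 2))
z = np.sin(3.0 * xy[:, 0]) * np.cos(3.0 * xy[:, 1])

# Disjoint split: A ∩ B = Ø, N = #{A} + #{B}.
idx = rng.permutation(N)
train, test = idx[:400], idx[400:]

def rmse(pred, obs):
    # nanmean: linear/cubic griddata returns NaN outside the convex hull
    return float(np.sqrt(np.nanmean((pred - obs) ** 2)))

# Three griddata variants stand in for real competitors (kriging, IDW, ...).
scores = {}
for method in ("nearest", "linear", "cubic"):
    pred = griddata(xy[train], z[train], xy[test], method=method)
    scores[method] = rmse(pred, z[test])

# Better-than == lower RMSE: sort ascending to get the ranking.
for name, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: RMSE = {s:.4f}")
```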
4
Is RMSE/MAD/etc. suitable as a metric?
Different interpolation algorithms lead to fields with a visibly different look, so RMSE might not be representative. Why? Let's consider spectral properties.
5
A spectral metric of agreement
For example, the ESAM metric:
- U = |FFT2D(measured error field)|, so U(i,j) ≥ 0
- V = |FFT2D(interpolated error field)|, so V(i,j) ≥ 0
- Ideally, U = V
- 0 ≤ ESAM(U,V) ≤ 1, and ESAM(W,W) = 1
- Hint: there might be better options than ESAM (see the sketch below)
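The slide does not give the exact ESAM formula, so the following sketch uses an assumed stand-in: the cosine similarity between the 2D FFT magnitude spectra, which satisfies the properties listed above (values in [0, 1], equal to 1 for identical fields):

```python
# Stand-in spectral agreement metric (the exact ESAM formula is not on
# the slide): cosine similarity of the 2D FFT magnitude spectra.
# For non-negative spectra the value lies in [0, 1], and it equals 1
# when both fields are identical.
import numpy as np

def spectral_agreement(field_u, field_v):
    U = np.abs(np.fft.fft2(field_u))  # U(i, j) >= 0
    V = np.abs(np.fft.fft2(field_v))  # V(i, j) >= 0
    return float(np.sum(U * V) / (np.linalg.norm(U) * np.linalg.norm(V)))

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 64))
print(spectral_agreement(w, w))                                    # 1.0
print(spectral_agreement(w, w + 0.5 * rng.standard_normal((64, 64))))
```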
6
How confident should I be in such an answer?
Given {A} and {B}, the answer is deterministic. How to attach a confidence level, or at least some measure of uncertainty?
- Perform cross-validation (Falivene et al., 2010):
  - Set #{B} = 1 and leave the rest in {A}; there are N possible choices (events) for selecting B
  - Evaluate the RMSE for each method and event
  - Average over the N cases for each method: better-than is now average-run-better-than
- Simulate:
  - Sample {A} from the N points, with #{A} = m, m < N
  - Evaluate the RMSE for each method and event, and create rank(i)
  - Select a confidence level and apply Friedman's Test to all rank(i), as n wine judges each ranking k different wines (see the sketch below)
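A minimal sketch of the simulate-and-test step, assuming a hypothetical n_runs × n_methods matrix of RMSE values and using scipy.stats.friedmanchisquare (which, like the wine-judging analogy, ranks the k methods within each of the n runs):

```python
# Friedman's Test on simulated results: each row of `rmse_runs` is one
# resampled training set {A} (a "judge"), each column one interpolation
# method (a "wine"). The data here are synthetic, for illustration only.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(2)
n_runs, n_methods = 250, 6

# Hypothetical RMSE table; method 0 is made slightly better on purpose.
rmse_runs = rng.normal(loc=1.0, scale=0.1, size=(n_runs, n_methods))
rmse_runs[:, 0] -= 0.05

# friedmanchisquare takes one sequence per method; it ranks within rows.
stat, p_value = friedmanchisquare(*(rmse_runs[:, j] for j in range(n_methods)))
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.3g}")
# A small p-value means that, at the chosen confidence level, at least
# one method's rank distribution differs from the others.
```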
7
The experiment: DEM of Montagne Sainte Victoire (France)
- Sample {B}: 20 points, held fixed
- Do 250 times:
  - Sample the {A} points
  - Apply the six algorithms
  - Evaluate RMSE, MAD, ESAM, etc.
  - Evaluate ranking(i)
- Evaluate the ranking of the means over i
- Apply Friedman's Test and compare (a sketch of this comparison follows)
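A minimal sketch of the two ranking summaries being compared, again on a hypothetical RMSE matrix: the per-run rankings fed to Friedman's Test versus the single ranking of the mean RMSE over all runs, which need not agree:

```python
# Two ranking summaries that need not coincide (synthetic data again):
# the mean of per-run rankings vs. the ranking of mean RMSE over runs.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(3)
rmse_runs = rng.normal(loc=1.0, scale=0.1, size=(250, 6))
rmse_runs[:, 0] -= 0.05

# ranking(i): rank the methods within each run i (1 = lowest RMSE = best).
per_run_ranks = np.apply_along_axis(rankdata, 1, rmse_runs)

mean_of_ranks = per_run_ranks.mean(axis=0)        # average rank per method
rank_of_means = rankdata(rmse_runs.mean(axis=0))  # rank of the mean RMSE

print("mean of per-run ranks:", np.round(mean_of_ranks, 2))
print("rank of mean RMSE:    ", rank_of_means)
```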
8
Results
- The ranking based on the mean of simulated values might differ from the one given by Friedman's Test
- A ranking based on spectral properties might disagree with the RMSE/MAD ranking
- Friedman's Test has a sound statistical basis
- The spectral properties of the interpolated field might be important for some applications
9
Thank you! Questions?
11
Results
Other results, valid for this particular dataset:
- The ranking using ESAM varies with #{A}
- According to the ESAM criterion, Inverse Distance Weighting (IDW) quality degrades as #{A} increases
- According to the RMSE criterion, IDW is the best:
  - with a significant difference w.r.t. the second-best
  - at the 95% confidence level
  - irrespective of #{A}
- According to the ESAM criterion, IDW is NOT the best
12
Other possible spectral metrics (to be developed)