Robert Plant != Richard Plant
[Workflow diagram: Field Data (response, coordinates) and direct or remotely sensed Predictors are qualified and prepped into Sample Data (response, covariates); these may be the same data as the Covariates. A random split produces Training Data and Test Data. The model is built repeatedly from the training data, validated against the test data (producing statistics), and used to predict, yielding predicted values and uncertainty maps that are summarized into a predictive map.]
Cross-Validation
Split the data into training (build model) and test (validate) data sets.
Leave-p-out cross-validation: validate on p samples, train on the remainder; repeated for all combinations of p.
Non-exhaustive cross-validation: leave-p-out, but only on a subset of the possible combinations.
Randomly splitting into 30% test and 70% training is common.
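The random 30/70 split described above can be sketched in plain Python (the function name and seed are illustrative, not from the slides):

```python
import random

def train_test_split(samples, test_fraction=0.3, seed=42):
    """Randomly split samples into training and test sets.

    With test_fraction=0.3 this gives the common 30% test / 70% training split.
    """
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    # Everything after the first n_test items is training data.
    return shuffled[n_test:], shuffled[:n_test]

training, test = train_test_split(range(100))
```

Fixing the seed makes the split reproducible, which matters when the model-building step is repeated over and over.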
K-fold Cross Validation
Break the data into K sections (folds).
Test on fold i, train on the remainder; repeat for all K folds.
10-fold is common. Used in rpart().
[Figure: folds 1-10 shown as a strip, with one fold held out as test data and the other nine used for training.]
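The fold bookkeeping can be sketched as a small generator (a minimal sketch; the function name is illustrative):

```python
def k_fold_indices(n, k=10):
    """Yield (test_idx, train_idx) index lists for each of the k folds.

    Each fold serves as the test set exactly once; the remaining
    k-1 folds form the training set.
    """
    # Distribute any remainder so fold sizes differ by at most one.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = list(range(0, start)) + list(range(start + size, n))
        yield test_idx, train_idx
        start += size

folds = list(k_fold_indices(100, k=10))
```

Every sample appears in exactly one test fold, so the k validation scores together cover the whole data set.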
Bootstrapping
Drawing N samples from the sample data (with replacement)
Building the model
Repeating the process over and over
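A minimal sketch of the resampling step (names and the example statistic are illustrative):

```python
import random

def bootstrap(data, n_reps=1000, seed=1):
    """Draw n_reps bootstrap samples: N draws with replacement each time."""
    rng = random.Random(seed)
    data = list(data)
    return [[rng.choice(data) for _ in data] for _ in range(n_reps)]

# e.g. the bootstrap distribution of the sample mean
samples = bootstrap([2.0, 3.0, 5.0, 7.0, 11.0], n_reps=200)
means = [sum(s) / len(s) for s in samples]
```

In the modeling workflow, "building the model" replaces the simple mean here: each bootstrap sample feeds one model fit, and the spread of the fits estimates uncertainty.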
Random Forest
N samples drawn from the data with replacement, repeated to create many trees: a "random forest".
"Splits" are selected based on the most common splits in all the trees.
Bootstrap aggregation, or "bagging".
Boosting
Can a set of weak learners create a single strong learner? (Wikipedia)
Lots of "simple" trees used to create a really complex tree.
"Convex potential boosters cannot withstand random classification noise." (2008, Phillip Long (at Google) and Rocco A. Servedio (Columbia University))
Boosted Regression Trees
BRTs combine thousands of trees to reduce deviance from the data Currently popular More on this later
Sensitivity Testing
Injecting small amounts of "noise" into our data to see the effect on the model parameters (Plant).
The same approach can be used to model the impact of uncertainty on our model outputs and to make uncertainty maps.
Note: this is not the same as sensitivity testing for model parameters.
Jackknifing Trying all combinations of covariates
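Enumerating every combination of covariates, as described above, is a one-liner with the standard library (the covariate names are illustrative):

```python
from itertools import combinations

def covariate_subsets(covariates):
    """Yield every non-empty subset of covariates, to be fit and compared."""
    for r in range(1, len(covariates) + 1):
        yield from combinations(covariates, r)

# 3 covariates give 2^3 - 1 = 7 candidate models
subsets = list(covariate_subsets(["elevation", "slope", "precip"]))
```

Note that the number of subsets grows as 2^k - 1, so exhaustive jackknifing is only practical for a modest number of covariates.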
Extrapolation vs. Prediction
Modeling: creating a model that allows us to estimate values between our data points.
Extrapolation: using existing data to estimate values outside the range of our data.
Building Models
Selecting the method
Selecting the predictors ("model selection")
Optimizing the coefficients/parameters of the model
[The modeling workflow diagram from earlier is shown again: field data and remotely sensed predictors are prepped into sample data, randomly split into training and test sets, and used to repeatedly build, validate, and apply the model.]
Model Selection
Need a method to select the "best" set of predictors.
Really, to select the best method, predictors, and coefficients (parameters).
Should be a balance between fitting the data and simplicity.
R² only considers fit to the data (but linear regression is pretty simple).
Simplicity Everything should be made as simple as possible, but not simpler. Albert Einstein "Albert Einstein Head" by Photograph by Oren Jack Turner, Princeton, licensed through Wikipedia
Parsimony
"…too few parameters and the model will be so unrealistic as to make prediction unreliable, but too many parameters and the model will be so specific to the particular data set as to make prediction unreliable."
Edwards, A. W. F. (2001). Occam's bonus. p. 128-139 in Zellner, A., Keuzenkamp, H. A., and McAleer, M. (eds.), Simplicity, Inference and Modelling. Cambridge University Press, Cambridge, UK.
Parsimony
Underfitting: model structure is left in the residuals.
Overfitting: residual variation is included as if it were structural.
(Anderson)
Akaike Information Criterion
AIC = 2K − 2 ln(L)
K = number of estimated parameters in the model
L = maximized likelihood function for the estimated model
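The formula translates directly into code (a minimal sketch; ln(L) is passed in already computed):

```python
def aic(k, log_likelihood):
    """AIC = 2K - 2 ln(L), where log_likelihood is ln(L) itself."""
    return 2 * k - 2 * log_likelihood

# Two models with the same fit: the one with fewer parameters wins.
simple = aic(2, -10.0)
complex_ = aic(3, -10.0)
```

Adding a parameter must improve ln(L) by more than 1 to lower the AIC.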
AIC
Only a relative meaning; smaller is "better".
A balance between complexity and bias:
Complexity: overfitting, or modeling the errors (too many parameters)
Bias: underfitting, or the model missing part of the phenomenon we are trying to model (too few parameters)
Likelihood
Likelihood of a set of parameter values given some observed data = probability of the observed data given those parameter values.
Definitions:
x = all sample values
x_i = one sample value
θ = the set of parameters
p(x_i|θ) = probability of x_i, given θ
See: ftp://statgen.ncsu.edu/pub/thorne/molevoclass/pruning2013cme.pdf
Likelihood
-2 Times Log Likelihood
p(x) for a fair coin
AIC = 2K − 2 ln(L)
L = p(x_1|θ) · p(x_2|θ) · …
p(heads) = 0.5, p(tails) = 0.5
What happens as we flip a "fair" coin?
p(x) for an unfair coin
AIC = 2K − 2 ln(L)
L = p(x_1|θ) · p(x_2|θ) · …
p(heads) = 0.8, p(tails) = 0.2
What happens as we flip this "unfair" coin?
p(x) for a coin with two heads
AIC = 2K − 2 ln(L)
L = p(x_1|θ) · p(x_2|θ) · …
p(heads) = 1.0, p(tails) = 0.0
What happens as we flip a two-headed coin?
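The three coin slides can be reproduced numerically. This sketch multiplies per-flip probabilities in log space (the function name and flip encoding are illustrative):

```python
import math

def coin_log_likelihood(p_heads, flips):
    """ln(L) for an observed flip sequence ('H'/'T') under a given p(heads)."""
    log_l = 0.0
    for flip in flips:
        p = p_heads if flip == "H" else 1.0 - p_heads
        if p == 0.0:
            return float("-inf")  # an impossible observation: likelihood is 0
        log_l += math.log(p)
    return log_l

fair = coin_log_likelihood(0.5, "HTHTHTHTHT")    # every flip contributes ln(0.5)
unfair = coin_log_likelihood(0.8, "HHHHHHHHTT")  # 8 heads at 0.8, 2 tails at 0.2
```

Note the two-headed-coin case: p(heads) = 1.0 gives ln(L) = 0 (the maximum possible) as long as only heads appear, but a single observed tail drives the likelihood to zero.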
Does likelihood from p(x) work?
If the likelihood is the probability of the data given the parameters, and a response function provides the probability of a piece of data (i.e., the probability that a site is suitable habitat), then we can use the probability that a specific occurrence is suitable as p(x|parameters).
Thus the likelihood of a habitat model (disregarding bias) can be computed as:
L(parameter values|data) = p(data_1|parameter values) · p(data_2|parameter values) · …
By itself this does not work: the highest likelihood goes to a model that predicts 1.0 everywhere. We have to divide the model by its area so that the area under the model equals 1.0.
Remember: this only works when comparing models on the same dataset!
Akaike…
Akaike showed that:
log(L(θ̂|data)) − K ≈ E_y E_x[log(g(x|θ̂(y)))]
which is equivalent to:
log(L(θ̂|data)) − K = constant − E_θ̂[I(f, ĝ)]
Akaike then defined:
AIC = −2 log(L(θ̂|data)) + 2K
AICc
An additional penalty for more parameters:
AICc = AIC + 2k(k+1) / (n − k − 1)
Recommended when n is small or k is large
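The correction term above is a direct computation (a minimal sketch, with ln(L) passed in as before):

```python
def aicc(k, log_likelihood, n):
    """AICc = AIC + 2k(k+1)/(n - k - 1): small-sample corrected AIC."""
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

small_n = aicc(2, -10.0, 10)    # penalty term matters here
large_n = aicc(2, -10.0, 10000) # penalty term nearly vanishes
```

As n grows the correction term shrinks toward zero and AICc converges to AIC, which is why the correction is only needed when n is small or k is large.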
BIC (Bayesian Information Criterion)
Adds n, the number of samples, to the penalty:
BIC = k ln(n) − 2 ln(L)
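For comparison with the AIC sketch, the sample-size-dependent penalty looks like this (a minimal sketch using the standard BIC formula):

```python
import math

def bic(k, log_likelihood, n):
    """BIC = k ln(n) - 2 ln(L); the parameter penalty grows with n."""
    return k * math.log(n) - 2 * log_likelihood

# Same fit and parameter count, but more samples -> larger penalty.
bic_small = bic(2, -10.0, 100)
bic_large = bic(2, -10.0, 10000)
```

Once n > e² ≈ 7.4, ln(n) exceeds 2, so BIC penalizes each extra parameter more heavily than AIC does.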
Extra slides
Justification: the Kullback-Leibler divergence
Discrete: D_KL(P‖Q) = Σ_x P(x) ln(P(x)/Q(x))
Continuous: D_KL(P‖Q) = ∫ p(x) ln(p(x)/q(x)) dx
Equivalently: D_KL(P‖Q) = −Σ_x P(x) ln(Q(x)) + Σ_x P(x) ln(P(x))
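The discrete form is easy to evaluate directly. This sketch takes two probability vectors over the same outcomes (the fair/unfair coin distributions echo the earlier slides):

```python
import math

def kl_divergence(p, q):
    """Discrete D_KL(P||Q) = sum over x of P(x) * ln(P(x)/Q(x)).

    Terms with P(x) = 0 contribute nothing (the 0*ln(0) limit is 0).
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

fair_vs_unfair = kl_divergence([0.5, 0.5], [0.8, 0.2])
```

D_KL is zero only when the two distributions agree, and it is not symmetric: D_KL(P‖Q) generally differs from D_KL(Q‖P).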
The distance can also be expressed as:
I(f, g) = ∫ f(x) log(f(x)) dx − ∫ f(x) log(g(x|θ)) dx
Each integral is an expectation with respect to f, so:
I(f, g) = E_f[log(f(x))] − E_f[log(g(x|θ))]
Treating E_f[log(f(x))] as an unknown constant C:
I(f, g) − C = −E_f[log(g(x|θ))] = the relative distance between g and f