
1 Hierarchical models

2 Hierarchical with respect to:
Response being modeled
– Outliers
– Zeros
Parameters in the model
– Trends (Us)
– Interactions (Bs)
– Variances (R, Q)

3 Models for outliers

4 Ecological process variance may be asymmetric
What's the upper bound on positive changes in population size between t and t + 1?
What's the lower bound on negative changes in population size between t and t + 1?
Many asset return models also have fatter tails than a normal distribution

5 1. Use non-normal errors
The Student-t distribution is one alternative
– Can be written as a mixture of normal distributions
– Used for asset returns in finance (Harvey et al. 1994)
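The "mixture of normals" representation on this slide can be checked by simulation. A minimal sketch (df value is hypothetical): drawing a Gamma-distributed precision and then a normal with that precision reproduces a Student-t, with fatter tails than a plain normal.

```python
import numpy as np

rng = np.random.default_rng(42)

df = 3.0          # hypothetical degrees of freedom
n = 100_000

# tau_i ~ Gamma(df/2, rate = df/2); y_i | tau_i ~ Normal(0, 1/tau_i)
# This scale mixture of normals is exactly a Student-t with df degrees of freedom
tau = rng.gamma(shape=df / 2, scale=2 / df, size=n)
y_mixture = rng.normal(loc=0.0, scale=1.0 / np.sqrt(tau))

# Direct t draws for comparison
y_direct = rng.standard_t(df, size=n)

# Both show the same fat tails; a standard normal puts ~0.3% beyond |3|
print(np.mean(np.abs(y_mixture) > 3),
      np.mean(np.abs(y_direct) > 3),
      np.mean(np.abs(rng.normal(size=n)) > 3))
```

The tail frequencies of the mixture draws and the direct t draws agree, and both are an order of magnitude larger than the normal's.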

6 2. Use a mixture distribution model
Involves specifying the distribution yourself
The distribution is a composite of 2 or more components
Several ways to do this

7 Example 1: catastrophes
Time series of pinniped pup counts from Ward et al. 2008

8 Model each year as catastrophe or not
f() represents normal distributions with unique means and variances
I represents an indicator function (1 = normal, 0 = catastrophe)

9 Use the categorical sampler in JAGS
Categorical model in JAGS:
  p[1] ~ dunif(0,1);
  p[2] <- 1 - p[1];
  isCat ~ dcat(p[1:2]);
Estimate the mean and variance of process variation in each year
– Constrain catastrophe variance > regular variability

10 This model doesn't have an autoregressive property
But it is easy to implement one
Instead of treating the probability of a catastrophic year as constant, we could estimate
– Pr(catastrophe in year t | not in t-1)
– Pr(no catastrophe in year t | catastrophe in t-1)
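The two conditional probabilities above define a two-state Markov chain. A sketch with hypothetical transition probabilities shows how they determine the long-run frequency of catastrophes:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical transition probabilities
p_cat_given_ok = 0.05   # Pr(catastrophe at t | none at t-1)
p_ok_given_cat = 0.60   # Pr(no catastrophe at t | catastrophe at t-1)

n_years = 10_000
state = np.zeros(n_years, dtype=int)  # 0 = normal, 1 = catastrophe
for t in range(1, n_years):
    if state[t - 1] == 0:
        state[t] = rng.random() < p_cat_given_ok
    else:
        state[t] = rng.random() >= p_ok_given_cat

# Stationary catastrophe frequency of the chain:
# pi = p_cat_given_ok / (p_cat_given_ok + p_ok_given_cat)
print(state.mean(), p_cat_given_ok / (p_cat_given_ok + p_ok_given_cat))
```

Unlike the constant-probability model, this makes catastrophes cluster in time whenever Pr(catastrophe | catastrophe) exceeds Pr(catastrophe | normal).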

11 Why include an autoregressive process?
Is it biologically supported?
Improved predictions in the future
– If we estimate the 'state' at time t (0, 1), both states aren't equally likely at time t+1

12 Example 2: extreme fisheries catches
Thorson et al. 2011 – "Extreme catch events"
Some schooling / shoaling species are caught in huge aggregations

13 Slightly different formulation
f() represents normal distributions with unique means and variances
(1 - p) represents the contribution of the extreme component

14

15

16 Hierarchical model for 0s
Common approaches to dealing with 0s
– Add a small value (arbitrary)
– Use a negative binomial or Poisson distribution
– Use a 2-step distribution (delta-GLM)

17 1. Use a Tweedie distribution
A mixture of Gamma distributions
– The number of mixture components is Poisson (random)
– Exact zeros occur when the number of components is zero
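This Gamma–Poisson construction (the compound Poisson–Gamma form of the Tweedie, for power between 1 and 2) can be sampled directly; parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def rtweedie_cp(n, lam=1.2, shape=2.0, scale=0.5, rng=rng):
    """Compound Poisson-Gamma draws: a Poisson(lam) number of
    Gamma(shape, scale) components summed; zero components -> exact 0.
    The sum of k iid Gamma(shape, scale) is Gamma(k*shape, scale)."""
    counts = rng.poisson(lam, size=n)
    return np.array([rng.gamma(shape * k, scale) if k > 0 else 0.0
                     for k in counts])

y = rtweedie_cp(20_000)
# The probability of an exact zero is exp(-lam)
print("P(zero) =", np.mean(y == 0), "vs exp(-lam) =", np.exp(-1.2))
```

So a single continuous distribution handles the zeros and the positive, skewed part of the data at once, with no arbitrary small value added.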

18

19 2. Implement a mixture model
2-component mixture
Like the mixture model for catastrophes
For continuous data:
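For the continuous case, the two-part (delta) idea can be sketched as a Bernoulli presence indicator paired with a positive distribution; all parameter values here are hypothetical, and the lognormal is just one common choice for the positive part:

```python
import numpy as np

rng = np.random.default_rng(11)

n = 10_000
p_present = 0.7          # hypothetical Pr(nonzero observation)
meanlog, sdlog = 1.0, 0.5

# Component 1: exact zero; component 2: positive lognormal
present = rng.random(n) < p_present
y = np.where(present, rng.lognormal(meanlog, sdlog, n), 0.0)

print("zero fraction:", np.mean(y == 0))  # about 1 - p_present
```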

20 For count data
Applications: mark-recapture, sighting histories, presence-absence, etc.
Zeros are possible (Bernoulli, Poisson, negative binomial, etc.)
z = indicator function
Kery & Schaub 2012, Royle et al. 2005, etc.
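A minimal sketch of the count-data version (a zero-inflated Poisson, with hypothetical parameter values): the indicator z separates structural zeros from ordinary Poisson zeros.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 50_000
psi, lam = 0.6, 2.0      # hypothetical Pr(z = 1) and Poisson mean

z = rng.random(n) < psi                  # the indicator function z
counts = np.where(z, rng.poisson(lam, n), 0)

# Zeros arise two ways: structural (z = 0) and sampling (Poisson draw of 0)
expected_p0 = (1 - psi) + psi * np.exp(-lam)
print(np.mean(counts == 0), expected_p0)
```

The observed zero fraction matches the mixture formula, which is what a Poisson alone cannot reproduce when data are zero-heavy.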

21 Hierarchical with respect to:
Response being modeled
– Outliers
– Zeros
Parameters in the model
– Trends (Us)
– Interactions (Bs)
– Variances (R, Q)

22 Hierarchical models
Multivariate time series with MARSS
– We've already fit models that can be considered hierarchical
– Shared parameters across time series as fixed effects
  Z = (1, 2, 1, 1, 2, 3)
  U = "equal"
  R and Q = "diagonal and equal"

23 Random effects
Assumes the trends are drawn from a shared 'population' distribution
Upside / downside: increased complexity
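The practical effect of a shared population distribution is partial pooling: each site's trend estimate is pulled toward the population mean. A simulation sketch (all values hypothetical; in the lecture this pooling is done inside JAGS, here it is shown with a closed-form empirical-Bayes weight):

```python
import numpy as np

rng = np.random.default_rng(9)

n_sites = 10
u_mu, u_sd = 0.02, 0.01   # hypothetical population of trends
obs_sd = 0.03             # hypothetical sampling noise per site

u_true = rng.normal(u_mu, u_sd, n_sites)
u_hat = u_true + rng.normal(0, obs_sd, n_sites)   # fixed-effect-style estimates

# Shrinkage weight: share of total variance that is true between-site signal
w = u_sd**2 / (u_sd**2 + obs_sd**2)
u_shrunk = w * u_hat + (1 - w) * u_hat.mean()

print("raw spread:", u_hat.std(), "shrunk spread:", u_shrunk.std())
```

The shrunk estimates are less spread out than the raw ones, which is the "improved precision" trade-off the comparison slide below quantifies.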

24 Comparison of fixed versus random effects
Same trend model applied to the harbor seal data we discussed last week (10 time series)
R = diagonal and equal
Q = diagonal and equal
In the Bayesian model, compare inference from fixed vs. random models for U

25 Model file (from last week)

jagsscript = cat("
model {
  # Populations are independent, so the Q matrix is a diagonal. We'll assume
  # B = 1, and there are no scaling (A) parameters because each time series = 1 state.
  # Unlike MARSS, we can model the trends (U) as random effects - meaning we'll estimate
  # a shared mean and sd across populations, as well as the deviations from that
  Umu ~ dnorm(0,1);
  Usig ~ dunif(0,100);
  Utau <- 1/(Usig*Usig);
  for(i in 1:nSites) {
    tauQ[i] ~ dgamma(0.001,0.001);
    Q[i] <- 1/tauQ[i];
    U[i] ~ dnorm(Umu,Utau);  # For fixed effects, U[i] ~ dnorm(0,0.01);
  }
  # Estimate the initial state vector of population abundances
  for(i in 1:nSites) {
    X[1,i] ~ dnorm(3,0.01);  # vague normal prior
  }
  # Autoregressive process for remaining years
  for(i in 2:nYears) {
    for(j in 1:nSites) {
      predX[i,j] <- X[i-1,j] + U[j];
      X[i,j] ~ dnorm(predX[i,j], tauQ[j]);
    }
  }
  # Observation model
  tauR ~ dgamma(0.001,0.001);
  for(i in 1:nYears) {
    for(j in 1:nSites) {
      Y[i,j] ~ dnorm(X[i,j],tauR);
    }
  }
}
", file="normal_independent.txt")

26 Posterior estimates

       Fixed            Random
Pop    sd(U)   CV(U)    sd(U)   CV(U)
1      0.04    0.71     0.02    0.34
2      0.06    1.55     0.02    0.41
3      0.06    0.88     0.02    0.38
4      0.04    0.57     0.02    0.31
5      0.03    0.76     0.02    0.32
6      0.05    0.85     0.02    0.30
7      0.08    226.45   0.02    0.48
8      0.04    0.81     0.02    0.33
9      0.03    1.32     0.01    0.40
10     0.04    0.44     0.02    0.29

27 Random effects on other parameters
Coefficients for covariates
– Temperature
– Ocean acidification
– Contaminants
– Species interactions (e.g. shared across systems)
x0 (initial states)
– Is there a common initial state amongst populations?

28

29 Random effects on variances
More difficult because
– Variances are not ~ normally distributed
– Constrained to be > 0
Options: normal random effects in log-space, or a non-normal distribution
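The log-space option can be sketched simply: placing a normal random effect on log(sigma) keeps every site-level standard deviation positive while still sharing a population distribution. Hyperparameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(13)

n_sites = 8
log_mu, log_sd = np.log(0.1), 0.3   # hypothetical hyperparameters

# Normal random effects in log-space -> lognormal, strictly positive sds
sigma = np.exp(rng.normal(log_mu, log_sd, n_sites))
print(np.round(sigma, 3))
```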

30 DLM parameters can also be treated hierarchically
In the week 6 lab we modeled the level and covariate effect as potentially time-varying
There was no real hierarchical structure because we focused on each series in isolation

31 Species with correlated dynamics

32 Compare:
Univariate DLMs fit to each time series
Hierarchical DLM with shared / correlated level terms
Is there a shared regime change / trend?

33 The code (also see comment box)

jagsscript = cat("
model {
  # time varying level parameter
  tauQ ~ dgamma(0.001,0.001);
  tauR ~ dgamma(0.001,0.001);
  sigmaQ <- 1/sqrt(tauQ);
  sigmaR <- 1/sqrt(tauR);
  alpha[1] ~ dnorm(0,0.01);
  for(i in 2:N) {
    alpha[i] ~ dnorm(alpha[i-1],tauQ);
  }
  for(i in 1:N) {
    Y[i] ~ dnorm(alpha[i], tauR);
  }
}
", file="univariateDLM.txt")
model.loc = ("univariateDLM.txt")

34 Univariate DLM

35 Diagnostics

36 Code for multivariate DLM

jagsscript = cat("
model {
  # time varying level parameter
  priorQ[1,1] <- 1; priorQ[2,2] <- 1;
  priorQ[1,2] <- 0; priorQ[2,1] <- 0;
  tauQ ~ dwish(priorQ, 2);
  tauR ~ dgamma(0.001,0.001);
  sigmaQ <- inverse(tauQ[1:2,1:2]);
  sigmaR <- 1/sqrt(tauR);
  for(pop in 1:2) {
    alpha[1,pop] ~ dnorm(0,0.01);
  }
  for(i in 2:N) {
    alpha[i,1:2] ~ dmnorm(alpha[i-1,1:2],tauQ);
  }
  for(i in 1:N) {
    Y[i,1] ~ dnorm(alpha[i,1], tauR);
    Y[i,2] ~ dnorm(alpha[i,2], tauR);
  }
}
", file="DLMcorrelated.txt")

37 Fits are still good – but how do we compare the univariate and multivariate models?

38 High uncertainty with missing data

39

40 Summary
Treating the response variable hierarchically increases flexibility
Modeling parameters hierarchically
– Can lead to improved precision of estimates
– Increased complexity can lead to better predictions
– But these approaches require lots of data

