1
Use of Monte-Carlo particle filters to fit and compare models for the dynamics of wild animal populations
Len Thomas, Newton Inst., 21st Nov 2006
"I always wanted to be a model…"
2
Outline
1. Introduction
2. Basic particle filtering
3. Tricks to make it work in practice
4. Applications
 – (i) PF, obs error fixed
 – (ii) PF vs KF, one-colony model
 – (iii) PF vs MCMC
5. Discussion
3
References Our work: http://www.creem.st-and.ac.uk/len/
4
Joint work with…
Methods and framework: Ken Newman, Steve Buckland (NCSE St Andrews)
Seal models: John Harwood, Jason Matthiopoulos (NCSE & Sea Mammal Research Unit), and many others at SMRU
Comparison with Kalman filter: Takis Besbeas, Byron Morgan (NCSE Kent)
Comparison with MCMC: Carmen Fernández (Univ. Lancaster)
5
1. Introduction
6
Answering questions about wildlife systems
– How many? Population trends, vital rates
– What if? Scenario planning, risk assessment, decision support
– Survey design: adaptive management
7
State space model
State process density: $g_t(n_t \mid n_{t-1}; \Theta)$
Observation process density: $f_t(y_t \mid n_t; \Theta)$
Initial state density: $g_0(n_0; \Theta)$
Bayesian approach, so: priors on $\Theta$; the initial state density together with the state process density gives the prior on $n_{1:T}$
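Putting these pieces together, the joint posterior that the fitting methods below target factorizes in the standard way:

$$p(n_{0:T}, \Theta \mid y_{1:T}) \;\propto\; p(\Theta)\, g_0(n_0; \Theta) \prod_{t=1}^{T} g_t(n_t \mid n_{t-1}; \Theta)\, f_t(y_t \mid n_t; \Theta)$$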
8
British grey seal Population in recovery from historical exploitation NERC Special Committee on Seals
9
Data
Aerial surveys of breeding colonies since the 1960s count pups.
Other data: intensive studies, radio tracking, genetics, counts at haul-outs.
10
Pup production estimates
11
Orkney example colonies
12
State process model
Life cycle graph representation: age classes pup, 1, 2, 3, 4, 5, 6+, with density dependence entering either at the pup stage or later in the life cycle.
13
Density dependence, e.g. in pup survival; carrying capacity $\chi_r$
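The functional form was shown as a curve on the slide; one commonly used form for density-dependent pup survival with a regional carrying capacity $\chi_r$ (an assumed illustration, not necessarily the exact function used in the talk, with $\phi_{\max}$ the maximum survival and $\rho$ a shape parameter) is

$$\phi_{\text{pup},t,r} \;=\; \frac{\phi_{\max}}{1 + \left(n_{\text{pup},t,r}/\chi_r\right)^{\rho}}$$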
14
More flexible models of density dependence
15
State process model: 4 regions
Regions: North Sea, Inner Hebrides, Outer Hebrides, Orkneys; each with age classes pup, 1–5, 6+.
Movement between regions depends on distance, density dependence and site faithfulness.
16
SSMs of wildlife population dynamics: summary of features
– State vector is high dimensional (seal model: 7 × 4 × 22 = 616)
– Observations are only available on a subset of these states (seal model: 1 × 4 × 22 = 88)
– State process density is a convolution of sub-processes, so is hard to evaluate
– Parameter vector is often quite large (seal model: 11–12)
– Parameters are often partially confounded, and some are poorly informed by the data
17
Fitting state-space models
Analytic approaches:
– Kalman filter (Gaussian linear model; Besbeas et al.)
– Extended Kalman filter (Gaussian nonlinear model – approximate) and other KF variations
– Numerical maximization of the likelihood
Monte Carlo approximations:
– Likelihood-based (Geyer; de Valpine)
– Bayesian:
   – Rejection sampling (Damien Clancy)
   – Markov chain Monte Carlo (MCMC; Bob O’Hara, Ruth King)
   – Sequential Importance Sampling (SIS), a.k.a. Monte Carlo particle filtering
18
Inference tasks for time series data
Observe data $y_{1:t} = (y_1, \ldots, y_t)$. We wish to infer the unobserved states $n_{1:t} = (n_1, \ldots, n_t)$ and parameters $\Theta$.
Fundamental inference tasks:
– Smoothing: $p(n_{1:t}, \Theta \mid y_{1:t})$
– Filtering: $p(n_t, \Theta_t \mid y_{1:t})$
– Prediction: $p(n_{t+x} \mid y_{1:t})$, $x > 0$
19
Filtering
Filtering forms the basis for the other inference tasks. Filtering is easier than smoothing (and can be very fast): we only need to integrate over $n_t$, not $n_{1:t}$.
Filtering recursion: a divide-and-conquer approach that considers each new data point one at a time:
$p(n_0) \xrightarrow{\;y_1\;} p(n_1 \mid y_1) \xrightarrow{\;y_2\;} p(n_2 \mid y_{1:2}) \xrightarrow{\;y_3\;} p(n_3 \mid y_{1:3}) \xrightarrow{\;y_4\;} p(n_4 \mid y_{1:4})$
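For reference, one step of this recursion combines a prediction integral with a Bayes correction:

$$p(n_t \mid y_{1:t}) \;\propto\; f_t(y_t \mid n_t) \int g_t(n_t \mid n_{t-1})\, p(n_{t-1} \mid y_{1:t-1})\, \mathrm{d}n_{t-1}$$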
20
Monte-Carlo particle filters: online inference for evolving datasets
Particle filtering is used when fast online methods are required to produce updated (filtered) estimates as new data arrive:
– Tracking applications in radar, sonar, etc.
– Finance: stock prices and exchange rates arrive sequentially; online update of portfolios
– Medical monitoring: online monitoring of ECG data for sick patients
– Digital communications
– Speech recognition and processing
21
2. Monte Carlo Particle Filtering
Variants/synonyms:
– Sequential Monte Carlo methods
– Sequential Importance Sampling (SIS)
– Sampling Importance Sampling Resampling (SISR)
– Bootstrap filter
– Interacting particle filter
– Auxiliary particle filter
22
Importance sampling
Want to make inferences about some distribution p(), but cannot sample from it directly.
Solution:
– Sample from another function q() (the importance function) that has the same support as p() (or wider support)
– Correct using importance weights
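The correction step uses the standard importance-sampling identity: for an expectation under the target $p$,

$$\mathbb{E}_p[h(x)] = \int h(x)\,\frac{p(x)}{q(x)}\,q(x)\,\mathrm{d}x \;\approx\; \sum_{i=1}^{K} \tilde{w}^{(i)}\, h\!\left(x^{(i)}\right), \qquad x^{(i)} \sim q(),\quad \tilde{w}^{(i)} \propto \frac{p(x^{(i)})}{q(x^{(i)})},\quad \sum_{i=1}^{K} \tilde{w}^{(i)} = 1$$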
23
Example:
24
Importance sampling algorithm
Given $p(n_t \mid y_{1:t})$ and $y_{t+1}$, want to update to $p(n_{t+1} \mid y_{1:t+1})$.
Prediction step: make K random draws (i.e., simulate K “particles”) from the importance function, $n_{t+1}^{(i)} \sim q(n_{t+1} \mid n_t^{(i)}, y_{t+1})$.
Correction step: calculate $w_{t+1}^{(i)} \propto w_t^{(i)}\, f_{t+1}(y_{t+1} \mid n_{t+1}^{(i)})\, g_{t+1}(n_{t+1}^{(i)} \mid n_t^{(i)}) \,/\, q(n_{t+1}^{(i)} \mid n_t^{(i)}, y_{t+1})$.
Normalize the weights so that $\sum_{i=1}^{K} w_{t+1}^{(i)} = 1$.
Approximate the target density by the weighted particle set: $p(n_{t+1} \mid y_{1:t+1}) \approx \sum_{i=1}^{K} w_{t+1}^{(i)}\, \delta(n_{t+1} - n_{t+1}^{(i)})$.
25
Importance sampling: take-home message
The key to successful importance sampling is finding a proposal q() that:
– we can generate random values from
– has weights p()/q() that can be evaluated
The key to efficient importance sampling is finding a proposal q() that:
– we can easily/quickly generate random values from
– has weights p()/q() that can be evaluated easily/quickly
– is close to the target distribution
26
Sequential importance sampling
SIS is just repeated application of importance sampling at each time step.
Basic sequential importance sampling:
– Proposal distribution $q() = g_{t+1}(n_{t+1} \mid n_t)$
– Leads to weights $w_{t+1}^{(i)} \propto w_t^{(i)}\, f_{t+1}(y_{t+1} \mid n_{t+1}^{(i)})$
To do basic SIS, need to be able to:
– Simulate forward from the state process
– Evaluate the observation process density (the likelihood)
27
Basic SIS algorithm
Generate K “particles” from the prior on $\{n_0, \Theta\}$, each with weight $1/K$.
For each time period $t = 1, \ldots, T$, for each particle $i = 1, \ldots, K$:
– Prediction step: simulate $n_t^{(i)} \sim g_t(n_t \mid n_{t-1}^{(i)}; \Theta^{(i)})$
– Correction step: set $w_t^{(i)} \propto w_{t-1}^{(i)}\, f_t(y_t \mid n_t^{(i)}; \Theta^{(i)})$, then normalize so the weights sum to 1
28
Justification of weights
29
Example of basic SIS State-space model of exponential population growth –State model –Observation model –Priors
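To make the workflow concrete, here is a minimal sketch of basic SIS for a model of this kind. The slide's exact state, observation and prior densities are not given in the text, so the densities below (Poisson growth, Poisson observation, Poisson and lognormal priors) and all numerical settings are illustrative assumptions only.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
K = 10_000                      # number of particles
y_obs = [12, 14]                # observed counts at t = 1, 2 (values from the slide example)

# Sample particles from the (assumed) priors on n0 and the growth rate Theta,
# all starting with equal weight 1/K
n = rng.poisson(lam=15, size=K).astype(float)        # assumed prior on initial population size
theta = rng.lognormal(mean=0.0, sigma=0.1, size=K)   # assumed prior on growth rate
w = np.full(K, 1.0 / K)

for y_t in y_obs:
    # Prediction step: simulate each particle forward from the (assumed) state process
    n = rng.poisson(lam=theta * n).astype(float)
    # Correction step: multiply weights by the (assumed) Poisson observation density f(y_t | n_t)
    w *= np.exp(-n) * n ** y_t / factorial(y_t)
    w /= w.sum()                                      # normalize so the weights sum to 1

# Effective sample size: small values signal particle depletion (see the following slides)
ess = 1.0 / np.sum(w ** 2)
print(f"E[n_t | y]: {np.sum(w * n):.1f}   E[Theta | y]: {np.sum(w * theta):.3f}   ESS: {ess:.0f}")
```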
30
Example of basic SIS, t = 1 (observation: 12)
The slide steps ten particles through one update: sample $(n_0, \Theta)$ from the prior, each with weight $w_0 = 0.1$; predict forward from the state process to give the prior at t = 1; then correct using $f(y_1 = 12 \mid n_1)$ to give the posterior weights $w_1$ at t = 1, which are already far from uniform.
31
Example of basic SIS, t = 2 (observation: 14)
The particles from the posterior at t = 1 are predicted forward to give the prior at t = 2, then corrected using $f(y_2 = 14 \mid n_2)$ to give the posterior weights $w_2$ at t = 2; by now a single particle carries about 70% of the total weight.
32
Problem: particle depletion
The variance of the weights increases with time, until a few particles have almost all the weight. This results in large Monte Carlo error in the approximation.
Can quantify via the effective sample size, $\mathrm{ESS} = 1 / \sum_{i=1}^{K} (w^{(i)})^2$. From the previous example:
Time:  0     1    2
ESS:   10.0  2.5  1.8
33
Problem: particle depletion
Worse when:
– Observation error is small
– There are lots of data at any one time point
– The state process has little stochasticity
– Priors are diffuse or not congruent with the observations
– The state process model is incorrect (e.g., time varying)
– There are outliers in the data
34
Some intuition
In a (basic) PF, we simulate particles from the prior, and gradually focus in on the full posterior by filtering the particles using data from one time period at a time.
Analogies with MCMC:
– In MCMC, we take correlated samples from the posterior. We make proposals that are accepted stochastically. The problem is to find a “good” proposal; the limitation is time – has the sampler converged yet?
– In a PF, we get an importance sample from the posterior. We generate particles from a proposal, which are assigned weights (and other stuff – see later). The problem is to find a “good” proposal; the limitation is memory – do we have enough particles?
So, for each “trick” in MCMC, there is probably an analogous “trick” in PF (and vice versa).
35
3. Particle filtering “tricks” An advanced randomization technique
36
Tricks: solutions to the problem of particle depletion
– Pruning: throw out “bad” particles (rejection)
– Enrichment: boost “good” particles (resampling)
   – Directed enrichment (auxiliary particle filter)
   – Mutation (kernel smoothing)
– Other stuff
   – Better proposals
   – Better resampling schemes
   – …
37
Rejection control
Idea: throw out particles with low weights.
Basic algorithm, at time t:
– Have a pre-determined threshold $c_t$, where $0 < c_t \le 1$
– For $i = 1, \ldots, K$, accept particle $i$ with probability $\min(1, w_t^{(i)} / c_t)$
– If the particle is accepted, update its weight to $\max(w_t^{(i)}, c_t)$
– Now we have fewer than K samples; can make up the number by sampling from the priors, projecting forward to the current time point and repeating the rejection control
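A minimal sketch of one rejection-control pass, assuming particle arrays `n`, `theta`, `w` as in the earlier SIS sketch (the function name, interface and default threshold are illustrative):

```python
import numpy as np

def rejection_control(n, theta, w, rng, c=None):
    """Keep particle i with probability min(1, w_i / c); accepted particles get weight max(w_i, c)."""
    c = np.mean(w) if c is None else c                      # adaptive threshold, e.g. mean weight
    keep = rng.uniform(size=w.shape) < np.minimum(1.0, w / c)
    # Dropped particles can be replaced by fresh draws from the prior,
    # projected forward to the current time point, with rejection control repeated
    return n[keep], theta[keep], np.maximum(w[keep], c)
```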
38
Rejection control – discussion
– Particularly useful at t = 1 with diffuse priors
– Can have a sequence of control points (not necessarily every year)
– Check points don’t need to be fixed – can trigger when the variance of the weights gets too high
– Thresholds $c_t$ don’t need to be set in advance but can be set adaptively (e.g., mean of the weights)
– Instead of restarting at time t = 0, can restart by sampling from particles at the previous check point (= partial rejection control)
39
Resampling: pruning and enrichment
Idea: allow “good” particles to amplify themselves while killing off “bad” particles.
Algorithm. Before and/or after each time step (not necessarily every time step):
– For j = 1, …, K: sample independently from the set of particles according to probabilities proportional to the weights, then assign each resampled particle the new weight $1/K$.
Reduces particle depletion of states, as “children” particles with the same “parent” now evolve independently.
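A minimal multinomial-resampling sketch, again assuming the particle arrays from the earlier SIS sketch (names and interface are illustrative):

```python
import numpy as np

def resample(n, theta, w, rng):
    """Multinomial resampling: draw K children in proportion to the weights, then reset weights to 1/K."""
    K = len(w)
    idx = rng.choice(K, size=K, replace=True, p=w / np.sum(w))
    return n[idx], theta[idx], np.full(K, 1.0 / K)
```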
40
Resample probabilities
Should be related to the weights, e.g. proportional to $(w_t^{(i)})^{\alpha}$, with $\alpha = 1$ as in the bootstrap filter:
– α could vary according to the variance of the weights
– α = ½ has been suggested
Or related to a “future trend” – as in the auxiliary particle filter.
41
Directed resampling: auxiliary particle filter
Idea: pre-select particles likely to have high weights in the future.
Example algorithm:
– For j = 1, …, K: sample independently from the set of particles according to the pre-selection probabilities, then predict and correct as before.
The pre-selection probabilities can be obtained by projecting each particle forward deterministically. If “future” observations are available, can extend to look more than one time step ahead – e.g., a protein folding application.
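A standard choice for those pre-selection probabilities is the Pitt–Shephard auxiliary particle filter form (an assumption here, not necessarily the exact expression used in the talk):

$$\beta_t^{(i)} \;\propto\; w_t^{(i)}\, f_{t+1}\!\left(y_{t+1} \mid \mu_{t+1}^{(i)}\right), \qquad \mu_{t+1}^{(i)} = \mathbb{E}\!\left[\,n_{t+1} \mid n_t^{(i)}\,\right]$$

where $\mu_{t+1}^{(i)}$ is the deterministic forward projection of particle $i$.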
42
Kernel smoothing: enrichment of parameters through mutation
Idea: introduce small “mutations” into parameter values when resampling.
Algorithm:
– Given particles $\{n_t^{(i)}, \Theta_t^{(i)}, w_t^{(i)}\}$
– Let $V_t$ be the variance matrix of the $\Theta_t^{(i)}$
– For i = 1, …, K: sample a perturbed parameter value from a kernel centred on $\Theta_t^{(i)}$ with variance $h^2 V_t$, where h controls the size of the perturbations
– The variance of the parameters is now $(1 + h^2)V_t$, so shrinkage is needed to preserve the first two moments
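One standard way to implement that shrinkage (the Liu and West construction; whether the talk used exactly this form is an assumption) is to centre the kernel on a value shrunk towards the particle mean $\bar{\Theta}_t$:

$$\Theta_t^{(i)*} \sim \mathrm{N}\!\left(a\,\Theta_t^{(i)} + (1-a)\,\bar{\Theta}_t,\; h^2 V_t\right), \qquad a = \sqrt{1 - h^2},$$

which leaves both the mean and the variance of the parameter particle cloud unchanged.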
43
Kernel smoothing – discussion
The previous algorithm does not preserve the relationship between parameters and states:
– Leads to poor smoothing inference
– Possibly unreliable filtered inference?
– Pragmatically, use as small a value of h as possible
Extensions:
– Kernel smooth states as well as parameters
– Local kernel smoothing
44
Other “tricks”
Reducing dimension:
– Rao–Blackwellization – integrating out some part of the model
Better proposals:
– Start with an importance sample (rather than from the priors)
– Conditional proposals
Better resampling (see the sketch below):
– Residual resampling
– Stratified resampling
Alternative “mutation” algorithms:
– MCMC within PF
Gradual focusing on the posterior:
– Tempering/annealing
– …
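As one concrete instance of a lower-variance resampling scheme, here is a minimal systematic-resampling sketch (a close relative of stratified resampling; the function name and interface are illustrative):

```python
import numpy as np

def systematic_resample(w, rng):
    """Return K particle indices drawn by systematic resampling.

    Uses one uniform offset and K evenly spaced points, which gives lower
    resampling variance than K independent multinomial draws.
    """
    K = len(w)
    positions = (rng.uniform() + np.arange(K)) / K
    return np.searchsorted(np.cumsum(w / np.sum(w)), positions)
```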
45
4. Applications
46
(i) Faray example Motivation: Comparison with Kalman Filter (KF) via Integrated Population Modelling methods of Besbeas et al.
47
Example state process model: density-dependent emigration
Life cycle with age classes pup, 1–5, 6+; density-dependent emigration; τ fixed at 1991.
48
Observation Process Model Ψ = CV of observations
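The observation density itself was given as a formula on the slide; a plausible reconstruction consistent with “Ψ = CV of observations” (an assumption, not the confirmed form) is a Gaussian density whose standard deviation scales with the expected pup count:

$$y_t \mid n_{\text{pup},t} \;\sim\; \mathrm{Normal}\!\left(n_{\text{pup},t},\; \left(\psi\, n_{\text{pup},t}\right)^2\right)$$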
49
Priors
Parameters:
– Informative priors on survival rates from intensive studies (mark–recapture)
– Informative priors on fecundity, carrying capacity and observation CV from expert opinion
Initial values for states in 1984:
– For pups, assume …
– For other ages: either a stable age prior or a more diffuse prior (both are used in the results below)
50
Fitting the Faray data
One colony: a relatively low-dimensional problem, so few “tricks” are required:
– Pruning (rejection control) in the first time period
– Multiple runs of the sampler until the required accuracy is reached (note: ideal for parallelization)
– Pruning of the final results (to reduce the number of particles stored)
51
Results – smoothed states (more diffuse prior): KF result vs SIS result
52
Posterior parameter estimates
Sensitivity to priors (method of Millar, 2004):
Param   1      2
φ_a     0.67   0.81
φ_p     0.17   0.49
α       0.19   0.48
ψ       0.19   0.05
β       0.23   0.33
Plots show the prior, the posterior median, and the median ML estimate from the KF.
53
Results – smoothed states (stable age prior): KF result vs SIS result
54
(ii) Extension to regional model
Four regions (North Sea, Inner Hebrides, Outer Hebrides, Orkneys), each with age classes pup, 1–5, 6+.
Density-dependent juvenile survival; movement between regions depends on distance, density dependence and site faithfulness.
55
Fitting the regional data
A higher-dimensional problem (7 × 4 × N_years states; 11 parameters), so more “tricks” are required for an efficient sampler:
– Pruning (rejection control) in the first time period
– Multiple runs, with rejection control of the final results
– Directed enrichment (auxiliary particle filter, with kernel smoothing of parameters)
56
Estimated pup production
57
Posterior parameter estimates
58
Predicted adults
59
(iii) Comparison with MCMC
Motivation:
– Which is more efficient?
– Which is more general?
– Do the “tricks” used in SIS cause bias?
Example applications:
– Simulated data for coho salmon
– Grey seal data: 4-region model with movement and density-dependent pup survival
60
Summary of findings
– To be efficient, the MCMC sampler had to be highly customized (not at all general)
– We also used an additional “trick” in SIS: integrating out the observation CV parameter; the SIS algorithm nevertheless remained quite general
– MCMC was more efficient (lower MC variation per unit CPU time)
– The SIS algorithm was less efficient, but was not significantly biased
61
Update: kernel smoothing bias (panels: KS discount = 0.999999 vs KS discount = 0.997)
62
5. Discussion
(“Can’t we discuss this?” … “I’ll make you fit into my model!!!”)
63
Modelling framework
State-space framework:
– Can explicitly incorporate knowledge of the biology into the state process models
– Explicitly models sources of uncertainty in the system
– Brings together diverse sources of information
Bayesian approach:
– Expert knowledge is frequently useful, since the data are often uninformative
– (In theory) can fit models of arbitrary complexity
64
SIS vs KF
Like SIS, the use of the KF and its extensions is still an active research topic.
The KF is certainly faster – but is it accurate and flexible enough?
The two may be complementary:
– The KF could be used for initial model investigation/selection
– The KF could provide a starting importance sample for a particle filter
65
SIS vs MCMC
SIS:
– In other fields, widely used for “on-line” problems, where the emphasis is on fast filtered estimates (foot and mouth outbreak? N. American west coast salmon harvest openings?)
– Can the general algorithms be made more efficient?
MCMC:
– Better for “off-line” problems? There is plenty of time to develop and run highly customized, efficient samplers
– Are general, efficient samplers possible for this class of problems?
Current disadvantages of SIS:
– Methods less well developed than for MCMC?
– No general software (no WinBUGS equivalent – “WinSIS”)
66
Current / future research
SIS:
– Efficient general algorithms (and software)
– Comparison with MCMC and the Kalman filter
– Parallelization
– Model selection and multi-model inference
– Diagnostics
Wildlife population models:
– Other seal models (random effects, covariates, colony-level analysis, more data…)
– Other applications (salmon, sika deer, Canadian seals, killer whales, …)
67
! Just another particle…
68
Inference from different models (footnote 1: assuming the number of adult males is 0.73 × the number of adult females)
69
Model selection
70
Effect of an independent estimate of total population size (DDS & DDF models)
Assumes the independent estimate is normally distributed with 15% CV. Calculations based on data from 1984–2004.