Adaptive management

AM is about learning to manage DYNAMIC systems more effectively. There are two kinds of AM:
– Passive (certainty equivalent): assumes statistical estimation performance is independent of policy choice
– Active (dual effect of control): assumes estimation performance depends on policy (policy "probes for opportunity")
Apparently some people never learn: the sad history of the Chesapeake Bay oyster fishery (from Rothschild et al. MEPS 1994)
History of AM

– Ecosystem modeling workshops, 1970
– AEAM workshop process, 1972-74
– Dual control problem (experiments), 1976
– Many case studies using AEAM workshop process, 1976-2000
– Split in AM definition (experimentation versus consensus building), 1990
– Recognition of very high failure rates for case studies, and the IBM debate (modeling vs experimentation), 1997
The prototype adaptive management problem: Fraser River sockeye salmon

– Limited range of historical experience
– Rationalizer model that "explains" past management (η1)
– Possible opportunity (η2) for improvement; need "probing" management experiment to test

From Walters and Hilborn 1976
Decision tables for stock rebuilding experiments

| Policy                             | Rationalizer model correct (Ricker)                                    | Opportunity model correct (Beverton-Holt)                                           |
| Maintain current management policy | Modest harvest value maintained                                        | Modest harvest value maintained                                                     |
| Conduct probing experiment         | Loss during experiment, followed by modest value when experiment ended | Initial loss followed by long-term gain in value if experiment done for long enough |
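The decision table above can be read as an expected-value calculation over the two candidate models. A minimal sketch, with made-up model probabilities and payoffs (none of these numbers are from the lecture):

```python
# Sketch: expected value of each policy in the decision table, weighting
# the outcome under each stock-recruitment model by a prior model probability.
# All numbers below are illustrative assumptions.

p_ricker, p_bevholt = 0.7, 0.3  # prior belief in each model (assumed)

# Payoffs (arbitrary value units) for each policy x model combination.
payoffs = {
    "maintain":   {"ricker": 100, "bevholt": 100},  # modest value either way
    "experiment": {"ricker": 80,  "bevholt": 150},  # short-term loss vs long-term gain
}

for policy, by_model in payoffs.items():
    ev = p_ricker * by_model["ricker"] + p_bevholt * by_model["bevholt"]
    print(f"{policy}: expected value = {ev:.1f}")
```

With these assumed numbers the probing experiment has a slightly higher expected value, which is the basic argument for active adaptive management: the experiment buys information about which model is correct.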
Surprise and opportunity: unexpected results from the Fraser sockeye experiment

But these effects are not seen in stocks for which escapement has not increased, i.e., the effects are not due to shared environmental factor(s).
Other AM examples

– Hatchery impact on wild coho stocks (alternating hatchery releases)
– Restoration of endangered humpback chub, Grand Canyon (exotic predator control, warm water)
– Impacts of line fishing on Great Barrier Reef fish communities (rotating openings and closures)
Need for actively adaptive policies

– Learning rates generally do depend on policy choice, since responses are generally regression relationships
– Not all "probing" policies are worth the cost of testing the policy response
Experimental design choices in AM

– Only BA (before-after) comparisons are available for large, unique systems
– For spatially structured systems, "pilot experiments" can be used on representative local areas:
–– CI (control-impact) comparisons assume control sites are good predictors of how impacted sites would have behaved
–– BACI (before-after control-impact) comparisons let us control for time effects that may affect all sites
A BACI design treats control sites as models for how impacted sites would have behaved if not treated.

[Figure: time series of treated site data, control site data, and predicted treated site data.]
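The BACI logic reduces to a difference of differences: the treated site's before-after change minus the control site's change, which removes time effects shared by both sites. A minimal sketch with made-up monitoring values:

```python
# Sketch of the BACI effect estimate (all numbers are made up for illustration).
treated_before, treated_after = 10.0, 18.0
control_before, control_after = 9.0, 12.0

# Treated site's change, minus the change predicted from the control site.
baci_effect = (treated_after - treated_before) - (control_after - control_before)
print(baci_effect)  # 8 - 3 = 5: change attributed to treatment
```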
There is no scientific way to say with certainty how any treated system would have behaved if it had not been treated (we can never both treat and not treat a system at the same time). We can only gain reassurance that an apparent response was not caused by something besides treatment by repeating the treatment over and over and looking for similarities in responses; that is what is meant by “replication” (replicates are NOT identical experimental units).
Dynamic systems are nasty

– BA experiments ALWAYS lead to debate about the effect of treatment versus other changes observed after treatment
– CI comparisons fail when there is selection bias (e.g., marine protected areas vs. nearby fished areas) and/or divergent natural behavior
– BACI designs do not control for time-treatment interactions
Confounding of effects in before-after comparisons: nature does not give unambiguous contrasts among causal factors

Native fish abundances in the Grand Canyon have increased dramatically since 2003. Was this caused by "mechanical removal" of predatory trout, or by increases in water temperature?
Time-treatment interactions

If a treated experimental unit shows some undesired response compared to control, should we assume the response was due to treatment? A proponent of the policy represented by the experimental treatment can simply argue that treated units respond differently to temporal forcing factors than do untreated units.

[Figure: response over time for control (C) and treated (T) units.]
Staircase experimental designs

Instead of comparing treated to untreated units, compare units treated at different starting times. Does the "shape" of the treatment response change over time, i.e., does it depend on the time when treatment started?

[Figure: response over time for control (C) and treated (T) units.]
Staircase experimental designs

General Linear Model (GLM) approach to analysis:

Y_it = μ_i + T_t + R_(t−t_i) + e_it

where μ_i is the unit effect, T_t is the time effect, R_(t−t_i) is the time-since-treatment effect, and e_it is the error.

[Figure: grid of study units (i) versus time (t), showing time since treatment for units treated at staggered starting times.]

What is the average effect of time since treatment, over possible times of treatment?
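The staircase GLM can be fit by ordinary least squares with dummy coding for the unit, time, and time-since-treatment factors. A minimal sketch on simulated, noise-free data (all numbers are assumptions for illustration); note that with full dummy coding the individual effects are aliased, so only the fitted values, not the separate coefficients, are uniquely determined here:

```python
import numpy as np

# Sketch: fit Y_it = mu_i + T_t + R_(t - t_i) + e_it by least squares
# on simulated staircase data (all numbers below are assumed).
rng = np.random.default_rng(0)
n_units, n_times = 4, 9
start = [2, 3, 4, 5]                      # staggered treatment start times
mu = [1.0, 2.0, 3.0, 4.0]                 # unit effects
T = rng.normal(0, 0.5, n_times)           # shared year effects
R = [0.0, 1.0, 1.5, 1.8, 2.0, 2.0, 2.0]   # response vs. time since treatment

rows, y = [], []
for i in range(n_units):
    for t in range(n_times):
        lag = t - start[i]
        resp = R[lag] if lag >= 0 else 0.0
        y.append(mu[i] + T[t] + resp)      # noise-free for clarity
        # Dummy-coded design row: unit, time, and time-since-treatment effects.
        x = np.zeros(n_units + n_times + len(R))
        x[i] = 1.0
        x[n_units + t] = 1.0
        if lag >= 0:
            x[n_units + n_times + lag] = 1.0
        rows.append(x)

X, y = np.array(rows), np.array(y)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # pseudoinverse handles aliasing
print("max fit error:", np.abs(X @ beta - y).max())
```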
Adaptive management experiments

– Expensive because of the need to control for complex time-dynamic effects
– Risky because responses may be the opposite of what was expected
– Require innovative monitoring approaches (e.g., cooperation with stakeholders, use of technologies with high initial capital cost)
Finding optimum adaptive policies

For simple models, we can do this with "stochastic dynamic programming". The basic idea is to represent total value from time t into the future, V_t, as a sum of two components (we want the highest value of this sum):

V_t = v_t + V_(t+1)

– v_t, the value this year, depends on (1) stock this year and (2) stock left to breed
– V_(t+1), the future value, depends on (1) stock left to breed, (2) natural disturbances, and (3) information gained about the effect of breeding stock on production
There is a tradeoff among the value components

[Figure: value components V_t (immediate value) and V_(t+1) (future value) as functions of stock left to breed this year.]

Dynamic programming tests each possible breeding stock size to find the best choice, with V_(t+1) averaged over possible future disturbances.
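The recursion V_t = v_t + V_(t+1) can be solved by backward induction over a discretized stock grid. A minimal sketch, with an assumed Beverton-Holt-style production function, assumed disturbance values, and made-up parameters (none from the lecture):

```python
import numpy as np

# Sketch of stochastic dynamic programming for a harvest problem.
# All functions and numbers below are illustrative assumptions.
stocks = np.arange(0, 101)     # discretized stock states
shocks = [0.8, 1.0, 1.2]       # equally likely recruitment disturbances

def recruits(escapement, w):
    # Beverton-Holt-style production with multiplicative disturbance w.
    return min(100, int(round(w * 200 * escapement / (50 + escapement))))

horizon = 20
V = np.zeros(len(stocks))      # terminal value: V_horizon = 0
for t in range(horizon - 1, -1, -1):
    V_new = np.zeros_like(V)
    for s in stocks:
        best = -np.inf
        for e in range(s + 1):                 # escapement e, harvest s - e
            # Future value averaged over possible disturbances.
            future = np.mean([V[recruits(e, w)] for w in shocks])
            best = max(best, (s - e) + 0.95 * future)  # discounted
        V_new[s] = best
    V = V_new

print(round(V[50], 1))  # expected discounted value starting from stock 50
```

The inner loop is exactly the tradeoff in the figure: each candidate escapement trades immediate harvest value against the expected future value of stock left to breed.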
Modeling dynamics of learning using Bayes Theorem

Suppose N fish are observed in year t. Then for each hypothesis:

P_t(hyp) = P_(t−1)(hyp) · P(N | hyp) / P_(t−1)(N)

where P_(t−1)(N) = Σ_hyp P_(t−1)(hyp) · P(N | hyp) is the probability of the data. Note that differences in predicted N among the hypotheses are represented by different P(N | hyp) distributions. In dynamic programming for adaptive management, both N and the probabilities P_t(hyp) are treated as dynamic state variables.
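The Bayes update above can be sketched numerically. The hypotheses, prior probabilities, and the Poisson form of P(N | hyp) are all illustrative assumptions, not the lecture's:

```python
from math import exp, factorial

# Sketch of the Bayesian learning update: posterior probability of each
# stock-recruitment hypothesis after observing N fish in year t.
def poisson(n, mean):
    return mean**n * exp(-mean) / factorial(n)

prior = {"ricker": 0.5, "beverton_holt": 0.5}       # P_(t-1)(hyp), assumed
predicted_mean = {"ricker": 20, "beverton_holt": 35}  # each model's predicted N

N = 30                                              # fish observed in year t
likelihood = {h: poisson(N, predicted_mean[h]) for h in prior}
p_data = sum(prior[h] * likelihood[h] for h in prior)   # P_(t-1)(N)
posterior = {h: prior[h] * likelihood[h] / p_data for h in prior}
print(posterior)
```

Here the observation N = 30 is closer to the Beverton-Holt prediction, so its posterior probability rises; iterating this update year by year is the learning dynamic that the dynamic programming carries as a state variable.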