
1 Machiavellian Forecasting: do the ends justify the means? Ted Bohn UW CEE UBC/UW 2005 Hydrology and Water Resources Symposium

2 Potential Problem Accurate predictions depend on understanding the physical processes Can we achieve high accuracy while requiring the model to portray only well-understood processes? If not, does improved accuracy justify methods that might obscure physical processes?

3 Outline Background - a forecasting problem General approach – multi-model ensemble Comparison of two multi-model approaches

4 Background UW West-Wide Stream Flow Forecast system (Wood et al, 2002) –Long-lead-time (1-6 months) stream flow forecasting for western U.S. –Main component: Variable Infiltration Capacity (VIC) large-scale hydrological model –Probabilistic forecasts End-users include: –Municipal water supply planners (drinking water) –Farmers (irrigation) –Utilities (hydropower) –Environmental agencies (habitat)

5 Experimental W. US Hydrologic Forecast System

6 Background Immediate goal: improve forecast accuracy/precision at long lead times (1-6 months) Problems: –Uncertainty grows with lead time –Greater uncertainty when making forecasts before the snow pack has accumulated

7 Seasonal Hydrologic Forecast Uncertainty in Western US [Figure: forecast uncertainty vs. lead time over the water year (Oct–Sep), showing actual vs. forecast streamflow volume with a model + data uncertainty band; regimes range from low IC error / high climate forecast error to high IC error / low climate forecast error.] The importance of uncertainty in ICs vs. climate varies with lead time; hence the importance of model & data errors also varies with lead time.

8 How to reduce uncertainty? Multi-model ensemble –Average the results of multiple models –Ensemble mean should be more stable than a single model –Combines the strengths of each model –Provides estimates of forecast uncertainty
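The averaging step described on this slide can be sketched in a few lines of Python. The model names match the ensemble discussed later; the flow values themselves are made up for illustration:

```python
import numpy as np

# Hypothetical monthly flow forecasts (m^3/s) from three models over a
# six-month lead period; values are illustrative only.
vic  = np.array([12.0, 15.0, 40.0, 80.0, 55.0, 20.0])
sac  = np.array([ 8.0, 14.0, 45.0, 70.0, 50.0, 15.0])
noah = np.array([ 5.0, 10.0, 30.0, 50.0, 30.0, 10.0])

forecasts = np.stack([vic, sac, noah])   # shape (n_models, n_months)

ensemble_mean = forecasts.mean(axis=0)   # combined forecast
ensemble_std  = forecasts.std(axis=0)    # crude spread -> uncertainty estimate

print(ensemble_mean[0])  # (12 + 8 + 5) / 3 ≈ 8.33
```

The cross-model spread gives a first-cut uncertainty estimate; the BMA weighting described later refines this by trusting better-performing models more.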

9 Expansion to multiple-model framework [Diagram: seasonal climate forecast data sources — NOAA (CPC Official Outlooks, Coupled Forecast System, CAS, OCN, SMLR, CCA, CA), NASA (NSIPP/GMAO dynamical model), UW (ESP, ENSO/PDO, ENSO) — feeding the VIC hydrology model.]

10 Multiple Hydrologic Models [Diagram: the same seasonal climate forecast sources (NOAA, NASA, UW) feeding Models 1–3, with weightings calibrated via retrospective analysis.]

11 Computing Model Weights Bayesian Model Averaging (BMA) (Raftery et al., 2005) Ensemble mean forecast = Σ w_k f_k, where f_k = result of the k-th model and w_k = weight of the k-th model, related to the model's correlation with observations during training
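The weighted combination on this slide is a one-liner; the sketch below uses made-up weights and forecast values for the three models (in practice the w_k come from BMA training against observations):

```python
import numpy as np

# Hypothetical BMA weights for VIC, SAC, NOAH (must sum to 1);
# real weights come from the Raftery et al. (2005) training procedure.
weights = np.array([0.6, 0.3, 0.1])   # w_k

# One month's flow forecast from each model (illustrative values, m^3/s)
f = np.array([80.0, 70.0, 50.0])      # f_k

bma_mean = np.sum(weights * f)        # ensemble mean = Σ w_k f_k
print(bma_mean)                       # 0.6*80 + 0.3*70 + 0.1*50 = 74.0
```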

12 Computing Model Weights Assumption: model results have a Gaussian distribution Western U.S. – many streams have 3-parameter log-normal (LN-3) distributions Each month may have a distinct distribution We need to transform flows to the Gaussian domain before computing weights

13 Computing Model Weights Each model, each month, will have a different transformation, including a bias correction This destroys the month-to-month shape of the simulated stream flow Alternatively, try a simple 2-parameter log-normal transformation (LN-2), the same for all months and models We have examined both methods
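The two transformations on slides 12–13 can be sketched as follows. The flow values and the LN-3 location parameter tau below are invented for illustration; in the actual system each (model, month) pair would get its own fitted tau and standardization:

```python
import numpy as np

# Hypothetical June flows (m^3/s)
flows = np.array([60.0, 85.0, 120.0, 95.0, 70.0])

def ln2(x):
    """Simple 2-parameter log-normal transform: same for all months/models."""
    return np.log(x)

def ln3(x, tau):
    """3-parameter log-normal transform; tau is a location (lower-bound) parameter."""
    return np.log(x - tau)

z2 = ln2(flows)
z3 = ln3(flows, tau=40.0)

# Standardizing after the LN-3 transform acts as a per-month bias correction,
# leaving only the interannual signal (zero mean, unit variance).
z3_bc = (z3 - z3.mean()) / z3.std()
```

This makes the trade-off concrete: LN-2 preserves the month-to-month hydrograph shape because one transform applies everywhere, while the per-month LN-3 plus standardization discards that shape and keeps only interannual variability.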

14 Test Case: Salmon River

15 UW West-Wide Forecast Ensemble Models: VIC – Variable Infiltration Capacity (UW); SAC – Sacramento/SNOW17 model (National Weather Service); NOAH – NCEP, OSU, Army, and NWS Hydrology Lab

Model   Energy Balance   Snow Bands
VIC     Yes              Yes
SAC     No               Yes
NOAH    Yes              No

Calibration parameters from NLDAS 1/8-degree grid (Mitchell et al., 2004) – no further calibration performed Meteorological Inputs: NCEP/NCAR reanalysis, 1950-1999

16 Individual Model Results

17 [Figures: monthly average flow and monthly RMSE for each model vs. observations.]

18 Individual Model Results VIC is clearly the best –Captures base flow, timing of peak flow –Lowest RMSE except for June –Magnitude of peak flow a little low SAC is second –No base flow; peak flow is early but magnitude is close to observed NOAH is last –No base flow; peak flow is 1-2 months early and far too small (high evaporation) We expect VIC to get the highest weight, followed by SAC

19 Comparison of Two Methods Attempt to find method that gives best fit to training data 1950-1999 Method 1 –monthly weights –constant LN-2 transformation –no bias correction –preserves month-to-month hydrograph shape –we expect VIC to get the highest weight here Method 2 –monthly weights –monthly LN-3 transformation –monthly bias correction –destroys month-to-month hydrograph shape –only interannual signals matter

20 Method 1

21

22

23

24 Good fit to training set We only see a very slight improvement over the VIC model alone Surely there is some useful information in the SAC model?

25 Method 2 June September

26 Method 2 June Flow, 1975-1995 September Flow, 1975-1995

27 Method 2 June LN-3 Transformed Flow, 1975-1995 September LN-3 Transformed Flow, 1975-1995

28 Method 2 June LN-3 & Bias-Corrected Flow, 1975-1995 Sept LN-3 & Bias-Corrected Flow, 1975-1995

29 Method 2

30

31

32

33 Applying distinct bias corrections for each (model, month) gives a much more accurate fit to the training set Surprisingly, SAC was favored over VIC SAC has a stronger interannual correlation with observations than VIC does

34 What have we done to achieve this improvement? Our post-processing contributes a substantial portion of the signal to the ensemble mean The only signal supplied by the models is the interannual variation in the flows of each month

35 Consequences? Our LN-3 parameters and bias corrections are the best fit to the training data set (1950-1999). Will they remain a good fit? Can we infer anything about physical processes from the ensemble weights? Is it OK to use a model that gets the right answer for reasons we don’t understand?

36 Next Steps Validation: try forecasting years 2000-2005 Can we improve the ensemble prediction without forcing a good fit? Try removing the “Gaussian” requirement from the weighting process: non-parametric comparisons? Do these models get the same relative weights in other basins? Why does SAC have such a high correlation with observed interannual variability, yet low correlation with month-to-month variability? What physical process is responsible? Should we replace SAC and NOAH with better-fitting models?

37 References Wood, A.W., E.P. Maurer, A. Kumar, and D.P. Lettenmaier, 2002. Long Range Experimental Hydrologic Forecasting for the Eastern U.S. J. Geophys. Res., 107(D20). Raftery, A.E., F. Balabdaoui, T. Gneiting, and M. Polakowski, 2005. Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Monthly Weather Review, 133, 1155-1174.

38

39 Multi-Model Ensembles Bayesian Model Averaging (BMA) (Raftery et al., 2005) Estimate observations (y) via a linear combination of model forecasts (f_k): p(y | f_1, …, f_K) = Σ w_k g(y | f_k) where p(y | f_1, …, f_K) = probability of a given observation based on the set of model forecasts, w_k = model weight, related to correlation with observations, and g(y | f_k) = probability of a given observation based on the forecast from model k alone

40 Multi-Model Ensembles g(y | f_k) assumed to be Gaussian ~ N(f_k, σ_k) BMA algorithm estimates w_k and σ_k The combination p(y | f_1, …, f_K) is itself a distribution → can estimate exceedance probabilities
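The BMA predictive mixture and an exceedance probability can be sketched directly from the formulas on slides 39–40. The weights, forecasts f_k, and spreads σ_k below are invented for a three-model example:

```python
import math

# Hypothetical BMA components for three models
weights = [0.6, 0.3, 0.1]      # w_k (sum to 1)
means   = [80.0, 70.0, 50.0]   # f_k, this month's forecasts (m^3/s)
sigmas  = [10.0, 12.0, 15.0]   # σ_k, per-model predictive spreads

def mixture_pdf(y):
    """p(y | f_1..f_K) = Σ w_k N(y; f_k, σ_k)."""
    return sum(w * math.exp(-0.5 * ((y - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, m, s in zip(weights, means, sigmas))

def exceedance(y):
    """P(flow > y): one minus the mixture CDF (Gaussian CDF of each member)."""
    cdf = sum(w * 0.5 * (1.0 + math.erf((y - m) / (s * math.sqrt(2.0))))
              for w, m, s in zip(weights, means, sigmas))
    return 1.0 - cdf

print(exceedance(75.0))
```

Because the mixture CDF is a weighted sum of Gaussian CDFs, exceedance probabilities for any flow threshold fall out for free, which is what makes the BMA ensemble usable for probabilistic forecasts.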

41 Scatter plots here – one for high weight, one for low weight

42 Describe example: 3 models, with weights and sigmas At a certain time, model 1 predicts a value f_1, model 2 … f_2, etc.

43 BMA - example

44

45

46

47 Trade-offs Increased accuracy under current conditions Insight into strengths/weaknesses of each model Insight into which processes dominate in which areas vs. Obscuring of physical processes by post-processing of results Diminished accuracy under changing conditions

48 Bias Correction Adjust model mean and/or variance to match observations Constant, seasonal, or monthly corrections More accurate prediction, but Influence of physical processes is obscured If climate changes, bias correction may need to be updated
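A minimal sketch of the mean/variance adjustment described on this slide, using invented simulated and observed flows for one calendar month (real corrections would be fit per model and per month over the training period):

```python
import numpy as np

# Hypothetical simulated and observed monthly flows (m^3/s) for one month
sim = np.array([30.0, 45.0, 25.0, 60.0, 40.0])
obs = np.array([35.0, 55.0, 30.0, 72.0, 48.0])

# Rescale the simulation so its mean and variance match the observations
# over the training period (simple moment-matching bias correction).
corrected = (sim - sim.mean()) / sim.std() * obs.std() + obs.mean()

print(corrected)
```

Note the caveat from the slide: the correction factors are locked to the training climate, so if the climate changes, the fitted means and variances may no longer apply.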

