A medium range probabilistic quantitative hydrologic forecast system for global application
Ph.D. defense, Nathalie Voisin, March 29, 2010
Civil and Environmental Engineering, University of Washington
Committee: Drs. Dennis Lettenmaier (Chair, UW), Greg Hakim (GSR, UW), Stephen Burges (UW), Richard Palmer (UW, now at UMass Amherst), John Schaake (consultant to NWS), Eric Wood (Princeton U.), Ad de Roo (EU-JRC).
Background
Major recent floods: Limpopo 2000, South Asia 2000, Bangladesh 2004, Horn of Africa 2004, Zambezi 2001, 2007, 2008
Objective
Develop a medium range probabilistic quantitative hydrologic forecast system applicable globally:
▫Using only (quasi-) globally available tools:
- Global Circulation Model ensemble weather forecasts
- High spatial resolution satellite-based remote sensing
▫Using a semi-distributed hydrology model applicable to different basin sizes, not basin dependent: flow forecasts at several locations within large ungauged basins
▫Daily time steps, up to 2 weeks lead time
▫Reliable and accurate for potential real time decisions in areas with no flood warning system, sparse in situ observations (radars, gauge stations, etc.), or no regional atmospheric model
Forecast scheme
(Diagram: the forecast scheme, with the initial state and the components developed in Chapters 1, 2, and 3.)
Outline
1. Preliminary study: evaluation of global precipitation products
Voisin, N., A.W. Wood, and D.P. Lettenmaier, 2008: Evaluation of precipitation products for global hydrological prediction. J. of Hydrometeorology, 9(3), 388-407.
2. Calibration and downscaling methods for probabilistic quantitative precipitation forecasts
Voisin, N., J.C. Schaake, and D.P. Lettenmaier, 2010: Calibration and downscaling methods for quantitative ensemble precipitation forecasts. Weather and Forecasting (in review).
3. Evaluation of the medium range probabilistic hydrologic forecast system
Voisin, N., F. Pappenberger, D.P. Lettenmaier, R. Buizza, and J.C. Schaake, 2010: Application of a medium range probabilistic hydrological forecast system to the Ohio River Basin. Weather and Forecasting (to be submitted).
4. Conclusions
Chapter 1 - Evaluation of precipitation products for global hydrological prediction
Science questions:
i) How do satellite precipitation products and weather model precipitation analysis fields compare with gauge-based products over large river basins?
ii) What are the impacts of those precipitation product differences on simulated hydrologic variables?
Experimental design
VIC hydrology model, spun up with observed Adam et al. (2006) (A2006) 1979-96 precipitation, temperature, and wind.
1997-99 precipitation forcings compared: A2006 (observed precip.), GPCP 1dd (satellite precip.), ERA-40 (GCM precip.), each combined with A2006 observed temperature and wind.
(Diagram: each forcing drives 1997-99 simulated hydrology variables, which are then intercompared.)
Results - precipitation differences globally
(Maps: 1997-99 mean daily precipitation, mm/day, and 1997-99 mean daily precipitation relative error, %.)
Results - effect on simulated hydrology variables at the basin scale
(Table comparing GPCP 1dd, ERA-40, A2006, and GRDC observations; P: precipitation (mm/year), R: runoff (mm/year), D: discharge (cms), DO: observed discharge.)
What is the truth in regions with a sparse in-situ gauge station network?
Conclusions - 1
Science questions:
- Runoff (and flow) are most sensitive (more so than evapotranspiration)
- Largest disagreement in areas with a sparse gauge station network (Africa)
Implications for the flood forecast scheme: use the satellite-based quasi-global fine spatial resolution precipitation dataset (TRMM research product, TMPA):
- For spinning up the hydrology model
- As the reference for calibrating and downscaling precipitation forecasts
- Caveat: apparent biases over some portions of the global land areas
Chapter 2 - Calibration and downscaling methods
Chapter 2 - Calibration and downscaling methods
Science questions:
1. What statistical techniques are most appropriate for calibrating and downscaling global ensemble weather forecasts for daily hydrologic forecasting, with realistic precipitation patterns that improve the original ensemble forecast skill?
2. Are the statistical techniques more skillful when using high quality gridded station precipitation observations for local application instead of satellite-based precipitation (TMPA)?
→ Ohio River Basin, 2002-2006 period
Meteorological vs. hydrological approaches to calibrating and downscaling weather forecasts
Meteorological approaches:
- Spatially distributed forecasts
- Large domains
- Probabilistic forecasts (thresholds)
- e.g., Model Output Statistics (Glahn and Lowry 1972), analog methods (Hamill et al. 2006), Bayesian Model Averaging (Sloughter et al. 2007, Berrocal et al. 2008)
Hydrological approaches:
- Probabilistic and quantitative forecasts
- Spatially distributed or mean areal values
- Usually made for one basin at a time
- e.g., Probabilistic Quantitative Precipitation Estimation (Seo et al. 2000), NWS EPP (Schaake et al. 2007), West Wide Seasonal Flow Forecast approach (Wood and Lettenmaier 2006)
Adaptations of 2 approaches for global application
2 methods were adapted:
▫Bias correction and statistical downscaling (BCSD) for seasonal flow prediction (Wood et al. 2002):
- Daily time scale
- Global application, i.e. not basin dependent
- Application to probabilistic weather forecasts
▫Analog methods (Hamill and Whitaker 2006):
- Quantitative forecasts
Adaptations to the BCSD approach
Used for seasonal flow forecasts (Wood et al. 2002, Wood and Lettenmaier 2006):
▫bias corrected monthly deterministic forecast
▫use of ESP to create an ensemble (Schaake Shuffle type) - basin dependent
Adaptations:
→ Daily time scale: daily bias correction, precipitation intermittency
→ Global application: spatial disaggregation performed on a cell-by-cell basis
→ Application to ensemble precipitation forecasts:
▫Each member is calibrated and downscaled independently
▫Schaake Shuffle used to recreate an observed spatio-temporal rank structure
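The daily bias correction step of BCSD can be illustrated with empirical quantile mapping: a forecast value is assigned its non-exceedance probability in the forecast climatology, then replaced by the same quantile of the observed (here TMPA) climatology. This is a minimal sketch, not the dissertation's implementation; the function name is illustrative, and handling of precipitation intermittency (matching dry-day frequencies) is left out.

```python
import numpy as np

def quantile_map(fcst_value, fcst_climo, obs_climo):
    """Empirical quantile mapping for one grid cell and calendar window.

    fcst_value : forecast value to bias correct
    fcst_climo : retrospective forecast values (the forecast climatology)
    obs_climo  : observed values for the same window (the reference climatology)
    """
    fcst_climo = np.sort(np.asarray(fcst_climo, dtype=float))
    obs_climo = np.sort(np.asarray(obs_climo, dtype=float))
    # Non-exceedance probability of the forecast value in its own climatology
    p = np.searchsorted(fcst_climo, fcst_value, side="right") / len(fcst_climo)
    p = np.clip(p, 1e-6, 1 - 1e-6)
    # Same quantile of the observed climatology
    return float(np.quantile(obs_climo, p))
```

For example, if the forecast climatology is systematically twice the observed climatology, a forecast of 100 maps back to roughly 50. Each ensemble member would be corrected independently this way before spatial disaggregation.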
Schaake Shuffle - constructing a spatio-temporal rank structure between variables
- Each ensemble forecast is 5-dimensional: Fcst[longitude (x), latitude (y), lead time (10 days), variable (precip, Tmin, Tmax, wind), member (m)]
- A realistic rank structure is required for hydrologic simulations
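The core reordering step can be sketched as follows: at each grid cell, lead time, and variable, the sorted ensemble values are reassigned so their ranks match the ranks of observations on a common set of historical dates (one date per member). Reusing the same dates everywhere transfers the observed space-time-variable rank dependence to the ensemble. A minimal sketch under those assumptions:

```python
import numpy as np

def schaake_shuffle(fcst, hist_obs):
    """Reorder ensemble members to match the rank structure of
    historical observations (Schaake Shuffle, one cell/lead/variable).

    fcst     : (n_members,) forecast values
    hist_obs : (n_members,) observed values on the selected historical dates
    Returns the shuffled forecast: the i-th historical date receives the
    forecast member whose rank equals the rank of hist_obs[i].
    """
    fcst_sorted = np.sort(np.asarray(fcst, dtype=float))
    # Rank of each historical observation (0 = smallest), via double argsort
    obs_ranks = np.argsort(np.argsort(hist_obs))
    return fcst_sorted[obs_ranks]
```

For instance, members (5, 1, 3) shuffled against historical observations (10, 30, 20) become (1, 5, 3): the date with the smallest observation gets the smallest member.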
Adaptations of the analog method (Hamill and Whitaker 2006)
(Diagram: for forecast day n at 1 degree, search a retrospective forecast dataset within +/- 45 days around day n in prior years over a 5 degree window; the corresponding observation (TMPA, 0.25 degree) of the selected analog provides the downscaled forecast for day n.)
3 methods for choosing the analog:
- Closest in terms of RMSD, for each ensemble member (analog RMSD)
- 15 closest in terms of RMSD to the ensemble mean forecast (analog RMSDmean)
- Closest in terms of rank, for each ensemble member (analog rank)
Adaptation of the analog method (Hamill and Whitaker 2006)
▫Choose an analog for the entire domain (Maurer et al. 2008): entire US, or the globe
- Ensures the spatial rank structure
- Needs a long dataset of retrospective forecast-observation pairs
▫Moving spatial window (Hamill and Whitaker 2006):
- 5x5 degree window (25 grid points)
- Choose the analog based on ΣRMSD, or Σ(Δrank)
- The date of the analog is assigned to the center grid point
▫Schaake Shuffle
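The RMSD-based selection over a moving window can be sketched as below: rank candidate dates by root-mean-square difference between the current forecast and each retrospective forecast over the window, and keep the best dates (15 for analog RMSDmean). The observations on those dates then form the downscaled ensemble. A simplified illustration, with hypothetical names:

```python
import numpy as np

def best_analog_dates(fcst_window, retro_fcsts, retro_dates, n_analogs=15):
    """Select the retrospective dates whose forecasts are closest (RMSD)
    to the current forecast over a moving spatial window.

    fcst_window : (ny, nx) current (e.g. ensemble mean) forecast over the window
    retro_fcsts : (n_dates, ny, nx) retrospective forecasts, same window
    retro_dates : list of n_dates candidate dates (+/- 45 days around day n)
    """
    # RMSD between the current forecast and each candidate, over the window
    rmsd = np.sqrt(np.mean((retro_fcsts - fcst_window) ** 2, axis=(1, 2)))
    best = np.argsort(rmsd)[:n_analogs]
    return [retro_dates[i] for i in best]
```

The dates returned for the window are assigned to its center grid point; sliding the window across the domain yields spatially varying analog dates.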
Evaluation of the calibration and downscaling approaches
1. Skill measures:
▫Deterministic value (ensemble mean): bias, RMSE, correlation
▫Probabilistic quantitative forecasts: rank histograms, continuous rank probability scores, relative operating characteristic (ROC)
▫Spatial coherence
2. Isolate the raw forecast skill from the downscaling methods' skill:
▫Forecast categories (not observation based, except for ROC)
▫Use interpolated forecasts as the benchmark
3. Appropriate for different time and spatial scales:
▫Evaluation of daily forecasts at both 0.25 and 1-degree resolution
▫Evaluation of 5-day accumulation forecasts
Verification of precipitation forecasts
Annual verification, Ohio River Basin, 2002-2006 period: 1826 10-day forecasts, 848 0.25° grid cells
Continuous rank probability score (CRPS)
A probabilistic quantitative forecast verification measure: the difference between the predicted and observed cumulative distribution functions (resolution, reliability, predictability). For one forecast (grid cell, lead time, t):
CRPS = ∫ [F_fcst(x) − F_obs(x)]² dx
where F_fcst steps by 1/Nmember at each sorted ensemble value d₁ … d_Nmember, and F_obs is the step function at the observed value.
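For a finite ensemble, the CRPS integral above has a closed form (the energy-score identity): CRPS = E|X − y| − ½ E|X − X′|, where X, X′ are independent draws from the ensemble and y is the observation. A minimal sketch of that computation:

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of an ensemble forecast against one observation, via the
    identity CRPS = E|X - obs| - 0.5 * E|X - X'| (equivalent to the
    integral of the squared CDF difference)."""
    x = np.asarray(members, dtype=float)
    # Mean absolute error of the members against the observation
    term1 = np.mean(np.abs(x - obs))
    # Mean absolute pairwise spread of the ensemble
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2
```

Note the two limiting cases: for a single member the CRPS reduces to the absolute error (e.g. member 3, observation 5 gives 2), and a perfect sharp ensemble scores 0. In practice the score would be averaged over grid cells, lead times, and forecast dates.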
Reliability of the ensemble spread
Rank histograms (Talagrand diagram)
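A rank histogram is built by tallying, over many forecasts, where the observation falls within the sorted ensemble; a flat histogram indicates a reliable spread. A minimal sketch (ranks run 0 to Nmember; tie handling is ignored for simplicity):

```python
import numpy as np

def rank_histogram(ensemble_fcsts, observations):
    """Tally the rank of each observation within its ensemble forecast.

    ensemble_fcsts : (n_forecasts, n_members) array
    observations   : (n_forecasts,) array
    Rank 0: obs below all members; rank n_members: obs above all members.
    """
    n_members = ensemble_fcsts.shape[1]
    counts = np.zeros(n_members + 1, dtype=int)
    for members, obs in zip(ensemble_fcsts, observations):
        rank = int(np.sum(members < obs))  # members below the observation
        counts[rank] += 1
    return counts
```

A U-shaped result would indicate an under-dispersive ensemble, an inverse-U an over-dispersive one, and an asymmetric histogram a systematic bias, matching the interpretation on the verification slides.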
Spatial coherence
Conclusions - 2
Science questions: two approaches to calibrate and downscale probabilistic quantitative precipitation forecasts were developed and evaluated:
- Not basin dependent; use only globally available resources
- Improve the skill of the raw forecasts: ensemble reliability, some accuracy
Not shown: ROC (resolution and discrimination), sensitivity to the size of the moving spatial window, seasonal analysis, use of a gridded gauge station precipitation dataset in lieu of TMPA
Implications for the flood forecast scheme: → implement the analog RMSDmean method
- Most realistic precipitation patterns
- Improved ensemble spread reliability and bias, best predictability
Chapter 3 - Application of a probabilistic quantitative hydrological forecast system to the Ohio River basin
Chapter 3 - Application of a probabilistic quantitative hydrological forecast system to the Ohio River basin
Science questions:
1. What is the forecast skill of the system?
2. How do errors in the downscaled weather forecasts translate into hydrologic forecast errors?
3. Is the forecast skill different for basins of different sizes?
Calibration of the hydrology and routing models
- Differences between TMPA and observed precipitation
- Daily flow fluctuations due to navigation, flood control, and hydropower generation
- Uncertainties in the VIC and routing models' physical processes, structure, and parameters
→ Use "simulated observed flow" as the reference
→ Focus on weather forecast errors
Experimental design (2003-2007 period, Ohio River basin)
Reference (substitute for observations, and climatology): VIC driven daily at 0.25 degree by ECMWF analysis fields with TMPA precipitation; the simulated runoff, soil moisture, and SWE substitute for observed runoff, and the routed simulated daily flow substitutes for flow observations.
Forecast configurations, each run through the 15-day VIC simulation and the routing model from the reference initial hydrologic state and initial flow conditions:
1. Forecast - Interpolation: 15-member, 15-day daily forecast downscaled to 0.25 degree (days 1-10: interpolated ECMWF EPS precipitation; days 11-15: zero precipitation)
2. Forecast - Analog RMSDmean: as above, but days 1-10 use precipitation downscaled with the analog RMSDmean method
3. Forecast - Climatology: reference climatology used as the ensemble forecast
4. Forecast - Null precipitation: deterministic 15-day forecast with zero precipitation on days 1-15
Each configuration yields a 15-member, 15-day distributed runoff forecast and flow forecasts at 4 stations with different drainage areas.
Verification of ensemble runoff forecasts
Ohio River Basin, 2003-2007: 1826 15-day forecasts (10-day forecast + 5 days of zero precipitation), 848 0.25° grid cells
Ensemble flow forecast verification
Ensemble flow forecast verification
Ensemble reliability at Metropolis and Elizabeth
Conclusions - 3
A probabilistic quantitative hydrologic forecast system for global application was evaluated:
- Spatially distributed probabilistic runoff forecasts: through calibration of the ensemble weather forecasts, the runoff forecasts were calibrated, with about the same skill as the spatially distributed ensemble precipitation forecasts:
 - Improved mean errors and ensemble reliability (forecast probability)
 - Maintained predictability and resolution
 - Fine spatial resolution patterns
- Probabilistic flow forecasts:
 - Flow forecasts calibrated: mean errors and forecast probabilities
 - Flow forecast skill extended in lead time due to baseflow persistence (initial conditions)
Conclusions
Applications for the probabilistic quantitative hydrologic forecast system:
- Global spatially distributed probabilistic hydrologic forecasts:
 - Potential for landslide hazard forecasting (soil moisture forecasts)
 - Potential for flood extent mapping forecasts (coarse spatial resolution)
- Probabilistic flow forecasts:
 - Flood forecasts/warnings in large ungauged basins
 - Skill for 1-12+ day forecasts depending on the drainage area at the flow forecast location
Acknowledgments
Dennis Lettenmaier, supervisor
Committee members: John Schaake, Eric Wood, Stephen Burges, Richard Palmer, Ad de Roo, Greg Hakim
Collaborators: Andrew Wood (NWS), Florian Pappenberger and Roberto Buizza (ECMWF), Jutta Thielen (EU-JRC)
Colleagues: Surface Hydrology Group, Elizabeth Clark and Ted Bohn
Family and friends, … and the audience
Thank you - Questions?
Ensemble forecast verification: relative operating characteristic (ROC)
Plot hit rate vs. false alarm rate for a set of increasing probability thresholds used to make the yes/no decision.
- Diagonal = no skill; skill if above the 1:1 line
- Measures resolution
- A biased forecast may still have good discrimination
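The ROC points can be computed from 2x2 contingency tables: at each probability threshold, the forecast issues a yes/no decision, and the hit rate and false alarm rate are tallied against observed occurrences. A minimal sketch with illustrative names:

```python
import numpy as np

def roc_points(fcst_probs, obs_events, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """(false alarm rate, hit rate) at each probability threshold.

    fcst_probs : forecast probabilities of the event (e.g. fraction of
                 ensemble members exceeding the event magnitude)
    obs_events : 1 if the event was observed, else 0
    """
    probs = np.asarray(fcst_probs, dtype=float)
    obs = np.asarray(obs_events).astype(bool)
    points = []
    for t in thresholds:
        yes = probs >= t  # yes/no decision at this threshold
        hits = np.sum(yes & obs)
        misses = np.sum(~yes & obs)
        false_alarms = np.sum(yes & ~obs)
        correct_negs = np.sum(~yes & ~obs)
        hit_rate = hits / (hits + misses) if hits + misses else 0.0
        far = false_alarms / (false_alarms + correct_negs) if false_alarms + correct_negs else 0.0
        points.append((float(far), float(hit_rate)))
    return points
```

Plotting these points traces the ROC curve; points above the 1:1 diagonal indicate skill over a no-information forecast.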
Ensemble forecast verification
Ensemble reliability:
Reliability plot (probabilistic forecasts):
▫Choose an event (event specific)
▫Each time the event was forecast with a given probability (20%, 40%, etc.), how often did it happen (observation >= chosen event)?
▫Requires a sharpness diagram to give confidence in each point; should lie on the 1:1 line.
Talagrand diagram / rank histogram (probabilistic quantitative forecasts):
▫Give a rank to the observation with respect to the ensemble forecast (0 if the observation is below all ensemble members, Nmember if above all)
▫Uniform if the ensemble spread is reliable; U-shaped (inverse U-shaped) if the ensemble spread is too small (too large); asymmetric if there is a systematic bias.
Measures
According to Murphy and Winkler (1987), a forecast system has five major attributes that can be assessed statistically:
(1) Reliability: the average degree to which the frequency of occurrence of the event agrees with the forecast probability.
(2) Resolution: the ability of the forecast system to systematically distinguish between subsets of the sample (set of events) with different frequencies of occurrence.
(3) Sharpness: the distribution of forecast probabilities over a verification sample; if probabilities near 0 and 1 (100%) occur often, the forecast is said to be sharp.
(4) Discrimination: the ability of the forecast system to distinguish between occurrences and non-occurrences of the event by forecasting a different set of probabilities before occurrences than before non-occurrences.
(5) Association: the strength of the linear relationship between the forecasts and the observations, i.e. the ensemble spread and forecast error.