The predictability of streamflow across the contiguous USA
Key staff: Andy Newman, Kevin Sampson, Jason Craig, and Tom Hopson
Outline
Problem definition and progress toward project objectives
System development
– Data preparation & model calibration
– Software for probabilistic QPE and statistical downscaling
Results
– Regional variations in model performance
– Regional variations in predictability
Summary
System overview (flattened slide schematic):
– Meteorological analyses (ensemble QPE, etc., from STEP) feed hydrologic data assimilation and a 3D, physics-based hydrologic model (from WRF-Hydro) to produce local-scale hydrologic analyses
– Short-term forecasts (merged radar & high-resolution NWP, from STEP), medium-range forecasts (global NWP models, from outside NCAR, via probabilistic downscaling), and seasonal forecasts (statistical and dynamical, from outside NCAR, via conditioned weather generators) are combined with forecast blending and statistical post-processing methods into local-scale probabilistic meteorological forecasts
– These forecasts drive the hydrologic model (3D, physics-based, from WRF-Hydro) to produce local-scale probabilistic hydrologic forecasts
Progress toward project objectives
1. Assess performance of current hydrologic models used by the NWS, and assess the dependence of model performance on physical characteristics of the basins (climate, vegetation, soils, topography) and on the reliability of quantitative precipitation estimates (e.g., station density). Status: established forcing-response series for ~600 basins; calibrated Snow-17/SAC; evaluated regional variations in model performance.
2. Assess the relative importance of hydrologic and meteorological/climatological information in determining forecast skill. Status: evaluating regional variations in forecast skill.
3. Conduct research to improve estimates of uncertainty, during model spin-up and during the forecast period. Status: developed software for probabilistic QPE and probabilistic downscaling.
4. Conduct research to reduce forecast uncertainty (better hydrologic models; better weather forecasts and climate outlooks; adoption of hydrologic data assimilation and statistical post-processing methods). Status: limited attention given to these tasks, mostly through leverage with other projects.
5. Examine the impact of different sources of uncertainty in water management decisions. Status: not started.
Basin Selection
● Gages selected from the Hydro-Climatic Data Network: minimal human influence on the watercourse and a long period of record
● Selected basins cover a large range of conditions, from energy-limited to water-limited (map colors denote annual precipitation, mm)
Forcing Data
Selected Daymet as forcing (http://daymet.ornl.gov/)
o 1 km gridded product over CONUS at daily resolution
o Data are readily available and mimic an operationally available dataset
o Derived only from cooperative observer and SNOTEL observations
o Primary variables: Tmax, Tmin, and precipitation (PRISM-type elevation correction)
o Also provides downward shortwave radiation, day length, vapor pressure, and snow water equivalent (internal Daymet model estimate, based on MTCLIM)
Basin and HRU averages
o Use the USGS hydrologic response unit (HRU) geospatial database and USGS-developed pre-processing scripts to define basin areas
o Submit to the USGS Geo Data Portal to subset Daymet data and generate areally averaged forcing data for the basins
o Basin and HRU averages generated for the period 1 Jan 1980 - 31 Dec 2010 (31 years)
Potential evapotranspiration estimated using the Priestley-Taylor (PT) equation (no wind data are available from Daymet)
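The Priestley-Taylor PET estimate mentioned above can be sketched as follows. This is a minimal illustration using textbook defaults (alpha = 1.26, a near-sea-level psychrometric constant), not the project's actual implementation or coefficients:

```python
import math

def priestley_taylor_pet(t_mean_c, rn_mj_m2_day, g_mj_m2_day=0.0, alpha=1.26):
    """Daily PET (mm/day) via the Priestley-Taylor equation.

    t_mean_c: mean air temperature (deg C)
    rn_mj_m2_day: net radiation (MJ m-2 day-1)
    g_mj_m2_day: soil heat flux, often taken as ~0 at the daily scale
    alpha: Priestley-Taylor coefficient (1.26 is the common default)
    """
    lam = 2.45     # latent heat of vaporization, MJ kg-1
    gamma = 0.066  # psychrometric constant, kPa degC-1 (near sea level)
    # Saturation vapor pressure (kPa) and slope of the saturation curve
    es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))
    delta = 4098.0 * es / (t_mean_c + 237.3) ** 2
    pet = alpha * (delta / (delta + gamma)) * (rn_mj_m2_day - g_mj_m2_day) / lam
    return max(pet, 0.0)  # PET cannot be negative
```

For example, 20 degC and 15 MJ m-2 day-1 of net radiation yields roughly 5.3 mm/day, a plausible summer value.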
Basin Forcing Data Generation
Generate forcing data for:
1. the entire basin (lump)
2. USGS-defined HRUs (HRU)
3. elevation bands (still in progress)
Model and calibration procedure
● Model: National Weather Service operational Snow-17 snow model and Sacramento soil moisture accounting model (Snow-17/SAC)
● Calibration: coupled to the shuffled complex evolution (SCE) global optimization routine
– Lump calibration: initial calibration using the RMSE of daily streamflow as the objective function
– HRU and elevation-band calibration: the spatial regularization methodology of Pokhrel & Gupta (2010) reduces the number of calibrated parameters; regularization coefficients m, b, a are calibrated for each final parameter estimate ϕ_j given its a priori parameter θ_j
– NWS-generated a priori values are used for MFMAX, MFMIN, UADJ (Snow-17) and UZTWM, UZFWM, UZK, REXP, LZTWM, LZFSM, LZFPM, LZSK, LZPK, PFREE, ZPERC (SAC)
– The remaining lump-calibrated parameters are calibrated directly (uniform across HRUs, so no multipliers are needed)
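The calibration objective (RMSE of daily streamflow) and the skill metric used throughout these slides (NSE) can be sketched in a few lines. A minimal NumPy sketch, not the project's actual SCE calibration code:

```python
import numpy as np

def rmse(sim, obs):
    """Root-mean-square error of daily streamflow (the calibration objective)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the obs-mean model."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(1.0 - np.sum((sim - obs) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))
```

SCE then searches parameter space to minimize `rmse(model(params), obs)`; NSE is what the following result slides report.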
Lump Calibration Results
– Areas with seasonal snow perform best; more arid regions and the Southeast perform worst
– 552 gages considered currently; Pacific Northwest still pending (~95 more gages)
Lump Calibration Results
– Validation scores are generally lower than calibration scores
– Least decline in the Upper Colorado, California, and Rio Grande basins (mountain snowpack)
– Most decline in the Northern & Great Plains (Regions 9-12)
Lump Calibration Results
– NSE decreases by 0.1-0.2 for most gages from the calibration phase to the validation phase
Lump Calibration Results: other hydrologic signatures
– The model nearly always under-predicts high flows during the calibration phase
– Less under-prediction during validation
Lump Calibration Results: other hydrologic signatures
– General over-prediction of low flows
– More frequent under-prediction in more arid and seasonal-snowpack basins
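The high- and low-flow biases reported here can be quantified with a signature-style diagnostic. A sketch in the spirit of percent-bias-over-flow-extremes metrics (e.g., Yilmaz et al. 2008); the exact metric and percentile thresholds used in the study are assumptions here:

```python
import numpy as np

def percent_bias_flow_subset(sim, obs, frac=0.02, high=True):
    """Percent bias over the highest (or lowest) `frac` of observed flows.

    Negative values over high flows indicate under-prediction of peaks;
    positive values over low flows indicate over-prediction of baseflow.
    """
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    order = np.argsort(obs)
    if high:
        order = order[::-1]                      # largest observed flows first
    n = max(1, int(round(frac * obs.size)))      # number of days in the subset
    idx = order[:n]
    return float(100.0 * (sim[idx].sum() - obs[idx].sum()) / obs[idx].sum())
```

A uniformly 10%-low simulation, for example, shows a -10% bias over both tails.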
Lump Calibration Results: other hydrologic signatures
– The slope of the flow duration curve (FDC) describes the flashiness of runoff
– The modeled FDC slope is too high in the east and too low in the west
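One common way to compute the FDC-slope signature is the log-space slope between two exceedance percentiles. The 33%/66% pair below is a conventional choice, not necessarily the definition used in this study:

```python
import numpy as np

def fdc_slope(q, lo=0.33, hi=0.66):
    """Slope of the flow duration curve in log space between two
    exceedance probabilities. Steeper slope = flashier runoff."""
    q = np.sort(np.asarray(q, float))[::-1]  # descending = exceedance order
    n = q.size
    q_lo = q[int(lo * n)]                    # flow exceeded `lo` of the time
    q_hi = q[int(hi * n)]                    # flow exceeded `hi` of the time
    return float((np.log(q_lo) - np.log(q_hi)) / (hi - lo))
```

A constant-flow series has slope 0; a rapidly decaying (flashy) series has a large positive slope, which is the sense in which "too high in the east, too low in the west" should be read.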
Does the optimization routine find the global optimum?
– 10 seeds per basin
– For 22% of basins, the worst-seed NSE is more than 0.2 below the best-seed NSE
– The best calibration is not necessarily the best validation
Initial HRU Results
– HRU calibration performs much better in regions with localized heavy rain events
– Very preliminary; needs much rechecking
Software for probabilistic QPE and statistical downscaling
General approach
– Estimate the CDF of precipitation (locally weighted logistic regression to estimate precipitation occurrence; locally weighted ordinary least squares to estimate precipitation amounts)
– Ensembles generated by sampling from the CDFs
Probabilistic QPE, based on Clark and Slater (JHM 2006)
– Topographic attributes at station locations used to estimate the spatial variability in precipitation
– Predictor variables: topographic attributes at station locations (e.g., lat, lon, elev); dependent variable: station precipitation
– Precipitation at a grid cell estimated using regression coefficients from the stations
Probabilistic statistical downscaling, based on Clark and Hay (JHM 2004) and Gangopadhyay et al. (WRR 2005)
– Atmospheric variables from global-scale NWP models used to estimate station precipitation
– Predictor variables: from the ESRL/PSD GEFS Reforecast Version 2; dependent variable: station precipitation
– Locally weighted regression provides a hybrid between regression-based and analog approaches
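The "locally weighted logistic regression" step for precipitation occurrence can be sketched as below: stations near the target grid cell get larger weights, and a weighted logistic fit gives the occurrence probability. This is a minimal NumPy illustration of the idea only; the Gaussian kernel, its bandwidth, and the IRLS solver are assumptions, not the project's actual software:

```python
import numpy as np

def gaussian_weights(dist, bandwidth):
    """Station weights that decay with distance from the target grid cell."""
    return np.exp(-(dist / bandwidth) ** 2)

def weighted_logistic_fit(X, y, w, iters=50, ridge=1e-4):
    """Weighted logistic regression via iteratively reweighted least squares.

    X: (n, p) station predictors (e.g., lat, lon, elev)
    y: (n,) 0/1 precipitation occurrence at the stations
    w: (n,) locality weights for this grid cell
    """
    A = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta = np.zeros(A.shape[1])
    for _ in range(iters):
        z = np.clip(A @ beta, -30, 30)         # guard against overflow
        p = 1.0 / (1.0 + np.exp(-z))
        W = w * p * (1.0 - p)
        H = (A * W[:, None]).T @ A + ridge * np.eye(A.shape[1])
        beta = beta + np.linalg.solve(H, A.T @ (w * (y - p)))
    return beta

def predict_prob(beta, x):
    """Occurrence probability at a location with predictor vector x."""
    x = np.concatenate([[1.0], np.atleast_1d(x)])
    return float(1.0 / (1.0 + np.exp(-np.clip(x @ beta, -30, 30))))
```

The amounts model follows the same locally weighted pattern with ordinary least squares, and ensemble members are drawn by sampling the resulting CDF.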
Example quantitative precipitation estimates: Reynolds Creek region, Idaho
Regional variability in model performance: temperature
– Performance decreases with increasing mean annual temperature (more snow in cold basins)
– Drier, warmer basins perform worse (discussed further later)
Regional variability in model performance: suitability of the objective function?
– Fraction of the skill score attributable to the worst x data points
– 50% of gages have ~35% of their error in these few points
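The "fraction of error in the worst x points" diagnostic used on this and the following slides can be sketched directly; a minimal version, assuming the error measure is the squared daily residual:

```python
import numpy as np

def worst_point_error_fraction(sim, obs, n_worst):
    """Fraction of the total squared error contributed by the n_worst days."""
    err2 = (np.asarray(sim, float) - np.asarray(obs, float)) ** 2
    worst = np.sort(err2)[::-1][:n_worst]  # largest squared errors first
    return float(worst.sum() / err2.sum())
```

A value near 1 for small `n_worst` means a handful of events dominate the RMSE/NSE objective, which is the concern these slides raise for flashy, event-driven regions.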
Regional variability in model performance: suitability of the objective function?
– The Great Plains, desert Southwest, and Appalachia tend to have a small number of events contributing a large portion of the error
– Note the scale change in the bottom figure
Regional variability in model performance: suitability of the objective function?
– The fraction of error in the worst x points decreases as NSE increases
Regional variability in model performance: suitability of the objective function?
– Colder gages have less error on their 100 worst days
– Cold, dry gages have the least error on the 100 worst days; warm, dry gages have the most
– The pattern is less clear when examining the dryness ratio (PET/P)
Regional variations in predictability
Research questions:
– What is the relative impact of errors in estimates of basin initial conditions versus errors in meteorological forecasts?
– How does this depend on the hydroclimatic regime, the forecast initialization time, the forecast lead time, etc.?
Example shown: percent differences in June-August streamflow obtained from April 1 SWE perturbations of +/- 25% of the average state (climatology)
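The SWE-perturbation experiment described above can be sketched as a simple percent-difference calculation around a model run. The `toy` model below is a hypothetical linear stand-in for a Snow-17/SAC simulation, for illustration only:

```python
def jja_pct_diff_from_swe_perturbation(run_model, swe_clim, frac=0.25):
    """Percent change in June-August streamflow when April 1 SWE is
    perturbed by +/- frac of its climatological mean.

    run_model: callable mapping April 1 SWE -> total JJA streamflow
    (here a toy stand-in for a full hydrologic simulation).
    """
    base = run_model(swe_clim)
    up = 100.0 * (run_model((1.0 + frac) * swe_clim) - base) / base
    dn = 100.0 * (run_model((1.0 - frac) * swe_clim) - base) / base
    return up, dn

# Hypothetical toy model: JJA flow = melt fraction of SWE plus a baseflow term.
toy = lambda swe: 0.8 * swe + 50.0
```

Basins where the resulting percent differences are large are ones where initial snowpack conditions, rather than forecast meteorology, dominate seasonal streamflow predictability.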
Summary
– Lump calibration shows that performance of the Snow-17/SAC model is best in snow-dominated basins; the calibrated model nearly always has a negative high-flow bias, a positive low-flow bias, and a negative bias in the slope of the FDC
– A small number of data points have a large impact on the results: 50% of gages have ~35% of their error in a few points; the High Plains (Regions 9-12) tend to have a small number of events contributing a large portion of the modeled error; cold (warm) dry gages have the least (most) error on the 100 worst days