
1 1/59 HL Distributed Hydrologic Modeling Mike Smith Victor Koren, Seann Reed, Ziya Zhang, Fekadu Moreda, Fan Lei, Zhengtao Cui, Dongjun Seo, Shuzheng Cong, John Schaake DSST Feb 24, 2006

2 2/59 Overview Today: –Goals, expectations, applicability –R&D. Next call: –Development strategy –Implementation –RFC experiences

3 3/59 Goals and Expectations Potential –History: lumped modeling took years and is a good example; we're the first to do operational distributed forecasting –Expectations: 'as good as or better than lumped'; limited experience with calibration; may not yet show (statistical) improvement in all cases due to errors and insufficient spatial variability of precipitation and basin features… but is the proper future direction! –New capabilities: gridded water balance values and variables, e.g., soil moisture; flash flood, e.g., statistical distributed; land use/land cover changes

4 4/59 Expectations: Effect of Data Errors and Modeling Scale [Figure: relative error Ek (%) vs. relative sub-basin scale A/Ak, from lumped to distributed, at noise levels of 0%, 25%, 50%, and 75%. Simulation error is computed against the fully distributed 'truth', i.e., the simulation from a 100 sub-basin model with clean data.] Data errors (noise) may mask the benefits of fine-scale modeling. In some cases, they may make the results worse than lumped simulations.

5 5/59 Rationale Scientific motivation –Finer scales > better results –Data availability Field requests NOAA Water Resources Program NIDIS Goals and Expectations

6 6/59 Applicability Distributed models applicable everywhere Issues –Data availability and quality needed to realize benefits –Parameterization –Calibration Goals and Expectations

7 7/59 Measures of Improvement Hydrographs at points (DMIP 1) –Guidance from RFC Spatial –Runoff –Soil moisture –Point to grid Goals and Expectations

8 8/59 HL R&D Strategy Conduct in-house work Collaborate with partners –U. Arizona, Penn St. University –DMIP 1, 2 –ETL Work closely with RFC prototypes –ABRFC, WGRFC: DMS 1.0 –MARFC, CBRFC: in-house Publish results NAS Review of AHPS Science Goal: produce models, tools, guidelines to improve field office operations

9 9/59 R&D Topics 1.Parameterization/calibration (with U. Arizona and Penn State U.) 2.Soil Moisture 3.Flash Flood Modeling: statistical distributed model, other 4.Snow (Snow-17 and energy budget models in HL- RDHM) 5.DMIP 2 6.Data assimilation (DJ Seo) 7.Links to FLDWAV 8.Impacts of spatial variability of precipitation 9.Data issues

10 10/59 1. Distributed Model Parameterization-Calibration Explore STATSGO data as it has national coverage (available in CAP) Explore SSURGO fine-scale soils data for initial SAC model parameters (deliverable: parameter data sets in CAP) Investigate auto-calibration techniques –HL: Simplified Line Search (SLS) with Koren's initial SAC estimates. –U. Arizona: Multi-objective techniques with HL-RDHM and Koren's initial SAC parameters. Continue expert-manual calibration Strategy for Sacramento Model
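For illustration, a minimal sketch of how soil water-retention properties can be mapped to initial SAC storage capacities. The general form UZTWM = (theta_fld - theta_wlt) * Zup follows the spirit of Koren's a priori approach, but the layer depths, the parameter subset, and the free-water split here are placeholder assumptions, not the operational regressions:

```python
import numpy as np

def sac_a_priori(theta_sat, theta_fld, theta_wlt, z_up_mm=400.0, z_max_mm=1500.0):
    """Illustrative mapping from volumetric soil properties (saturation,
    field capacity, wilting point) to SAC tension/free water capacities
    in mm. z_up_mm / z_max_mm are assumed upper-zone and total depths."""
    uztwm = (theta_fld - theta_wlt) * z_up_mm               # upper-zone tension water max
    uzfwm = (theta_sat - theta_fld) * z_up_mm               # upper-zone free water max
    lztwm = (theta_fld - theta_wlt) * (z_max_mm - z_up_mm)  # lower-zone tension water max
    lzfm = (theta_sat - theta_fld) * (z_max_mm - z_up_mm)   # lower-zone free water (supplemental
    return {"UZTWM": uztwm, "UZFWM": uzfwm,                 # + primary split not shown)
            "LZTWM": lztwm, "LZFM": lzfm}

# Example: a silt-loam-like STATSGO/SSURGO cell
print(sac_a_priori(theta_sat=0.45, theta_fld=0.33, theta_wlt=0.13))
```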

11 11/59 HL-RDHM Parameterization - Calibration Steps [Flowchart. Channel routing track: measured data at the outlet (discharge, top width, cross-section) plus spatially variable basin properties (slope, area, drainage density) yield channel routing parameters at outlets, extended via geomorphological relations to variable channel routing parameters. Water balance track: variable basin properties (STATSGO 1x1 km grids: soil texture, hydrologic soil group, land cover/use) yield variable model parameters through transformation relationships; lumped/semi-lumped hourly calibration against observed outlet hydrographs gives outlet-calibrated and area-average parameters; fitting-curve parameter adjustment and scale adjustment produce rescaled variable parameters.] 1. Distributed Model Parameterization/Calibration

12 12/59 Parameterization and Calibration R&D Strategy: combine improved a priori parameter estimates with auto-calibration techniques to reduce uncertainty. [Diagram. A priori parameter estimates: HL STATSGO SAC parms (in CAP at RFCs); HL modified STATSGO SAC parms; HL SSURGO SAC parms; HL climate SAC parm adjustment (large-area runs); U. Arizona parameter uncertainty → gridded parm values. Auto-calibration techniques: HL lumped auto-calibration using SCE and SLS; HL distributed auto-calibration of HL-RDHM adjustment factors (SCE, SLS); HL distributed auto-calibration of HL-RDHM grid parms (SLS); U. Az multi-objective optimization of 1) HL-RDHM adjustment factors, 2) grid parameters. SCE: Shuffled Complex Evolution; SLS: Simplified Line Search.]

13 13/59 Soils Data for SAC Parameters: description of SSURGO data* Polygon – a soil map unit; it represents an area dominated by one to three kinds of soil. Components – different kinds of soil; each is a separate soil with individual properties, grouped together for simplicity's sake when characterizing the map unit. Horizons – layers of soil approximately parallel to the surface; up to six horizons may be recorded for each soil component. *The Penn State Cooperative Extension, Geospatial Technology Program (GTP) Land Analysis Lab. 1. Distributed Model Parameterization/Calibration

14 14/59 Demonstration of scale difference between polygons in STATSGO and SSURGO SSURGO STATSGO Soils Data for SAC Parameters 1. Distributed Model Parameterization/Calibration

15 15/59 Results of SSURGO and STATSGO Parameters for Distributed Modeling [Maps: basin locations and land cover in Oklahoma/Arkansas, and 2 km grid connectivity for distributed channel routing; basins numbered 1-12.] 1. Distributed Model Parameterization/Calibration

16 16/59 Rm: modified correlation coefficient, calculated by reducing the normal correlation coefficient by the ratio of the smaller to the larger of the standard deviations of the observed and simulated hydrographs. Comparison of Rm for the whole time series of 11 basins. Overall, SSURGO-based Rm > STATSGO-based Rm for most basins: a more physically based representation of the soil layers and more detailed spatial variability! Results of SSURGO and STATSGO Parameters for Distributed Modeling 1. Distributed Model Parameterization/Calibration
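A small sketch of the statistic as described, assuming the standard McCuen-Snyder form in which R is scaled by the smaller-to-larger standard deviation ratio:

```python
import numpy as np

def modified_correlation(obs, sim):
    """Modified correlation coefficient Rm: the ordinary correlation R
    reduced by the ratio of the smaller to the larger standard deviation
    of the observed and simulated hydrographs, so that a simulation with
    badly wrong amplitude cannot score as well as one with the right
    amplitude and the same timing."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    s_obs, s_sim = obs.std(ddof=1), sim.std(ddof=1)
    return r * min(s_obs, s_sim) / max(s_obs, s_sim)
```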

17 17/59 Hydrograph Comparison, Cave Springs [plots: observed flow vs. SSURGO-based and STATSGO-based simulations]. Results of SSURGO and STATSGO Parameters for Distributed Modeling 1. Distributed Model Parameterization/Calibration

18 18/59 Comparison of SCE and SLS calibration processes: 1) SLS needs fewer function evaluations but leads to a similar result; 2) SLS stops much faster and closer to the starting point (the a priori parameters); 3) on some basins, SCE misses the nearest 'best' solution; 4) SLS is in AB-OPT. [Plot: objective value vs. distance from starting parameters.] 1. Distributed Model Parameterization/Calibration

19 19/59 HL-RDHM Kinematic Wave Solution Uses an implicit finite-difference solution technique Need Q vs. A for each cell to implement distributed routing –Derive the relationship at the outlet using observed data –Extrapolate upstream using empirical/theoretical relationships Two methods are available in HL-RDHM –'Rating curve' method: parameters a and b in Q = aA^b estimated from an empirical relationship –'Channel shape' method: parameters estimated from slope, roughness, approximate channel shape, and the Chezy-Manning equation 1. Distributed Model Parameterization/Calibration
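The rating-curve closure makes the implicit solution a one-unknown Newton iteration per cell and time step. Below is a minimal single-reach sketch of that idea; it is not the HL-RDHM code, and grid connectivity, units, and the exact finite-difference stencil are simplified:

```python
import numpy as np

def kinematic_wave_route(q_lat, a0, q_in, dx, dt, a_rc, b_rc, n_iter=20):
    """Route flow down a chain of cells with dA/dt + dQ/dx = q_lat and the
    'rating curve' closure Q = a*A**b, using a backward-Euler / upwind
    implicit scheme solved cell by cell with Newton iteration.
    q_lat: (nt, nx) lateral inflow per unit length [m2/s]
    a0:    (nx,) initial cross-section areas [m2]
    q_in:  (nt,) upstream inflow hydrograph [m3/s]"""
    nt, nx = q_lat.shape
    A = a0.copy()
    q_out = np.zeros(nt)
    for n in range(nt):
        q_up = q_in[n]                       # inflow to the first cell
        for i in range(nx):
            Ai = max(A[i], 1e-8)             # Newton initial guess = old state
            for _ in range(n_iter):
                f = (Ai - A[i]) / dt + (a_rc * Ai**b_rc - q_up) / dx - q_lat[n, i]
                df = 1.0 / dt + a_rc * b_rc * Ai**(b_rc - 1.0) / dx
                Ai = max(Ai - f / df, 1e-8)
            A[i] = Ai
            q_up = a_rc * Ai**b_rc           # becomes inflow to the next cell
        q_out[n] = q_up                      # outflow from the last cell
    return q_out
```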

20 20/59 Channel Width (α) and Shape (β) Parameter Estimation 1. Assume a relationship between top width B and depth H: B = αH^β. 2. Solve for α and β using streamflow measurement data. Example cross section: Illinois River at Watts, OK (α = 36.6, β = 0.6). 1. Distributed Model Parameterization/Calibration
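Step 2 is an ordinary least-squares fit in log-log space. A minimal sketch (function name and sample data are illustrative):

```python
import numpy as np

def fit_width_depth(B, H):
    """Fit B = alpha * H**beta to measured top widths B and depths H:
    ln(B) = ln(alpha) + beta * ln(H), solved by linear least squares."""
    B, H = np.asarray(B, float), np.asarray(H, float)
    beta, ln_alpha = np.polyfit(np.log(H), np.log(B), 1)
    return np.exp(ln_alpha), beta

# Synthetic measurements roughly consistent with the slide's Watts, OK
# values (alpha ~ 36.6, beta ~ 0.6)
H = np.array([0.5, 1.0, 2.0, 4.0])
B = 36.6 * H**0.6
print(fit_width_depth(B, H))
```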

21 21/59 Estimate Upstream Parameters Using Relationships from Geomorphology 1.Channel Model Parameterization 1. Distributed Model Parameterization Calibration

22 22/59 Probabilistic Channel Routing Parameters Basic concepts –Discharge – cross-section relationship obeys multiscale lognormal bivariate Gaussian distribution –The scale dependence of hydraulic geometry is a result of the asymmetry in channel cross-section (CS) Application –Define CS geometry as a function of scale from site measurements –Define channel planform geometry as a function of scale –Define floodplain CS geometry as a function of scale from DEM –Monte-Carlo simulations to fit to multiscale lognormal model 1. Parameterization 1. Distributed Model Parameterization/Calibration

23 23/59 Derived Probabilistic Hydraulic Geometry: accounting for the variability of channel and floodplain shapes. [Plots: Exp{E[ln C_A | ln Q]} and Exp{E[ln V | ln Q]}, with marginal PDFs of discharge.] 1. Distributed Model Parameterization/Calibration

24 24/59 Probabilistic Channel Routing Parameters: BLUO2 [Hydrographs: observed vs. simulations with and without the flood plain.] 1. Distributed Model Parameterization/Calibration

25 25/59 2. Distributed Modeling and Soil Moisture Use for calibration, verification of models New products and services –NCRFC: WFO request –OHRFC: initialize MM5 –NIDIS –NOAA Water Resources 2. Soil Moisture

26 26/59 Modified Sacramento Soil Moisture Accounting Model [Diagram: Sacramento model storages (UZTWC, UZFWC, LZTWC, LZFSC, LZFPC) mapped to physically based soil layers and soil moisture (SMC1-SMC5).] In each grid cell and at each time step, transform the conceptual soil water contents to physically based water contents. Inputs: gridded precipitation and temperature. Output: CONUS-scale 4 km gridded soil moisture products using SAC and Snow-17 (developed for frozen ground). 2. Soil Moisture
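A toy one-layer version of the conceptual-to-physical transform; the operational mapping distributes all five conceptual storages over the physical soil-layer profile, while this sketch only shows the storage-to-volumetric-moisture conversion for a single layer:

```python
import numpy as np

def sac_to_volumetric(storage_mm, capacity_mm, theta_wlt, theta_sat):
    """Express a SAC storage (mm of water, relative to its capacity) as a
    volumetric soil moisture, scaling between wilting point (empty tension
    storage) and saturation (full storage). Constants are illustrative."""
    saturation_fraction = np.clip(storage_mm / capacity_mm, 0.0, 1.0)
    return theta_wlt + saturation_fraction * (theta_sat - theta_wlt)

# e.g., UZTWC = 40 mm of an 80 mm UZTWM in a silt-loam-like cell -> ~0.29
print(sac_to_volumetric(40.0, 80.0, theta_wlt=0.13, theta_sat=0.45))
```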

27 27/59 Validation of Modified Sacramento Model [Plots: computed and observed soil moisture and soil temperature, Valdai, Russia, 1972-1978.] 2. Soil Moisture

28 28/59 Validation of Modified Sacramento Model Comparison of observed, non-frozen ground, and frozen ground simulations: Root River, MN observed Frozen ground Non frozen ground 2. Soil Moisture

29 29/59 Modified SAC Publications –Koren, 2005. "Physically-Based Parameterization of Frozen Ground Effects: Sensitivity to Soil Properties", VIIth IAHS Scientific Assembly, Session 7.2, Brazil, April. –Koren, 2003. "Parameterization of Soil Moisture-Heat Transfer Processes for Conceptual Hydrological Models", paper EAE03-A-06486 HS18-1TU1P-0390, AGU-EGU, Nice, France, April. –Mitchell, K., Koren, and others, 2002. "Reducing near-surface cool/moist biases over snowpack and early spring wet soils in NCEP ETA model forecasts via land surface model upgrades", Paper J1.1, 16th AMS Hydrology Conference, Orlando, Florida, January. –Koren et al., 1999. "A parameterization of snowpack and frozen ground intended for NCEP weather and climate models", J. Geophysical Research, 104(D16), 19,569-19,585. –Koren et al., 1999. "Validation of a snow-frozen ground parameterization of the ETA model", 14th Conference on Hydrology, 10-15 January 1999, Dallas, TX, AMS, Boston, MA, pp. 410-413. –http://www.nws.noaa.gov/oh/hrl/frzgrd/index.html 2. Soil Moisture

30 30/59 NOAA Water Resources Program: Prototype Products Initial efforts focus on CONUS soil moisture [Map: HL-RDHM soil moisture (m3/m3) for April 5, 2002, 12Z.] 2. Soil Moisture

31 31/59 Comparison of Soil Moisture Estimates: HL-RDHM vs. MOSAIC, upper 10 cm and lower 30 cm; HL-RDHM shows higher correlation. Source: Moreda et al., 2005. 2. Soil Moisture

32 32/59 A Statistical-Distributed Model for Flash Flood Forecasting at Ungauged Locations [Diagram. Historical side: archived QPE drives the distributed hydrologic model (HL-RDHM) to produce simulated historical peaks (Qsp) and a simulated-peaks distribution unique to each cell. Real-time side: real-time QPE/QPF plus initial hydro model states drive the same model to produce maximum forecasted peaks, which a statistical post-processor converts to forecasted frequencies.] Why a frequency-based approach? Frequency grids provide a well-understood historical context for characterizing flood severity; values relate to engineering design criteria for culverts, detention ponds, etc. Computation of frequencies using model-based statistical distributions can inherently correct for model biases. Next step: define requirements for a prototype. 3. Flash Flood
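One simple way to post-process a forecast peak into a frequency is to rank it against the cell's simulated historical peaks with empirical (Weibull) plotting positions; the operational post-processor may fit a parametric distribution instead, but the bias-cancelling idea is the same because forecast and history come from the same model:

```python
import numpy as np

def forecast_peak_frequency(hist_sim_peaks, forecast_peak):
    """Empirical exceedance probability of a forecast peak within the
    distribution of historical simulated peaks for the same cell, plus
    the corresponding return period assuming one peak per year."""
    peaks = np.sort(np.asarray(hist_sim_peaks, float))
    n = peaks.size
    rank = np.searchsorted(peaks, forecast_peak, side="right")  # peaks <= forecast
    p_exceed = 1.0 - rank / (n + 1.0)                           # Weibull position
    return p_exceed, 1.0 / p_exceed

# 30 years of simulated annual peaks (synthetic) vs. a forecast of 260 cms
p, T = forecast_peak_frequency(np.random.gamma(2.0, 50.0, size=30), 260.0)
print(f"exceedance={p:.3f}, return period={T:.1f} yr")
```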

33 33/59 Statistical Distributed Flash Flood Modeling - Example Forecasted Frequency Grids Available at 4 Times (14, 15, 16, 17 UTC) on 1/4/1998. In these examples, frequencies are derived from routed flows, demonstrating the capability to forecast floods in locations downstream of where the rainfall occurred. 3. Flash Flood

34 34/59 Statistical Distributed Flash Flood Modeling - Example Forecast Grid and Corresponding Forecast Hydrographs for 1/4/1998 15Z [Eldon (795 km2), ~11 hr lead time; Dutch (105 km2), ~1 hr lead time; implicit statistical adjustment.] 3. Flash Flood

35 35/59 Perception of Modeling Trends: Where does Site Specific fit? [Chart: modeling capability vs. spatial scale, spanning the RFC and WFO domains. In the RFC domain: statistical distributed; distributed; Site Specific with snow, VAR, routing. At the WFO end: Site Specific, FFG, other.] 3. Flash Flood

36 36/59 Transition from Snow-17 to Energy Budget Model for RFC Operations: HL Activities [Timeline: percent model use shifts from Snow-17 at RFCs today toward an energy budget model over (?) years.] HL activities: new data for Snow-17 (wind speed, etc.); Calb-OFS biases; distributed Snow-17; Snow-17 MODs based on SNODAS; sensitivity of the energy budget model to data errors; use of SNODAS output in Snow-17; use of SNODAS output in runoff models. 4. Distributed modeling and snow

37 37/59 Distributed Snow-17 Strategy: use distributed Snow-17 as a step in the migration to energy budget modeling: what can we learn? Snow-17 is now in HL-RDHM Tested in the MARFC area and over CONUS (delivered historical data) Further testing in DMIP 2 Gridded Snow-17 parameters for CONUS under review (could be delivered in CAP) Related work: data needs for energy budget snow models 4. Distributed modeling and snow

38 38/59 Current approach: the SNOW-17 model within HL-RDHM The SNOW-17 model is run at each pixel Gridded precipitation from multi-sensor products is provided at each pixel Gridded temperature inputs are provided using a DEM and a regional temperature lapse rate The areal depletion curve is removed because of the distributed approach Other parameters are studied either to replace them with physical properties or to relate them to those properties, e.g., SCF. 4. Distributed modeling and snow
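The gridded-temperature step is essentially a one-line lapse-rate adjustment; a minimal sketch (the -6.5 degC/km default and the single-station form are illustrative assumptions):

```python
import numpy as np

def grid_temperature(t_station, z_station, dem, lapse_rate=-6.5e-3):
    """Distribute a station temperature over a DEM grid using a regional
    lapse rate (degC per meter; default -0.0065, i.e., -6.5 degC/km).
    dem: 2-D array of cell elevations in meters."""
    return t_station + lapse_rate * (np.asarray(dem, float) - z_station)

dem = np.array([[300.0, 450.0], [600.0, 900.0]])
print(grid_temperature(t_station=2.0, z_station=400.0, dem=dem))
```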

39 39/59 HL-RDHM Features:  Gridded (or small basin) structure  Independent snow and rainfall-runoff models for each grid cell [P, T & ET → SNOW-17 → rain + melt → SAC-SMA or CONT-API]  Hillslope routing of runoff (surface runoff and base flow)  Channel routing (kinematic & Muskingum-Cunge) → flows and state variables 4. Distributed modeling and snow

40 40/59 4. Distributed modeling and snow Parameterization of Distributed Snow-17: minimum and maximum melt factor grids derived from: 1. aspect, 2. forest type, 3. forest cover (%), 4. Anderson's recommendations.

41 41/59 Case Study: Snow Cover Simulation, December 12, 2003, 12Z [Maps: energy-budget model with assimilation vs. distributed Snow-17.] Snow cover obtained from the energy-budget and Snow-17 models qualitatively agrees well. 4. Distributed modeling and snow

42 42/59 Flow simulation during snow periods (using lumped API model parms in each grid) 4. Distributed modeling and snow

43 43/59 5. DMIP 2 –HL distributed model is worthy of implementation: we need to improve it for RFC use in all geographic regions –Partial funding from Water Resources –Much outside interest –HMT collaboration

44 44/59 DMIP 2 Science Questions Confirm basic DMIP 1 conclusions with a longer validation period and more test basins Improve our understanding of distributed model accuracy for small, interior point simulations: flash flood scenarios Evaluate new forcing data sets (e.g., HMT) Evaluate the performance of distributed models in prediction mode Use available soil moisture data to evaluate the physics of distributed models Improve our understanding of the way routing schemes contribute to the success of distributed models Continue to gain insights into the interplay among spatial variability in rainfall, physiographic features, and basin response, specifically in mountainous basins Improve our understanding of scale/data issues in mountainous area hydrology Improve our ability to characterize simulation and forecast uncertainty in different hydrologic regimes Investigate data density/quality needs in mountainous areas (Georgakakos et al., 1999; Tsintikidis, et al., 2002) 5. DMIP 2

45 45/59 Distributed Model Intercomparison Project (DMIP) Phase 2 Scope [Map: DMIP basins - Elk River, Illinois River, and Blue River in Oklahoma/Arkansas/Missouri/Kansas/Texas; American River and Carson River in California/Nevada (HMT).] Additional tests in DMIP 1 basins: 1. routing; 2. soil moisture; 3. lumped and distributed; 4. prediction mode. Tests with complex hydrology: 1. snow, rain/snow events; 2. soil moisture; 3. lumped and distributed; 4. data requirements in the mountainous West. 5. DMIP 2

46 46/59 5. DMIP 2

47 47/59 DMIP 2 & HMT-West Research to Operations 1. Basic precip and temp data (gage-only gridded) 2. Basic data enhanced by HMT observations: network density 1, network density 2, network density 3 Distributed model simulations –USGS –HL-RDHM –USBR –Others → analyses, conclusions, recommendations for data and tools for RFCs. "What new data types are becoming available? What densities of observations are needed? Which models/approaches work best in mountainous areas?" 5. DMIP 2

48 48/59 DMIP 2: Participants Witold Krajewski Praveen Kumar Mario DiLuzio, ARS, TAES Sandra Garcia (Spain) Eldho T. Iype (India) John McHenry, BAMS Konstantine Georgakakos Ken Mitchell (NCEP) Hilaire F. De Smedt (Belgium) HL Vincent Fortin, Canada Robert Wallace, USACE, Vicksburg Murugesu Sivapalan, U. Illinois Hoshin Gupta, U. Arizona Christa Peters-Lidard, NASA David Gochis, NCAR Thian Gan, (Can.) Newsha Ajami (Soroosh) Vazken Andreassian (Fra) George Leavesley (USGS) Kuniyoshi Takeuchi (Japan) Vieux and Associates John England (USBR) Andrew Wood, Dennis Lettenmaier, U. Washington Martyn Clarke, U. Co/CIRES South Florida Water Mngt. District David Tarboton, Utah St. U. David Hartley, NW Hydraulic Consultants Xu Liang, U. Ca. Berkeley Terri Hogue, UCLA Names in red have officially registered, others have shown interest DMIP 2 Overview

49 49/59 Basic DMIP 2 Schedule Feb. 1, 2006: all data for Ok. basins available July 1, 2006: all basic data for western basins available Feb 1, 2007: Ok. simulations due from participants July 1, 2007: basic simulations for western basins due from participants 5. DMIP 2

50 50/59 6. Data Assimilation for Distributed Modeling Needed since manual OFS ‘run-time mods’ will be nearly impossible Strategy based on Variational Assimilation developed and tested for lumped SAC model Initial work in progress

51 51/59 [Figures: WTTO2 location in the ABRFC area; WTTO2 channel network; initial simulation; assimilation period using streamflow, PE, and precip.] 6. Data Assimilation

52 52/59 Comparison of Unadjusted and 4DVAR-Adjusted Model States (WTTO2) 6. Data Assimilation

53 53/59 Channel Routing and Flood Mapping of Tar River below Rocky Mount [Diagram: rainfall data (rain depth) feed a distributed model of the Tar River basin; the flood wave is routed from Rocky Mount past Tarboro to an estuary model.] 7. Distributed Modeling and Links to FloodWave

54 54/59 [Diagram: HL-RDHM grid feeding FloodWave lateral inflow reaches between cross sections 1 and 2.] In this example, HL-RDHM provides: 1. the upstream inflow hydrograph at cross section 1; 2. five lateral inflow hydrographs to FloodWave between cross sections 1 and 2. 7. Distributed Modeling and FloodWave

55 55/59 Hydrographs at Greenville, Tar River [plot: observed vs. simulated flow, with the SAC-SMA 'warm-up' period marked]. Initial simulation of the Tar River using HL-RMS (no FloodWave), no calibration, using only Victor's a priori parameters. After the warm-up period, the simulation is good. 7. Distributed Modeling and FloodWave: Example

56 56/59 8. Impact of Spatial Variability Question: how much spatial variability in precipitation and basin features is needed to warrant use of a distributed model? Goal: provide guidance/tools to RFCs to help guide implementation of distributed models, i.e., which basins will show most ‘bang for the buck’? Initial tests completed after DMIP 1: trends seen but no clear ‘thresholds’

57 57/59 8. Impact of Precipitation Spatial Variability [Diagram: the basin acts as a 'filter', transforming input precipitation fields at times t, t + Δt, t + 2Δt into an output flow hydrograph.]

58 58/59 ’06 Funding and HOSIP Stage by Topic (columns: AHPS and WR funding; HOSIP stages 1-4)
DMIP 2: 0110
Parameterization: SSURGO/STATSGO: 1000
Regionalized SAC-Snow Parameters: 300
Auto Calibration: Arizona: 750
Auto Calibration: HL: 00
Snow-17 and HL-RDHM: 300
Large Area Simulation for WR products: 031
Statistical Distributed: 300
VAR for Distributed Modeling: 00
Spatial Variability: 00
DHM 2.0 AWIPS (HSEB): 200?

59 59/59 Conclusions Distributed models are proper direction –Account for spatial variability: Parameterization Calibration Better results at outlets of some basins Amenable to new data sources –Scientifically supported flash flood modeling –New products and services

60 60/59 Development of Operational AWIPS Distributed Hydrologic Model Lee Cajina, Ai Vo, Wen Kwock, Andreas Voellemy, Chris Dietz, Jon Roe Presentation to DOH Science Steering Team, Mar 10, 2006

61 61/59 Goal Convert the HSMB prototype into an operational model by integrating Distributed Hydrologic Modeling (DHM) capabilities into the National Weather Service River Forecast System (NWSRFS); Provide a modeling environment that allows HSMB to transition new distributed hydrologic modeling science into operations

62 62/59 Concept of Operations Today (lumped) –RFCs use NWSRFS to setup and run hydrologic models 2X per day operationally and occasional ad hoc modeling (calibration) –Model runs execute on the DX/LX machines and cover the entire RFC area Future (lumped & distributed) –RFCs will use NWSRFS to setup and run distributed & lumped hydrologic models 2X per day operationally and occasional ad hoc modeling –Model runs will execute on the DX/LX machines –A fraction of the RFC area will be modeled using distributed and lumped models

63 63/59 Other Systems Examined To start, OHD examined 7 existing systems: –LIS (NASA) –SME (UMD) –OMS (ARS/USGS) –GIS-RS (NOHRSC) –MIKE-SHE (DHI) –ORMS (OHD) Review criteria: –Can we add modules (SAC, SNOW, etc.)? –Are there existing grid-based modeling capabilities? –Ease of software maintenance and enhancement Decided to further investigate OMS and GISRS by prototyping DHM functionality –OMS prototype work never started – informal agreement to look at it again later –GISRS prototype proved its existing operations concept is different than our existing approach (little to no human interaction)

64 64/59 Development Strategy Using multiple AWIPS builds, iteratively develop model engine functionality at OHD. Create a development environment capable of supporting multiple AWIPS builds of DHM. Each build may contain: –New Java code, –Existing C, Fortran, and C++ science modules, and –Modifications to legacy NWSRFS Fortran (most/all in the first build) Use open-source software engineering tools –Code repository (Subversion) –Integrated development environment (Eclipse) –Testing: module/sub-module tests (JUnit); feature specification tests (FitNesse); regression tests – overall tests (shell scripts) –Automatically generated code to interface from C, C++, and Fortran to Java using JNI (SWIG)

65 65/59 Development Strategy (contin.) NWSRFS uses an operations/operations-table mechanism to execute hydrologic models; we chose to integrate into NWSRFS by defining a generic DHM operation (the first version of DHM-OP uses distributed SAC-SMA and kinematic routing) –DHM-OP acts as an adapter or "bridge" from NWSRFS to Java classes (the model engine) –DHM-OP starts the Java Virtual Machine (JVM), initiates a Java process, and passes model run information (i.e., run dates, time series to write output to) from NWSRFS to the Java model engine –DHM-OP uses a few Fortran modules to interface to NWSRFS, but all DHM logic and modeling schemes are defined in Java

66 66/59 Development Strategy (contin.) [Diagram: the NWSRFS executable communicates with the Java model engine through the Java Native Interface (JNI) and a shared file system.]

67 67/59 DHM: First Set of Features Background –DHM 1.0 requirements gathering produced over 300 requirements that were categorized by: User (Research Scientist, RFC Hydrologist, Developer) Use Mode (real-time/forecast or ad-hoc modeling/calibration) Type of feature (Display/Edit grids, model computations, Time-series based features) –OHD would like to use existing AWIPS software where possible (D2D, GFE, IFP, ICP) –GFE development cannot start until OB8 at earliest –D2D is not flexible enough for ad-hoc modeling, potentially useful as a real-time diagnostic tool –ICP is being reworked into Java (no new functionality)

68 68/59 DHM: First Set of Features Results – Requirements were prioritized and an initial subset was chosen for first-increment development. The subset is organized into features containing about 2-3 weeks of development work each. –First increment focuses on features allowing RFC hydrologists to run DHM in real time: IFP – time series viewer; D2D – grid data viewer –Any calibration/ad-hoc modeling will require using prototype software (HL-RDHM and XDMS)

69 69/59 DHM: First Set of Features [Diagram: the new DHM operation (DHM-OP), defined at forecast component initialization (FCINIT) in the operations table, specifies the outlet point id and the time series to write to. At forecast component execution (FCST) it reads SAC-SMA grids and the parametric and forecast DBs, and writes output grids and time series to the processed time-series DB, viewable in the time series viewer (IFP) and grid viewer (D2D).]

70 70/59 DHM: Lumped and Distributed Operations – Headwater Segments (no upstream segment) [Diagram: Segment A (all lumped, OLD): SAC-SMA (L) + UHG (L) at point 1. Segment B (all distributed, NEW): SAC-SMA (D) + kinematic routing (D). D = distributed (grid), L = lumped.]

71 71/59 DHM: Lumped and Distributed Operations – Interior Segments (have an upstream segment) OLD (all lumped): Segment A produces the hydrograph at 1; the hydrograph at 1 is adjusted and passed to segment B; segment B plus the routed hydrograph produces the hydrograph at 2; the final hydrograph at 2 may include adjustments. NEW (mixed): Segment A (lumped or distributed) produces the hydrograph at 1; the hydrograph at 1 is adjusted and passed to segment B; segment B (distributed) plus the adjusted hydrograph at 1 produces the hydrograph at 2; the final hydrograph at 2 may include adjustments. D = distributed (grid), L = lumped.

72 72/59 Data Issues Input (4 km XMRGs) –Model parameter grids –Initial model state grids –Hourly observed precipitation accumulations –6-hour forecast precipitation accumulations Input – cell-to-cell connectivity file containing HRAP coordinates for all outlets Output (4 km XMRGs) –Model state grids (for future runs) Output (4 km netCDFs for viewing in D2D) –SAC-SMA model states (% full) –Overland and channel flows **Model parameter and state grids are for SAC and kinematic routing
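A sketch of how a cell-to-cell connectivity table is typically consumed: order the cells so that every cell is processed before its downstream receiver, letting each cell pass routed flow downstream. The dict-based input here is illustrative only, not the actual connectivity file layout:

```python
from collections import deque

def downstream_order(downstream):
    """Topological (upstream-first) ordering of grid cells.
    downstream: {cell: downstream_cell, or None for basin outlets},
    where cells are identified here by hypothetical HRAP (x, y) pairs."""
    n_upstream = {c: 0 for c in downstream}
    for d in downstream.values():
        if d is not None:
            n_upstream[d] += 1                    # count inflowing neighbors
    ready = deque(c for c, k in n_upstream.items() if k == 0)  # headwater cells
    order = []
    while ready:
        c = ready.popleft()
        order.append(c)
        d = downstream[c]
        if d is not None:
            n_upstream[d] -= 1
            if n_upstream[d] == 0:                # all upstream cells processed
                ready.append(d)
    return order

conn = {(1, 1): (1, 2), (2, 1): (1, 2), (1, 2): (1, 3), (1, 3): None}
print(downstream_order(conn))  # headwaters first, outlet (1, 3) last
```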

73 73/59 Verification During prototype/proof-of-concept phase (DHM 1.0), DHM results were included in a broader OHD verification project Data was inconclusive, but tended to show results consistent with previous findings –DHM is as good as lumped and sometimes better (dependent on basin/precip characteristics) At start of operational development, project handed off to RFCs

74 74/59 DHM: Next Set of Features? (Steer Us!) Calibration tools to: –Run lumped/distributed models in calibration mode (i.e., integrating DHM into CALB/ICP) –Generate routing parameters (channel-based method estimation of kinematic routing parameters) –Interactively define outlet points –View grid and time series data –Edit parameter grids New science modules –Snow-17 –SAC including frozen ground –Muskingum-Cunge channel routing –Mixing lumped and distributed models (lumped SAC + distributed routing + lumped channel routing, etc.) Real-time modifications (MODS) –SACCO –SACBASEF –PRECIP

75 ABRFC Experiences with OHD HDMS1 Distributed Model in an Operational Environment William E. Lawrence March 10, 2006

76 ABRFC DM Highlights  ABRFC has implemented DM at 21 points; a point must be defined in order to output a discharge time series  These points are distributed across our basin  Archiving hourly results since mid-2005  Exceptional drought conditions have limited case studies  ABRFC (Cooper) has performed extensive verification of DM simulations using the StatQ program  Calibration methods/techniques are the biggest areas of concern and uncertainty  Overall feeling is that the model is still quite experimental and not yet “operational”

77 ABRFC DM Results  Real-time time series viewable in IFP  Current setup has NOT resulted in long/excessive computational run times (roughly 20-30 seconds)  Archived hourly time series accessible via the DM version of arcfcstprog  No ability to make any interactive mods, so you take what you get  GXSETS can produce RVF products, but these have yet to be used operationally due to lack of confidence in results and no mods functionality  Several individual case studies have been performed and results made available to staff and OHD. Drought is severely limiting in this area.

78 IFP Plot showing DM timeseries

79

80

81

82 ABRFC DM STATQ Results  Compared performance of DM to the NWSRFS lumped model  Statistics run for the verification period from early 2002 through August 2005 using the StatQ program  Only 14 of 21 points had enough information to make comparisons  DM had better correlation coefficients on 8 of 14 basins  DM had better overall percent bias on 7 of 14 basins  DM had better peak discharge on 8 of 14 specific events  NWSRFS had better time to peak on 11 of 14  Overall mixed results, but calibration uncertainties along with issues in routing are likely the culprit
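For reference, the measures being compared can be computed roughly as below. A real StatQ-style analysis first segments the record into events and (per Appendix A) normalizes peak errors by the 2-year flood; this sketch treats obs/sim as one event and normalizes by the observed peak:

```python
import numpy as np

def event_stats(obs, sim, dt_hr=1.0):
    """Correlation coefficient, overall percent bias, peak-timing error,
    and normalized peak-discharge error for one observed/simulated event
    (hourly arrays by default)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    pct_bias = 100.0 * (sim.sum() - obs.sum()) / obs.sum()
    t_peak_err = (sim.argmax() - obs.argmax()) * dt_hr       # hours, sim - obs
    peak_err = (sim.max() - obs.max()) / obs.max()           # normalized
    return {"R": r, "bias_%": pct_bias,
            "t_peak_err_hr": t_peak_err, "peak_err_norm": peak_err}
```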

83 ABRFC DM Calibration  ABRFC originally tried using a priori values for SAC/SMA grids using STATSGO data  Results were generally poor, thus we tested scaling the a priori values, which yielded better results  ABRFC stopped work on calibration in late 2005 due to anticipated new a priori values from Univ of AZ project.  ABRFC also hoped that Univ of AZ would identify new calibration “strategies”  Have recently reconsidered decision to stop calibration due to new expected outcomes from Univ of AZ

84 ABRFC DM Calibration  ABRFC encountered a number of bugs with ICP software; it cannot overlay 1hour and 6 hour results, it does not display SAC contents generated by HDMS.  ICP does not allow for interactive modifications and testing of SAC paramters  ICP does not “read” the XMRG precip data or the HDMS generated basin-xmrg file for precip ts input  ICP does not read in gridded SAC SMA parameters, either raw or scaled values

85 85/59 Appendix A: HDMS Statistical Analysis Data

Basin | 2-yr Flood Freq. (Normalizing Factor) | R | Overall % Bias | Mean Time-to-Peak Error (hr) | Time-to-Peak St. Dev. | Mean Norm. Peak Discharge Error | Norm. Peak St. Dev.
CVSA4 | 1980 | 0.75 | -23.6 | 8.8 | 3.7 | -0.41 | 0.36
ELMA4 | 5480 | 0.81 | 7.84 | 7.2 | 4.0 | -0.29 | 0.35
MLBA4 | 21100 | 0.92 | 17.74 | 5.6 | | 0.01 | 0.18
SLSA4 | 27900 | 0.79 | 58.51 | 11.6 | 10.3 | -0.07 | 0.27
SPRA4 | 794 | 0.69 | 16.09 | 3.2 | 2.9 | -0.07 | 0.56
SVYA4 | 10700 | 0.88 | -0.02 | 5.7 | 6.7 | -0.23 | 0.44
TIFM7 | 23400 | 0.95 | 5.07 | 4.9 | 7.4 | -0.03 | 0.14
CBNK1 | 8990 | 0.87 | -23.09 | 1.9 | 7.1 | -0.18 | 0.31
BLKO2 | 22400 | 0.86 | -29.71 | 13.1 | 18.3 | -0.24 | 0.26
BLUO2 | 8770 | 0.82 | 14.78 | 8.8 | 17.0 | -0.03 | 0.19
ELDO2 | 16200 | 0.92 | 0.26 | -1.4 | 13.0 | -0.18 | 0.34
KNSO2 | 3580 | 0.89 | 7.98 | 1.5 | 5.3 | 0.02 | 0.22
TALO2 | 20300 | 0.94 | 38.82 | -2 | 11.5 | 0.08 | 0.17
WSCO2 | 1280 | 0.83 | -21.46 | 3.5 | 2.6 | 0.19 | 0.34
WTTO2 | 20700 | 0.88 | 60.69 | 9.3 | 11.1 | 0.01 | 0.21
ELTT2 | 2610 | 0.59 | 41.7 | -1.7 | 39.3 | 0.07 | 0.21
ELTT2a | | | | 7.2 | 15.0 | 0.01 | 0.15
BSGM7 | 5020 | 0.9 | 0.34 | 2.3 | 4.7 | -0.05 | 0.20
INCM7 | 3980 | 0.94 | 20.03 | -0.7 | 11.1 | 0.09 | 0.41

86 86/59 Appendix B: NWSRFS Statistical Analysis Data

Basin | R | Overall % Bias | Mean Time-to-Peak Error (hr) | Time-to-Peak St. Dev. | Mean Norm. Peak Discharge Error | Norm. Peak St. Dev.
CVSA4 | N/A
ELMA4 | 0.63 | -22.43 | -2.8 | 3.8 | -0.13 | 0.4
MLBA4 | 0.79 | -3.83 | -2.7 | 5.8 | -0.02 | 0.15
SLSA4 | 0.82* | -32.99* | -2.6* | 4.5* | -0.14* | 0.23
SPRA4 | N/A
SVYA4 | 0.79 | 0.51 | 2.0 | 7.0 | -0.08 | 0.12
TIFM7 | 0.87 | -20.28 | 1.6 | 8.5 | -0.06 | 0.17
CBNK1 | 0.84 | 0.83 | -3 | 8.2 | -0.12 | 0.34
BLKO2 | 0.9 | -15.3 | -1.2 | 14.1 | -0.12 | 0.18
BLUO2 | 0.85 | -47.29 | -6.2 | 17.9 | -0.07 | 0.15
ELDO2 | 0.78 | -28.22 | -3.6 | 9.0 | -0.24 | 0.38
KNSO2 | 0.86 | -4.46 | -4 | 8.3 | 0.02 | 0.26
TALO2 | 0.95* | -27.96* | -2.0* | 14.1* | -0.12* | 0.12*
WSCO2 | 0.52* | -41.34* | -5.2* | 5.0* | -0.14* | 0.18*
WTTO2 | 0.96* | -22.49* | -1.4* | 7.8* | -0.08* | 0.10*
ELTT2 | 0.68 | 56.5 | -6 | 39.1 | 0.02 | 0.15
ELTT2a | | | 2.7 | 16.9 | 0.07 | 0.21
BSGM7 | 0.65† | -70.42† | -6† | 0.7† | 0.15† | 0.05†
INCM7 | 0.70† | -68.97† | 0† | | -0.29† | 0.24†

87 87/59 Appendix C: Adjusted NWSRFS Statistical Analysis Data

Basin | Adj. Mean Time-to-Peak Error (hr) | Adj. Time-to-Peak St. Dev. | Adj. Mean Norm. Peak Discharge Error | Adj. Norm. Peak St. Dev.
CVSA4 | N/A
ELMA4 | -2.2 | 2.7 | -0.27 | 0.51
MLBA4 | -1.5 | 5.7 | -0.04 | 0.16
SLSA4 | -3.9* | 7.3* | -0.16* | 0.24*
SPRA4 | N/A
SVYA4 | 2.7 | 6.6 | -0.15 | 0.26
TIFM7 | -0.6 | 8.5 | -0.07 | 0.18
CBNK1 | -3.3 | 7.6 | 0.14 | 0.35
BLKO2 | -0.9 | 13.6 | 0.12 | 0.18
BLUO2 | -5.7 | 18.1 | 0.08 | 0.16
ELDO2 | -3.7 | 8.2 | 0.32 | 0.42
KNSO2 | -3.6 | 8.2 | 0.03 | 0.3
TALO2 | -1.9* | 15.0* | -0.12* | 0.12*
WSCO2 | -3.1* | 5.6* | -0.27* | 0.30*
WTTO2 | -1.4* | 8.0* | -0.12* | 0.11*
ELTT2 | -4.9 | 39.0 | 0.01 | 0.15
ELTT2a | 3.9 | 16.3 | 0.03 | 0.13
BSGM7 | -5.5† | 0.7† | -0.16† | 0.06†
INCM7 | 1.5† | 3.5† | -0.33† | 0.28†

88 88/59 Reference Information for Appendices A, B and C. Appendices A, B and C are summaries of selected statistical parameters. The correlation coefficient "R" and percent bias were derived from the multi-year time series analysis. The HDMS simulation was compared to the one-hour observed discharge, and the NWSRFS simulation to the six-hour observed discharge. The NWSRFS "Adj." information compares the six-hour NWSRFS simulations to the one-hour instantaneous discharge time series. The ELTT2a peak error averages exclude 2 events in which both models performed very poorly. Note: "†" indicates the NWSRFS multi-year analysis began in March 2005, and "*" indicates the analysis began in the summer of 2003. Elsewhere, the multi-year period was from April 2002 through August 2005. For the basins shaded in pink, the annual peak discharge period of record is less than 10 years, so the accuracy of the 2-year frequency peak discharge normalizing factor is suspect.

89 89/59 Distributed Hydrologic Modeling Project Presented by: Paul McKee West Gulf River Forecast Center WGRFC operational research and development

90 90/59 Background June 2003 – WGRFC requested to test HLRMS from OH Fall 2003 – DHMS installed and a local workshop trained personnel Spring 2004 – Began test basin setup and calibration Feb 2004 – Began archiving DHMS forecast time series Since 2004 – Providing feedback to OH and detailing requirements for an operational DHMS (OSIP process)

91 91/59 2004 Review/Initial Conclusions 8 basins set up to test calibration strategies and lumped model comparisons Time series archived at OH for verification Visual inspection promising Rainfall activity that year provided several events for evaluation

92 92/59 Current activities and status Development and calibration of basin models Verification and validation of models Implementation into real-time river forecast operations Since 2004, 18 more basins set up and calibrations being tested… for a total of 26 19 basins available for operational forecasting

93 93/59 Distributed Modeling Advantages Research indicates the greatest improvement occurs for basins with: –Non-uniform rainfall distributions –Irregularly shaped basins (long and narrow) –Non-uniform soil type and land use –Relatively large impervious areas which cause a rapid surface runoff response  Increased accuracy of event timing  Stream flow prediction at interior points  Distributed parameter inputs utilize more data complexity as available

94 94/59 Basin Response Times Hydrologic response times:  6 hours or less – 11.5% (ABRFC), 20.4% (WGRFC) of basins  12 hours or less – 49% (ABRFC), 46.5% (WGRFC)  18 hours or less – 72% (ABRFC), 65.3% (WGRFC)  24 hours or less – 85% (ABRFC), 74.1% (WGRFC)

95 95/59 Strategy for basin development Varied basin size, terrain, land use/cover, soils (DA: 75-400 mi2; peak times: 6-60 hr) VAR study basins –Utilize estimated SAC parameters from ab_opt –Another data set for comparison Nested basins –Forecast points with interior stream gages

96 96/59 DHMS Test Basin Locations

97 97/59 Strategy for Basin Calibration Approach similar to the lumped model Manual “expert” process; parameter estimation/optimization tools unavailable Use ab_opt-estimated SAC parameters for scalar adjustments Simulation comparisons: –A priori, ab_opt, “expert” calibration, lumped

98 98/59 Limitations of DHMS Calibration Global scalar adjustment of parameter grids; conserves relative differences between grid cells (see the sketch below) Lumped values only for PCTIM, ADIMP, RIVA Unknown effect of a priori grid outliers on calibration results (i.e., sensitivity) Difficult to keep simple… build complexity as needed
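The global scalar adjustment mentioned in the first bullet amounts to one multiplier per parameter grid, as in this sketch (bounds and example values are illustrative):

```python
import numpy as np

def scale_parameter_grid(grid, factor, lower=None, upper=None):
    """Global scalar adjustment: multiply every cell by one factor, which
    preserves the relative differences between grid cells. Optional
    physical bounds clip the result, which is one place where a priori
    outlier cells can behave unexpectedly during calibration."""
    out = np.asarray(grid, float) * factor
    if lower is not None or upper is not None:
        out = np.clip(out, lower, upper)
    return out

lzfpm = np.array([[120.0, 90.0], [75.0, 900.0]])  # note the outlier cell
print(scale_parameter_grid(lzfpm, 0.8, lower=10.0, upper=600.0))
```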

99 99/59 Calibration Sensitivities?? Possible outliers in a priori parameter grids? Large relative differences of grid values? QPE error, in both location and amount? Grid resolution vs. available data? [Figure: LZFPM a priori grid with a suspected outlier cell.]

100 100/59 Limitations of DHMS Calibration (operational tools and utilities) XDMS – 1st generation; displays but does not edit parameter grids Stat_q – text output, no graphics Parameter estimation/optimization tools needed –Enhance expert calibration –Automated parameter sensitivity analysis –Graphics of statistical analysis

101 101/59 Basin Studies

102 102/59 Study Basin: KNLT2 semi-regular shape, fast response steep slope (0.0274) drainage area: 346 mi2 avg. time to peak: 7 hrs 2 interior stream gages –OXDT2 (147 mi2) –SNBT2 (155 mi2)

103 103/59 KNLT2 Calibration

Run | Time Period | RMS (CMS) | R
DHM a priori | 10/1/97 - 12/31/03 | 15.08 | 0.77
DHM calibrated | 1/1/96 - 12/31/04 | 9.69 | 0.80
Lumped (6 hr) | 1/1/96 - 12/31/04 | 9.81 | 0.86

104 104/59 KNLT2: Apr 2004 TS – investigating nested basins, interior points [hydrograph plots, DHM vs. OBS, at OXDT2 and SNBT2 upstream and KNLT2 downstream]

105 105/59 KNLT2: Apr 2004 WY OBS DHM LMP

106 106/59 KNLT2: Nov 2001 TS – investigating nested basins, interior points [hydrograph plots, DHM vs. OBS]

107 107/59 KNLT2: Jun 2004 TS – investigating nested basins, interior points [hydrograph plots, DHM vs. OBS]

108 108/59 KNLT2: Jun 2004 WY OBS DHM LMP

109 109/59 KNLT2: Jun 2002 WY OBS DHM LMP

110 110/59 Study Basin: SOLT2 Irregular shape, slow response Mild slope (0.0013) Drainage area: 336 sq. mi. Avg. time to peak: 48-60 hrs

111 111/59 SOLT2 Calibration

Run | Time Period | RMS (CMS) | R
DHM a priori | 1/1/96 - 12/31/03 | 19.55 | 0.86
DHM calibrated | 1/1/96 - 12/31/03 | 24.21 | 0.84
Lumped (6 hr) | 10/1/00 - 9/30/04 | 17.82 | 0.91

112 112/59 SOLT2: Feb 2002 TS DHM OBS LMP

113 113/59 SOLT2: Feb 2002 WY DHM OBS LMP

114 114/59 SOLT2: Jun 2004 TS DHM OBS LMP double peak – note the 2 separate areas of heavy rainfall

115 115/59 SOLT2: Jun 2004 WY DHM OBS LMP

116 116/59 SOLT2: Nov 2003 TS DHM OBS LMP

117 117/59 SOLT2: Dec 2002 TS DHM OBS LMP

118 118/59 SOLT2: Dec 2002 WY DHM OBS LMP

119 119/59 Preliminary Study Conclusions Benefits of DHM at WGRFC Timing of rising limbs well simulated (across a variety of DAs, with spatially distributed QPE) Outperforms the lumped model for irregularly shaped basins Full utilization of gridded QPE

120 120/59 Preliminary Study Conclusions Questions/Concerns about DHM at WGRFC Difficult to calibrate peak flows –As DA decreases, % error increases (i.e., QPE)? –QPE errors increase false peaks and compound peak-flow errors SAC model error compounds for each grid cell (diffused with lumped) Gridded data for all parameters may be too much complexity (i.e., zones?) QPE is the most sensitive input… spatial and magnitude errors explain false peaks and peak-flow errors

121 121/59 Expected Effect of Data Errors and Modeling Scale [Figure: relative error Ek (%) vs. relative sub-basin scale A/Ak, from lumped to distributed, at noise levels of 0%, 25%, 50%, and 75%. Simulation error is computed against the fully distributed 'truth', i.e., the simulation from a 100 sub-basin model with clean data.] Data errors (noise) may mask the benefits of fine-scale modeling. In some cases, they may make the results worse than lumped simulations. Graphic courtesy of Mike Smith, OHD

122 122/59 Integrating DHM into operations Forecast Mode Runs once per hour on a cron after the MPE run… no operational mods applied Generates QINH time series in the processed database View DMS and lumped simulations together in IFP… ensemble? Forecast issued using a TSCHN mod tracing the DHM simulation

123 123/59 Integration to IFP DHM OBS LMP

124 124/59 Model Application Spectrum – hypothetical use within WGRFC operations [Diagram: a spectrum from Lumped to DHM. Influencing factors: basin type (mainstem ↔ headwater), basin response (slow ↔ fast), basin shape (regular ↔ irregular), rainfall distribution (uniform ↔ non-uniform), flow volume (large ↔ small). Model ensemble tool?]

125 125/59 Operational Forecast Conclusions DHMS has great application potential for WGRFC Generally performs as well as or better than the lumped model for headwater basins Mainstem river basins have not been tested… we believe the lumped model will outperform DHM for mainstem river basins (i.e., channel routing and SAC model errors) Enhanced software is needed to fully integrate DHM into RFC operations

126 126/59 Where Do We Go From Here?  RFC Level  Continue working with OH on requirements for DHM operational software  Continue setup and calibration of test basins (i.e., hill country, urban areas); evaluate forecasts compared to lumped models  Investigate ability to forecast streamflow at interior points  Investigate methods to apply auto-calibration algorithms to the current DHM to enhance the expert calibration process

127 127/59 Contacts Please contact the following people with any questions relating to DMS: –OHD team: Seann Reed (Seann.Reed@noaa.gov), Lee Cajina (Lee.Cajina@noaa.gov) –WGRFC team: Bob Corby (Robert.Corby@noaa.gov), Paul McKee (Paul.Mckee@noaa.gov), Mike Shultz (Mike.Shultz@noaa.gov)

128 128/59 Extra slides

129 129/59 Model Comparison Summary Lumped Model –Uses a 6-hour time step –MAP computed; assumes uniform rainfall across the basin –Runoff applied to a unit hydrograph for the basin –Uses a single SAC-SMA parameter set across the entire basin –Peak flow can be missed for basins that crest in less than 6 hours Distributed Model –Uses a 1-hour time step –Uses 4 km x 4 km grids –Uses gridded QPE –SAC-SMA parameters estimated (from soil type, vegetation type, land use, slope, etc.) for each grid cell –Routing computed using the kinematic wave technique

130 130/59 Understanding sources of error [Diagram: gridded data sets with QPE spatial and magnitude errors across neighboring grid cells. Examples: QPE mis-located onto a neighboring cell whose SAC parameter is half the size (X vs. 2X); QPE amount over-estimated by double (A vs. 2A; Z vs. Zerr). Relative differences between grids amplify the error at its source.]
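A toy numeric version of the diagram's point, using a one-parameter saturation-excess bucket (all numbers are illustrative):

```python
# The same total rain either falls on the cell it was observed over or is
# mis-located onto a neighbor whose storage parameter is half the size.
# Even with a perfect rainfall total, the location error changes the
# runoff (false peak) substantially.
def bucket_runoff(rain_mm, capacity_mm):
    return max(rain_mm - capacity_mm, 0.0)   # saturation-excess overflow

cap = {"A": 40.0, "B": 20.0}                 # cell B's parameter is half of A's
rain = 50.0
correct = bucket_runoff(rain, cap["A"])      # rain lands on the right cell
shifted = bucket_runoff(rain, cap["B"])      # QPE mis-located onto cell B
print(correct, shifted)                      # 10.0 vs 30.0 -> 3x runoff error
```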

