
1 Investigation of Complex River System Operational Policy – Modeling Obstacles and Solutions
James VanShaar, Riverside Technology, inc. (TVA Flood Control Operations EIS Model)

At RTi, we've had the pleasure over the last year and a half of assisting TVA in the investigation of its reservoir operation policy. Today I'd like to discuss our RiverWare modeling efforts. Lead in – About two years ago, the TVA board of directors . . .

2 RESERVOIR OPERATIONS STUDY
Background
Purpose: To determine if changes in reservoir system operating policies could create greater overall public value

Tie in – About two years ago, the TVA board of directors decided to determine whether changes in the reservoir system operating policies could create greater overall public value. Lead in – Many of the dams built by TVA were multi-purpose from the beginning.

3 RESERVOIR OPERATIONS STUDY
Background
Purpose: To determine if changes in reservoir system operating policies could create greater overall public value
System:
Integrated system provides multiple benefits
Trade-offs create competing demands for use of water
Stakeholders have different views on priorities

Since then, TVA has been called upon to operate the reservoirs in an integrated fashion to provide a wide variety of public benefits. Sometimes, as we all know, the benefits don't support one another. Managing the resources results in complex trade-offs that must be weighed and measured. Depending on whom you talk with, any one benefit is more or less important than the others. Lead in – To achieve our purpose, therefore, . . .

4 RESERVOIR OPERATIONS STUDY
Background
Purpose: To determine if changes in reservoir system operating policies could create greater overall public value
System:
Integrated system provides multiple benefits
Trade-offs create competing demands for use of water
Stakeholders have different views on priorities
Plan:
Two-year Reservoir Operations Study initiated
Any and all uses of the water that flows through the reservoir system and all aspects of the current operating policies
No holds barred!

A two-year Reservoir Operations Study was initiated. This study would review the current operating policy in terms of priorities and achieved benefits. Anyone could suggest changes that they believed would increase the overall public value of the system. Certainly, these proposals would come from individuals or groups with a special interest and would attempt to increase the value achieved in that area of interest. Lead in – The value of the current system and any alternative system would be determined through investigation of the generalized areas of benefits provided.

5 RESERVOIR OPERATIONS STUDY
Background
Issues:
Flood risk
Water quality
Economic
Environmental
Cultural
Navigation
Water supply
Recreation (reservoir and downstream)
Hydropower and non-hydropower generation
Public values on the use of water
Support of other federal agencies

Here you can see the various benefit objectives that TVA addresses. For a given alternative operational policy, each of these areas will be considered. The proposed policy may be rejected if it is found to be too damaging to any one objective. If the policy causes redistribution of benefits between these objectives, someone will have to decide what to do. Ideally, a proposed policy would improve TVA's ability to address all of these areas. Realistically, even a policy that increases the overall benefits reaped will also redistribute benefits between these objectives. Our corner of the environmental impact study is, therefore, to investigate the effects of proposed policy changes on the flood risk at various points in the valley. Lead in – To accomplish this, a RiverWare model was developed that could simulate the effects of an operational policy, starting with the current or base case policy.

6 RESERVOIR OPERATIONS STUDY
Background
Base Case Simulation
99 years at 6-hour timestep: ~144k timesteps

The project prime, Michael Baker Corporation, and another sub-contractor, AMEC, analyzed the historical record and prepared estimated local flows at 55 points throughout the basin. These flows, 99 years at a 6-hour timestep (99 years × 365 days × 4 timesteps per day, or roughly 144,500 timesteps), drive the RiverWare model through 144 thousand timesteps. Lead in – To give you a sense of the magnitude . . .

7 RESERVOIR OPERATIONS STUDY
Background
Base Case Simulation
99 years at 6-hour timestep: ~144k timesteps

Suppose you put 144 thousand one-dollar bills end to end. You would have figured out a very expensive way to travel 12 ½ miles! No, really: if we gathered all the TVA river schedulers and somehow managed to have them schedule all the reservoirs in the system for each successive 6-hour period at the rate of one timestep every five minutes, it would take 500 days. Lead in –

8 RESERVOIR OPERATIONS STUDY
Background
Base Case Simulation
99 years at 6-hour timestep: ~144k timesteps
36 dams and 14 damage centers

You see, we are looking at 36 dams, with 14 damage centers to consider. A computer solving one timestep each second would take 40 hours. Lead in – Because the historical record can only suggest what might come in the future,

9 RESERVOIR OPERATIONS STUDY
Background
Base Case Simulation
99 years at 6-hour timestep: ~144k timesteps
36 dams and 14 damage centers
69 historic storms scaled 1.5x, 2.0x and 2.5x

additional analysis to extend what might happen was needed. From the historical record, 69 storms were identified for scaling. By scaling and re-simulating them, the storms provide plausible rainfall-runoff scenarios that will push any policy to its limit. Lead in – The analysis of the simulation results in terms of flood risk forms the basis against which alternative policies will be compared.
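Conceptually, the scaling step is a multiplier applied within each identified storm window before re-simulation. Whether the factor was applied to rainfall or directly to the estimated local inflows is not spelled out here, so this minimal Python sketch assumes the latter; the dates and series names are hypothetical.

```python
import pandas as pd

def scale_storm(local_flows: pd.Series, start, end, factor: float) -> pd.Series:
    """Return a copy of a local-inflow series with one storm window scaled.

    local_flows : 6-hour local inflows at one of the basin inflow points
    start, end  : bounds of the identified historic storm window
    factor      : 1.5, 2.0, or 2.5, as in the study
    """
    scaled = local_flows.copy()
    in_window = (scaled.index >= pd.Timestamp(start)) & (scaled.index <= pd.Timestamp(end))
    scaled[in_window] = scaled[in_window] * factor
    return scaled

# Hypothetical usage: scale one storm by 2.0x at a single inflow point
# flows_2x = scale_storm(flows, "1963-03-05", "1963-03-20", 2.0)
```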

10 RESERVOIR OPERATIONS STUDY
Background
Base Case Simulation
99 years at 6-hour timestep: ~144k timesteps
36 dams and 14 damage centers
69 historic storms scaled 1.5x, 2.0x and 2.5x
Lather. Rinse. . .

So, we then modify the model to represent another operational policy, and we simulate the model with the same local flow data. Lead in – And we repeat

11 RESERVOIR OPERATIONS STUDY
Background
Base Case Simulation
99 years at 6-hour timestep: ~144k timesteps
36 dams and 14 damage centers
69 historic storms scaled 1.5x, 2.0x and 2.5x
Alternative Scenarios
Modify for alternative operational policy
Repeat for 5+ alternative operational policies

this for some number of potentially feasible operational policy scenarios. So far we've completed 5. Lead in – To give you a sense of the magnitude . . .

12 RESERVOIR OPERATIONS STUDY
Background
Base Case Simulation
99 years at 6-hour timestep: ~144k timesteps
36 dams and 14 damage centers
69 historic storms scaled 1.5x, 2.0x and 2.5x
Alternative Scenarios
Modify for alternative operational policy
Repeat for 5+ alternative operational policies

we've produced over 105 thousand files in the carefully automated process, approximately 15 gigabytes of data, some of which is compressed. If you printed all this out, at a conservative 12-point font. . . Just kidding.

13 RESERVOIR OPERATIONS STUDY
Background
Base Case Simulation
99 years at 6-hour timestep: ~144k timesteps
36 dams and 14 damage centers
69 historic storms scaled 1.5x, 2.0x and 2.5x
Alternative Scenarios
Modify for alternative operational policy
Repeat for 5+ alternative operational policies
Analysis
Extract seasonal and annual peak flow / pool / stage
Compare alternatives against base case
If necessary, combine / revise alternatives. Repeat.

To analyze the results of the simulations: first, we extract seasonal and annual peak flow, pool, and/or stage; second, the alternatives are compared with the base case; and third, additional alternative policy scenarios are developed, combining the best of the previous scenarios.
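In the study this extraction was handled with TSTool; purely as an illustration of the step, here is a minimal pandas sketch that pulls annual and in-season peaks from one archived series. The flood-season months are a hypothetical choice, not the study's definition.

```python
import pandas as pd

def extract_peaks(ts: pd.Series, season_months=(11, 12, 1, 2, 3, 4)) -> pd.DataFrame:
    """Annual and seasonal peak values from a 6-hour flow, pool, or stage series."""
    annual = ts.groupby(ts.index.year).max().rename("annual_peak")
    in_season = ts[ts.index.month.isin(season_months)]
    seasonal = in_season.groupby(in_season.index.year).max().rename("seasonal_peak")
    return pd.concat([annual, seasonal], axis=1)

# Hypothetical usage, one series for one damage center and one alternative:
# peaks = extract_peaks(stage_series_for_one_damage_center)
```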

14 Model Design Major Concerns
Run-time
Model size
Accuracy of policy representation
Decision tracking: debugging, calibration, reproduction
Extensibility to alternatives

In designing the simulation and analysis, a number of issues were of great concern. Number 1, if we weren't careful, we could have the model simulating for days and days. Number 2, a study of this magnitude would push the limits of any desktop machine. Number 3, we wanted to ensure that our model was sufficiently true to the decision-making process the river schedulers follow. Number 4, the complexity ensured that we could easily bury ourselves with development errors and calibration questions about why the model acted the way it did. Number 5, we knew we would be revising the model and needed the flexibility to do that without having to re-traverse the base case development process. Lead in – With invaluable input from Brad Vickers, who also partnered with us in the initial stages of the study,

15 Model Design Power production rule set
Generic Tributary Algorithms
Applied to virtually all non-sloped power reservoirs
Foundation of all operation policy
Quarantined deviation code for non-conformist projects

we determined that a generic operational policy should be developed, then tweaked as necessary. Beginning with the tributary policy, a generic algorithm was developed based on the considerations accounted for at nearly all TVA tributary reservoirs, whether they were headwater reservoirs or not. The basic functions developed would provide the foundation for all operation policy. Once developed and tested on a number of tributary reservoirs, additional functionality could be added, carefully, to address the variations from the generic policy that occur at various dams. Incidentally, user-defined subbasins were used heavily as parameters in the application of the generic and deviation policy. Lead in – With simulation of the tributaries well under way . . .
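The "generic algorithm plus quarantined deviations" idea is easiest to see in miniature. The sketch below is not the RiverWare rule logic itself, just a Python illustration of the pattern: one tested policy function driven by per-reservoir parameters, with project-specific exceptions isolated in a clearly marked hook. All names and numbers are hypothetical.

```python
# Illustration of the design pattern only; the actual policy is written as RiverWare rules.

TRIB_PARAMS = {
    # reservoir: per-project parameters driving the one generic rule (values hypothetical)
    "ReservoirA": {"flood_guide_elev": 1020.0, "max_ramp_cfs": 5000.0},
    "ReservoirB": {"flood_guide_elev": 1071.0, "max_ramp_cfs": 4000.0},
}

def generic_tributary_release(res: str, pool_elev: float, prev_release: float) -> float:
    """Generic tributary policy: increase releases when the pool is above its flood guide."""
    p = TRIB_PARAMS[res]
    if pool_elev > p["flood_guide_elev"]:
        release = prev_release + p["max_ramp_cfs"]
    else:
        release = prev_release
    return apply_quarantined_deviation(res, release, pool_elev)

def apply_quarantined_deviation(res: str, release: float, pool_elev: float) -> float:
    """Project-specific exceptions live here, apart from the trusted generic code."""
    if res == "ReservoirB" and pool_elev > 1075.0:   # hypothetical non-conformist behavior
        release *= 1.1
    return release
```

The point of the pattern is that an alternative scenario changes the parameter table, not the trusted generic code.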

16 Model Design Power production rule set
Mainstem Fixed Rule (sloped-power reservoir)
Acceptable discharge vs. pool elevation operational points

we were ready to address the operations of the mainstem reservoirs. After a number of discussions with TVA, it was determined that the simulated operating policy might be simplified significantly while preserving the integrity of the results. That is, we might be able to boil mainstem reservoir policy down into a dynamic curve that specifies acceptable points of outflow vs. headwater elevation. This took some serious rule coding to implement, but turned out to be a major breakthrough! Lead in – Eventually, this dynamic fixed rule curve, which is a bit of an oxymoron (it is dynamic from timestep to timestep, but fixed for a given timestep's decision), was further modified.
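In code terms, the fixed rule reduces to interpolation on that timestep's curve: given the current headwater elevation, look up the acceptable discharge. A minimal sketch with entirely hypothetical operational points; in the actual model the curve itself is rebuilt from timestep to timestep.

```python
import numpy as np

# Hypothetical (pool elevation ft, acceptable discharge cfs) points for one timestep's curve
CURVE_ELEV = np.array([675.0, 680.0, 682.0, 684.0, 686.0])
CURVE_FLOW = np.array([30_000.0, 60_000.0, 120_000.0, 200_000.0, 300_000.0])

def fixed_rule_outflow(headwater_elev: float) -> float:
    """Interpolate the acceptable outflow for the current headwater elevation."""
    return float(np.interp(headwater_elev, CURVE_ELEV, CURVE_FLOW))

# e.g. fixed_rule_outflow(683.0) -> 160000.0 cfs, halfway between the 682 and 684 ft points
```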

17 Model Design Power production rule set
Mainstem Fixed Rule (sloped-power reservoir)
Acceptable discharge vs. pool elevation operational points
Recovery mode
Fixed rule curve abandonment

A recovery mode was added to simulate the river scheduler's willingness to release more for a given pool elevation following an event peak. And just when we thought everything was about ready for production runs, the scaled inflows proved that we needed to add logic simulating a river scheduler's ability to foresee conditions warranting abandonment of the fixed rule curve. If things were likely to get very severe, that is, where concerns about the top of the dam would govern all action, the curve went out the window and high pool elevations were no longer required before large outflows were prescribed. Lead in – The generic tributary rules and the mainstem fixed rule algorithms were organized into one rule set.

18 Model Design Power production rule set
This rule set, called the power production rule set, provides all the policy needed to operate 3 of the 4 models that combined represent the entire river system. The other model, the non-power model, runs using a simplified derivative of this set. Activation or deactivation of certain policy groups eliminates unnecessary logic for a given model. Lead in – The policy rule set as designed and implemented accomplishes a number of things besides simple representation of policy.

19 Model Design Power production rule set
Results of rule set design:
Carefully tested, compact, reused code base
Eliminated re-firing of rules
Decision variables stored
Limited re-solution of objects
Individual policy relegated to parameters, not logic

Number 1, it focused all policy into a carefully tested, compact, reusable code base. Number 2, the rules were written and controlled to eliminate their re-firing; only in this way could we hope to track the detailed interaction of rules and control decision-making run time. Number 3, the rule logic preserved, on data objects, the values of key decision variables used in arriving at a prescribed reservoir outflow, for review and debugging. Number 4, values were placed on simulation objects only as absolutely necessary, and then as many at a time as possible, thus limiting re-solution run time. Number 5, application of operation policy was defined by parameters, not 31 flavors of the "follow the guide curve" function. Code could be trusted. Alternatives could be run without the difficulty of defining additional rule logic and testing and debugging it through all the possible scenarios in which it might be applied.

20 Model Application System Segmentation
Space

Even with the ingenuity applied in rule set development, the complexity and size of the system suggested, or rather required, that it be segmented. First, as I alluded to earlier, the Tennessee River System was broken into 4 models, by geography, operations and inter-dependencies. Lead in – Here we see . . .

21 Model Application System Segmentation
Upper Tributary
Non-Power Tributary
Upper Mainstem
Lower Mainstem

The four models: the Non-Power Tributary, the Upper Tributary, the Upper Mainstem and the Lower Mainstem models. The Upper Tributary model contains all the tributary power reservoirs whose operation does not depend on mainstem conditions. The Upper Mainstem model includes all tributary power reservoirs whose operation depends on mainstem conditions, plus the mainstem reservoirs required for the tributary operation. The Non-Power Tributary model's name explains things sufficiently. The Lower Mainstem model includes mainstem reservoirs whose operations are not inter-dependent with any tributary operations.

22 Model Application System Segmentation
Space
Four models
Reuse of power rule set
Time

In addition to segmentation in space, we segmented in time. By time, I mean we may run a series of 10-year runs to complete all 99 years of simulation. To make this happen, careful handling of model periods and states is necessary. You can't provide too many or too few initial conditions. Certain decision variables must be carried over, for non-constant periods of time, from one model to the next to ensure seamless continuity. You need to ensure that your model runs a little longer than your archival period, so that the initial conditions for the next model period are available and untarnished by end-of-run concerns. We really tried to avoid having to segment in time. Ultimately, it wasn't a hardware limitation but a couple of RiverWare bugs that forced us to segment in time. Incidentally, these bugs have since been fixed. Nevertheless, resigning ourselves to the necessity of segmentation in time and building the structure to accomplish it really turned out to be a boon to us in the end. Lead in – Now, think about 4 models, each running some number of times to complete 99 years of data, plus many design storms. It doesn't take long to realize that you've got to establish some methodology to run and track it all. Our control structure begins with a piece of software we call TSTool.
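A minimal sketch of the time-segmentation bookkeeping described here: generate successive run periods that simulate a little past their archival window, so the next period's initial conditions are available and clean. The 10-year length matches the example above; the water-year boundaries, the three-month buffer, and the specific start and end years (anchored to the October 1902 example used on the next slide) are assumptions for illustration.

```python
from datetime import datetime

def run_periods(first_water_year=1903, last_water_year=2001, years_per_run=10):
    """Yield (run_start, archive_end, run_end) for successive model runs.

    Each run is archived through Sep 30 but simulated through Dec 31, so the
    following run's Oct 1 initial states are untouched by end-of-run effects.
    """
    year = first_water_year
    while year <= last_water_year:
        last_archived = min(year + years_per_run - 1, last_water_year)
        run_start = datetime(year - 1, 10, 1)            # water year starts Oct 1
        archive_end = datetime(last_archived, 9, 30)
        run_end = datetime(last_archived, 12, 31)        # assumed ~3-month buffer
        yield run_start, archive_end, run_end
        year = last_archived + 1

# First tuple: run Oct 1, 1902 - Dec 31, 1912, archive only through Sep 30, 1912;
# the next run then starts Oct 1, 1912 from the archived states.
```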

23 Model Application Control and Data Management
TSTool allows you to easily access, manipulate, view and analyze one or more time series. It operates either interactively or in batch mode. For the EIS, it was used extensively in the preparation of the original 99 years of data. It was also used closer to the actual modeling to prepare initial state and driving time series data, as well as for post-processing and time series archival work. Essentially, this flexible program provided 80% of what is normally included in input and output DMI executables. Lead in – The other 20%, and the senior controller of the process, . . .

24 Model Application Control and Data Management
is a Perl script. The script controls the entire simulation process, which I will describe shortly, and is actually what is run to get the simulation for a given model, for all 99 years and the design storms, for a given alternative. This Perl script was executed using the freely available, Windows-compatible ActiveState Perl interpreter. Lead in – Flow of data in and out of the various RiverWare models, for each scenario and run period, . . .

25 Model Application Control and Data Management
required careful consideration in terms of temporary and permanent data and model storage. This directory structure design, extensible to any number of alternatives, provides space for driving data, archived time series, temporary storage of input states and output results, log files and models. Lead in – With these pieces in place . . .

26 Model Application Control and Data Management
Control Algorithm: For each successive run period –
Modify TSTool and RiverWare batch control files
Run TSTool initialization commands
  Access archived data
  Locate RiverWare input in expected directory
Run RiverWare using its control file
  Import data
  Simulation
  Export data
  Save model with new name
Run TSTool archival commands
  Store results in archive time series files

the loops in the Perl script are run. Let's say our first run period goes from October 1902 through September 1912. To accomplish this, TSTool and RiverWare batch control files are modified based on the dates, the level of local inflow and the alternative. Batch calls to TSTool, using the modified control files, access archive data to prepare states to start Oct. 1st, 1902 and the local inflows to run the entire period. After the states and driving data are in the expected directory, a batch call to RiverWare imports the data, runs the simulation, exports data into a temporary results directory and saves the model file with results to a model archival directory. Another call to TSTool stores the results in an archive time series, from which a run beginning Oct. 1912 would get its initial states.
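The controller in the study is a Perl script driving TSTool and RiverWare in batch mode; the sketch below re-expresses that loop in Python for readability. Command names, file names, the directory layout, and the token-replacement scheme are all placeholders, not the study's actual setup.

```python
import subprocess
from pathlib import Path

def run_one_period(scenario: str, run_start, archive_end, run_end) -> None:
    """One pass of the control loop: one run period of one alternative."""
    work = Path("work") / scenario                       # hypothetical directory layout

    # 1. Rewrite TSTool and RiverWare batch control files for this period's dates.
    write_control_files(work, scenario, run_start, archive_end, run_end)

    # 2. TSTool initialization: pull initial states and local inflows from the archive
    #    into the directory where the RiverWare model expects its input.
    subprocess.run(["tstool", "-commands", str(work / "init.tstool")], check=True)

    # 3. RiverWare batch run: import data, simulate, export results to a temporary
    #    directory, and save the model under a new name (driven by its control file).
    subprocess.run(["riverware", "-batch", str(work / "run.rcl")], check=True)

    # 4. TSTool archival: append this period's results to the archive time series,
    #    from which the next run period draws its initial states.
    subprocess.run(["tstool", "-commands", str(work / "archive.tstool")], check=True)

def write_control_files(work: Path, scenario: str, run_start, archive_end, run_end) -> None:
    """Fill date and scenario tokens in command-file templates (placeholder scheme)."""
    for name in ("init.tstool", "run.rcl", "archive.tstool"):
        text = (work / "templates" / name).read_text()
        text = (text.replace("@START@", run_start.strftime("%Y-%m-%d"))
                    .replace("@ARCHIVE_END@", archive_end.strftime("%Y-%m-%d"))
                    .replace("@END@", run_end.strftime("%Y-%m-%d"))
                    .replace("@SCENARIO@", scenario))
        (work / name).write_text(text)
```

With check=True, any non-zero exit from TSTool or RiverWare raises immediately, which is where the failure-catching described on the next slide would hook in.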

27 Model Application Control and Data Management
Design Storms
Apply revised control algorithm for each storm
Revision includes consideration for:
  Appropriate initial data
  Storage location of new archival data

Once the 99 years are complete, the special storm periods are run. This is accomplished using virtually the same algorithm, with extra logic to ensure the appropriate states, driving data and archival locations are used. Any failures in simulation or data processing are caught and relayed to where I can monitor them from anywhere with an internet connection: the Perl script analyzes output from the models and posts certain information to our ftp site if a failure occurs. This way I can check that my model is still running before going to bed, without going to the office. We couldn't afford to waste overnight computational time. Eventually, RealVNC software was applied to allow control of my Windows machines from home. Lead in – While the approach was developed begrudgingly because of disappearing RiverWare applications, great things have resulted.
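The failure-monitoring piece is simple to sketch as well: scan the run logs for error markers and, if any are found, push a small status file to an FTP site that can be checked from anywhere. The host, credentials, log locations, and error markers below are placeholders, not the study's actual setup.

```python
from ftplib import FTP
from pathlib import Path

def report_failure_if_any(log_dir: str, host: str, user: str, password: str) -> bool:
    """Scan run logs for error markers; upload a status note to an FTP site if any are found."""
    errors = []
    for log in Path(log_dir).glob("*.log"):
        for line in log.read_text(errors="ignore").splitlines():
            if "ERROR" in line or "ABORT" in line:       # placeholder markers
                errors.append(f"{log.name}: {line.strip()}")
    if not errors:
        return False

    status = Path(log_dir) / "run_status.txt"
    status.write_text("\n".join(errors))
    with FTP(host) as ftp:                               # placeholder site and credentials
        ftp.login(user, password)
        with open(status, "rb") as fh:
            ftp.storbinary(f"STOR {status.name}", fh)
    return True
```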

28 Model Application Results of Approach
Flexibility
Debugging
Event isolation
Run-time
Consistency throughout alternatives
Built-in archival of runs / models / decisions
Elimination of model size concerns

The control algorithm provides incredible flexibility for debugging and isolation of a simulated event. Without it, we often had to run some number of years from a known starting date before we would arrive at the point where a policy feature became active and could be tested or debugged. Without it, the scaled design storm work was almost beyond our grasp: we would have had to run a continuous 99 years with the scaled design storms embedded in the inflow time series and subsequently scrutinize each project's states immediately prior to each storm's beginning to ensure independence of events. The segmentation of space and time allowed us to achieve more than a 25% improvement in model run time. We can run upstream models for a few years and start downstream models on another machine at the same time; we didn't have to run 99 years upstream before starting downstream. The scripts and command files ensure that each alternative is treated identically. The data, the models and the decision variables are stored for subsequent review when TVA or another stakeholder wonders "why did this happen?" Model size became a non-issue: it would be very easy to break the run into smaller run-period pieces, if necessary.

29 Alternative Scenarios: Flood Frequency and Damage Curves
[Figure: flood frequency and damage curves for an alternative operational scenario – regulated values and dollars of damage for Alternative X plotted against percent exceedance]

I've spent most of this presentation time reviewing the what and how of our modeling effort. To give you a sense of where this was all headed, here you can see an example of what results might look like for some Alternative Scenario X. Hopefully the plot would provide an understanding of the likely impacts of Alternative X on flood frequency and expected flood damages to those who must decide whether and how to change operating policy. In the end, RTi has developed models and an environment that have allowed the Flood Risk portion of the TVA EIS to proceed and meet or exceed all of TVA's expectations. You can imagine how rewarding it was to be delivering modeling results and hear the Manager of River Scheduling at TVA, Greg Lowe, express his appreciation and tell you that he honestly didn't think what he asked us to do could be done. We look forward to many more successful partnerships with TVA, and with other groups and agencies, in applying RiverWare.
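For reference, the flood frequency axis of a plot like this can be built directly from the extracted annual peaks with simple plotting positions; here is a minimal sketch using the Weibull formula, rank / (n + 1). Pairing each peak with a stage-damage relationship to get dollars of damage is not shown.

```python
import numpy as np

def percent_exceedance(annual_peaks):
    """Return (peaks sorted largest-first, percent exceedance) via Weibull plotting positions."""
    peaks = np.sort(np.asarray(annual_peaks, dtype=float))[::-1]
    ranks = np.arange(1, peaks.size + 1)
    exceedance = 100.0 * ranks / (peaks.size + 1)
    return peaks, exceedance

# Hypothetical usage: curves for the base case and one alternative at a damage center
# base_q, base_p = percent_exceedance(base_case_peaks)
# altx_q, altx_p = percent_exceedance(alternative_x_peaks)
```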

30 Thank you for your time and attention.
Conclusion
Thank you for your time and attention. Any questions?

31 Thank you. Fall Creek Falls, TN

