1
Computational Stochastic Optimization: Bridging communities
October 25, 2012
Warren Powell, CASTLE Laboratory, Princeton University
http://www.castlelab.princeton.edu
© 2012 Warren B. Powell, Princeton University
2
Outline
From stochastic search to dynamic programming
From dynamic programming to stochastic programming
3
From stochastic search to DP
Classical stochastic search
»The prototypical stochastic search problem is posed as
  min_x E[F(x, W)],
where x is a deterministic parameter and W is a random variable.
»Variations:
  Expectation cannot be computed
  Function evaluations may be expensive
  Random noise may be heavy-tailed (e.g. rare events)
  Function F may or may not be differentiable.
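As a concrete illustration, a minimal stochastic gradient sketch for min_x E[F(x, W)]; the quadratic F and the normal distribution of W are illustrative assumptions, not part of the talk:

```python
import numpy as np

# A minimal sketch of stochastic gradient search for min_x E[F(x, W)].
# F and the distribution of W are illustrative assumptions.

def F(x, w):
    """Illustrative objective: a noisy quadratic."""
    return (x - w) ** 2

def grad_F(x, w):
    """Gradient of F with respect to x for a fixed sample w."""
    return 2.0 * (x - w)

rng = np.random.default_rng(0)
x = 0.0
for n in range(1, 1001):
    w = rng.normal(loc=3.0, scale=1.0)   # sample W
    alpha = 1.0 / n                      # diminishing stepsize
    x = x - alpha * grad_F(x, w)         # stochastic gradient step

print(x)  # converges toward E[W] = 3 for this choice of F
```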
4
From stochastic search to DP
Imagine that our policy is given by a parameterized rule X^π(S|θ). Instead of estimating the value of being in a state, what if we tune θ to get the best performance?
»This is known as policy search. It builds on classical fields such as stochastic search and simulation-optimization.
»Very stable, but it is generally limited to problems with a much smaller number of parameters.
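A minimal sketch of policy search by simulation, assuming a made-up one-parameter threshold policy and a toy storage simulator (neither is the policy nor the benchmark used in the talk):

```python
import numpy as np

# Sketch of policy search: tune a scalar parameter theta by simulating
# the policy and keeping the value that performs best on average.
# The toy storage simulator and threshold policy are illustrative assumptions.

def simulate_policy(theta, T=200, seed=0):
    """Simulate a threshold policy: buy one unit when the price is below theta,
    sell everything stored when the price is above theta."""
    rng = np.random.default_rng(seed)
    storage, profit = 0.0, 0.0
    for _ in range(T):
        price = rng.uniform(10.0, 50.0)
        if price < theta and storage < 1.0:
            storage += 1.0
            profit -= price
        elif price >= theta and storage > 0.0:
            profit += price * storage
            storage = 0.0
    return profit

# Derivative-free search over a grid of candidate parameters.
thetas = np.linspace(15.0, 45.0, 31)
values = [np.mean([simulate_policy(th, seed=s) for s in range(20)]) for th in thetas]
best = thetas[int(np.argmax(values))]
print("best threshold:", best)
```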
5
From stochastic search to DP
The next slide illustrates experiments on a simple battery storage problem. We developed 20 benchmark problems which we could solve optimally using classical methods from the MDP literature. We then compared four policies:
»A myopic policy (which did not store energy)
»Two policies that use Bellman error minimization:
  LSAPI – least squares, approximate policy iteration
  IVAPI – the same as LSAPI, but using instrumental variables
»Direct – the same policy as LSAPI, but using policy search directly to find the regression vector.
6
From stochastic search to DP
Performance using Bellman error minimization (light blue and purple bars).
7
Optimal learning
Now assume we have five choices, with uncertainty in our belief about how well each one will perform. If you can make one measurement, which would you measure?
8
Optimal learning
Policy search process:
»Choose a parameter vector θ.
»Simulate the policy along a sample path ω to get a noisy estimate of its value, F̂(θ, ω).
9
Optimal learning
At first, we believe that the reward of alternative x is normally distributed with mean θ⁰_x and variance (σ⁰_x)². But we measure alternative x and observe W¹_x. Our beliefs change: the posterior mean is a precision-weighted average of the prior mean and the observation, and the posterior variance shrinks. Thus, our beliefs about the rewards are gradually improved over measurements.
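A minimal sketch of this belief-updating loop for Gaussian (normal-normal) beliefs; the prior means, variances, measurement noise, and the random choice of what to measure are illustrative assumptions:

```python
import numpy as np

# Sketch of Bayesian updating of Gaussian beliefs about five alternatives.
# Priors, the measurement noise, and the "truth" are illustrative assumptions.

mu = np.array([5.0, 4.0, 6.0, 3.0, 5.5])      # prior means theta^0_x
var = np.array([4.0, 4.0, 4.0, 4.0, 4.0])     # prior variances
noise_var = 1.0                               # measurement noise variance

def update(x, w, mu, var):
    """Normal-normal conjugate update after observing w for alternative x."""
    beta_prior, beta_w = 1.0 / var[x], 1.0 / noise_var
    mu[x] = (beta_prior * mu[x] + beta_w * w) / (beta_prior + beta_w)
    var[x] = 1.0 / (beta_prior + beta_w)
    return mu, var

rng = np.random.default_rng(1)
truth = np.array([5.2, 3.8, 6.5, 2.9, 5.9])
for n in range(10):
    x = rng.integers(5)                       # here: measure a random alternative
    w = truth[x] + rng.normal(0.0, np.sqrt(noise_var))
    mu, var = update(x, w, mu, var)

print(mu.round(2), var.round(2))
```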
10
Optimal learning
Now assume we have five choices, with uncertainty in our belief about how well each one will perform. If you can make one measurement, which would you measure? (Figure: a case where the measurement produces no improvement.)
11
Optimal learning
The value of learning is that it may change your decision. Now assume we have five choices, with uncertainty in our belief about how well each one will perform. If you can make one measurement, which would you measure? (Figure: a case where the measurement leads to a new solution.)
12
Optimal learning
An important problem class involves correlated beliefs – measuring one alternative tells us something about other alternatives. (Figure: measure here ... these beliefs change too.)
13
Optimal learning with a physical state
The knowledge gradient
»The knowledge gradient is the expected value of a single measurement x, given by
  ν^KG_x = E[ max_y θ^{n+1}_y(x) − max_y θ^n_y | S^n ],
where S^n is the knowledge state, y is the implementation decision, θ^{n+1}(x) is the updated knowledge state given measurement x, the expectation is over the different measurement outcomes, max_y θ^n_y is the optimization problem given what we know, max_y θ^{n+1}_y(x) is the new optimization problem, and their difference is the marginal value of measuring x (the knowledge gradient).
»The knowledge gradient policy chooses the measurement with the highest marginal value.
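A minimal sketch that estimates this definition directly by Monte Carlo, simulating the outcome of a single measurement under Gaussian beliefs; the belief state and noise variance are illustrative assumptions:

```python
import numpy as np

# Monte Carlo estimate of the knowledge gradient:
# KG(x) = E[ max_y theta_new_y - max_y theta_y | measure x ].
# Prior beliefs and measurement noise are illustrative assumptions.

rng = np.random.default_rng(0)
theta = np.array([5.0, 4.0, 6.0, 3.0, 5.5])    # belief means
sigma2 = np.array([4.0, 4.0, 1.0, 4.0, 4.0])   # belief variances
noise_var = 1.0

def kg_monte_carlo(x, n_samples=100_000):
    """Estimate the marginal value of one measurement of alternative x."""
    best_now = theta.max()
    # Predictive distribution of the observation of x under our beliefs.
    w = rng.normal(theta[x], np.sqrt(sigma2[x] + noise_var), size=n_samples)
    # Posterior mean of x for each simulated outcome (other means are unchanged).
    beta_n, beta_w = 1.0 / sigma2[x], 1.0 / noise_var
    theta_new_x = (beta_n * theta[x] + beta_w * w) / (beta_n + beta_w)
    best_other = np.max(np.delete(theta, x))
    best_new = np.maximum(theta_new_x, best_other)
    return np.mean(best_new - best_now)

print([round(kg_monte_carlo(x), 3) for x in range(5)])
```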
14
The knowledge gradient
Computing the knowledge gradient for Gaussian beliefs
»The change in variance can be found to be σ̃²_x = (σ^n_x)² − (σ^{n+1}_x)².
»Next compute the normalized influence ζ_x = −|θ^n_x − max_{y≠x} θ^n_y| / σ̃_x, where the max over y ≠ x is the comparison to the other alternatives.
»Let f(ζ) = ζ Φ(ζ) + φ(ζ), where Φ and φ are the standard normal distribution and density functions.
»The knowledge gradient is computed using ν^KG_x = σ̃_x f(ζ_x).
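The same quantity in closed form, as a short sketch; the belief means, variances, and measurement noise are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

# Sketch of the knowledge gradient for independent Gaussian beliefs.
# The prior means/variances and noise variance below are illustrative.

def knowledge_gradient(theta, sigma2, noise_var):
    """Return the KG value of measuring each alternative once."""
    sigma2 = np.asarray(sigma2, dtype=float)
    # Change in variance from one measurement of x.
    sigma_tilde = np.sqrt(sigma2 - 1.0 / (1.0 / sigma2 + 1.0 / noise_var))
    kg = np.zeros_like(sigma_tilde)
    for x in range(len(theta)):
        best_other = np.max(np.delete(theta, x))    # comparison to other alternatives
        zeta = -abs(theta[x] - best_other) / sigma_tilde[x]
        f = zeta * norm.cdf(zeta) + norm.pdf(zeta)
        kg[x] = sigma_tilde[x] * f
    return kg

theta = np.array([5.0, 4.0, 6.0, 3.0, 5.5])
sigma2 = np.array([4.0, 4.0, 1.0, 4.0, 4.0])
print(knowledge_gradient(theta, sigma2, noise_var=1.0))
# The KG policy measures the argmax of these values.
```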
15
Optimizing storage
After four measurements:
»Whenever we measure at a point, the value of another measurement at the same point goes down. The knowledge gradient guides us to measuring areas of high uncertainty.
(Figure panels: estimated value; knowledge gradient. Annotations: measurement, value of another measurement at the same location, new optimum.)
16
Optimizing storage
After five measurements. (Figure panels: estimated value; knowledge gradient; after measurement.)
17
Optimizing storage
After six samples. (Figure panels: estimated value; knowledge gradient.)
18
Optimizing storage
After seven samples. (Figure panels: estimated value; knowledge gradient.)
19
Optimizing storage
After eight samples. (Figure panels: estimated value; knowledge gradient.)
20
Optimizing storage
After nine samples. (Figure panels: estimated value; knowledge gradient.)
21
Optimizing storage
After ten samples. (Figure panels: estimated value; knowledge gradient.)
22
Optimizing storage
After ten samples, our estimate of the surface. (Figure panels: estimated value; true value.)
23
From stochastic search to DP
Performance using direct policy search (yellow bars).
24
From stochastic search to DP
Notes:
»Direct policy search can be used to tune the parameters of any policy:
  The horizon for a deterministic lookahead policy
  The sampling strategy when using a stochastic lookahead policy
  The parameters of a parametric policy function approximation
»But there are some real limitations. It can be very difficult to obtain gradients of the objective function with respect to the tunable parameters, and it is very hard to do derivative-free stochastic search with large numbers of parameters. This limits our ability to handle time-dependent policies.
25
Outline
From stochastic search to dynamic programming
From dynamic programming to stochastic programming
26
From DP to stochastic programming
The slides that follow start from the most familiar form of Bellman’s optimality equation for discrete states and actions. We then create a bridge to classical formulations used in stochastic programming. Along the way, we show that stochastic programming is actually a lookahead policy, which solves a reduced dynamic program over a shorter horizon with a restricted representation of the random outcomes.
27
From DP to stochastic programming
All dynamic programming starts with Bellman’s equation:
  V(s) = max_a ( C(s, a) + γ Σ_{s'} P(s' | s, a) V(s') ).
All problems in stochastic programming are time-dependent, so we write it as
  V_t(s) = max_a ( C_t(s, a) + γ Σ_{s'} P_t(s' | s, a) V_{t+1}(s') ).
We cannot compute the one-step transition matrix, so we first replace it with the expectation form:
  V_t(S_t) = max_a ( C_t(S_t, a) + γ E[ V_{t+1}(S_{t+1}) | S_t, a ] ).
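A minimal sketch of this time-dependent recursion for a tiny discrete problem; the states, actions, contributions, and transition probabilities are randomly generated for illustration:

```python
import numpy as np

# Backward dynamic programming for a tiny finite-horizon problem.
# States, actions, contributions C and transition probabilities P are illustrative.

T = 5                       # horizon
S, A = 3, 2                 # number of states and actions
rng = np.random.default_rng(0)
C = rng.uniform(0, 1, size=(T, S, A))          # C_t(s, a)
P = rng.dirichlet(np.ones(S), size=(S, A))     # P(s' | s, a), shape (S, A, S)

V = np.zeros((T + 1, S))                       # V_T(s) = 0 terminal value
policy = np.zeros((T, S), dtype=int)
for t in reversed(range(T)):
    # Q_t(s, a) = C_t(s, a) + sum_{s'} P(s'|s, a) V_{t+1}(s')
    Q = C[t] + P @ V[t + 1]
    V[t] = Q.max(axis=1)
    policy[t] = Q.argmax(axis=1)

print(V[0], policy[0])
```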
28
From DP to stochastic programming
Implicit in the value function is an assumption that we are following an optimal policy. We are going to temporarily assume that we are following a fixed policy π. We cannot compute the expectation, so we replace it with a Monte Carlo sample. We use this opportunity to make the transition from discrete actions a to vectors x.
29
From DP to stochastic programming
The state consists of two components:
»The resource vector R_{t'}, which is determined by the prior decisions.
»The exogenous information. In stochastic programming, it is common to represent the exogenous information as the entire history (starting at time t), which we might write as h_{t'} = (W_{t+1}, …, W_{t'}). While we can always use the history as the information state, in most applications the exogenous information variable I_{t'} is lower dimensional.
»We will write S_{t'} = (R_{t'}, I_{t'}), where we mean that I_{t'} has the same information content as h_{t'}.
30
From DP to stochastic programming
We are now going to drop the reference to the generic policy, and instead reference the decision vector x_{t'}(h_{t'}) indexed by the history (alternatively, by the node in the scenario tree).
»Note that this is equivalent to a lookup table representation using simulated histories.
»We write this in the form of a policy, and also make the transition to the horizon t,…,t+H. Here, the decision is a vector over all histories. This is a lookahead policy, which optimizes the lookahead model.
31
From DP to stochastic programming
We make one last tweak to get it into a more compact form:
»In this formulation, we let x_{t'}(ω) denote the decision at time t' along sample path ω.
»We are now writing a vector for each sample path in the sample Ω̂. This introduces a complication that we did not encounter when we indexed each decision by a history: we are now letting the decision “see” the future.
32
From DP to stochastic programming
When we indexed by histories, there might be one history at time t (since this is where we are starting), 10 histories for time t+1 and 100 histories for time t+2, giving us 111 vectors to determine. When we have a vector for each sample path, then we have 100 vectors for each of times t, t+1 and t+2, giving us 300 vectors. When we index on the sample path, we are effectively letting the decision “see” the entire sample path.
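A small sketch that reproduces this counting argument for a tree with the branching described above (1 node at time t, 10 children per node):

```python
# Count decision vectors when indexing by history (tree nodes) versus by
# sample path, using the branching structure from the slide: 1 node at t,
# 10 children per node, so 10 nodes at t+1 and 100 at t+2 (100 sample paths).

branching = [1, 10, 10]                 # branching factor applied at t, t+1, t+2
nodes_per_stage = []
n = 1
for b in branching:
    n *= b
    nodes_per_stage.append(n)           # [1, 10, 100]

history_indexed = sum(nodes_per_stage)              # 1 + 10 + 100 = 111 vectors
n_paths = nodes_per_stage[-1]                       # 100 sample paths
path_indexed = n_paths * len(nodes_per_stage)       # 100 per stage -> 300 vectors

print(history_indexed, path_indexed)    # 111 300
```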
33
From DP to stochastic programming
To avoid the problem of letting a decision see into the future, we create sets of all the sample paths with the same history, H_{t'}(h_{t'}) = {ω : the history of ω through time t' is h_{t'}}. We now require that all decisions with the same history be the same: x_{t'}(ω) = x̄_{t'}(h_{t'}) for all ω in H_{t'}(h_{t'}).
»These are known in stochastic programming as nonanticipativity constraints.
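A minimal sketch of bundling sample paths by their shared history and forcing the decisions in each bundle to agree (here by projecting onto the bundle average); the sampled paths and decisions are illustrative assumptions:

```python
from collections import defaultdict
import numpy as np

# Sketch of nonanticipativity: group sample paths that share the same
# history through time t', and force their decisions to agree.
# The sampled paths and initial decisions below are illustrative.

rng = np.random.default_rng(0)
T = 3
omega = [tuple(rng.integers(0, 2, size=T)) for _ in range(100)]   # sample paths
x = {(w, tp): rng.uniform() for w in omega for tp in range(T)}    # x_{t'}(omega)

for tp in range(T):
    bundles = defaultdict(list)
    for w in omega:
        bundles[w[:tp]].append(w)        # paths sharing the history through t'
    for history, paths in bundles.items():
        # Enforce x_{t'}(omega) = xbar_{t'}(h_{t'}) by projecting onto the average.
        xbar = np.mean([x[(w, tp)] for w in paths])
        for w in paths:
            x[(w, tp)] = xbar

# At tp = 0 every path shares the (empty) history, so all time-t decisions agree.
print(len({round(x[(w, 0)], 10) for w in omega}))   # 1
```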
34
From DP to stochastic programming
A scenario tree
»A node in the scenario tree is equivalent to a history.
35
From DP to stochastic programming
This is a lookahead policy that solves the lookahead model directly, by optimizing over all decisions over all time periods at the same time. Not surprisingly, this can be computationally demanding. The lookahead model is, first and foremost, a dynamic program (although simpler than the original dynamic program), and can be solved using Bellman’s equation (but exploiting convexity).
36
From DP to stochastic programming
We are going to start with Bellman’s equation as it is used in the stochastic programming community (e.g. by Shapiro and Ruszczynski):
  Q_t(x_{t−1}, ξ_[t]) = min_{x_t} ( c_t x_t + E[ Q_{t+1}(x_t, ξ_[t+1]) | ξ_[t] ] ).
Translation: Q_t is the value function, the prior decision x_{t−1} plays the role of the resource state, and the history of the random process ξ_[t] = (ξ_1, …, ξ_t) plays the role of the information state.
37
From DP to stochastic programming
We first modify the notation to reflect that we are solving the lookahead model.
»This gives us the same recursion, written in terms of the variables of the lookahead model over the horizon t,…,t+H.
38
From DP to stochastic programming
For our next step, we have to introduce the concept of the post-decision state. Our resource vector evolves according to
  R_{t'} = B_{t'−1} x_{t'−1} + R̂_{t'},
where R̂_{t'} represents exogenous (stochastic) input at time t' (when we are solving the lookahead model starting at time t). Now define the post-decision resource state R^x_{t'}; this is the state immediately after a decision is made. The post-decision state is given by
  R^x_{t'} = B_{t'} x_{t'},
so that R_{t'+1} = R^x_{t'} + R̂_{t'+1}.
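A minimal sketch of these dynamics, assuming an illustrative linear transition matrix B, decision vector, and Poisson exogenous input:

```python
import numpy as np

# Sketch of the post-decision resource state for linear dynamics
# R_{t'+1} = B x_{t'} + Rhat_{t'+1}.  B, x, and the noise are illustrative.

rng = np.random.default_rng(0)
B = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])          # how decisions move resources
x_t = np.array([2.0, 1.0, 4.0])          # decision vector at time t'

R_post = B @ x_t                         # post-decision state: after the decision,
                                         # before new exogenous information arrives
R_hat = rng.poisson(lam=1.0, size=2)     # exogenous input at t'+1
R_next = R_post + R_hat                  # pre-decision resource state at t'+1

print(R_post, R_next)
```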
39
From DP to stochastic programming
We can now write Bellman’s equation as
  V_{t'}(S_{t'}) = max_{x_{t'}} ( C_{t'}(S_{t'}, x_{t'}) + V^x_{t'}(R^x_{t'}) ),
where V^x_{t'} is the value function around the post-decision resource state. We can approximate the value function using Benders cuts, giving a piecewise-linear approximation of the form
  V̄^x_{t'}(R^x_{t'}) = min_k ( α^k_{t'} + β^k_{t'} R^x_{t'} ).
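A minimal sketch of a cut-based value function approximation used inside a one-stage decision; the cut coefficients, contribution function, and discretized decisions are illustrative assumptions:

```python
import numpy as np

# Sketch: approximate a (concave) post-decision value function with cuts
# V(R) ~ min_k (alpha_k + beta_k * R), then use it in a one-stage decision.
# Cut coefficients, the contribution function, and the decision grid are illustrative.

cuts = [(0.0, 1.0), (5.0, 0.5), (9.0, 0.1)]     # (alpha_k, beta_k) pairs

def v_bar(R):
    """Piecewise-linear approximation of the value of resource level R."""
    return min(alpha + beta * R for alpha, beta in cuts)

def contribution(x, price=2.0):
    """Illustrative one-period contribution: sell x units at a fixed price."""
    return price * x

R0 = 10.0                                       # resources on hand
candidates = np.linspace(0.0, R0, 21)           # discretized decisions
best_x = max(candidates, key=lambda x: contribution(x) + v_bar(R0 - x))
print(best_x, contribution(best_x) + v_bar(R0 - best_x))
```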
40
From DP to stochastic programming
We can use value functions to step backward through the tree as a way of solving the lookahead model.
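A minimal sketch of such a backward pass on a tiny two-stage scenario tree; the tree, probabilities, prices, and the hold-or-sell decision set are illustrative assumptions:

```python
# Sketch of solving a lookahead model by stepping backward through a scenario
# tree: each node's value is the best decision's contribution plus the expected
# value of its children.  The tree, probabilities, and prices are illustrative.

tree = {
    "root": {"children": [("up", 0.5), ("down", 0.5)], "price": 30.0},
    "up":   {"children": [], "price": 40.0},
    "down": {"children": [], "price": 20.0},
}

def node_value(node, storage):
    """Value of holding `storage` units at `node`, deciding whether to sell now."""
    data = tree[node]
    best = float("-inf")
    for sell in (0, storage):                   # discrete decisions: hold or sell all
        reward = data["price"] * sell
        future = sum(p * node_value(child, storage - sell)
                     for child, p in data["children"])
        best = max(best, reward + future)
    return best

print(node_value("root", storage=1))   # sell now at 30, or wait: 0.5*40 + 0.5*20 = 30
```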
41
From DP to stochastic programming
Stochastic programming, in most applications, consists of solving a stochastic lookahead model, using one of two methods:
»Direct solution of all decisions over the scenario tree.
»Solution of the scenario tree using approximate dynamic programming with Benders cuts.
Finding optimal solutions, even using a restricted representation of the outcomes on a scenario tree, can be computationally very demanding. An extensive literature exists designing optimal algorithms (or algorithms with provable bounds) for the scenario-restricted representation.
Solving the lookahead model yields an approximate policy. An optimal solution to the lookahead model is not an optimal policy, and bounds on the solution of the lookahead model do not provide bounds on the performance of the resulting policy.