Markov Decision Processes: A Survey
Adviser: Yeong-Sung Lin. Graduate student: Cheng-Ta Lee. Network Optimization Research Group, March 22, 2004.
Outline
Introduction; Markov Theory; Markov Decision Processes; Conclusion; Future Work.
Introduction
Decision Theory = Probability Theory + Utility Theory. Probability theory describes what an agent should believe based on evidence; utility theory describes what an agent wants; decision theory describes what an agent should do.
Introduction
Markov decision process (MDP) theory has developed substantially over the last three decades and has become an established topic within operational research. MDPs model (possibly infinite) sequences of recurring decision problems under general behavioral strategies. This survey covers how MDPs are defined, their objective functions, and their policies.
Markov Theory
A Markov process is a mathematical model that is useful in the study of complex systems. Its basic concepts are the "state" of a system and state "transitions". A graphic example of a Markov process is a frog in a lily pond, hopping from pad to pad. A state-transition system may evolve as a discrete-time process or a continuous-time process.
Markov Theory
To study the discrete-time process, suppose that there are N states in the system, numbered from 1 to N. If the system is a simple Markov process, then the probability of a transition to state j during the next time interval, given that the system now occupies state i, is a function only of i and j and not of any history of the system before its arrival in i. In other words, we may specify a set of conditional probabilities p_ij, where p_ij >= 0 and sum_j p_ij = 1 for every state i.
The Toymaker Example
First state: the toy is in great favor. Second state: the toy is out of favor. In matrix form the transition probabilities are P = [0.5 0.5; 0.4 0.6] (rows index the current state, columns the next state); the same information can be drawn as a transition diagram.
The Toymaker Example
Let π_i(n) be the probability that the system will occupy state i after n transitions, given that its state at n = 0 is known. It follows that π_j(n+1) = sum_i π_i(n) p_ij, or, in vector form, π(n+1) = π(n)P, so that π(n) = π(0)P^n.
The Toymaker Example
If the toymaker starts with a successful toy, then π(0) = [1 0] and π(1) = π(0)P = [0.5 0.5], so that π(2) = π(1)P = [0.45 0.55], and so on.
The Toymaker Example
Table 1.1 Successive State Probabilities of Toymaker Starting with a Successful Toy
n:       0     1      2      3       4        5       ...
π1(n):   1     0.5    0.45   0.445   0.4445   0.44445
π2(n):   0     0.5    0.55   0.555   0.5555   0.55555

Table 1.2 Successive State Probabilities of Toymaker Starting without a Successful Toy
n:       0     1      2      3       4        5       ...
π1(n):   0     0.4    0.44   0.444   0.4444   0.44444
π2(n):   1     0.6    0.56   0.556   0.5556   0.55556
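The recursion π(n+1) = π(n)P is easy to check numerically. Below is a minimal Python sketch (P is the toymaker transition matrix implied by the tables) that reproduces Tables 1.1 and 1.2.

```python
import numpy as np

# Toymaker transition matrix (state 1 = successful toy, state 2 = unsuccessful toy).
P = np.array([[0.5, 0.5],
              [0.4, 0.6]])

def state_probabilities(pi0, P, n_max):
    """Return pi(0), pi(1), ..., pi(n_max) using pi(n+1) = pi(n) P."""
    pi = np.asarray(pi0, dtype=float)
    history = [pi]
    for _ in range(n_max):
        pi = pi @ P
        history.append(pi)
    return np.array(history)

print(state_probabilities([1, 0], P, 5))  # Table 1.1: rows -> [0.5 0.5], [0.45 0.55], ...
print(state_probabilities([0, 1], P, 5))  # Table 1.2: rows -> [0.4 0.6], [0.44 0.56], ...
```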
The Toymaker Example
The row vector π with components π_i is thus the limit as n approaches infinity of π(n); for the toymaker, both tables converge to π = [4/9 5/9] ≈ [0.4444 0.5556], independent of the starting state.
z-Transformation
For the study of transient behavior and for theoretical convenience, it is useful to study the Markov process from the point of view of the generating function or, as we shall call it, the z-transform. Consider a time function f(n) that takes on arbitrary values f(0), f(1), f(2), and so on, at nonnegative, discrete, integrally spaced points of time and that is zero for negative time. Such a time function is shown in Fig. 2.4. (Fig. 2.4: an arbitrary discrete-time function.)
z-Transformation
To each such time function f(n) corresponds a z-transform F(z) such that F(z) = sum_{n=0}^∞ f(n) z^n.

Table 1.3. z-Transform Pairs
Time function (n >= 0)          z-transform
f(n)                            F(z)
f1(n) + f2(n)                   F1(z) + F2(z)
k f(n)  (k a constant)          k F(z)
f(n-1)                          z F(z)
f(n+1)                          z^-1 [F(z) - f(0)]
1  (unit step)                  1/(1 - z)
n  (unit ramp)                  z/(1 - z)^2
z-Transformation
Consider first the step function f(n) = 1 for n >= 0; its z-transform is F(z) = sum_{n=0}^∞ z^n, or F(z) = 1/(1 - z). For the geometric sequence f(n) = α^n, n >= 0, F(z) = sum_{n=0}^∞ (αz)^n, or F(z) = 1/(1 - αz).
z-Transformation
We shall now use the z-transform to analyze Markov processes. Transforming π(n+1) = π(n)P gives z^-1 [Π(z) - π(0)] = Π(z)P, so that Π(z) = π(0)(I - zP)^-1. In this expression I is the identity matrix.
z-Transformation
Let us investigate the toymaker's problem by z-transformation. Let the matrix H(n) be the inverse transform of (I - zP)^-1 on an element-by-element basis. For the toymaker, det(I - zP) = (1 - z)(1 - 0.1z), and partial-fraction expansion of each element gives
H(n) = (1/9)[4 5; 4 5] + (0.1)^n (1/9)[5 -5; -4 4],
so that π(n) = π(0)H(n). The first matrix is the steady-state part and the second is the transient part.
z-Transformation
If the toymaker starts in the successful state 1, then π(0) = [1 0] and π(n) = π(0)H(n), or π1(n) = 4/9 + (5/9)(0.1)^n and π2(n) = 5/9 - (5/9)(0.1)^n. If the toymaker starts in the unsuccessful state 2, then π(0) = [0 1], or π1(n) = 4/9 - (4/9)(0.1)^n and π2(n) = 5/9 + (4/9)(0.1)^n. We have now obtained analytic forms for the data in Tables 1.1 and 1.2.
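The element-by-element inverse transform can be cross-checked symbolically. A minimal sketch (assuming the same toymaker matrix P) that performs the partial-fraction expansion of (I - zP)^-1:

```python
import sympy as sp

z = sp.symbols('z')
P = sp.Matrix([[sp.Rational(1, 2), sp.Rational(1, 2)],
               [sp.Rational(2, 5), sp.Rational(3, 5)]])

H = (sp.eye(2) - z * P).inv()                            # (I - zP)^-1, element by element
H = H.applyfunc(lambda e: sp.apart(sp.simplify(e), z))   # partial fractions in z

sp.pprint(H)
# apart() may write the denominators as (z - 1) and (z - 10); either way the expansion
# corresponds to H(n) = [[4/9, 5/9], [4/9, 5/9]] + (0.1)**n * [[5/9, -5/9], [-4/9, 4/9]].
```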
Laplace Transformation
We shall extend our previous work to the case in which the process may make transitions at random time intervals. The Laplace transform of a time function f(t) which is zero for t < 0 is defined by F(s) = integral_0^∞ f(t) e^{-st} dt.

Table 2.4. Laplace Transform Pairs
Time function (t >= 0)          Laplace transform
f(t)                            F(s)
f1(t) + f2(t)                   F1(s) + F2(s)
k f(t)  (k a constant)          k F(s)
df(t)/dt                        s F(s) - f(0)
1  (unit step)                  1/s
t  (unit ramp)                  1/s^2
e^(-at) f(t)                    F(s + a)
Laplace Transformation
We shall now use the Laplace transform to analyze Markov processes. For discrete time we had Π(z) = π(0)(I - zP)^-1; for the continuous-time process the state probabilities obey dπ(t)/dt = π(t)A, where A is the transition-rate matrix, or, after Laplace transformation, Π(s) = π(0)(sI - A)^-1.
Laplace Transformation
Recall the toymaker's initial policy, for which the transition-probability matrix was P = [0.5 0.5; 0.4 0.6]; in the continuous-time formulation the analysis is carried out with the corresponding transition-rate matrix A.
Laplace Transformation
Let the matrix H(t) be the inverse transform of (sI - A)^-1. Then Π(s) = π(0)(sI - A)^-1 becomes π(t) = π(0)H(t) by means of inverse transformation.
Laplace Transformation
If the toymaker starts in the successful state 1, then π(0) = [1 0] and π(t) = π(0)H(t) gives π1(t) and π2(t) directly; if the toymaker starts in the unsuccessful state 2, then π(0) = [0 1]. As t grows large, both starting states approach the same limiting probabilities, and we obtain analytic forms for the continuous-time analogue of the data in Tables 1.1 and 1.2.
Markov Decision Processes
An MDP applies dynamic programming to the solution of a stochastic decision problem with a finite number of states. The transition probabilities between the states are described by a Markov chain. The reward structure of the process is described by a matrix that represents the revenue (or cost) associated with movement from one state to another. Both the transition and revenue matrices depend on the decision alternatives available to the decision maker. The objective is to determine the optimal policy that maximizes the expected revenue over a finite or infinite number of stages.
Markov Process with Rewards
Suppose that an N-state Markov process earns r_ij dollars when it makes a transition from state i to state j. We call r_ij the "reward" associated with the transition from i to j. The rewards need not be in dollars; they could be voltage levels, units of production, or any other physical quantity relevant to the problem. Let us define v_i(n) as the expected total earnings in the next n transitions if the system is now in state i.
Markov Process with Rewards
Recurrence relation: v_i(n) = q_i + sum_j p_ij v_j(n-1), or, in vector form, v(n) = q + Pv(n-1), where q_i = sum_j p_ij r_ij is the expected immediate reward in state i and v(0) = 0.
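A minimal sketch of this recurrence, using the toymaker data q = [6, -3] and the transition matrix P given earlier, which generates Table 3.1 below:

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.4, 0.6]])
q = np.array([6.0, -3.0])   # expected immediate rewards q_i = sum_j p_ij * r_ij

def total_expected_reward(P, q, n_max):
    """v(n) = q + P v(n-1) with v(0) = 0; returns v(1), ..., v(n_max)."""
    v = np.zeros(len(q))
    out = []
    for _ in range(n_max):
        v = q + P @ v
        out.append(v.copy())
    return np.array(out)

print(total_expected_reward(P, q, 5))
# rows: [6, -3], [7.5, -2.4], [8.55, -1.44], [9.555, -0.444], [10.5555, 0.5556]
```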
The Toymaker Example
Table 3.1. Total Expected Reward for Toymaker as a Function of State and Number of Weeks Remaining
n:       0     1      2      3       4        5
v1(n):   0     6      7.5    8.55    9.555    10.5555
v2(n):   0    -3     -2.4   -1.44   -0.444    0.5556
Toymaker's problem: total expected reward in each state as a function of weeks remaining (figure).
z-Transform Analysis of the Markov Process with Rewards
The z-transform of the total-value vector v(n) will be called V(z) = sum_{n=0}^∞ v(n) z^n. Transforming v(n+1) = q + Pv(n) with v(0) = 0 gives z^-1 V(z) = q/(1 - z) + P V(z), so that V(z) = [z/(1 - z)](I - zP)^-1 q.
z-Transform Analysis of the Markov Process with Rewards
Let the matrix F(n) be the inverse transform of [z/(1 - z)](I - zP)^-1. The total-value vector v(n) is then F(n)q by inverse transformation, and, since v(n) = sum_{m=0}^{n-1} P^m q, we have F(n) = sum_{m=0}^{n-1} H(m). We see that, as n becomes very large, both v1(n) and v2(n) grow with slope 1 and v1(n) - v2(n) approaches 10.
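A short worked check of this asymptotic claim, using the decomposition H(m) = S + (0.1)^m T found earlier for the toymaker:

```latex
v(n) = \sum_{m=0}^{n-1} H(m)\, q
     = n\,Sq + \Big(\textstyle\sum_{m=0}^{n-1} (0.1)^m\Big) Tq,
\qquad
S = \tfrac{1}{9}\begin{bmatrix}4 & 5\\ 4 & 5\end{bmatrix},\;
T = \tfrac{1}{9}\begin{bmatrix}5 & -5\\ -4 & 4\end{bmatrix},\;
q = \begin{bmatrix}6\\ -3\end{bmatrix}.
```

Since Sq = [1, 1]^T, both components grow with slope (gain) 1; and since Tq = [5, -4]^T while sum_{m>=0}(0.1)^m = 10/9, the difference v1(n) - v2(n) approaches (10/9)(5 - (-4)) = 10.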
Optimization Techniques in General Markov Decision Processes
Value Iteration; Exhaustive Enumeration; Policy Iteration; Linear Programming; Lagrangian Relaxation.
Value Iteration
(Diagram: the toymaker's decision alternatives in the original problem — Advertising? No/Yes; Research? No/Yes.)
Diagram of States and Alternatives (figure).
The Toymaker's Problem Solved by Value Iteration
The quantity q_i^k = sum_j p_ij^k r_ij^k is the expected reward from a single transition from state i under alternative k. The alternatives for the toymaker are presented in the following table.

State i               Alternative k        Transition probabilities   Rewards     Expected immediate reward q_i^k
1 (Successful toy)    1 (No advertising)   0.5  0.5                    9    3      6
1 (Successful toy)    2 (Advertising)      0.8  0.2                    4    4      4
2 (Unsuccessful toy)  1 (No research)      0.4  0.6                    3   -7     -3
2 (Unsuccessful toy)  2 (Research)         0.7  0.3                    1  -19     -5
The Toymaker's Problem Solved by Value Iteration
We call d_i(n) the "decision" in state i at the nth stage. When d_i(n) has been specified for all i and all n, a "policy" has been determined. The optimal policy is the one that maximizes total expected return for each i and n. To analyze this problem, let us redefine v_i(n) as the total expected return in n stages starting from state i if an optimal policy is followed. It follows that for any n,
v_i(n+1) = max_k [ q_i^k + sum_j p_ij^k v_j(n) ].
This is the "principle of optimality" of dynamic programming: in an optimal sequence of decisions or choices, each subsequence must also be optimal.
The Toymaker's Problem Solved by Value Iteration
Table 3.6 Toymaker's Problem Solved by Value Iteration
n:       0     1      2      3       4
v1(n):   0     6      8.2    10.22   12.222
v2(n):   0    -3     -1.7    0.23    2.223
d1(n):   -     1      2      2       2
d2(n):   -     1      2      2       2
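A minimal value-iteration sketch that reproduces Table 3.6, using the alternatives tabulated above:

```python
import numpy as np

# P[i][k] = transition probabilities, q[i][k] = expected immediate reward,
# for state i (0 = successful, 1 = unsuccessful) and alternative k.
P = {0: {1: np.array([0.5, 0.5]), 2: np.array([0.8, 0.2])},
     1: {1: np.array([0.4, 0.6]), 2: np.array([0.7, 0.3])}}
q = {0: {1: 6.0, 2: 4.0},
     1: {1: -3.0, 2: -5.0}}

v = np.zeros(2)
for n in range(1, 5):
    new_v, decisions = np.zeros(2), {}
    for i in (0, 1):
        # v_i(n) = max_k [ q_i^k + sum_j p_ij^k v_j(n-1) ]
        values = {k: q[i][k] + P[i][k] @ v for k in (1, 2)}
        decisions[i] = max(values, key=values.get)
        new_v[i] = values[decisions[i]]
    v = new_v
    print(n, v, decisions)
# n=1: v=[6, -3], d=(1,1);  n=2: [8.2, -1.7], d=(2,2);  n=3: [10.22, 0.23];  n=4: [12.222, 2.223]
```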
The Toymaker's Problem Solved by Value Iteration
Note that for n = 2, 3, and 4, the second alternative in each state is to be preferred. This means that the toymaker is better advised to advertise and to carry on research in spite of the costs of these activities. For this problem the convergence seems to have taken place by n = 2, with the second alternative chosen in each state. However, in many problems it is difficult to tell when convergence has been obtained.
Evaluation of the Value-Iteration Approach
Even though the value-iteration method is simple to apply, it is not particularly suited to long-duration processes, because many iterations may be needed and it can be hard to tell when the decisions have converged.
Exhaustive Enumeration
This is one of the methods for solving the infinite-stage problem. The method calls for evaluating all possible stationary policies of the decision problem. This is equivalent to an exhaustive enumeration process and can be used only if the number of stationary policies is reasonably small.
Exhaustive Enumeration
Suppose that the decision problem has S stationary policies, and assume that P_s and R_s are the (one-step) transition and revenue matrices associated with policy s, s = 1, 2, …, S.
Exhaustive Enumeration
The steps of the exhaustive enumeration method are as follows.
Step 1. Compute v_i^s, the expected one-step (one-period) revenue of policy s given state i, i = 1, 2, …, m.
Step 2. Compute π_i^s, the long-run stationary probabilities of the transition matrix P_s associated with policy s. These probabilities, when they exist, are computed from the equations π^s P_s = π^s and π_1^s + π_2^s + … + π_m^s = 1, where π^s = (π_1^s, π_2^s, …, π_m^s).
Step 3. Determine E^s, the expected revenue of policy s per transition step (period), by using the formula E^s = sum_{i=1}^m π_i^s v_i^s.
Step 4. The optimal policy s* is determined such that E^{s*} = max_s E^s.
Exhaustive Enumeration
We illustrate the method by solving the gardener problem for an infinite-period planning horizon. The gardener problem has a total of eight stationary policies, as the following table shows:
Stationary policy s    Action
1                      Do not fertilize at all.
2                      Fertilize regardless of the state.
3                      Fertilize if in state 1.
4                      Fertilize if in state 2.
5                      Fertilize if in state 3.
6                      Fertilize if in state 1 or 2.
7                      Fertilize if in state 1 or 3.
8                      Fertilize if in state 2 or 3.
Exhaustive Enumeration
The matrices P_s and R_s for policies 3 through 8 are derived from those of policies 1 and 2: for each state, a policy takes the row of P_1 and R_1 if it does not fertilize in that state, and the row of P_2 and R_2 if it does.
Exhaustive Enumeration
Step 1: The values of v_i^s can thus be computed as given in the following table; the rows for policies 3 through 8 follow by mixing the rows of policies 1 and 2 state by state.
s    i=1    i=2    i=3
1    5.3    3.0   -1
2    4.7    3.1    0.4
3    4.7    3.0   -1
4    5.3    3.1   -1
5    5.3    3.0    0.4
6    4.7    3.1   -1
7    4.7    3.0    0.4
8    5.3    3.1    0.4
Exhaustive Enumeration
Step 2: The computations of the stationary probabilities are achieved by using the equations π^s P_s = π^s and π_1^s + π_2^s + … + π_m^s = 1. As an illustration, consider s = 2: solving π^2 P_2 = π^2 together with the normalization yields π^2 = (6/59, 31/59, 22/59) ≈ (0.1017, 0.5254, 0.3729). In this case, the expected yearly revenue is E^2 = 0.1017(4.7) + 0.5254(3.1) + 0.3729(0.4) = 2.256.
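Steps 2 and 3 amount to solving a small linear system per policy. A minimal Python sketch; the P2 entries shown are assumptions (they are consistent with the stationary probabilities quoted above, but the full gardener matrices are not reproduced on these slides):

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi, sum(pi) = 1 for an m-state transition matrix P."""
    m = P.shape[0]
    A = np.vstack([P.T - np.eye(m), np.ones(m)])   # (pi P - pi = 0) plus normalization
    b = np.append(np.zeros(m), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def expected_revenue(P, v):
    """E_s = sum_i pi_i^s v_i^s for a policy with one-step revenues v."""
    return stationary_distribution(P) @ v

# Policy s = 2 of the gardener problem (entries assumed; see caveat above).
P2 = np.array([[0.30, 0.60, 0.10],
               [0.10, 0.60, 0.30],
               [0.05, 0.40, 0.55]])
v2 = np.array([4.7, 3.1, 0.4])

print(stationary_distribution(P2))   # ~ [0.1017, 0.5254, 0.3729]
print(expected_revenue(P2, v2))      # ~ 2.256
```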
Exhaustive Enumeration
Steps 3 and 4: The following table summarizes π^s and E^s for all the stationary policies. Policies that never fertilize in state 3 (s = 1, 3, 4, 6) make state 3 absorbing, so their long-run revenue is simply the state-3 revenue.
s    π_1^s     π_2^s     π_3^s     E^s
1    0         0         1        -1
2    6/59      31/59     22/59     2.256
3    0         0         1        -1
4    0         0         1        -1
5    5/154     69/154    80/154    1.724
6    0         0         1        -1
7    5/137     62/137    70/137    1.734
8    12/135    69/135    54/135    2.216
Policy 2 yields the largest expected yearly revenue, E^2 = 2.256. The optimum long-range policy calls for applying fertilizer regardless of the state of the system.
Policy Iteration
If the system is completely ergodic, the limiting state probabilities π_i are independent of the starting state, and the gain g of the system is g = sum_i π_i q_i, where q_i is the expected immediate return in state i, defined by q_i = sum_j p_ij r_ij.
Policy Iteration
(Figure: a possible five-state problem, with an X marking the alternative selected in each state.) The alternative thus selected is called the "decision" for that state; it is no longer a function of n. The set of X's, or the set of decisions for all states, is called a "policy".
Policy Iteration
It is possible to describe a policy by a decision vector d whose elements represent the number of the alternative selected in each state; in the five-state example, d simply lists the alternative marked with an X in each state. An optimal policy is defined as a policy that maximizes the gain, or average return per transition.
Policy Iteration
In the five-state problem diagrammed, there are 120 different policies. However feasible enumeration may be for 120 policies, it becomes infeasible for very large problems. For example, a problem with 50 states and 50 alternatives in each state contains 50^50 (≈ 10^85) policies. The policy-iteration method described next finds the optimal policy in a small number of iterations. It is composed of two parts, the value-determination operation and the policy-improvement routine.
Policy Iteration
The Iteration Cycle (figure): the value-determination operation and the policy-improvement routine are applied alternately until the policy no longer changes.
The Toymaker's Problem
Let us suppose that we have no a priori knowledge about which policy is best. Then if we set v1 = v2 = 0 and enter the policy-improvement routine, it will select as an initial policy the one that maximizes expected immediate reward in each state. For the toymaker, this policy consists of selecting alternative 1 in both state 1 and state 2. For this policy, P = [0.5 0.5; 0.4 0.6] and q = [6 -3].
The Toymaker's Problem
We are now ready to begin the value-determination operation that will evaluate our initial policy: g + v1 = 6 + 0.5v1 + 0.5v2 and g + v2 = -3 + 0.4v1 + 0.6v2. Setting v2 = 0 and solving these equations, we obtain v1 = 10, v2 = 0, g = 1. We are now ready to enter the policy-improvement routine, as shown in Table 3.8.
State i   Alternative k   Test quantity q_i^k + sum_j p_ij^k v_j
1         1                6 + 0.5(10) + 0.5(0) = 11
1         2                4 + 0.8(10) + 0.2(0) = 12
2         1               -3 + 0.4(10) + 0.6(0) = 1
2         2               -5 + 0.7(10) + 0.3(0) = 2
The Toymaker's Problem
The policy-improvement routine reveals that the second alternative in each state produces a higher value of the test quantity than does the first alternative. For this policy, P = [0.8 0.2; 0.7 0.3] and q = [4 -5]. We now repeat the value-determination operation to evaluate this policy. With v2 = 0, the results are v1 = 10, v2 = 0, g = 2. The gain of this policy is thus twice that of the original policy, and since a further pass through the policy-improvement routine leaves the policy unchanged, we have found the optimal policy. For the optimal policy, v1 = 10 and v2 = 0, so that v1 - v2 = 10. This means that, even when the toymaker is following the optimal policy of advertising and research, a successful toy is worth 10 more in expected future reward than an unsuccessful one.
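The two toymaker iterations above can be reproduced with a small policy-iteration sketch (data as in the alternatives table; v for the last state is pinned to 0 in the value-determination step):

```python
import numpy as np

P = {0: {1: [0.5, 0.5], 2: [0.8, 0.2]},
     1: {1: [0.4, 0.6], 2: [0.7, 0.3]}}
q = {0: {1: 6.0, 2: 4.0},
     1: {1: -3.0, 2: -5.0}}
N = 2

def value_determination(policy):
    """Solve g + v_i = q_i + sum_j p_ij v_j with v_{N-1} = 0; returns (g, v)."""
    A = np.zeros((N, N))
    b = np.zeros(N)
    for i in range(N):
        k = policy[i]
        b[i] = q[i][k]
        A[i, :] = -np.array(P[i][k])     # move sum_j p_ij v_j to the left-hand side
        A[i, i] += 1.0
    # unknowns: v_0, ..., v_{N-2}, g  (v_{N-1} fixed at 0)
    M = np.hstack([A[:, :N - 1], np.ones((N, 1))])
    sol = np.linalg.solve(M, b)
    v = np.append(sol[:N - 1], 0.0)
    return sol[-1], v

def policy_improvement(v):
    return {i: max(P[i], key=lambda k: q[i][k] + np.dot(P[i][k], v)) for i in range(N)}

policy = {0: 1, 1: 1}                    # initial policy: no advertising, no research
while True:
    g, v = value_determination(policy)
    print(policy, "gain:", g, "values:", v)   # (1,1): g=1, v=[10,0]; then (2,2): g=2, v=[10,0]
    new_policy = policy_improvement(v)
    if new_policy == policy:
        break
    policy = new_policy
```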
Linear Programming
The infinite-stage Markov decision problem can also be formulated and solved as a linear program. A policy of an MDP can be described by a matrix D = [D_ik]: each state i has K possible decisions, each row of D must contain a single 1 with the rest of the elements zero, and an element D_ik = 1 is interpreted as calling for decision k whenever the system is in state i.
Linear Programming
When we use linear programming to solve the MDP, the formulation is best expressed in terms of variables w_ik related to D_ik as follows. Let w_ik be the unconditional steady-state probability that the system is in state i and decision k is made. From the rules of conditional probability, w_ik = π_i D_ik. Furthermore, π_i = sum_k w_ik, so that D_ik = w_ik / sum_k w_ik.
Linear Programming
There exist several constraints on the w_ik: they must be nonnegative and must sum to one, sum_i sum_k w_ik = 1. From the results on steady-state probabilities, π_j = sum_i π_i p_ij, so that sum_k w_jk = sum_i sum_k w_ik p_ij(k) for every state j.
Linear Programming
The long-run expected average revenue per unit time is given by E = sum_i sum_k v_i^k w_ik; hence the problem is to choose the w_ik that maximize E subject to these constraints. This is clearly a linear programming problem that can be solved by the simplex method. Once the optimal w_ik are obtained, the optimal policy follows from D_ik = w_ik / sum_k w_ik.
Linear Programming
The following is an LP formulation of the gardener problem without discounting:
Maximize E = 5.3w11 + 4.7w12 + 3w21 + 3.1w22 - w31 + 0.4w32
subject to
w11 + w12 - (0.2w11 + 0.3w12 + 0.1w22 + 0.05w32) = 0
w21 + w22 - (0.5w11 + 0.6w12 + 0.5w21 + 0.6w22 + 0.4w32) = 0
w31 + w32 - (0.3w11 + 0.1w12 + 0.5w21 + 0.3w22 + w31 + 0.55w32) = 0
w11 + w12 + w21 + w22 + w31 + w32 = 1
w_ik >= 0 for all i and k
The optimal solution is w11 = w21 = w31 = 0 and w12 = 0.1017, w22 = 0.5254, w32 = 0.3729. This result means that d12 = d22 = d32 = 1. Thus, the optimal policy selects alternative k = 2 for i = 1, 2, and 3. The optimal value of E is 4.7(0.1017) + 3.1(0.5254) + 0.4(0.3729) = 2.256.
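The same LP can be checked numerically. A minimal sketch with scipy; the equality-constraint coefficients are the ones written above (reconstructed from the gardener transition matrices, so treat them as assumptions), and since linprog minimizes, the objective is negated:

```python
import numpy as np
from scipy.optimize import linprog

# Variables ordered w11, w12, w21, w22, w31, w32  (w_ik: state i, decision k).
c = -np.array([5.3, 4.7, 3.0, 3.1, -1.0, 0.4])    # maximize E  ->  minimize -E

A_eq = np.array([
    # balance for state 1: w11 + w12 - (0.2 w11 + 0.3 w12 + 0.1 w22 + 0.05 w32) = 0
    [0.8, 0.7, 0.0, -0.1, 0.0, -0.05],
    # balance for state 2: w21 + w22 - (0.5 w11 + 0.6 w12 + 0.5 w21 + 0.6 w22 + 0.4 w32) = 0
    [-0.5, -0.6, 0.5, 0.4, 0.0, -0.4],
    # balance for state 3: w31 + w32 - (0.3 w11 + 0.1 w12 + 0.5 w21 + 0.3 w22 + w31 + 0.55 w32) = 0
    [-0.3, -0.1, -0.5, -0.3, 0.0, 0.45],
    # normalization: all probabilities sum to one
    [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
])
b_eq = np.array([0.0, 0.0, 0.0, 1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(res.x)        # ~ [0, 0.1017, 0, 0.5254, 0, 0.3729]
print(-res.fun)     # ~ 2.256
```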
Lagrangian Relaxation
If the linear programming method cannot find the optimal solution once additional constraints are imposed, we can use Lagrangian relaxation to move those constraints into the objective function and then solve the resulting subproblem without the additional constraints. By adjusting the Lagrange multipliers, we obtain an upper bound and a lower bound on the original problem. We use the multipliers to rearrange the revenues of the Markov decision process and then solve the resulting ordinary MDP model to find the optimal policy.
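No specific constraint is given on the slide, so the sketch below is only illustrative: it shows the generic loop in which one relaxed constraint (expected cost <= C, with per-state, per-alternative costs c_i^k — all hypothetical names) is priced into the rewards with a multiplier λ, the resulting unconstrained MDP is solved by any of the methods above (here a placeholder `solve_mdp`, e.g. the policy-iteration routine extended to report the policy's expected cost), and λ is adjusted by a subgradient step.

```python
def lagrangian_relaxation(solve_mdp, rewards, costs, C, lam=0.0, step=0.1, iters=50):
    """Illustrative Lagrangian-relaxation loop for one relaxed constraint E[cost] <= C.

    solve_mdp(adjusted_rewards) is assumed to return (policy, expected_cost_of_policy);
    rewards[i][k] and costs[i][k] are per-state, per-alternative values (hypothetical data).
    """
    best_policy = None
    for _ in range(iters):
        # Rearrange the revenue with the multiplier: r'_i^k = r_i^k - lam * c_i^k.
        adjusted = {i: {k: rewards[i][k] - lam * costs[i][k] for k in rewards[i]}
                    for i in rewards}
        policy, expected_cost = solve_mdp(adjusted)      # ordinary (unconstrained) MDP
        # Subgradient step on the dualized constraint E[cost] - C <= 0.
        lam = max(0.0, lam + step * (expected_cost - C))
        best_policy = policy
    return best_policy, lam
```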
Comparison of Methods
The five methods — Value Iteration, Exhaustive Enumeration, Policy Iteration, Linear Programming, and Lagrangian Relaxation — can be compared on four characteristics: simplicity of calculation, suitability for large problems, whether they guarantee an optimal policy, and whether they can handle additional constraints.
Semi-Markov Decision Processes
So far we have assumed that decisions are taken at each of a sequence of unit time intervals. Semi-Markov decision processes allow decisions to be taken at varying integral multiples of the unit time interval; the interval between decisions may be predetermined or random.
Partially Observable MDPs
MDPs assume complete observability: the agent can always tell what state it is in. In practice we cannot always be certain of the current state. POMDPs relax this assumption; they are more difficult to solve than MDPs, yet most real-world problems are POMDPs.
Applications of MDPs
Capacity expansion; decision analysis; network control; queueing system control.
Conclusion
MDPs provide an elegant formal framework for sequential decision making and a powerful tool for formulating models and finding optimal policies. Five algorithms were presented: Value Iteration; Exhaustive Enumeration (optimal policy); Policy Iteration (optimal policy); Linear Programming (optimal policy); Lagrangian Relaxation (optimal policy).
Future Work
Sensor networks: maximize the system lifetime of sensor networks; maximize the coverage area of sensor networks; minimize the response time of sensor networks.
References
Hamdy A. Taha, "Operations Research: An Introduction," third edition, 1982.
Hillier and Lieberman, "Introduction to Operations Research," fourth edition, Holden-Day, Inc., 1986.
R. K. Ahuja, T. L. Magnanti, and J. B. Orlin, "Network Flows," Prentice-Hall, 1993.
Leslie Pack Kaelbling, "Techniques in Artificial Intelligence: Markov Decision Processes," MIT OpenCourseWare, Fall 2002.
Ronald A. Howard, "Dynamic Programming and Markov Processes," Wiley, New York, 1970.
D. J. White, "Markov Decision Processes," Wiley, 1993.
Dean L. Isaacson and Richard W. Madsen, "Markov Chains: Theory and Applications," Wiley, 1976.
M. H. A. Davis, "Markov Models and Optimization," Chapman & Hall, 1993.
Martin L. Puterman, "Markov Decision Processes: Discrete Stochastic Dynamic Programming," Wiley, New York, 1994.
Hsu-Kuan Hung, adviser Yeong-Sung Lin, "Optimization of GPRS Time Slot Allocation," June 2001.
Hui-Ting Chuang, adviser Yeong-Sung Lin, "Optimization of GPRS Time Slot Allocation Considering Call Blocking Probability Constraints," June 2002.
References (continued)
高孔廉, "Operations Research: Quantitative Methods for Management Decisions" (in Chinese), 三民 (distributor), 4th edition, 1985.
李朝賢, "Introduction to Operations Research" (in Chinese), 弘業文化, August 1977.
楊超然, "Operations Research" (in Chinese), 三民書局, first edition, September 1977.
葉若春, "Operations Research" (in Chinese), 中興管理顧問公司, 5th edition, August 1997.
薄喬萍, "Operations Research Decision Analysis" (in Chinese), 復文書局, first edition, June 1989.
葉若春, "Linear Programming: Theory and Applications" (in Chinese), revised 10th edition, September 1984.
Leonard Kleinrock, "Queueing Systems, Volume I: Theory," Wiley, New York, 1975.
Chiu, Hsien-Ming, "Lagrangian Relaxation," Tamkang University, Fall 2003.
L. Cheng, E. Subrahmanian, and A. W. Westerberg, "Design and planning under uncertainty: issues on problem formulation and solution," Computers and Chemical Engineering, 27, 2003.
Regis Sabbadin, "Possibilistic Markov Decision Processes," Engineering Applications of Artificial Intelligence, 14, 2001.
K. Karen Yin, Hu Liu, and Neil E. Johnson, "Markovian Inventory Policy with Application to the Paper Industry," Computers and Chemical Engineering, 26, 2002.