R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction
Chapter 4: Dynamic Programming

Slide 1: Chapter 4: Dynamic Programming
Objectives of this chapter:
- Overview of a collection of classical solution methods for MDPs known as dynamic programming (DP)
- Show how DP can be used to compute value functions and, hence, optimal policies
- Discuss the efficiency and utility of DP

Slide 2: Policy Evaluation
Policy evaluation: for a given policy π, compute the state-value function V^π. Recall the Bellman equation for V^π:

V^π(s) = Σ_a π(s,a) Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V^π(s') ]

Slide 3: Iterative Methods
Compute a sequence of approximations V_0 → V_1 → V_2 → ... → V^π, where each arrow is a "sweep". A sweep consists of applying a backup operation to each state. A full policy-evaluation backup:

V_{k+1}(s) ← Σ_a π(s,a) Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V_k(s') ]

Slide 4: Iterative Policy Evaluation
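A minimal Python sketch of iterative policy evaluation, assuming a tabular model in which P[s][a] is a list of (probability, next_state, reward) triples and pi[s][a] gives action probabilities; these names and the data layout are illustrative, not from the slides.

```python
import numpy as np

def policy_evaluation(P, pi, gamma=1.0, theta=1e-8):
    """Iteratively approximate V^pi with full policy-evaluation backups.

    P[s][a] is a list of (prob, next_state, reward) triples (assumed layout);
    pi[s][a] is the probability of taking action a in state s.
    Updates are in place, so new values are used as soon as available.
    """
    V = np.zeros(len(P))
    while True:
        delta = 0.0
        for s in range(len(P)):
            v = sum(pi[s][a] * sum(p * (r + gamma * V[s2])
                                   for p, s2, r in P[s][a])
                    for a in range(len(P[s])))
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < theta:
            return V
```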

Slide 5: A Small Gridworld
- An undiscounted, episodic task
- Nonterminal states: 1, 2, ..., 14
- One terminal state (shown twice, as the shaded squares)
- Actions that would take the agent off the grid leave the state unchanged
- Reward is −1 until the terminal state is reached

Slide 6: Iterative Policy Evaluation for the Small Gridworld
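As a concrete check, here is a self-contained sketch that evaluates the equiprobable random policy on this 4x4 gridworld; the step helper and the sweep count are my own scaffolding, not from the slides.

```python
import numpy as np

# 4x4 gridworld from the slides: nonterminal states 1..14; the terminal
# state occupies cells 0 and 15 (shown twice in the figure). Equiprobable
# random policy, undiscounted (gamma = 1), reward -1 on every step.
N = 4
TERMINALS = (0, 15)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(s, move):
    """Deterministic transition: off-grid moves leave the state unchanged."""
    r, c = divmod(s, N)
    nr, nc = r + move[0], c + move[1]
    if not (0 <= nr < N and 0 <= nc < N):
        nr, nc = r, c
    return nr * N + nc, -1.0

V = np.zeros(N * N)
for _ in range(1000):                 # plenty of sweeps to converge
    V_new = np.zeros(N * N)
    for s in range(N * N):
        if s in TERMINALS:
            continue
        V_new[s] = sum(0.25 * (r + V[s2])
                       for s2, r in (step(s, m) for m in MOVES))
    V = V_new

print(np.round(V.reshape(N, N), 1))   # should show the familiar -14, -18, -20, -22 pattern
```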

Slide 7: Policy Improvement
Suppose we have computed V^π for a deterministic policy π. For a given state s, would it be better to do an action a ≠ π(s)? The value of doing a in s and following π thereafter is

Q^π(s,a) = Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V^π(s') ]

It is better to switch to a for state s if and only if Q^π(s,a) > V^π(s).

Slide 8: Policy Improvement Cont.
Do this for all states to get a new policy π' that is greedy with respect to V^π:

π'(s) = argmax_a Q^π(s,a) = argmax_a Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V^π(s') ]

Then V^{π'} ≥ V^π (the policy improvement theorem).

Slide 9: Policy Improvement Cont.
What if the new policy is no better, i.e., V^{π'} = V^π? Then for all s,

V^{π'}(s) = max_a Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V^{π'}(s') ]

But this is the Bellman optimality equation, so V^{π'} = V* and both π and π' are optimal policies.
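A short sketch of this "greedification" step: compute Q^π(s,a) from the model for each action and pick the argmax in each state. It assumes the same illustrative (prob, next_state, reward) model layout as the earlier sketch; q_from_v and greedy_policy are names of my own choosing.

```python
import numpy as np

def q_from_v(P, V, s, a, gamma=1.0):
    """Q^pi(s,a) = sum over s' of P^a_{ss'} [ R^a_{ss'} + gamma V^pi(s') ]."""
    return sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])

def greedy_policy(P, V, gamma=1.0):
    """Deterministic policy that is greedy with respect to V."""
    return [int(np.argmax([q_from_v(P, V, s, a, gamma)
                           for a in range(len(P[s]))]))
            for s in range(len(P))]
```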

Slide 10: Policy Iteration
Policy iteration chains policy evaluation (E) and policy improvement, or "greedification" (I):

π_0 →(E) V^{π_0} →(I) π_1 →(E) V^{π_1} →(I) π_2 → ... →(I) π* →(E) V*

Slide 11: Policy Iteration
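Putting the two steps together, a sketch of the policy-iteration loop. It reuses the policy_evaluation and greedy_policy helpers from the sketches above and the same assumed model layout; γ < 1 is assumed here so that evaluation converges on continuing tasks.

```python
def policy_iteration(P, gamma=0.9, theta=1e-8):
    """Alternate evaluation (E) and greedy improvement (I) until the policy
    is stable. Reuses policy_evaluation and greedy_policy defined above."""
    n_states, n_actions = len(P), len(P[0])
    # start from the equiprobable random policy
    pi = [[1.0 / n_actions] * n_actions for _ in range(n_states)]
    while True:
        V = policy_evaluation(P, pi, gamma, theta)          # E step
        greedy = greedy_policy(P, V, gamma)                 # I step
        new_pi = [[1.0 if a == greedy[s] else 0.0
                   for a in range(n_actions)] for s in range(n_states)]
        if new_pi == pi:        # policy unchanged => optimal
            return V, greedy
        pi = new_pi
```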

Slide 12: Jack's Car Rental
- $10 for each car rented (the car must be available when the request is received)
- Two locations, maximum of 20 cars at each
- Cars are returned and requested randomly, following a Poisson distribution: n returns/requests with probability (λ^n / n!) e^{−λ}
  - 1st location: average requests = 3, average returns = 3
  - 2nd location: average requests = 4, average returns = 2
- Can move up to 5 cars between locations overnight
- States, actions, rewards?
- Transition probabilities?

Slide 13: Jack's Car Rental (figure: the sequence of policies found by policy iteration and the final state-value function)
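One piece of this model that is easy to make concrete: the Poisson demand distribution from slide 12 and the expected rental income at a single location. The truncation point and the function names below are my own choices, not from the slides.

```python
from math import exp, factorial

def poisson_pmf(n, lam):
    """P(N = n) = (lam^n / n!) e^(-lam), the distribution named on slide 12."""
    return lam ** n / factorial(n) * exp(-lam)

def expected_rental_income(cars_available, lam_requests, rent=10.0):
    """Expected income at one location for a day: requests beyond the cars
    on hand are lost. The tail is truncated at 25, where the pmf is tiny."""
    return rent * sum(min(n, cars_available) * poisson_pmf(n, lam_requests)
                      for n in range(26))

print(expected_rental_income(10, 3.0))   # 1st location with 10 cars on hand
```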

Slide 14: Jack's Car Rental Exercise
- Suppose the first car moved from the 1st to the 2nd location is free, because an employee travels that way anyway (by bus)
- Suppose only 10 cars can be parked for free at each location; more than 10 costs $4 for the use of an extra parking lot
- Such arbitrary nonlinearities are common in real problems

Slide 15: Value Iteration
Recall the full policy-evaluation backup:

V_{k+1}(s) ← Σ_a π(s,a) Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V_k(s') ]

Here is the full value-iteration backup:

V_{k+1}(s) ← max_a Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V_k(s') ]

Slide 16: Value Iteration (cont.)
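A minimal sketch of value iteration with the max backup above, again assuming the illustrative P[s][a] = [(prob, next_state, reward), ...] model layout used in the earlier sketches.

```python
import numpy as np

def value_iteration(P, gamma=0.9, theta=1e-8):
    """Full value-iteration backups, then greedy policy extraction."""
    V = np.zeros(len(P))
    while True:
        delta = 0.0
        for s in range(len(P)):
            v = max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                    for a in range(len(P[s])))
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < theta:
            break
    # a policy greedy with respect to the converged V is optimal
    pi = [int(np.argmax([sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                         for a in range(len(P[s]))]))
          for s in range(len(P))]
    return V, pi
```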

Slide 17: Gambler's Problem
- A gambler can repeatedly bet money on a coin flip
- Heads: he wins his stake; tails: he loses it
- Initial capital ∈ {$1, $2, ..., $99}
- The gambler wins if his capital reaches $100 and loses if it falls to $0
- The coin is unfair: heads (gambler wins) with probability p = 0.4
- States, actions, rewards?

Slide 18: Gambler's Problem Solution
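A self-contained sketch that solves the gambler's problem by value iteration. It uses the standard equivalent formulation in which V(0) = 0 and V(100) = 1 are fixed boundary values and there is no per-step reward, so V(s) is the probability of eventually reaching $100; the tie-breaking rule in the policy extraction is my own choice.

```python
import numpy as np

# Capital s in {1, ..., 99}; stakes a in {1, ..., min(s, 100 - s)};
# heads (prob 0.4) wins the stake, tails loses it.
P_HEADS = 0.4
V = np.zeros(101)
V[100] = 1.0

def action_values(s):
    """One-step lookahead value of each stake a = 1..min(s, 100 - s)."""
    return [P_HEADS * V[s + a] + (1 - P_HEADS) * V[s - a]
            for a in range(1, min(s, 100 - s) + 1)]

while True:
    delta = 0.0
    for s in range(1, 100):
        best = max(action_values(s))
        delta = max(delta, abs(best - V[s]))
        V[s] = best                      # in-place backup
    if delta < 1e-12:
        break

# Greedy stake per capital level; rounding breaks near-ties toward the
# smallest stake, which reproduces the spiky policy in the book's figure.
policy = [1 + int(np.argmax(np.round(action_values(s), 8)))
          for s in range(1, 100)]
print(V[50], policy[49])                 # value and greedy stake at $50
```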

Slide 19: Herd Management
- You are a consultant to a farmer managing a herd of cows
- The herd consists of 5 kinds of cows: young, milking, breeding, old, sick
- The number of each kind is the state
- The number of each kind sold is the action
- Cows transition from one kind to another
- Young cows can be born

Slide 20: Asynchronous DP
- All the DP methods described so far require exhaustive sweeps of the entire state set.
- Asynchronous DP does not use sweeps. Instead it works like this: repeat until a convergence criterion is met, picking a state at random and applying the appropriate backup (see the sketch after this list).
- Still needs lots of computation, but does not get locked into hopelessly long sweeps.
- Can you select states to back up intelligently? YES: an agent's experience can act as a guide.
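A sketch of the random-state backup loop, assuming the same illustrative model layout as before; γ < 1 (or a proper episodic task) is assumed so that backups applied in arbitrary order still converge.

```python
import random

def async_value_iteration(P, n_backups=100_000, gamma=0.9, seed=0):
    """Asynchronous DP sketch: instead of sweeping the whole state set,
    repeatedly pick a state at random and apply a value-iteration backup.
    Same assumed P[s][a] = [(prob, next_state, reward), ...] layout."""
    rng = random.Random(seed)
    V = [0.0] * len(P)
    for _ in range(n_backups):
        s = rng.randrange(len(P))        # pick a state at random
        V[s] = max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                   for a in range(len(P[s])))
    return V
```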

Slide 21: Generalized Policy Iteration
Generalized policy iteration (GPI): any interaction of policy evaluation and policy improvement processes, independent of their granularity. A geometric metaphor for the convergence of GPI (figure: the two processes pull the value function and the policy toward each other until they meet at V* and π*).

Slide 22: Efficiency of DP
- Finding an optimal policy is polynomial in the number of states...
- BUT the number of states is often astronomical, e.g., it often grows exponentially with the number of state variables (what Bellman called "the curse of dimensionality").
- In practice, classical DP can be applied to problems with a few million states.
- Asynchronous DP can be applied to larger problems, and it is appropriate for parallel computation.
- It is surprisingly easy to come up with MDPs for which DP methods are not practical.

Slide 23: Summary
- Policy evaluation: backups without a max
- Policy improvement: form a greedy policy, if only locally
- Policy iteration: alternate the above two processes
- Value iteration: backups with a max
- Full backups (to be contrasted later with sample backups)
- Generalized policy iteration (GPI)
- Asynchronous DP: a way to avoid exhaustive sweeps
- Bootstrapping: updating estimates based on other estimates

