Introduction to Reinforcement Learning

1 Introduction to Reinforcement Learning
E0397 Lecture Slides by Richard Sutton (With small changes)

2 Reinforcement Learning: Basic Idea
Learn to take correct actions over time through experience.
Similar to how humans learn: "trial and error".
Try an action, "see" what happens, and "remember" what happens.
Use this to change the choice of action the next time you are in the same situation.
"Converge" to learning the correct actions.
Focus on long-term return, not immediate benefit: some actions may not seem beneficial in the short term, but their long-term return is good.

3 RL in Cloud Computing
Why are we talking about RL in this class?
Which problems do you think can be solved by a learning approach?
Is the focus on long-term benefit appropriate?

4 RL Agent
The objective is to affect the environment.
The environment is stochastic and uncertain.
[Diagram: the agent takes an action; the environment returns a new state and a reward.]

5 What is Reinforcement Learning?
An approach to Artificial Intelligence.
Learning from interaction; goal-oriented learning.
Learning about, from, and while interacting with an external environment.
Learning what to do (how to map situations to actions) so as to maximize a numerical reward signal.

6 Key Features of RL
The learner is not told which actions to take.
Trial-and-error search.
Possibility of delayed reward: sacrifice short-term gains for greater long-term gains.
The need to explore and exploit.
Considers the whole problem of a goal-directed agent interacting with an uncertain environment.

7 Examples of Reinforcement Learning
Robocup soccer teams (Stone & Veloso, Riedmiller et al.): world's best player of simulated soccer, 1999; runner-up 2000.
Inventory management (Van Roy, Bertsekas, Lee & Tsitsiklis): 10-15% improvement over industry-standard methods.
Dynamic channel assignment (Singh & Bertsekas, Nie & Haykin): world's best assigner of radio channels to mobile telephone calls.
Elevator control (Crites & Barto): (probably) world's best down-peak elevator controller.
Many robots: navigation, bipedal walking, grasping, switching between skills...
TD-Gammon and Jellyfish (Tesauro, Dahl): world's best backgammon player.

8 Supervised Learning
[Diagram: Inputs → Supervised Learning System → Outputs; training info = desired (target) outputs; Error = (target output – actual output).]
Best examples: classification/identification systems, e.g., fault classification, WiFi location determination, workload identification.
E.g., classification systems: give the agent lots of data that has already been classified; the agent can "train" itself using this knowledge.
The training phase is non-trivial.
(Normally) no learning happens in the online phase.

9 Reinforcement Learning
[Diagram: Inputs → RL System → Outputs ("actions"); training info = evaluations ("rewards" / "penalties").]
Objective: get as much reward as possible.
Absolutely no pre-existing training data: the agent learns ONLY by experience.

10 Elements of RL
State: the state of the system.
Actions: suggested by the RL agent, taken by actuators in the system; actions change the state (deterministically or stochastically).
Reward: the immediate gain when taking action a in state s.
Value: the long-term benefit of taking action a in state s, i.e., rewards gained over a long time horizon.

11 The n-Armed Bandit Problem
Choose repeatedly from one of n actions; each choice is called a play.
After each play $a_t$ you get a reward $r_t$, where $E[r_t \mid a_t] = Q^*(a_t)$; these are the unknown action values.
The distribution of $r_t$ depends only on $a_t$.
The objective is to maximize the reward in the long term, e.g., over 1000 plays.
To solve the n-armed bandit problem, you must explore a variety of actions and exploit the best of them.
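A minimal sketch of such a bandit in Python, assuming Gaussian action values as in the 10-armed testbed described a few slides later; the class name GaussianBandit and its interface are illustrative, not from the slides.

```python
import numpy as np

class GaussianBandit:
    """n-armed bandit: each arm has a fixed but unknown true value Q*(a),
    and each play returns a noisy reward drawn around that value."""

    def __init__(self, n_arms=10, seed=0):
        self.rng = np.random.default_rng(seed)
        self.q_star = self.rng.normal(0.0, 1.0, size=n_arms)  # unknown action values

    def play(self, action):
        # The reward distribution depends only on the chosen action.
        return self.rng.normal(self.q_star[action], 1.0)
```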

12 The Exploration/Exploitation Dilemma
Suppose you form action-value estimates $Q_t(a) \approx Q^*(a)$.
The greedy action at time t is $a_t^* = \arg\max_a Q_t(a)$; choosing it is exploitation, choosing any other action is exploration.
You can't exploit all the time; you can't explore all the time.
You can never stop exploring, but you should always reduce exploring. Maybe.

13 Action-Value Methods
Methods that adapt action-value estimates and nothing else. For example, suppose that by the t-th play action a had been chosen $k_a$ times, producing rewards $r_1, r_2, \ldots, r_{k_a}$; then the "sample average" estimate is $Q_t(a) = (r_1 + r_2 + \cdots + r_{k_a}) / k_a$.

14 ε-Greedy Action Selection
Greedy selection: always pick the greedy action $a_t = a_t^* = \arg\max_a Q_t(a)$.
ε-greedy: with probability $1-\varepsilon$ pick the greedy action, and with probability $\varepsilon$ pick a random action; this is the simplest way to balance exploration and exploitation.
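A sketch of ε-greedy selection combined with sample-average value estimates, reusing the hypothetical GaussianBandit above; names and defaults are illustrative.

```python
import numpy as np

def epsilon_greedy_run(bandit, n_arms=10, epsilon=0.1, n_plays=1000, seed=1):
    rng = np.random.default_rng(seed)
    Q = np.zeros(n_arms)   # action-value estimates Q_t(a)
    N = np.zeros(n_arms)   # how many times each action has been chosen
    rewards = []
    for _ in range(n_plays):
        if rng.random() < epsilon:
            a = int(rng.integers(n_arms))     # explore: random action
        else:
            a = int(np.argmax(Q))             # exploit: greedy action
        r = bandit.play(a)
        N[a] += 1
        Q[a] += (r - Q[a]) / N[a]             # incremental sample average
        rewards.append(r)
    return Q, float(np.mean(rewards))
```

Averaging many such runs over independently drawn bandits is essentially the 10-armed testbed experiment summarized on the next two slides.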

15 10-Armed Testbed
n = 10 possible actions.
Each true value $Q^*(a)$ is chosen randomly from a normal distribution $N(0, 1)$; each reward $r_t$ is also normal, $N(Q^*(a_t), 1)$.
1000 plays per run; repeat the whole thing 2000 times and average the results.

16 ε-Greedy Methods on the 10-Armed Testbed

17 Incremental Implementation
Recall the sample-average estimation method: the average of the first k rewards is (dropping the dependence on the action) $Q_k = (r_1 + r_2 + \cdots + r_k) / k$.
Can we do this incrementally (without storing all the rewards)? We could keep a running sum and count, or, equivalently, $Q_{k+1} = Q_k + \frac{1}{k+1}\,[r_{k+1} - Q_k]$.
This is a common form for update rules: NewEstimate = OldEstimate + StepSize [Target – OldEstimate].
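The update rule in the last line, written as a one-line helper; the function name is illustrative.

```python
def incremental_update(old_estimate, target, step_size):
    # NewEstimate = OldEstimate + StepSize * (Target - OldEstimate)
    return old_estimate + step_size * (target - old_estimate)
```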

18 Tracking a Nonstationary Problem
Choosing the step size to give a sample average ($\alpha_k = 1/k$) is appropriate in a stationary problem, i.e., when none of the $Q^*(a)$ change over time, but not in a nonstationary problem.
Better in the nonstationary case is a constant step size $0 < \alpha \le 1$: $Q_{k+1} = Q_k + \alpha\,[r_{k+1} - Q_k]$, an exponential, recency-weighted average.
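A sketch of tracking a nonstationary bandit with a constant step size; the random-walk drift of q_star is an illustrative assumption, not from the slides.

```python
import numpy as np

def track_nonstationary(n_plays=10_000, alpha=0.1, epsilon=0.1, seed=2):
    rng = np.random.default_rng(seed)
    q_star = np.zeros(10)                         # true values start equal...
    Q = np.zeros(10)
    for _ in range(n_plays):
        q_star += rng.normal(0.0, 0.01, size=10)  # ...then drift over time (nonstationary)
        a = int(rng.integers(10)) if rng.random() < epsilon else int(np.argmax(Q))
        r = rng.normal(q_star[a], 1.0)
        Q[a] += alpha * (r - Q[a])                # constant alpha: recency-weighted average
    return Q, q_star
```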

19 Formal Description of RL

20 The Agent-Environment Interface
[Diagram: at each step t the agent observes state $s_t$, takes action $a_t$, and receives reward $r_{t+1}$ and next state $s_{t+1}$; the interaction continues at t+1, t+2, t+3, ...]

21 The Agent Learns a Policy
Policy at step t, $\pi_t$: a mapping from states to action probabilities, with $\pi_t(s, a)$ the probability that $a_t = a$ when $s_t = s$.
Reinforcement learning methods specify how the agent changes its policy as a result of experience.
Roughly, the agent's goal is to get as much reward as it can over the long run.

22 Getting the Degree of Abstraction Right
Time steps need not refer to fixed intervals of real time.
Actions can be low-level (e.g., voltages to motors), high-level (e.g., accept a job offer), "mental" (e.g., a shift in the focus of attention), etc.
States can be low-level "sensations", or they can be abstract, symbolic, based on memory, or subjective (e.g., the state of being "surprised" or "lost").
An RL agent is not like a whole animal or robot.
Reward computation is in the agent's environment because the agent cannot change it arbitrarily.
The environment is not necessarily unknown to the agent, only incompletely controllable.

23 Goals and Rewards
Is a scalar reward signal an adequate notion of a goal? Maybe not, but it is surprisingly flexible.
A goal should specify what we want to achieve, not how we want to achieve it.
A goal must be outside the agent's direct control, thus outside the agent.
The agent must be able to measure success: explicitly, and frequently during its lifespan.

24 Returns
Episodic tasks: interaction breaks naturally into episodes, e.g., plays of a game, trips through a maze.
The return is the sum of rewards, $R_t = r_{t+1} + r_{t+2} + \cdots + r_T$, where T is a final time step at which a terminal state is reached, ending an episode.

25 Returns for Continuing Tasks
Continuing tasks: interaction does not have natural episodes.
Discounted return: $R_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}$, where $\gamma$, $0 \le \gamma \le 1$, is the discount rate.

26 An Example
Avoid failure: the pole falling beyond a critical angle or the cart hitting the end of the track.
As an episodic task where the episode ends upon failure: reward = +1 for each step before failure, so the return is the number of steps before failure.
As a continuing task with discounted return: reward = –1 upon failure and 0 otherwise, so the return is $-\gamma^k$ for k steps before failure.
In either case, return is maximized by avoiding failure for as long as possible.

27 A Unified Notation
In episodic tasks, we number the time steps of each episode starting from zero.
We usually do not have to distinguish between episodes, so we write $s_t$ instead of $s_{t,j}$ for the state at step t of episode j.
Think of each episode as ending in an absorbing state that always produces a reward of zero.
We can then cover both episodic and continuing cases by writing $R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}$, where $\gamma$ can be 1 only if a zero-reward absorbing state is always reached.

28 The Markov Property
"The state" at step t means whatever information is available to the agent at step t about its environment.
The state can include immediate "sensations", highly processed sensations, and structures built up over time from sequences of sensations.
Ideally, a state should summarize past sensations so as to retain all "essential" information, i.e., it should have the Markov Property: $\Pr\{s_{t+1} = s', r_{t+1} = r \mid s_t, a_t\} = \Pr\{s_{t+1} = s', r_{t+1} = r \mid s_t, a_t, r_t, s_{t-1}, a_{t-1}, \ldots, r_1, s_0, a_0\}$ for all $s'$, $r$, and histories.

29 Markov Decision Processes
If a reinforcement learning task has the Markov Property, it is basically a Markov Decision Process (MDP).
If the state and action sets are finite, it is a finite MDP.
To define a finite MDP, you need to give: the state and action sets; the one-step "dynamics", defined by the transition probabilities $P^a_{ss'} = \Pr\{s_{t+1} = s' \mid s_t = s, a_t = a\}$; and the expected rewards $R^a_{ss'} = E\{r_{t+1} \mid s_t = s, a_t = a, s_{t+1} = s'\}$.
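One concrete way to write down a finite MDP is as a dictionary mapping each state and action to a list of (next_state, probability, expected_reward) triples. The two-state numbers below are illustrative (loosely in the spirit of the recycling robot on the next slide), not taken from the slides.

```python
# P[s][a] is a list of (next_state, probability, expected_reward) triples.
P = {
    "high": {
        "search": [("high", 0.7, 1.0), ("low", 0.3, 1.0)],
        "wait":   [("high", 1.0, 0.2)],
    },
    "low": {
        "search":   [("low", 0.6, 1.0), ("high", 0.4, -3.0)],  # depleted while searching: rescued
        "wait":     [("low", 1.0, 0.2)],
        "recharge": [("high", 1.0, 0.0)],
    },
}
```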

30 An Example Finite MDP: Recycling Robot
At each step, the robot has to decide whether to (1) actively search for a can, (2) wait for someone to bring it a can, or (3) go to home base and recharge.
Searching is better but runs down the battery; if the robot runs out of power while searching, it has to be rescued (which is bad).
Decisions are made on the basis of the current energy level: high or low.
Reward = number of cans collected.

31 Recycling Robot MDP

32 Value Functions
The value of a state is the expected return starting from that state; it depends on the agent's policy.
The value of taking an action in a state under policy π is the expected return starting from that state, taking that action, and thereafter following π (see the definitions below).
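The formula images for these definitions did not survive the transcript; in the standard notation they are:

```latex
V^{\pi}(s)   = E_{\pi}\{ R_t \mid s_t = s \}
             = E_{\pi}\Big\{ \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1} \;\Big|\; s_t = s \Big\}

Q^{\pi}(s,a) = E_{\pi}\{ R_t \mid s_t = s,\; a_t = a \}
             = E_{\pi}\Big\{ \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1} \;\Big|\; s_t = s,\; a_t = a \Big\}
```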

33 Bellman Equation for a Policy π
The basic idea: the return decomposes recursively, $R_t = r_{t+1} + \gamma R_{t+1}$.
Taking expectations under π gives the Bellman equation for $V^\pi$, shown below with and without the expectation operator.
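The equation referenced above, restored in its standard form:

```latex
V^{\pi}(s) = E_{\pi}\{ r_{t+1} + \gamma V^{\pi}(s_{t+1}) \mid s_t = s \}
           = \sum_{a} \pi(s,a) \sum_{s'} P^{a}_{ss'} \big[ R^{a}_{ss'} + \gamma V^{\pi}(s') \big]
```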

34 More on the Bellman Equation
This is a set of equations (in fact, linear), one for each state. The value function for π is its unique solution.
[Backup diagrams for $V^\pi$ and $Q^\pi$ not reproduced.]

35 Optimal Value Functions
For finite MDPs, policies can be partially ordered: $\pi \ge \pi'$ if and only if $V^\pi(s) \ge V^{\pi'}(s)$ for all s.
There are always one or more policies that are better than or equal to all the others. These are the optimal policies; we denote them all by π*.
Optimal policies share the same optimal state-value function: $V^*(s) = \max_\pi V^\pi(s)$ for all s.
Optimal policies also share the same optimal action-value function: $Q^*(s,a) = \max_\pi Q^\pi(s,a)$ for all s and a. This is the expected return for taking action a in state s and thereafter following an optimal policy.

36 Bellman Optimality Equation for V*
The value of a state under an optimal policy must equal the expected return for the best action from that state; $V^*$ is the unique solution of this system of nonlinear equations, written out below.
[Backup diagram not reproduced.]
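The Bellman optimality equation for V*, restored in its standard form:

```latex
V^{*}(s) = \max_{a} E\{ r_{t+1} + \gamma V^{*}(s_{t+1}) \mid s_t = s,\; a_t = a \}
         = \max_{a} \sum_{s'} P^{a}_{ss'} \big[ R^{a}_{ss'} + \gamma V^{*}(s') \big]
```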

37 Bellman Optimality Equation for Q*
$Q^*$ is the unique solution of the corresponding system of nonlinear equations, written out below.
[Backup diagram not reproduced.]
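The corresponding Bellman optimality equation for Q*, in standard form:

```latex
Q^{*}(s,a) = E\{ r_{t+1} + \gamma \max_{a'} Q^{*}(s_{t+1}, a') \mid s_t = s,\; a_t = a \}
           = \sum_{s'} P^{a}_{ss'} \big[ R^{a}_{ss'} + \gamma \max_{a'} Q^{*}(s', a') \big]
```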

38 Gridworld
Actions: north, south, east, west; deterministic.
If an action would take the agent off the grid: no move, but reward = –1.
Other actions produce reward = 0, except actions that move the agent out of the special states A and B as shown.
State-value function for the equiprobable random policy; γ = 0.9.

39 Why Optimal State-Value Functions are Useful
Any policy that is greedy with respect to $V^*$ is an optimal policy.
Therefore, given $V^*$, one-step-ahead search produces the long-term optimal actions.
E.g., back to the gridworld.

40 What About Optimal Action-Value Functions?
Given $Q^*$, the agent does not even have to do a one-step-ahead search: $\pi^*(s) = \arg\max_a Q^*(s, a)$.

41 Solving the Bellman Optimality Equation
Finding an optimal policy by solving the Bellman Optimality Equation requires: accurate knowledge of the environment dynamics; enough space and time to do the computation; the Markov Property.
How much space and time do we need? Polynomial in the number of states (via dynamic programming methods; Chapter 4), BUT the number of states is often huge (e.g., backgammon has about $10^{20}$ states).
We usually have to settle for approximations.
Many RL methods can be understood as approximately solving the Bellman Optimality Equation.

42 Solving MDPs
The goal in an MDP is to find the policy that maximizes long-term reward (value).
If all the information (transition probabilities and rewards) is available, this can be done by solving the equations explicitly or by iterative methods.

43 Policy Iteration
[Diagram: alternate policy evaluation and policy improvement ("greedification").]

44 Policy Iteration
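The algorithm box from this slide is not in the transcript; here is a sketch of tabular policy iteration, assuming an MDP given in the P[s][a] = [(next_state, prob, reward), ...] form shown after slide 29. Names and the convergence tolerance are illustrative.

```python
def policy_iteration(P, gamma=0.9, theta=1e-8):
    states = list(P)
    policy = {s: next(iter(P[s])) for s in states}     # arbitrary initial policy
    V = {s: 0.0 for s in states}
    while True:
        # Policy evaluation: iterate the Bellman equation for the current policy.
        while True:
            delta = 0.0
            for s in states:
                v = sum(p * (r + gamma * V[s2]) for s2, p, r in P[s][policy[s]])
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < theta:
                break
        # Policy improvement: greedify with respect to the evaluated V.
        stable = True
        for s in states:
            q = {a: sum(p * (r + gamma * V[s2]) for s2, p, r in P[s][a]) for a in P[s]}
            best = max(q, key=q.get)
            if best != policy[s]:
                policy[s] = best
                stable = False
        if stable:
            return policy, V
```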

45 From MDPs to reinforcement learning

46 MDPs vs RL
MDPs are great if we know the transition probabilities and the expected immediate rewards.
But in real-life tasks this is precisely what we do NOT know! This is what turns the task into a learning problem.
The simplest method to learn: Monte Carlo.

47 Monte Carlo Methods
Monte Carlo methods learn from complete sample returns; they are only defined for episodic tasks.
Monte Carlo methods learn directly from experience.
On-line: no model necessary, and they still attain optimality.
Simulated: no need for a full model.

48 Monte Carlo Policy Evaluation
Goal: learn $V^\pi(s)$. Given: some number of episodes under π which contain s.
Idea: average the returns observed after visits to s.
Every-visit MC: average the returns for every time s is visited in an episode.
First-visit MC: average the returns only for the first time s is visited in an episode.
Both converge asymptotically.

49 First-visit Monte Carlo policy evaluation
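The algorithm box from this slide is also missing from the transcript; a sketch of first-visit MC policy evaluation follows. It assumes a hypothetical generate_episode(pi) that returns one episode under π as a list of (state, reward) pairs.

```python
from collections import defaultdict

def first_visit_mc(generate_episode, pi, n_episodes=1000, gamma=1.0):
    """Estimate V^pi by averaging returns observed after first visits to each state."""
    returns_sum = defaultdict(float)
    returns_count = defaultdict(int)
    V = defaultdict(float)
    for _ in range(n_episodes):
        episode = generate_episode(pi)        # list of (s_t, r_{t+1}) pairs
        # Compute the return following each time step, working backwards.
        G = 0.0
        returns_at = [0.0] * len(episode)
        for t in range(len(episode) - 1, -1, -1):
            _, r = episode[t]
            G = r + gamma * G
            returns_at[t] = G
        # Average the return from the *first* visit to each state only.
        seen = set()
        for t, (s, _) in enumerate(episode):
            if s not in seen:
                seen.add(s)
                returns_sum[s] += returns_at[t]
                returns_count[s] += 1
                V[s] = returns_sum[s] / returns_count[s]
    return V
```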

50 Monte Carlo Estimation of Action Values (Q)
Monte Carlo is most useful when a model is not available: we want to learn $Q^*$.
$Q^\pi(s, a)$: the average return starting from state s and action a and thereafter following π.
This also converges asymptotically if every state-action pair is visited.
Exploring starts: every state-action pair has a non-zero probability of being the starting pair.

51 Monte Carlo Control
MC policy iteration: policy evaluation using MC methods followed by policy improvement.
Policy improvement step: greedify with respect to the value (or action-value) function.

52 Monte Carlo Exploring Starts
The fixed point is the optimal policy π*. Now proven (almost).
Problem with Monte Carlo: learning happens only after the end of an entire episode.

53 TD Prediction
Policy evaluation (the prediction problem): for a given policy π, compute the state-value function $V^\pi$.
Recall the simple every-visit Monte Carlo update: $V(s_t) \leftarrow V(s_t) + \alpha\,[R_t - V(s_t)]$, where the target $R_t$ is the actual return after time t.
The simplest TD method, TD(0), uses instead $V(s_t) \leftarrow V(s_t) + \alpha\,[r_{t+1} + \gamma V(s_{t+1}) - V(s_t)]$, where the target $r_{t+1} + \gamma V(s_{t+1})$ is an estimate of the return.
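A sketch of tabular TD(0) prediction implementing the update above; the env.reset()/env.step(a) interface and the callable policy pi are assumptions for illustration.

```python
from collections import defaultdict

def td0_prediction(env, pi, n_episodes=1000, alpha=0.1, gamma=1.0):
    V = defaultdict(float)
    for _ in range(n_episodes):
        s = env.reset()
        done = False
        while not done:
            a = pi(s)
            s_next, r, done = env.step(a)
            target = r if done else r + gamma * V[s_next]
            V[s] += alpha * (target - V[s])   # move V(s) toward the bootstrapped target
            s = s_next
    return V
```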

54 Simple Monte Carlo
[Backup diagram not reproduced.]

55 Simplest TD Method
[Backup diagram not reproduced.]

56 cf. Dynamic Programming
[Backup diagram not reproduced.]

57 TD methods bootstrap and sample
Bootstrapping: the update involves an estimate. MC does not bootstrap; DP bootstraps; TD bootstraps.
Sampling: the update does not involve an expected value. MC samples; DP does not sample; TD samples.

58 Example: Driving Home
[Table of elapsed and predicted travel times for each stage of the journey not reproduced.]

59 Driving Home
Changes recommended by Monte Carlo methods (α = 1) versus by TD methods (α = 1). [Figures not reproduced.]

60 Advantages of TD Learning
TD methods do not require a model of the environment, only experience.
TD, but not MC, methods can be fully incremental: you can learn before knowing the final outcome (less memory, less peak computation), and you can learn without the final outcome (from incomplete sequences).
Both MC and TD converge (under certain assumptions to be detailed later), but which is faster?

61 Random Walk Example
Values learned by TD(0) after various numbers of episodes. [Figure not reproduced.]

62 TD and MC on the Random Walk
Data averaged over 100 sequences of episodes

63 Optimality of TD(0)
Batch updating: train completely on a finite amount of data, e.g., train repeatedly on 10 episodes until convergence.
Compute updates according to TD(0), but only update estimates after each complete pass through the data.
For any finite Markov prediction task, under batch updating, TD(0) converges for sufficiently small α.
Constant-α MC also converges under these conditions, but to a different answer!

64 Random Walk under Batch Updating
After each new episode, all previous episodes were treated as a batch, and the algorithm was trained until convergence. This was repeated 100 times.

65 You are the Predictor
Suppose you observe the following 8 episodes (state, reward sequences):
A, 0, B, 0
B, 1
B, 1
B, 1
B, 1
B, 1
B, 1
B, 0

66 You are the Predictor

67 You are the Predictor
The prediction that best matches the training data is V(A) = 0: this minimizes the mean-squared error on the training set, and it is what a batch Monte Carlo method gets.
If we consider the sequentiality of the problem, then we would set V(A) = 0.75.
This is correct for the maximum-likelihood estimate of a Markov model generating the data, i.e., if we do a best-fit Markov model, assume it is exactly correct, and then compute what it predicts (how?).
This is called the certainty-equivalence estimate.
This is what TD(0) gets.

68 Learning An Action-Value Function

69 Sarsa: On-Policy TD Control
Turn the action-value TD update of the previous slide, $Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha\,[r_{t+1} + \gamma Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)]$, into a control method by always updating the policy to be greedy with respect to the current estimate; a sketch follows below.
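A sketch of the Sarsa control loop with ε-greedy behaviour, as referenced above; the environment interface and hyperparameters are illustrative.

```python
import random
from collections import defaultdict

def sarsa(env, actions, n_episodes=500, alpha=0.5, gamma=1.0, epsilon=0.1):
    Q = defaultdict(float)                    # Q[(state, action)]

    def eps_greedy(s):
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])

    for _ in range(n_episodes):
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = eps_greedy(s2)
            target = r if done else r + gamma * Q[(s2, a2)]
            Q[(s, a)] += alpha * (target - Q[(s, a)])   # on-policy: uses the action actually taken next
            s, a = s2, a2
    return Q
```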

70 Windy Gridworld
Undiscounted, episodic, reward = –1 until the goal is reached.

71 Results of Sarsa on the Windy Gridworld

72 Q-Learning: Off-Policy TD Control
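For comparison with the Sarsa sketch above, a sketch of one-step tabular Q-learning; the only change is that the target takes the max over next actions, so the method learns about the greedy policy while behaving ε-greedily (off-policy). Interfaces are the same assumptions as before.

```python
import random
from collections import defaultdict

def q_learning(env, actions, n_episodes=500, alpha=0.5, gamma=1.0, epsilon=0.1):
    Q = defaultdict(float)
    for _ in range(n_episodes):
        s = env.reset()
        done = False
        while not done:
            # Behave epsilon-greedily...
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda a_: Q[(s, a_)])
            s2, r, done = env.step(a)
            # ...but learn about the greedy policy (max over next actions).
            best_next = 0.0 if done else max(Q[(s2, a_)] for a_ in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```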

73 Planning and Learning Use of environment models
Integration of planning and learning methods

74 The Original Idea Sutton, 1990

75 The Original Idea Sutton, 1990

76 Models
Model: anything the agent can use to predict how the environment will respond to its actions.
Distribution model: a description of all possibilities and their probabilities, e.g., the transition probabilities $P^a_{ss'}$ and expected rewards $R^a_{ss'}$.
Sample model: produces sample experiences, e.g., a simulation model.
Both types of models can be used to produce simulated experience.
Often sample models are much easier to come by.

77 Planning
Planning: any computational process that uses a model to create or improve a policy.
Planning in AI: state-space planning; plan-space planning (e.g., partial-order planners).
We take the following (unusual) view: all state-space planning methods involve computing value functions, either explicitly or implicitly, and they all apply backups to simulated experience.

78 Planning Cont.
Classical DP methods are state-space planning methods; heuristic search methods are state-space planning methods.
A planning method based on Q-learning: random-sample one-step tabular Q-planning.

79 Learning, Planning, and Acting
Two uses of real experience: model learning (to improve the model) and direct RL (to directly improve the value function and policy).
Improving the value function and/or policy via a model is sometimes called indirect RL or model-based RL. Here, we call it planning.

80 Direct vs. Indirect RL
Indirect methods make fuller use of experience: they get a better policy with fewer environment interactions.
Direct methods are simpler and are not affected by bad models.
But the two are very closely related and can be usefully combined: planning, acting, model learning, and direct RL can occur simultaneously and in parallel.

81 The Dyna Architecture (Sutton 1990)

82 The Dyna-Q Algorithm
[Algorithm box combining direct RL, model learning, and planning; see the sketch below.]
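A sketch of tabular Dyna-Q combining the three parts named in the algorithm box (direct RL, model learning, planning); it assumes a deterministic environment so the model can store a single outcome per (state, action), and all interfaces are illustrative.

```python
import random
from collections import defaultdict

def dyna_q(env, actions, n_episodes=50, n_planning=10, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)
    model = {}                                 # (s, a) -> (r, s', done), deterministic model
    for _ in range(n_episodes):
        s = env.reset()
        done = False
        while not done:
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda a_: Q[(s, a_)])
            s2, r, done = env.step(a)
            # (a) direct RL: Q-learning update from real experience
            best = 0.0 if done else max(Q[(s2, a_)] for a_ in actions)
            Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
            # (b) model learning: remember what the environment did
            model[(s, a)] = (r, s2, done)
            # (c) planning: n extra updates from simulated (remembered) experience
            for _ in range(n_planning):
                (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
                pbest = 0.0 if pdone else max(Q[(ps2, a_)] for a_ in actions)
                Q[(ps, pa)] += alpha * (pr + gamma * pbest - Q[(ps, pa)])
            s = s2
    return Q
```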

83 Dyna-Q on a Simple Maze
Rewards = 0 until the goal is reached, when reward = 1.

84 Value Prediction with FA
As usual, policy evaluation (the prediction problem): for a given policy π, compute the state-value function $V^\pi$.
In earlier chapters, value functions were stored in lookup tables.

85 Adapt Supervised Learning Algorithms
[Diagram: Inputs → Supervised Learning System → Outputs; training info = desired (target) outputs.]
Training example = {input, target output}; Error = (target output – actual output).


