Reinforcement Learning
R. S. Sutton and A. G. Barto
Summarized by Joon Shik Kim
12.03.29 (Thu)
Computational Models of Intelligence
Introduction
The idea that we learn by interacting with our environment is probably the first to occur to us when we think about the nature of learning. When an infant plays, waves its arms, or looks about, it has no explicit teacher, but it does have a direct sensorimotor connection to its environment. Exercising this connection produces a wealth of information about cause and effect, about the consequences of actions, and about what to do in order to achieve goals.
Elements of Reinforcement Learning (1/2)
A policy: the learning agent's way of behaving at a given time; a mapping from perceived states of the environment to actions to be taken when in those states.
A reward function: the goal in a reinforcement learning problem. Each perceived state of the environment is mapped to a single number, a reward, indicating the intrinsic desirability of that state.
Elements of Reinforcement Learning (2/2)
A value function: specifies what is good in the long run. Roughly speaking, the value of a state is the total amount of reward an agent can expect to accumulate over the future, starting from that state.
Update Rule
If we let s denote the state before the greedy move, and s' the state after the move, then the update to the estimated value of s, denoted V(s), can be written as

V(s) ← V(s) + α[V(s') − V(s)],

where α is a small positive fraction called the step-size parameter, which influences the rate of learning.
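A minimal sketch of this update in Python, assuming state values are kept in a dictionary keyed by hashable states (the state names, initial values, and step size below are illustrative, not from the slides):

# Value update from the slide: V(s) <- V(s) + alpha * (V(s') - V(s))
def update_value(values, s, s_next, alpha=0.1):
    """Move the estimate V(s) a fraction alpha toward V(s')."""
    values[s] = values[s] + alpha * (values[s_next] - values[s])

# Hypothetical example: the post-move state looks better, so V("before") rises.
values = {"before": 0.5, "after": 1.0}
update_value(values, "before", "after", alpha=0.1)
print(values["before"])  # 0.55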
Action-Value Methods (1/2)
We denote the true (actual) value of action a as Q*(a) and the estimated value at the t-th play as Q_t(a). Recall that the true value of an action is the mean reward received when that action is selected. If at the t-th play action a has been chosen k_a times prior to t, yielding rewards r_1, r_2, …, r_{k_a}, then its value is estimated to be

Q_t(a) = (r_1 + r_2 + … + r_{k_a}) / k_a.
Action-Value Methods (2/2)
As k_a → ∞, by the law of large numbers Q_t(a) converges to Q*(a). The simplest action selection rule is to select the action (or one of the actions) with the highest estimated action value, that is, to select on play t one of the greedy actions, a*, for which Q_t(a*) = max_a Q_t(a).
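A small sketch of these two slides in Python, keeping a list of observed rewards per action, forming the sample-average estimates, and then choosing greedily; the action names and reward lists are illustrative, not from the slides:

# Sample-average estimate: Q_t(a) = (r_1 + ... + r_{k_a}) / k_a
# Greedy selection:        a* such that Q_t(a*) = max_a Q_t(a)
rewards_per_action = {      # hypothetical rewards observed so far
    "a1": [1.0, 0.0, 1.0],
    "a2": [0.5, 0.5],
}

def q_estimate(rewards):
    """Sample-average estimate; 0.0 if the action has never been tried."""
    return sum(rewards) / len(rewards) if rewards else 0.0

q_values = {a: q_estimate(rs) for a, rs in rewards_per_action.items()}
greedy_action = max(q_values, key=q_values.get)
print(q_values, greedy_action)  # {'a1': 0.666..., 'a2': 0.5} a1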
Incremental Implementation
Action-value estimates can be computed incrementally, without storing all past rewards, using updates of the general form

NewEstimate ← OldEstimate + StepSize [Target − OldEstimate].
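As a sketch of this general form in Python: the step size 1/n below is one particular choice (it recovers the plain sample average) and is an assumption beyond the slide's generic StepSize; the reward sequence is illustrative:

# Incremental sample average:
# Q_{k+1} = Q_k + (1 / (k + 1)) * (r_{k+1} - Q_k)
class IncrementalAverage:
    def __init__(self):
        self.estimate = 0.0
        self.count = 0

    def update(self, target):
        """NewEstimate <- OldEstimate + StepSize * (Target - OldEstimate)."""
        self.count += 1
        step_size = 1.0 / self.count
        self.estimate += step_size * (target - self.estimate)
        return self.estimate

avg = IncrementalAverage()
for r in [1.0, 0.0, 1.0]:
    avg.update(r)
print(avg.estimate)  # 0.666..., identical to the plain sample average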
Reinforcement Comparison
A central intuition underlying reinforcement learning is that actions followed by large rewards should be made more likely to recur, whereas actions followed by small rewards should be made less likely to recur. If an action is taken and the environment returns a reward of 5, is that large or small? To make such a judgment one must compare the reward with some standard or reference level, called the reference reward.
Reinforcement Comparison
In order to pick among the actions, reinforcement comparison methods maintain a separate measure of their preference for each action. Let us denote the preference for action a on play t by p_t(a). The preferences might be used to determine action-selection probabilities according to a softmax relationship, such as

π_t(a) = e^{p_t(a)} / Σ_b e^{p_t(b)}.
Reinforcement Comparison
Here π_t(a) denotes the probability of selecting action a on the t-th play. After each play, the preference for the action selected on that play, a_t, is incremented by the difference between the reward, r_t, and the reference reward, r̄_t:

p_{t+1}(a_t) = p_t(a_t) + β[r_t − r̄_t],

where β is a positive step-size parameter.
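A compact sketch of one reinforcement-comparison step in Python. The reference-reward update r̄ ← r̄ + α(r − r̄), the step sizes, and the two-action reward function below are assumptions for illustration, not shown on these slides:

import math
import random

def softmax_probs(prefs):
    """pi_t(a) = exp(p_t(a)) / sum_b exp(p_t(b))."""
    exps = {a: math.exp(p) for a, p in prefs.items()}
    total = sum(exps.values())
    return {a: e / total for a, e in exps.items()}

def reinforcement_comparison_step(prefs, ref_reward, reward_fn,
                                  beta=0.1, alpha=0.1):
    probs = softmax_probs(prefs)
    actions = list(probs)
    a_t = random.choices(actions, weights=[probs[a] for a in actions])[0]
    r_t = reward_fn(a_t)
    # Preference update: p_{t+1}(a_t) = p_t(a_t) + beta * (r_t - ref_reward)
    prefs[a_t] += beta * (r_t - ref_reward)
    # Reference reward tracks an incremental average of received rewards.
    ref_reward += alpha * (r_t - ref_reward)
    return prefs, ref_reward

prefs = {"a1": 0.0, "a2": 0.0}                 # hypothetical two-action problem
ref = 0.0
bandit = lambda a: 1.0 if a == "a1" else 0.0   # hypothetical reward function
for _ in range(100):
    prefs, ref = reinforcement_comparison_step(prefs, ref, bandit)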
Reinforcement Comparison
Q-learning
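The deck closes with a Q-learning slide whose body is not reproduced in this transcript. As a hedged sketch, the standard one-step tabular Q-learning update is shown below; the states, actions, and parameter values are illustrative:

# One-step tabular Q-learning update:
# Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
from collections import defaultdict

Q = defaultdict(float)          # Q-values keyed by (state, action) pairs
actions = ["left", "right"]     # hypothetical action set

def q_learning_update(s, a, r, s_next, alpha=0.1, gamma=0.9):
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

q_learning_update(s="s0", a="right", r=1.0, s_next="s1")
print(Q[("s0", "right")])  # 0.1 after the first update from zero initialization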