
1 REINFORCEMENT LEARNING Unsupervised learning

2 So far …
- Supervised machine learning: given a set of annotated instances and a set of categories, find a model to automatically categorize unseen instances.
- Unsupervised learning: no annotated examples are available.
- Three types of unsupervised learning algorithms:
  - Rule learning: find regularities in the data.
  - Clustering: given a set of instances described by feature vectors, group them into clusters such that intra-cluster similarity and inter-cluster dissimilarity are maximized.
  - Reinforcement learning (today).

3 Reinforcement learning
- The agent receives no examples and starts with no model of the environment.
- The agent gets feedback through rewards, or reinforcement.

4 Learning by reinforcement
- Examples: learning to play Backgammon; a robot learning to move in an environment.
- Characteristics:
  - No direct training examples; (possibly delayed) rewards instead.
  - Need to balance exploration of the environment with exploitation of what has already been learned.
  - The environment might be stochastic and/or unknown.
  - The actions of the learner affect future rewards.

5 Example: Robot moving in an environment (a maze)

6 Example: playing a game (e.g. backgammon, chess)

7 Reinforcement Learning vs. Supervised Learning
- Supervised learning: the input is an instance, the output is a classification of the instance.
- Reinforcement learning: the input is some "goal", the output is a sequence of actions to meet the goal.

8 Reinforcement learning

9 Elements of RL
- Transition model: how actions A influence states S.
- Reward R: the immediate value of a state-action transition.
- Policy π : S → A, maps states to actions.
[Figure: the agent-environment loop, with the agent sending an action to the environment and receiving back a state and a reward, as dictated by its policy.]
In RL, the "model" to be learned is a policy that meets "at best" some given goal (e.g. win a game).

10 Markov Decision Process (MDP)
- An MDP is a formal model of the RL problem.
- At each discrete time point:
  - The agent observes state s_t and chooses action a_t (according to some probability distribution).
  - It receives reward r_t from the environment, and the state changes to s_{t+1}.
- Markov assumption: r_t = r(s_t, a_t) and s_{t+1} = δ(s_t, a_t), i.e. r_t and s_{t+1} depend only on the current state and action.
- In general, the functions r and δ may be stochastic rather than deterministic, and are not necessarily known to the agent.

11 Agent's Learning Task
Execute actions in the environment, observe the results, and:
- Learn an action policy π : S → A that maximises the expected cumulative discounted reward from any starting state s_0 in S. Here γ is the discount factor for future rewards and r_k is the reward at time k (see the formula below).
- Note: the target function is π : S → A, but there are no training examples of the form (s, a); only examples of the form ((s, a), r).
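In standard notation, the quantity to be maximised is the expected discounted return (with discount factor γ, 0 ≤ γ < 1):

```latex
E\!\left[\, r_0 + \gamma r_1 + \gamma^{2} r_2 + \cdots \right]
  \;=\; E\!\left[\, \sum_{k=0}^{\infty} \gamma^{k} r_k \right],
\qquad 0 \le \gamma < 1 .
```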

12 Example: TD-Gammon
- Immediate reward: +100 if the game is won, -100 if it is lost, 0 for all other states.
- Trained by playing 1.5 million games against itself.
- Now approximately equal to the best human player.

13 Example: Mountain-Car
- States: position and velocity.
- Actions: accelerate forward, accelerate backward, coast.
- Rewards (two alternative schemes): reward -1 for every step until the car reaches the top, or reward 1 at the top and 0 otherwise, with γ < 1.
- Either way, the expected reward is maximised by minimising the number of steps to the top of the hill.

14 Value function
We will consider a deterministic world first: rewards are known and the environment is known.
- Given a policy π (adopted by the agent), define an evaluation function over states: V^π(s_t) is the value of being in state s_t when following policy π.
- Remember: a policy is a mapping from states to actions; the target is to find the optimal policy.
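In the deterministic setting, a standard way to write this evaluation function is as the discounted sum of the rewards collected by following π:

```latex
V^{\pi}(s_t) \;=\; r_t + \gamma\, r_{t+1} + \gamma^{2} r_{t+2} + \cdots
            \;=\; \sum_{i=0}^{\infty} \gamma^{i}\, r_{t+i},
\qquad a_k = \pi(s_k),\; s_{k+1} = \delta(s_k, a_k).
```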

15 Example: robot in a grid environment
- Grid world environment with six possible states (cells); arrows represent possible actions; G is the goal state.
- Actions: UP, DOWN, LEFT, RIGHT.
- Known environment: a grid of cells. Deterministic: the possible moves in any cell are known, and the rewards are known.
- r(state, action) gives the immediate reward values: 100 for actions entering the goal state G (e.g. r((2,3), UP) = 100), 0 for all other actions.

16 Example: robot in a grid environment
- Value function: maps states to state values, according to some policy π. How is V estimated?
- Next to the r(state, action) immediate reward values, the figure shows the resulting V*(state) values (γ = 0.9): top row 90, 100, 0 (the goal G), bottom row 81, 90, 100.
- The "value" of being in (1,2) according to some known π is 100.

17 Computing V(s) (if policy π is given)
Suppose the following is an optimal policy, denoted π*: π* tells us the best thing to do in each state. According to π*, we can compute the values of the states under this policy, denoted V^π*, or simply V*.

18 r(s, a) (immediate reward) values and V*(s) values, with γ = 0.9, for one optimal policy:
- V*(s6) = 100 + 0.9 · 0 = 100
- V*(s5) = 0 + 0.9 · 100 = 90
- V*(s4) = 0 + 0.9 · 90 = 81
- etc.
However, π* is usually unknown, and the task is to learn the optimal policy.
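The worked values above follow the usual recursion for the optimal value function in a deterministic environment:

```latex
V^{*}(s) \;=\; \max_{a}\,\big[\, r(s,a) + \gamma\, V^{*}(\delta(s,a)) \,\big],
\qquad \text{e.g. } V^{*}(s_5) = 0 + 0.9 \cdot V^{*}(s_6) = 90 .
```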

19 How do we learn the optimal policy π*?
- The target function is π : state → action.
- However, we have no training examples of the form <state, action>; we only have examples of the form <<state, action>, reward>.
- The reward might not be known for all states!

20 Utility-based agents
- Try to learn V^π* (abbreviated V*).
- Perform look-ahead search to choose the best action from any state s.
- This works well if the agent knows both δ : state × action → state and r : state × action → R.
- When the agent doesn't know δ and r, it cannot choose actions this way.
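The look-ahead choice can be written explicitly as the usual one-step look-ahead rule, which is exactly where knowledge of δ and r is needed:

```latex
\pi^{*}(s) \;=\; \arg\max_{a}\,\big[\, r(s,a) + \gamma\, V^{*}(\delta(s,a)) \,\big].
```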

21 Q-learning: a utility-based learner for deterministic environments
- Q-values: define a new function Q, very similar to V*, giving the value of taking action a in state s: Q(s, a) = r(s, a) + γ V*(δ(s, a)).
- If the agent learns Q, it can choose the optimal action even without knowing δ or r: π*(s) = argmax_a Q(s, a).
- What's new?? Apparently Q again depends on the (unknown) δ and r.

22 Learning the Q-value
- Note: Q and V* are closely related: V*(s) = max_a Q(s, a).
- This allows us to write Q recursively: the Q-value of being in state s and performing action a equals the immediate reward r(s, a) plus γ times the maximum Q-value, over all allowed actions a', of the state reached from s by that action (δ(s_t, a) = s_{t+1} is the transition function).
- This learning rule is also called temporal-difference learning.
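In symbols, the relation between Q and V* and the resulting recursive training rule are (with Q-hat denoting the learner's current estimate and s' = δ(s, a)):

```latex
Q(s,a) \;=\; r(s,a) + \gamma\, V^{*}(\delta(s,a))
       \;=\; r(s,a) + \gamma \max_{a'} Q\big(\delta(s,a),\, a'\big),
\qquad
\hat{Q}(s,a) \;\leftarrow\; r + \gamma \max_{a'} \hat{Q}(s', a').
```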

23 Q-learning (pseudo-code)
Given: a state diagram with a goal state (represented by matrix R).
Find: the minimum path from any initial state to the goal state (represented by matrix Q).
1. Set the parameter γ, the environment reward matrix R and the goal state.
2. Initialize matrix Q to the zero matrix.
3. For each episode:
   - Select a random initial state.
   - Do while the goal state has not been reached:
     - Randomly select one of the possible actions for the current state.
     - Using this action, consider going to the next state δ(s_t, a) = s_{t+1}.
     - Get the maximum Q value of this next state, based on all its possible actions and the current values of the Q matrix.
     - Compute Q(state, action) = R(state, action) + γ · max_{a'} Q(next state, a').
     - Update the Q(state, action) entry in the Q matrix.
     - Set the next state as the current state.
   End Do
   End For
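A minimal Python sketch of the pseudo-code above, assuming the common matrix encoding in which an action is identified with the state it leads to, so that R[s, s2] holds the immediate reward and a negative entry marks a forbidden move (function and variable names are illustrative):

```python
import numpy as np

def q_learning(R, goal, gamma=0.8, episodes=1000, seed=0):
    """Tabular Q-learning for a deterministic, fully known environment.

    R[s, s2] is the immediate reward for moving from state s to state s2,
    or a negative value if that move is not allowed.
    """
    rng = np.random.default_rng(seed)
    n = R.shape[0]
    Q = np.zeros((n, n))                       # step 2: Q initialised to zero

    for _ in range(episodes):                  # step 3: one episode per loop
        s = rng.integers(n)                    # random initial state
        while s != goal:                       # inner loop until goal reached
            allowed = np.flatnonzero(R[s] >= 0)    # possible actions from s
            s2 = rng.choice(allowed)               # pick one at random
            # deterministic update: Q(s,a) <- r(s,a) + gamma * max_a' Q(s',a')
            Q[s, s2] = R[s, s2] + gamma * Q[s2].max()
            s = s2                                 # next state becomes current
    return Q
```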

24 Algorithm to utilize the Q table
Input: Q matrix, initial state
1. Set current state = initial state.
2. From the current state, find the action that produces the maximum Q value; this determines the next state.
3. Set current state = next state.
4. Go to 2 until current state = goal state.
The algorithm above returns the sequence of states from the initial state to the goal state.
Comments: the parameter γ ranges from 0 to 1 (0 ≤ γ < 1). If γ is closer to zero, the agent tends to consider only immediate rewards. If γ is closer to one, the agent weighs future rewards more heavily and is willing to delay the reward.
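A matching sketch for exploiting the learned table: from the current state, always take the allowed action with the highest Q value (same assumed R-matrix encoding as in the training sketch above):

```python
import numpy as np

def greedy_path(Q, R, start, goal, max_steps=100):
    """Follow the highest-valued allowed action until the goal is reached."""
    path, s = [start], start
    for _ in range(max_steps):                 # guard against a non-converged Q table
        if s == goal:
            break
        allowed = np.flatnonzero(R[s] >= 0)    # actions available in state s
        s = int(allowed[np.argmax(Q[s, allowed])])   # best allowed action
        path.append(s)
    return path
```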

25 Q-Learning: Example (γ = 0.9), Episode 1
- The Q-values table (states s1…s6, actions N, S, W, E) starts with all entries equal to 0. Rewards: 1 for entering the goal state s3, 0 for every other move.
- Set current state = initial state s4. Possible moves from s4: North (to s1) and East (to s5). Select (at random) to move East, to s5.
- The "max Q" term looks for the most promising action from s', but all Q-values in s5 are still 0; therefore Q(s4, E) = 0 + 0.9 · 0 = 0.

26 Q-Learning: Example (γ = 0.9)
- s4 → s5; randomly select s6 as the next move.
- From s5, moving to s6 does not yield any reward either (all Q(s6, a) = 0).
- Update: Q(s5, E) = 0 + 0.9 · max_{a'} Q(s6, a') = 0 + 0.9 · 0 = 0.

27 Q-Learning: Example (γ = 0.9)
- s4 → s5 → s6; randomly select s3 as the next move.
- Entering s3 gives r = 1, so Q(s6, N) is updated to 1 + 0.9 · 0 = 1.
- GOAL STATE REACHED: END OF FIRST EPISODE.

28 Q-Learning: Example (γ = 0.9), Episode 2
- The Q table now contains Q(s6, N) = 1; all other entries are still 0.
- s = s4, s' = s1: Q(s4, N) = 0 + 0.9 · max_{a'} Q(s1, a') = 0.

29 Q-Learning: Example (γ = 0.9)
- s4 → s1; choose s' = s2: Q(s1, E) = 0 + 0.9 · max_{a'} Q(s2, a') = 0.

30 Q-Learning: Example (γ = 0.9)
- s4 → s1 → s2; choose s' = s3: entering s3 gives r = 1, so Q(s2, E) = 1 + 0.9 · 0 = 1.
- GOAL STATE REACHED: END OF 2nd EPISODE.

31 Q-Learning: Example (γ = 0.9), Episode 3
- The Q table now contains Q(s2, E) = 1 and Q(s6, N) = 1; all other entries are 0.
- s = s4, s' = s5: Q(s4, E) = 0 + 0.9 · max_{a'} Q(s5, a') = 0 + 0 = 0.

32 Q-Learning: Example (γ = 0.9)
- s = s5, s' = s2: Q(s5, N) = 0 + 0.9 · max_{a'} Q(s2, a') = 0 + 0.9 · 1 = 0.9.

33 Q-Learning: Example (γ = 0.9)
- s = s2, s' = s3: Q(s2, E) = 1 + 0.9 · 0 = 1 (unchanged).
- GOAL STATE REACHED: END OF 3rd EPISODE.

34 Q-Learning: Example (γ = 0.9), Episode 4
- The Q table now contains Q(s2, E) = 1, Q(s5, N) = 0.9 and Q(s6, N) = 1.
- s = s4, s' = s5: Q(s4, E) = 0 + 0.9 · max_{a'} Q(s5, a') = 0 + 0.9 · 0.9 = 0.81.

35 Q-Learning: Example (γ = 0.9)
- s = s5, s' = s2: Q(s5, N) = 0 + 0.9 · max_{a'} Q(s2, a') = 0.9 (unchanged).

36 Q-Learning: Example (γ = 0.9)
- s = s2, s' = s3: Q(s2, E) = 1 + 0.9 · 0 = 1 (unchanged).
- GOAL REACHED: END OF 4th EPISODE.

37 Q-Learning: Example (γ = 0.9), Episode 5
- s = s4, s' = s1: Q(s4, N) = 0 + 0.9 · max_{a'} Q(s1, a') = 0 + 0 = 0.

38 Q-Learning: Example (γ = 0.9)
- s = s1, s' = s2: Q(s1, E) = 0 + 0.9 · max_{a'} Q(s2, a') = 0 + 0.9 · 1 = 0.9.

39 Q-Learning: Example (γ = 0.9)
- s = s2, s' = s3: Q(s2, E) = 1 + 0.9 · 0 = 1 (unchanged).
- GOAL REACHED: END OF 5th EPISODE.

40 Q-Learning: Example (γ = 0.9)
After several iterations, the algorithm converges to the following table:

        N     S     W     E
  s1    0     0.72  0     0.9
  s2    0     0.81  0.81  1
  s3    0     0     0     0
  s4    0.81  0     0     0.81
  s5    0.9   0     0.72  0.9
  s6    1     0     0.81  0

41 Yet another example
- Environment: A to E are rooms, F is the area outside the building (the target). The aim is for an agent to learn how to get out of the building, from any of the rooms, in an optimal way.

42  Modeling of the environment

43 State, Action, Reward and Q-value  Reward matrix

44 Q-table and the update rule
Q table update rule: Q(s, a) = R(s, a) + γ · max_{a'} Q(s', a'), where γ is the learning parameter.

45 Numerical Example
Let us set the value of the learning parameter γ to 0.8 and the initial state to room B.

46 Episode 1: start from B
Look at the second row (state B) of matrix R. There are two possible actions for the current state B: go to state D, or go to state F. By random selection, we select going to F as our action.
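Assuming, as is usual for this example, that R(B, F) = 100 (a door leading directly outside) and that all Q entries start at zero, the update for this first move is:

```latex
Q(B,F) \;=\; R(B,F) + 0.8 \cdot \max\big\{ Q(F,B),\, Q(F,E),\, Q(F,F) \big\}
       \;=\; 100 + 0.8 \cdot 0 \;=\; 100 .
```

Since F is the goal state, this ends the first episode.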

47 Episode 2: start from D
This time we happen to have state D as our initial state. From R, it has three possible actions: go to B, C or E. We randomly select going to state B as our action.

48 Episode 2 (cont'd)
The next state is B, which now becomes the current state. We repeat the inner loop of the Q-learning algorithm because state B is not the goal state. There are two possible actions from the current state B: go to state D, or go to state F. By lucky draw, our selected action leads to state F. This update produces no change in the Q matrix.

49 After Many Episodes
If our agent gains more and more experience through many episodes, the Q matrix will finally reach its convergence values (often shown normalized to percentages of the largest entry).

50 Once the Q matrix has (almost) reached its convergence values, our agent can reach the goal in an optimum way. To trace the sequence of states, it simply finds, at each state, the action that yields the maximum Q value. For example, from initial state C, using the Q matrix we obtain the sequences C → D → B → F or C → D → E → F.
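Putting the whole room example together, here is a hedged sketch that reuses the q_learning and greedy_path helpers sketched after slides 23 and 24. The reward matrix is an assumption reconstructed from the moves described in the episodes above (B → {D, F}, D → {B, C, E}, C → {D}, paths C → D → B → F and C → D → E → F), with reward 100 for any door leading directly outside:

```python
import numpy as np

# States 0..5 stand for rooms A, B, C, D, E and the outside F (the goal).
# -1 = no door, 0 = door, 100 = door leading directly outside (assumed layout).
R = np.array([
    #  A    B    C    D    E    F
    [ -1,  -1,  -1,  -1,   0,  -1],   # A
    [ -1,  -1,  -1,   0,  -1, 100],   # B
    [ -1,  -1,  -1,   0,  -1,  -1],   # C
    [ -1,   0,   0,  -1,   0,  -1],   # D
    [  0,  -1,  -1,   0,  -1, 100],   # E
    [ -1,   0,  -1,  -1,   0, 100],   # F
])

Q = q_learning(R, goal=5, gamma=0.8)             # training sketch from slide 23
print(np.round(100 * Q / Q.max()).astype(int))   # normalised Q, cf. slide 49
print(greedy_path(Q, R, start=2, goal=5))        # e.g. [2, 3, 1, 5]: C -> D -> B -> F
```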

51 The non-deterministic case
- What if the reward and the state transition are non-deterministic? E.g. in Backgammon, learning and playing depend on the rolls of the dice!
- Then V and Q need to be redefined by taking expected values.
- Similar reasoning applies, and a convergent update iteration still exists.

52 Non-deterministic case

53 Non-deterministic case
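A common way to state the expected-value redefinitions and the corresponding convergent update rule (the learning rate α_n and the visit counter are additional symbols introduced here, following the standard textbook treatment):

```latex
V^{\pi}(s) \;=\; E\!\left[\sum_{k\ge 0} \gamma^{k} r_{k}\right],
\qquad
Q(s,a) \;=\; E\big[r(s,a)\big] + \gamma \sum_{s'} P(s' \mid s,a)\, \max_{a'} Q(s',a'),
```

```latex
\hat{Q}_{n}(s,a) \;\leftarrow\; (1-\alpha_{n})\, \hat{Q}_{n-1}(s,a)
  + \alpha_{n} \Big[ r + \gamma \max_{a'} \hat{Q}_{n-1}(s',a') \Big],
\qquad \alpha_{n} = \frac{1}{1 + \mathit{visits}_{n}(s,a)} .
```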

54 Summary
- Reinforcement learning is suitable for learning in uncertain environments where rewards may be delayed and subject to chance.
- The goal of a reinforcement learning program is to maximise the eventual cumulative reward.
- Q-learning is a form of reinforcement learning that doesn't require the learner to have prior knowledge of how its actions affect the environment.

