Outline
– MDP (brief)
  – Background
  – Learning MDPs: Q-learning
– Game theory (brief)
  – Background
– Markov games (2-player)
  – Background
  – Learning Markov games
    – Littman's minimax-Q learning (zero-sum)
    – Hu & Wellman's Nash-Q learning (general-sum)
MDP / SG / POSG
– Stochastic games (SG)
– Partially observable stochastic games (POSG)
The Bellman equation: v(s) = max_a [ r(s, a) + γ Σ_{s'} p(s' | s, a) v(s') ], where r(s, a) is the immediate reward, the sum over s' is the expectation over next states, and v(s') is the value of the next state.
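To make the backup concrete, here is a minimal value-iteration sketch (not from the slides); it assumes a model P where P[s][a] is a list of (probability, next_state, reward) triples.

```python
# Minimal value iteration for the Bellman equation above.
# P[s][a] -> list of (prob, next_state, reward) triples (assumed layout).
def value_iteration(P, num_states, gamma=0.9, tol=1e-6):
    v = [0.0] * num_states
    while True:
        delta = 0.0
        for s in range(num_states):
            # Bellman backup: immediate reward plus the discounted
            # expectation of the next state's value.
            best = max(
                sum(p * (r + gamma * v[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            return v
```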
Model-based reinforcement learning:
1. Learn the reward function and the state-transition function.
2. Solve for the optimal policy.
Model-free reinforcement learning:
1. Directly learn the optimal policy without knowing the reward function or the state-transition function.
Model estimation from experience: let n(s, a) be the number of times action a has been executed in state s, n(s, a, s') the number of times a caused the transition s → s', and R(s, a) the total reward accrued when applying a in s. The maximum-likelihood estimates are then p̂(s' | s, a) = n(s, a, s') / n(s, a) and r̂(s, a) = R(s, a) / n(s, a).
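A minimal sketch of these counts and estimates (the class and method names are illustrative, not from the slides):

```python
from collections import defaultdict

class ModelEstimator:
    """Maximum-likelihood model estimates from observed transitions."""
    def __init__(self):
        self.n_sa = defaultdict(int)             # n(s, a)
        self.n_sas = defaultdict(int)            # n(s, a, s')
        self.total_reward = defaultdict(float)   # total reward for (s, a)

    def update(self, s, a, r, s_next):
        self.n_sa[(s, a)] += 1
        self.n_sas[(s, a, s_next)] += 1
        self.total_reward[(s, a)] += r

    def p_hat(self, s, a, s_next):
        # Estimated transition probability p(s' | s, a).
        return self.n_sas[(s, a, s_next)] / self.n_sa[(s, a)]

    def r_hat(self, s, a):
        # Estimated expected reward r(s, a).
        return self.total_reward[(s, a)] / self.n_sa[(s, a)]
```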
Q(s, a) = r(s, a) + γ Σ_{s'} p(s' | s, a) v(s'), where v(s') = max_{a'} Q(s', a') is the value of the next state.
1. Start with arbitrary initial values of Q(s, a), for all s ∈ S, a ∈ A.
2. At each time t the agent chooses an action and observes its reward r_t.
3. The agent then updates its Q-values based on the Q-learning rule: Q(s, a) ← (1 − α_t) Q(s, a) + α_t [ r_t + γ max_{a'} Q(s', a') ].
4. The learning rate α_t needs to decay over time in order for the learning algorithm to converge.
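A tabular Q-learning sketch following these steps (illustrative: the env interface with reset() and step(a) returning (next_state, reward, done) is an assumption, and α_t = 1/n(s, a) is one choice of decaying learning rate):

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=1000, gamma=0.9, epsilon=0.1):
    Q = defaultdict(float)   # step 1: arbitrary initial Q(s, a) = 0
    n = defaultdict(int)     # visit counts used to decay alpha_t
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # step 2: choose an action (epsilon-greedy) and observe r_t
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(s, x)])
            s2, r, done = env.step(a)
            # steps 3-4: Q-learning rule with decaying learning rate
            n[(s, a)] += 1
            alpha = 1.0 / n[(s, a)]
            target = r if done else r + gamma * max(Q[(s2, x)] for x in actions)
            Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * target
            s = s2
    return Q
```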
Famous game theory example
A co-operative game
Mixed strategies
Markov games: a generalization of MDPs
– Stationary: the agent's policy does not change over time.
– Deterministic: the same action is always chosen whenever the agent is in state s.
[Example: a two-state Markov game; the slide showed the payoff/transition matrices for State 1 and State 2.]
A policy π* is optimal if v(s, π*) ≥ v(s, π) for all s ∈ S and all policies π.
Max V
such that:
  Σ_a π(a) · R(a, o) ≥ V for each opponent action o ∈ {rock, paper, scissors}
  π(rock) + π(paper) + π(scissors) = 1
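This LP can be solved directly with an off-the-shelf solver; here is a sketch using scipy.optimize.linprog (the variable layout is an illustration, not from the slides):

```python
import numpy as np
from scipy.optimize import linprog

# R[a, o]: the agent's payoff for action a vs opponent action o
# (rock=0, paper=1, scissors=2).
R = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

# Variables x = [p_rock, p_paper, p_scissors, V]; maximize V = minimize -V.
c = [0, 0, 0, -1]
# For each opponent action o: sum_a p_a R[a, o] >= V, i.e. -R[:, o].p + V <= 0.
A_ub = np.hstack([-R.T, np.ones((3, 1))])
b_ub = np.zeros(3)
A_eq = [[1, 1, 1, 0]]                    # probabilities sum to 1
b_eq = [1]
bounds = [(0, 1)] * 3 + [(None, None)]   # V is unbounded

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:3], res.x[3])  # strategy (1/3, 1/3, 1/3), game value 0
```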
v(s) = max_π min_o Σ_a π(a) Q(s, a, o): the max over mixed strategies π picks the best response, the min over opponent actions o guards against the worst case, and the sum is the expectation over the agent's own actions.
Q(s, a, o) = r(s, a, o) + γ Σ_{s'} p(s' | s, a, o) v(s'): the quality of a state-action pair is the immediate reward plus the discounted value of all succeeding states, weighted by their likelihood. The sampled update Q(s, a, o) ← (1 − α) Q(s, a, o) + α (r + γ v(s')) backs up the discounted value of the observed succeeding state. This learning rule converges to the correct values of Q and v.
The parameter explor controls how often the agent will deviate from its current policy, ensuring exploration. Q(s, a, o) is the expected reward for taking action a when the opponent chooses o from state s.
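A sketch of one minimax-Q step under these definitions (Littman, 1994); the function names are illustrative, and solve_minimax re-solves the stage game with the same LP construction as above:

```python
import random
import numpy as np
from scipy.optimize import linprog

def solve_minimax(Q_s):
    """Return (maximin strategy pi, value) for the matrix game Q_s[a][o]."""
    Q_s = np.asarray(Q_s)
    n_a, n_o = Q_s.shape
    c = [0.0] * n_a + [-1.0]                        # maximize V
    A_ub = np.hstack([-Q_s.T, np.ones((n_o, 1))])   # sum_a pi_a Q[a, o] >= V
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n_o),
                  A_eq=[[1.0] * n_a + [0.0]], b_eq=[1],
                  bounds=[(0, 1)] * n_a + [(None, None)])
    return res.x[:n_a], res.x[n_a]

def minimax_q_step(Q, V, pi, s, a, o, r, s2, alpha, gamma=0.9):
    # Learning rule: sample backup toward r + gamma * v(s').
    Q[s][a][o] = (1 - alpha) * Q[s][a][o] + alpha * (r + gamma * V[s2])
    pi[s], V[s] = solve_minimax(Q[s])               # re-solve the stage game

def choose_action(pi, s, n_actions, explor=0.2):
    # With probability explor, deviate from the current policy.
    if random.random() < explor:
        return random.randrange(n_actions)
    return random.choices(range(n_actions), weights=pi[s])[0]
```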
Hu & Wellman: general-sum Markov games as a framework for RL. Theorem (Nash, 1951): there exists a mixed-strategy Nash equilibrium for any finite bimatrix game.