Slide 1: Multiple timescales for multiagent learning
David Leslie and E. J. Collins, University of Bristol
NIPS 2002 workshop on multiagent learning
David Leslie is supported by CASE Research Studentship 00317214 from the UK Engineering and Physical Sciences Research Council in cooperation with BAE SYSTEMS.
Slide 2: Introduction
- Learning in iterated normal form games.
- A simple environment.
- Theoretical properties of multiagent Q-learning.
Slide 3: Notation
- $N$ players.
- Player $i$ plays mixed strategy $\pi^i$.
- The opponents play joint mixed strategy $\pi^{-i}$.
- The expected reward to player $i$ for playing action $a$ is $r^i(a, \pi^{-i})$.
- $r^i(a, \pi^{-i})$ is estimated by the Q value $Q^i_n(a)$.
Slide 4: Mixed strategies
- Mixed equilibria are necessary: many games have no pure-strategy equilibrium.
- Mixed strategies are derived from the Q values.
- Boltzmann smoothing with a fixed temperature parameter $\tau$: $\pi^i(a) \propto \exp(Q^i_n(a)/\tau)$.
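The smoothing step is easy to state concretely. Below is a minimal sketch, assuming a fixed temperature $\tau$; the function name boltzmann_strategy and the example values are illustrative, not taken from the slides.

```python
import numpy as np

def boltzmann_strategy(q_values, temperature):
    """Map Q values to a mixed strategy: pi(a) proportional to exp(Q(a)/tau)."""
    logits = np.asarray(q_values, dtype=float) / temperature
    logits -= logits.max()          # shift for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Example: two actions, fixed temperature tau = 0.1.
print(boltzmann_strategy([1.0, 0.5], temperature=0.1))
```

Because the temperature stays fixed, every action keeps strictly positive probability, which is what makes genuinely mixed play possible.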
Slide 5: Fixed temperatures
- The resulting Nash distribution approximates a Nash equilibrium (closely, for small $\tau$).
- No discontinuities: the smoothed response is continuous in the Q values, unlike the exact best-response correspondence.
- True convergence to mixed strategies, not just convergence of empirical averages.
Slide 6: Q-learning
- Standard Q-learning, except for the division by $\pi^i_n(a)$:
  $Q^i_{n+1}(a) = Q^i_n(a) + \lambda_n \frac{I\{a^i_n = a\}}{\pi^i_n(a)} \big( r^i_n - Q^i_n(a) \big)$
- $I\{\cdot\}$ is the indicator function, $r^i_n$ is the reward.
- Learning parameters satisfy $\sum_n \lambda_n = \infty$ and $\sum_n \lambda_n^2 < \infty$.
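A sketch of a single step of this update, assuming the form above (only the played action is updated, and the step is divided by the probability with which it was chosen); the name q_update and the array representation are illustrative.

```python
import numpy as np

def q_update(q, strategy, action, reward, learn_rate):
    """One step of the modified Q-learning rule: the indicator picks out the
    played action, and the step is divided by pi(action), the probability
    with which that action was played."""
    q = q.copy()
    q[action] += (learn_rate / strategy[action]) * (reward - q[action])
    return q
```

Dividing by the action probability compensates for the fact that an action's Q value is updated only when that action is played, so in expectation every Q value moves at the same rate.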
Slide 7: Three-player pennies
- Player 1 receives 1 point if his choice matches player 2's.
- Player 2 receives 1 point if his choice matches player 3's.
- Player 3 receives 1 point if his choice is opposite to player 1's.
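Under this reading of the game, the reward structure can be sketched as follows (pennies_rewards is an illustrative name; actions are coded 0/1):

```python
def pennies_rewards(a1, a2, a3):
    """Rewards in three-player pennies for actions a1, a2, a3 in {0, 1}:
    player 1 wants to match player 2, player 2 wants to match player 3,
    and player 3 wants the opposite of player 1."""
    return (
        1.0 if a1 == a2 else 0.0,   # player 1 matches player 2
        1.0 if a2 == a3 else 0.0,   # player 2 matches player 3
        1.0 if a3 != a1 else 0.0,   # player 3 opposes player 1
    )
```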
Slide 8: A plot of Q values
[Figure: trajectories of the players' Q values over time.]
Slide 9: Stochastic approximation
- Relate the discrete stochastic process to an ODE.
- $\lambda_n \to 0$ implies the Q values track solutions of $\dot{Q}^i(a) = r^i\big(a, \pi^{-i}(Q)\big) - Q^i(a)$, where $\pi(Q)$ is the Boltzmann strategy profile computed from the current Q values.
- A deterministic, continuous-time system.
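To see what "tracking the ODE" means in practice, here is a sketch that Euler-integrates an ODE of the form above for three-player pennies. The exact form of the limiting ODE, and every name here, are assumptions made for illustration.

```python
import numpy as np

def softmax(q, tau):
    w = np.exp((q - q.max()) / tau)
    return w / w.sum()

def expected_reward(player, action, strategies):
    """E[reward to `player` | player plays `action`, others play their
    mixed strategies] in three-player pennies."""
    total = 0.0
    for a1 in (0, 1):
        for a2 in (0, 1):
            for a3 in (0, 1):
                acts = (a1, a2, a3)
                if acts[player] != action:
                    continue
                prob = 1.0
                for j in range(3):
                    if j != player:
                        prob *= strategies[j][acts[j]]
                rewards = (float(a1 == a2), float(a2 == a3), float(a3 != a1))
                total += prob * rewards[player]
    return total

# Euler steps of dQ^i(a)/dt = r^i(a, pi^{-i}(Q)) - Q^i(a).
tau, dt = 0.1, 0.01
q = np.random.rand(3, 2)            # Q values: 3 players x 2 actions
for _ in range(20000):
    pi = [softmax(q[i], tau) for i in range(3)]
    q += dt * np.array([[expected_reward(i, a, pi) - q[i, a] for a in (0, 1)]
                        for i in range(3)])
```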
Slide 10: Analysis of the example
- The ODE has a unique fixed point.
- For small temperatures the fixed point is unstable, and a periodic orbit is stable.
- This explains the cycling of the Q values.
Slide 11: Multiple timescales - I
- Generalise stochastic approximation: player $i$ uses its own learning rates $\lambda^i_n$.
- $\lambda^i_n / \lambda^{i+1}_n \to 0$ for $i = 1, \dots, N-1$.
- The quicker $\lambda^i_n \to 0$, the slower process $i$ adapts.
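One concrete way to satisfy such conditions (an illustrative choice, not from the slides) is polynomial schedules with different exponents, each satisfying the usual Robbins-Monro conditions from slide 6.

```python
def learn_rate(player, n):
    """Illustrative two-timescale schedules: each satisfies
    sum_n lambda_n = infinity and sum_n lambda_n^2 < infinity, and the
    ratio lambda^1_n / lambda^2_n = (n + 1) ** -0.3 -> 0, so player 1
    (the faster-decaying schedule) adapts on the slower timescale."""
    exponent = 0.9 if player == 1 else 0.6
    return (n + 1) ** -exponent
```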
Slide 12: Multiple timescales - II
- Fast processes can fully adapt to slow processes.
- Slow processes see fast processes as having completely converged.
- This works if the fast processes converge to a unique value for each fixed value of the slow processes.
Slide 13: Multiple-timescales Q-learning assumption
- Assume that, for fixed $Q^1$, the Q values of players $2, \dots, N$ converge to a unique value, resulting in a joint best response to player 1.
- For example, this holds for two-player games and for cyclic games.
Slide 14: Convergence of multiple-timescales Q-learning
- Behaviour is determined by the ODE for player 1's Q values, with the fully adapted fast players playing their converged joint best response.
- Convergence can be proved when player 1 has only two actions.
- Hence the process converges for three-player pennies.
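Putting the pieces together, here is a sketch of the full multiple-timescales scheme on three-player pennies. It assumes the illustrative helpers sketched earlier (boltzmann_strategy, q_update, pennies_rewards, learn_rate) are in scope; player 1 learns on the slow timescale and players 2 and 3 on the fast one.

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 0.1
q = rng.random((3, 2))              # Q values: 3 players x 2 actions

for n in range(200000):
    # Smooth each player's Q values into a mixed strategy and sample actions.
    strategies = [boltzmann_strategy(q[i], tau) for i in range(3)]
    actions = [int(rng.choice(2, p=strategies[i])) for i in range(3)]
    rewards = pennies_rewards(*actions)
    # Player 1 (index 0) uses the slow schedule; players 2 and 3 the fast one.
    for i in range(3):
        rate = learn_rate(1 if i == 0 else 2, n)
        q[i] = q_update(q[i], strategies[i], actions[i], rewards[i], rate)
```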
Slide 15: Another plot of Q values
[Figure: Q-value trajectories under multiple-timescales learning.]
Slide 16: Conclusion
- A theoretical study of multiagent learning.
- A fixed temperature parameter achieves mixed equilibria from Q values.
- Multiple timescales assist convergence and enable theoretical study.
Slide 17: Future work
- Investigate when the convergence assumption is guaranteed to hold.
- Experiments with multiple-timescales learning in Markov games.
- Theoretical results for multiple-timescales learning in Markov games.