1
Designing Games for Distributed Optimization
Na Li and Jason R. Marden
IEEE Journal of Selected Topics in Signal Processing, Vol. 7, No. 2, pp. 230-242, 2013
Presenter: Seyyed Shaho Alaviani
2
Introduction
 -advantages of game theory
Problem Formulation and Preliminaries
 -potential games
 -state based potential games
 -stationary state Nash equilibrium
Main Results
 -state based game design
 -analytical properties of designed game
 -learning algorithm
Numerical Examples
Conclusions
3
Network
-Consensus
-Rendezvous
-Formation
-Schooling
-Flocking
All of these are special cases of distributed optimization (consensus, for instance, is sketched below).
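As a concrete illustration (the symbols $f_i$, $c_i$ below are chosen for exposition, not taken from the slide), average consensus can be written as the distributed optimization problem

\[
\min_{x \in \mathbb{R}} \; \sum_{i=1}^{n} f_i(x), \qquad f_i(x) = \tfrac{1}{2}\,(x - c_i)^2,
\]

where $c_i$ is agent $i$'s initial value. The unique minimizer is the network average $x^* = \tfrac{1}{n}\sum_i c_i$, so agreeing on the optimizer is exactly reaching consensus.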
4
Introduction
Game theory: a powerful tool for the design and control of multi-agent systems.
Using game theory requires two steps:
1- Modeling the agents as self-interested decision makers in a game-theoretic environment: defining a set of choices and a local objective function for each decision maker.
2- Specifying a distributed learning algorithm that enables the agents to reach a Nash equilibrium of the designed game.
5
Core advantage of game theory: it provides a hierarchical decomposition between the distribution of the optimization problem (the game design) and the specific local decision rules (the distributed learning algorithm).
Example: Lagrangian (see the decomposition sketch below).
The goal of this paper: to establish a methodology for the design of local agent objective functions that leads to desirable system-wide behavior.
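A minimal sketch of the Lagrangian analogy, under assumptions made for illustration (the problem data $f_i$, $A_i$, $b$ are not from the slide): dualizing a coupling constraint splits a global problem into local subproblems, much as the game design splits a global objective into local objectives,

\[
\min_{x_1,\dots,x_n} \sum_{i=1}^{n} f_i(x_i) \;\; \text{s.t.} \;\; \sum_{i=1}^{n} A_i x_i = b
\quad\Longrightarrow\quad
L(x,\lambda) = \sum_{i=1}^{n}\Big( f_i(x_i) + \lambda^{\top} A_i x_i \Big) - \lambda^{\top} b .
\]

For a fixed multiplier $\lambda$, each agent can minimize its own term $f_i(x_i) + \lambda^{\top} A_i x_i$ independently, and a separate update process adjusts $\lambda$; the game-design/learning-algorithm split plays an analogous role.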
6
Graph
-Connected and disconnected graphs
-Directed and undirected graphs
[figures: examples of connected, disconnected, directed, and undirected graphs]
7
Problem Formulation and Preliminaries
8
Physics:
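For reference, a standard statement of the exact potential game condition (cost-minimization convention; the symbols $J_i$, $\phi$, and $a$ are assumptions here, not taken from the slide): a game is an exact potential game if there exists a single potential function $\phi$ such that for every agent $i$, every profile $a_{-i}$ of the other agents' actions, and every pair of own actions $a_i, a_i'$,

\[
J_i(a_i, a_{-i}) - J_i(a_i', a_{-i}) \;=\; \phi(a_i, a_{-i}) - \phi(a_i', a_{-i}).
\]

The physics analogy: $\phi$ plays the role of a potential energy, and every agent's unilateral change in cost equals the change in that one global quantity.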
9
Main properties of potential games:
1- A pure-strategy Nash equilibrium (PSNE) is guaranteed to exist.
2- There are several distributed learning algorithms with proven asymptotic guarantees.
3- Learning a PSNE in potential games is robust: heterogeneous clock rates and informational delays are not problematic.
10
Stochastic games (L. S. Shapley, 1953): in a stochastic game the play proceeds by steps from position to position, according to transition probabilities controlled jointly by the two players.
State based potential games (J. Marden, 2012): a simplification of stochastic games that represents an extension of strategic-form games, in which an underlying state space is introduced into the game-theoretic environment.
12
Main Results
State Based Game Design: the goal is to establish a state based game formulation for our distributed optimization problem that satisfies the following properties:
13
A State Based Game Design for Distributed Optimization:
-State space
-Action sets
-State dynamics
-Invariance associated with the state dynamics
-Agent cost functions
14
State Space:
18
Agent cost functions:
19
Analytical Properties of Designed Game
Theorem 2 shows that the designed game is a state based potential game.
Theorem 2: The state based game is a state based potential game with potential function
20
Theorem 3 shows that all equilibria of the designed game are solutions to the optimization problem.
22
Question: could the results in Theorems 2 and 3 have been attained using the framework of strategic-form games? Answer: no, this is impossible.
23
Learning Algorithm
We prove that the gradient play learning algorithm converges to a stationary state NE.
Assumptions:
24
asymptotically converges to a stationary state NE.
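Gradient play itself is simple to state: each agent repeatedly nudges its own action in the direction that decreases its own cost. Below is a minimal sketch in Python, assuming for illustration a quadratic consensus-style cost for each agent (the costs J_i, the graph, and the step size are assumptions made here, not the paper's designed state based game):

```python
import numpy as np

# Undirected communication graph on 4 agents (a path), as neighbor lists.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
c = np.array([1.0, 3.0, 5.0, 7.0])   # each agent's private reference value (assumed)
a = np.zeros(4)                      # actions, initialized at zero
step = 0.05                          # gradient-play step size

def grad_Ji(i, a):
    """Gradient of agent i's (assumed) cost
    J_i(a) = sum_{j in N(i)} (a_i - a_j)^2 + (a_i - c_i)^2
    with respect to its own action a_i."""
    g = 2.0 * (a[i] - c[i])
    for j in neighbors[i]:
        g += 2.0 * (a[i] - a[j])
    return g

for t in range(2000):
    # Gradient play: every agent simultaneously descends its OWN cost.
    a = a - step * np.array([grad_Ji(i, a) for i in range(4)])

print("actions after gradient play:", np.round(a, 3))
```

These particular J_i form an exact potential game (the potential sums each edge term once plus the reference terms), so simultaneous gradient play coincides with gradient descent on the potential and converges for a small enough step size; the paper's contribution is establishing convergence to a stationary state NE in the state based setting.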
25
Numerical Examples
Example 1: consider the following function to be minimized
27
Example 2: Distributed Routing Problem (application: the Internet)
[figure: a source sends traffic to a destination over m parallel routes]
Quantities: the amount of traffic, and the percentage of traffic that agent i designates to route r.
28
Then total congestion in the network will be
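As an illustration only, and not claimed to be the paper's exact expression (the symbols $w_i$, $x_{i,r}$, $\ell_r$, and $C_r$ are assumptions here): a common way to measure total congestion multiplies each route's load by a per-unit latency function and sums over routes,

\[
\text{total congestion} \;=\; \sum_{r=1}^{m} \ell_r \, C_r(\ell_r), \qquad \ell_r \;=\; \sum_{i} w_i\, x_{i,r},
\]

where $w_i$ is agent $i$'s amount of traffic and $x_{i,r}$ is the percentage of that traffic designated to route $r$.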
29
Simulation setup: R = 5, N = 10.
[figure: communication graph]
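A small sketch of what a routing simulation with R = 5 routes and N = 10 agents might look like, under assumptions made purely for illustration (affine latency functions C_r(l) = b_r + s_r*l, unit traffic per agent, identical-interest costs equal to the total congestion, and gradient play projected back onto the simplex); none of these choices are claimed to match the paper's setup:

```python
import numpy as np

R, N = 5, 10                      # routes and agents (matching the slide's R = 5, N = 10)
rng = np.random.default_rng(0)
b = rng.uniform(1.0, 2.0, R)      # assumed per-route base latency
s = rng.uniform(0.5, 1.5, R)      # assumed per-route latency slope: C_r(l) = b_r + s_r * l
x = np.full((N, R), 1.0 / R)      # each agent starts by splitting its traffic evenly
step = 0.01

def total_congestion(x):
    load = x.sum(axis=0)                      # l_r = sum_i x_{i,r} (unit traffic per agent)
    return float(np.dot(load, b + s * load))  # sum_r l_r * C_r(l_r)

def project_to_simplex(v):
    """Euclidean projection of v onto {y >= 0, sum(y) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

for t in range(3000):
    load = x.sum(axis=0)
    # With identical-interest costs J_i = total congestion, agent i's gradient w.r.t. x_{i,r} is
    # d/dx_{i,r} [ sum_r l_r (b_r + s_r l_r) ] = b_r + 2 s_r l_r.
    grad = b + 2.0 * s * load
    x = np.array([project_to_simplex(x[i] - step * grad) for i in range(N)])

print("total congestion:", round(total_congestion(x), 3))
print("aggregate split over routes:", np.round(x.sum(axis=0), 3))
```

Because the assumed costs are identical-interest, this reduces to projected gradient descent on the congestion itself; the paper's designed state based game instead uses localized cost functions with the same equilibrium guarantees.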
30
Conclusions:
-This work presents an approach to distributed optimization using the framework of state based potential games.
-We provide a systematic methodology for localizing the agents' objective functions while ensuring that the resulting equilibria are optimal with respect to the system-level objective function.
-It is proved that the gradient play learning algorithm guarantees convergence to a stationary state NE in any state based potential game.
-Robustness of the approach.
31
MANY THANKS FOR YOUR ATTENTION