Oblivious Equilibrium for Stochastic Games with Concave Utility

1 Oblivious Equilibrium for Stochastic Games with Concave Utility
Sachin Adlakha, Ramesh Johari, Gabriel Weintraub and Andrea Goldsmith DARPA ITMANET Meeting March 5-6, 2009

2 ACHIEVEMENT DESCRIPTION
Oblivious equilibrium for stochastic games with concave utility. S. Adlakha, R. Johari, G. Weintraub, A. Goldsmith

MAIN RESULT: Consider stochastic games whose per-period utility and state dynamics are increasing, concave, and submodular. Then, in a large system, each node can find approximately optimal policies by treating the state of the other nodes as constant.

HOW IT WORKS: Under our assumptions, no single node is overly influential, so we can replace the other nodes' states by their mean. The optimal policies then decouple across nodes.

ASSUMPTIONS AND LIMITATIONS: This result holds under much more general technical assumptions than our early results on the problem. A key modeling limitation, however, is that the limit requires all nodes to interact with each other; thus the results apply only to dense networks.

STATUS QUO: Many cognitive radio models do not account for the reaction of other devices to a single device's action. In prior work, we developed a general stochastic game model to tractably capture the interactions of many devices.

NEW INSIGHTS: In principle, tracking the state of the other devices is complex. We approximate the state of the other devices via a mean field limit.

IMPACT: Our results provide a general framework to study the interaction of multiple devices. Further, our results unify existing models for which such limits were known, and provide simple exogenous conditions that can be checked to ensure the main result holds.

NEXT-PHASE GOALS: We will apply our results to a model of interfering transmissions among energy-constrained devices. Our main goal is to develop a related model that applies when a single node interacts with a small number of other nodes each period.

(Slide figures: utility and next-state curves as functions of the current state and action; a diagram relating the state and action of device i to the number of other devices in each state.)

1. What technical challenge is being undertaken on behalf of the project?
Answer: We aim to understand competition among wireless nodes in dynamic settings. In particular, we are interested in mean field approximations to large-scale wireless games in a reactive environment.

2. Why is it hard, and what are the open problems?
Answer: Standard game-theoretic techniques for dynamic games are computationally prohibitive and require information flow between nodes, which is hard to achieve in practice.

3. How has this problem been addressed in the past?
Answer: Most studies of cognitive radios have focused either on static environments or on small toy problems with few nodes.

4. What new intellectual tools are being brought to bear on the problem?
Answer: We have generalized the concept of oblivious equilibrium to a large class of stochastic games. Wireless games form an interesting subclass of these games.

5. What is the main intermediate achievement?
Answer: In earlier work we had results on special cases, including linear-dynamics, quadratic-cost models. Our result now generalizes and extends all our prior results and unifies them under a single framework with easily verifiable assumptions. The main achievement has been isolating a set of conditions on model primitives under which oblivious equilibrium can approximate the (more computationally difficult) Markov perfect equilibrium.

6. How and when does this achievement align with the project roadmap (end-of-phase or end-of-project goal)?
Answer: A key end-of-phase goal was to generalize our models to handle much more complex scenarios of interaction. Our results directly address this goal.

7. What are the longer-term objectives and consequences?
Answer: Our model is fairly general, so one immediate goal is to exploit its structure for models of interaction among energy-constrained nodes. The more important long-term objective, however, is to develop models in which the number of nodes is large but a single node interacts with only a limited subset of the other nodes at any given time.

8. Which thrusts and SOW tasks does this contribution fit under, and why?
Answer: This fits under Thrust 3, "Application Metrics and Network Performance". It provides tools to ensure such systems are robust against noncooperative interactions between mobiles and to ensure distributed coordination. Real environments are reactive and non-stationary; this requires new game-theoretic models of interaction.

3 Wireless environments are reactive
Scenario: Wireless devices sharing the same spectrum.
Typical approach: Assume that the environment is non-reactive.
This assumption is flawed at best: in cognitive radio networks, the environment consists of other cognitive radios, and hence is highly reactive.
Questions: How do we design policies for such networks? What is the performance loss if we assume a non-reactive environment?

4 Foundational theory – Markov Perfect Equilibrium
(Slide figure: the action of player i depends on its own state and on the states of all other players.)
We model such reactive environments as stochastic dynamic games. The key solution concept is Markov perfect equilibrium (MPE), in which the action of each player depends on the state of everyone. Problems: tracking the state of everyone else is hard, and MPE is hard to compute.

5 Foundational Theory – Oblivious Equilibrium
(Slide figure: the action of player i depends on its own state and on the average state of the other players.)
Oblivious policies: each player reacts only to the average state of the other players. Such policies are easy to compute and implement, and they require little information exchange. Question: when is oblivious equilibrium close to MPE?

6 Our model
m players. The state of player i is $x_i$; the action of player i is $a_i$.
State evolution: $x_{i,t+1}$ is determined by the state transition function applied to the current state $x_{i,t}$ and action $a_{i,t}$.
Payoff: $\pi\bigl(x_{i,t}, a_{i,t}, f^{(m)}_{-i,t}\bigr)$, where $f^{(m)}_{-i}$ is the empirical distribution of the other players' states (the number of players in each state, normalized).
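To make the notation concrete, here is a minimal Python sketch of one period of such a game from one player's point of view. The specific transition and payoff shapes are invented placeholders, chosen only to be increasing and concave as the later assumptions require; the paper's actual primitives may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100                               # number of players
x = rng.uniform(0.0, 1.0, size=m)     # x_i: current state of each player
a = np.full(m, 0.2)                   # a_i: a fixed action profile

def empirical_dist(states, i, bins=20, lo=0.0, hi=2.0):
    """f_-i: fraction of the *other* players whose state falls in each bin."""
    others = np.delete(states, i)
    counts, edges = np.histogram(others, bins=bins, range=(lo, hi))
    centers = (edges[:-1] + edges[1:]) / 2.0
    return counts / counts.sum(), centers

def transition(x_i, a_i, shock):
    # Placeholder dynamics: increasing and concave in state and action.
    return np.sqrt(x_i) + 0.1 * np.sqrt(a_i) + shock

def payoff(x_i, a_i, mean_others):
    # Placeholder payoff: concave in (x, a), decreasing in the others' mean.
    return np.log1p(x_i) - 0.5 * a_i**2 - 0.1 * mean_others

# One period for player 0: observe f_-0, collect payoff, transition.
f0, centers = empirical_dist(x, 0)
u0 = payoff(x[0], a[0], float(np.dot(f0, centers)))
x0_next = transition(x[0], a[0], rng.normal(0.0, 0.05))
```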

7 MPE and OE
A Markov policy is a decision rule based on the player's current state and the current empirical distribution: $a_{i,t} = \mu\bigl(x_{i,t}, f^{(m)}_{-i,t}\bigr)$. A Markov perfect equilibrium is a vector of Markov policies in which each player maximizes its present discounted payoff, given the policies of the other players. In an oblivious policy, a player responds instead to $x_{i,t}$ and only the long-run average distribution $f^{(m)}_{-i}$. In an oblivious equilibrium, each player maximizes its present discounted payoff using an oblivious policy, given the long-run average state induced by the other players' policies.
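The informational difference between the two policy classes is easy to see side by side. A schematic Python sketch; the linear threshold rule is made up purely for illustration:

```python
import numpy as np

CENTERS = np.linspace(0.05, 1.95, 20)   # bin centers of the state histogram

def markov_policy(x_i, f_now):
    """MPE-style rule a_{i,t} = mu(x_{i,t}, f_{-i,t}): consumes the *current*
    empirical distribution of the other players, every period."""
    mean_now = float(np.dot(f_now, CENTERS))
    return max(0.0, 0.5 - 0.2 * mean_now - 0.1 * x_i)   # invented rule

def oblivious_policy(x_i, f_avg):
    """OE-style rule: same functional form, but f_avg is the fixed long-run
    average distribution, computed once -- no per-period tracking needed."""
    mean_avg = float(np.dot(f_avg, CENTERS))
    return max(0.0, 0.5 - 0.2 * mean_avg - 0.1 * x_i)
```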

8 Prior Work
Generalized the idea of OE to general stochastic games [Allerton 07]. Unified existing models, such as LQG games, via our framework [CDC 08]. Derived exogenous conditions for approximating MPE by OE under linear dynamics and separable payoffs [Allerton 08].
Current results: We now have a general set of exogenous conditions (covering nonlinear dynamics and nonseparable payoffs) under which OE is a good approximation to MPE. These conditions also unify our previous results and the existing models.

9 Assumptions
[A1] The state transition function is concave in state and action and has decreasing differences in state and action.
[A2] For any fixed action, the state increment (next state minus current state) is a non-increasing function of the current state and eventually becomes negative, so states do not grow without bound.
[A3] The payoff function is jointly concave in state and action and has decreasing differences in state and action.
[A4] The logarithm of the payoff is Gateaux differentiable with respect to $f_{-i}$.
[A5] An MPE and an OE exist.
[A6] We restrict attention to policies that make each player's individual state a recurrent Markov chain and keep the discounted sum of the squared payoff finite.
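Decreasing differences ([A1], [A3]) is a property one can sanity-check numerically on a grid. A small sketch, with an invented payoff whose negative cross term makes the property hold:

```python
import numpy as np

def has_decreasing_differences(fn, xs, actions):
    """Grid check of pi(x', a') - pi(x, a') <= pi(x', a) - pi(x, a) for
    adjacent x' > x, a' > a -- a numerical sanity check, not a proof."""
    for i in range(len(xs) - 1):
        for j in range(len(actions) - 1):
            x, x2 = xs[i], xs[i + 1]
            a, a2 = actions[j], actions[j + 1]
            if fn(x2, a2) - fn(x, a2) > fn(x2, a) - fn(x, a) + 1e-12:
                return False
    return True

# Invented payoff: the -0.2*x*a cross term gives decreasing differences.
pi = lambda x, a: np.sqrt(x) + np.sqrt(a) - 0.2 * x * a
grid = np.linspace(0.1, 2.0, 30)
print(has_decreasing_differences(pi, grid, grid))   # True
```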

10 Assumptions
Define $g(y)$ as the maximum rate of change of the logarithm of the payoff function with respect to a small change in the fraction of players at state $y$.
[A7] We assume the payoff function is such that $g(y) = O(y^K)$ for some $K$.
[A8] We assume there exists a constant $C$ such that the payoff function satisfies a corresponding uniform growth condition.
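The formula defining $g$ did not survive the transcript. A plausible LaTeX reconstruction, assuming $g$ is the worst-case Gateaux derivative of the log-payoff from [A4]; the paper's exact form may differ:

```latex
% g(y): worst-case sensitivity of the log-payoff to a small increase in the
% fraction of players at state y; D_{f_{-i}} is the Gateaux derivative of [A4].
g(y) \;=\; \sup_{x,\,a,\,f_{-i}}
  \Bigl|\, \mathcal{D}_{f_{-i}} \log \pi\bigl(x, a, f_{-i}\bigr)(y) \Bigr|
```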

11 Main Result
Under [A1]-[A8], the oblivious equilibrium payoff is approximately optimal over Markov policies as $m \to \infty$. In other words, an OE is approximately an MPE. The key point is that no single player is overly influential, and the true state distribution stays close to its time average, so knowledge of the other players' policies does not significantly improve a player's payoff. Advantage: each player can use an oblivious policy without loss in performance.
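This is what makes OE attractive computationally: instead of solving a coupled m-player dynamic program, one can search for an OE with a simple fixed-point loop over the long-run average distribution. A hypothetical sketch; `best_response` and `long_run_dist` are stubs for whatever single-agent solver and simulator a concrete model supplies:

```python
import numpy as np

def solve_oblivious_equilibrium(best_response, long_run_dist, f0,
                                tol=1e-6, max_iter=500, damping=0.5):
    """Mean-field fixed point: guess the long-run average distribution f,
    compute the oblivious best response against f, simulate the long-run
    distribution that policy induces, and repeat until f is self-consistent.
    `best_response(f) -> policy` and `long_run_dist(policy) -> f_new` are
    model-specific stubs supplied by the caller."""
    f = np.asarray(f0, dtype=float)
    for _ in range(max_iter):
        policy = best_response(f)           # single-agent DP against fixed f
        f_new = long_run_dist(policy)       # stationary distribution of x_i
        if np.abs(f_new - f).sum() < tol:   # L1 convergence test
            return policy, f_new
        f = damping * f_new + (1.0 - damping) * f   # damped update
    raise RuntimeError("no oblivious equilibrium found within max_iter")
```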

12 Main Contributions and Future Work
Provides a general framework to study the interaction of multiple devices. Provides exogenous conditions that can be easily checked to ensure the main result holds. Unifies existing models for which such limits were known.
Future work: Apply this model to interfering transmissions between energy-constrained nodes. Develop similar models in which a single node interacts with a small set of nodes at each time period.

