Complexity Results about Nash Equilibria
Vincent Conitzer and Tuomas Sandholm
International Joint Conference on Artificial Intelligence 2003 (IJCAI'03)

Presentation transcript:

Complexity Results about Nash Equilibria
Vincent Conitzer and Tuomas Sandholm
Computer Science Department, Carnegie Mellon University

Complexity of equilibrium concepts from (noncooperative) game theory
- Solutions are less useful if they cannot be determined, so their computational complexity is important
- Early research studied the complexity of board games, e.g. chess and Go
  - Complexity results there usually depend on the structure of the game (allowing for a concise representation)
  - Hardness result => exponential in the size of the representation
  - Usually zero-sum, alternating move
- Real-world strategic settings are much richer
  - A concise representation for all games is impossible
  - Not necessarily zero-sum / alternating move
  - Sophisticated agents need to be able to deal with such games…

Why study computational complexity in game theory?
- To code strategic software agents
- To analyze whether a solution concept has any chance of occurring in the real world
  - If it is too hard to compute, it will not occur

Nash equilibria
Game theory defines:
- General representations of games
  - The best known is a matrix game
- Solution concepts for these games
  - Most central is Nash equilibrium: a vector of strategies for the players such that no player wants to change
  - Any finite game has a mixed-strategy Nash equilibrium [Nash 1950]
    - A mixed strategy is a probability distribution over actions

Nash equilibrium: example
[Slide shows a 3×3 payoff matrix in which each player's third strategy is dominated (played with probability 0%) and, in equilibrium, each player mixes 50%/50% over the first two strategies.]

Nash equilibrium: another example

                                          Audience
                                 Pay attention   Don't pay attention
Vince  Put effort into presentation   4,4              -2,0
       Don't put effort into pres.   -14,-16            0,0

Three Nash equilibria:
- Vince puts in effort (100%), audience pays attention (100%)
- Vince puts in no effort (100%), audience pays no attention (100%)
- Mixed: Vince puts in effort 80% of the time; audience pays attention 10% of the time
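The mixed equilibrium on this slide can be recovered from indifference conditions: each player mixes so that the other is indifferent between their two actions. A minimal sketch (not from the talk; the payoff layout is the one assumed above, with Vince as the row player and the audience as the column player):

```python
def mixed_equilibrium_2x2(A, B):
    """Fully mixed equilibrium of a 2x2 game (assumes nonzero denominators).

    A[i][j]: row player's payoff, B[i][j]: column player's payoff.
    Returns (p, q): p = P(row 0), chosen to make the column player
    indifferent; q = P(column 0), chosen to make the row player indifferent.
    """
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Vince: rows = (effort, no effort); audience: columns = (attend, don't attend).
A = [[4, -2], [-14, 0]]   # Vince's payoffs
B = [[4, 0], [-16, 0]]    # audience's payoffs
p, q = mixed_equilibrium_2x2(A, B)   # p = 0.8, q = 0.1
```

This reproduces the slide's mixed equilibrium: Vince puts in effort with probability 0.8 and the audience pays attention with probability 0.1.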

Complexity results
- Complexity of finding a mixed-strategy Nash equilibrium? Still an open question!
  - "Together with factoring … the most important concrete open question on the boundary of P today" [Papadimitriou 2001]
- We solved several related questions:
  - Hardness of finding certain types of equilibrium
  - Hardness of finding equilibria in more general game representations (Bayesian games, Markov games)
- All of our results are for standard matrix representations
  - None of the hardness derives from fancier representations, such as graphical games
  - Any fancier representation has to deal with at least these hardness results

A useful reduction (SAT -> game)
Theorem. SAT-solutions correspond to mixed-strategy equilibria of the following game (each agent randomizes uniformly on its support).

SAT formula: (x1 or -x2) and (-x1 or x2)
Solutions: x1=true, x2=true; x1=false, x2=false

Game (row player's payoff listed first):

              x1     x2     +x1    -x1    +x2    -x2    (x1 or -x2)  (-x1 or x2)  default
x1           -2,-2  -2,-2   0,-2   0,-2   2,-2   2,-2   -2,-2        -2,-2        -2,1
x2           -2,-2  -2,-2   2,-2   2,-2   0,-2   0,-2   -2,-2        -2,-2        -2,1
+x1          -2,0   -2,2    1,1    -2,-2  1,1    1,1    -2,0         -2,2         -2,1
-x1          -2,0   -2,2    -2,-2  1,1    1,1    1,1    -2,2         -2,0         -2,1
+x2          -2,2   -2,0    1,1    1,1    1,1    -2,-2  -2,2         -2,0         -2,1
-x2          -2,2   -2,0    1,1    1,1    -2,-2  1,1    -2,0         -2,2         -2,1
(x1 or -x2)  -2,-2  -2,-2   0,-2   2,-2   2,-2   0,-2   -2,-2        -2,-2        -2,1
(-x1 or x2)  -2,-2  -2,-2   2,-2   0,-2   0,-2   2,-2   -2,-2        -2,-2        -2,1
default      1,-2   1,-2    1,-2   1,-2   1,-2   1,-2   1,-2         1,-2         0,0

Proof sketch:
- Playing opposite literals (with any probability) is unstable
- If you play literals (with probabilities), you should make sure that for any clause, the probability of a literal being in that clause is high enough, and for any variable, the probability that the literal corresponds to that variable is high enough (otherwise the other player will play this clause/variable and hurt you)
- So equilibria where both players randomize over literals can only occur when both randomize over the same SAT solution
- Note these are the only "good" equilibria
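The equilibria claimed here can be checked mechanically. The following sketch (not code from the paper; only the payoff matrix is taken from the slide) encodes the 9×9 game and tests candidate mixed-strategy profiles with a brute-force best-response check: a profile is a Nash equilibrium iff no pure deviation beats it.

```python
# Rows/columns in order: x1, x2, +x1, -x1, +x2, -x2, clause1, clause2, default.
RAW = """\
-2,-2 -2,-2 0,-2 0,-2 2,-2 2,-2 -2,-2 -2,-2 -2,1
-2,-2 -2,-2 2,-2 2,-2 0,-2 0,-2 -2,-2 -2,-2 -2,1
-2,0 -2,2 1,1 -2,-2 1,1 1,1 -2,0 -2,2 -2,1
-2,0 -2,2 -2,-2 1,1 1,1 1,1 -2,2 -2,0 -2,1
-2,2 -2,0 1,1 1,1 1,1 -2,-2 -2,2 -2,0 -2,1
-2,2 -2,0 1,1 1,1 -2,-2 1,1 -2,0 -2,2 -2,1
-2,-2 -2,-2 0,-2 2,-2 2,-2 0,-2 -2,-2 -2,-2 -2,1
-2,-2 -2,-2 2,-2 0,-2 0,-2 2,-2 -2,-2 -2,-2 -2,1
1,-2 1,-2 1,-2 1,-2 1,-2 1,-2 1,-2 1,-2 0,0
"""
# G[i][j] = (row player's payoff, column player's payoff)
G = [[tuple(map(int, c.split(','))) for c in line.split()]
     for line in RAW.splitlines()]

def payoffs(p, q):
    """Expected payoffs under mixed strategies p (row) and q (column)."""
    u1 = sum(p[i] * q[j] * G[i][j][0] for i in range(9) for j in range(9))
    u2 = sum(p[i] * q[j] * G[i][j][1] for i in range(9) for j in range(9))
    return u1, u2

def is_nash(p, q, eps=1e-9):
    """True iff neither player has a profitable pure deviation."""
    u1, u2 = payoffs(p, q)
    best1 = max(sum(q[j] * G[i][j][0] for j in range(9)) for i in range(9))
    best2 = max(sum(p[i] * G[i][j][1] for i in range(9)) for j in range(9))
    return best1 <= u1 + eps and best2 <= u2 + eps

# Uniform mixing over {+x1, +x2} (the solution x1=x2=true): payoff 1 each.
sat = [0, 0, 0.5, 0, 0.5, 0, 0, 0, 0]
```

As the theorem states, `is_nash(sat, sat)` holds with payoffs (1, 1); mixing over opposite literals such as {+x1, -x1} is not an equilibrium; and both players playing `default` is the 0-payoff equilibrium.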

Complexity of mixed-strategy Nash equilibria with certain properties
- The reduction implies that there is an equilibrium where the players get expected utility 1 each iff the SAT formula is satisfiable
  - Any reasonable objective would prefer such equilibria to the 0-payoff equilibrium
- Corollary. Deciding whether a "good" equilibrium exists is NP-hard:
  1. equilibrium with high social welfare
  2. Pareto-optimal equilibrium
  3. equilibrium with high utility for a given player i
  4. equilibrium with high minimal utility
- Also NP-hard (from the same reduction):
  5. Does more than one equilibrium exist?
  6. Is a given strategy ever played in any equilibrium?
  7. Is there an equilibrium where a given strategy is never played?
- (5) and weaker versions of (4), (6), (7) were known [Gilboa, Zemel GEB-89]
- All of these hold even for symmetric, 2-player games

Counting the number of mixed-strategy Nash equilibria
- Why count equilibria?
  - If we cannot even count the equilibria, there is little hope of getting a good overview of the overall strategic structure of the game
- Unfortunately, our reduction implies:
  - Corollary. Counting Nash equilibria is #P-hard!
    - Proof. #SAT is #P-hard, and the number of equilibria in our game is 1 + #SAT
  - Corollary. Counting connected sets of equilibria is just as hard
    - Proof. In our game, each equilibrium is alone in its connected set
- These results hold even for symmetric, 2-player games

Complexity of finding pure-strategy equilibria
- Pure-strategy equilibria are nice
  - They avoid randomization over strategies between which players are indifferent
- In a matrix game, it is easy to find pure-strategy equilibria
  - Simply look at every entry and check whether it is a Nash equilibrium
- Are pure-strategy equilibria easy to find in more general game structures?
  - Games with private information
  - Multistage games
  - In such games, the space of all possible strategies is often no longer polynomial
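The entry-by-entry check for matrix games can be sketched as follows (a minimal illustration, not code from the talk; the prisoner's dilemma payoffs are a standard example, not from the slides):

```python
def pure_nash_equilibria(A, B):
    """Return all pure-strategy Nash equilibria (i, j) of a bimatrix game.

    A[i][j] is the row player's payoff, B[i][j] the column player's.
    Checks every entry: (i, j) is an equilibrium iff neither player can
    gain by a unilateral deviation. Runs in O(mn(m+n)) time.
    """
    m, n = len(A), len(A[0])
    eqs = []
    for i in range(m):
        for j in range(n):
            row_ok = all(A[k][j] <= A[i][j] for k in range(m))
            col_ok = all(B[i][k] <= B[i][j] for k in range(n))
            if row_ok and col_ok:
                eqs.append((i, j))
    return eqs

# Prisoner's dilemma (actions: 0 = cooperate, 1 = defect).
A = [[3, 0],
     [5, 1]]
B = [[3, 5],
     [0, 1]]
```

On this example `pure_nash_equilibria(A, B)` returns `[(1, 1)]`: mutual defection is the only entry from which neither player wants to deviate.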

Bayesian games
- In Bayesian games, players have private information about their preferences (utility function) over outcomes
  - This information is called a type
  - In a more general variant, players may also have information about others' payoffs; our hardness result generalizes to this setting
- There is a commonly known prior over types
- Players can condition their strategies on their type
  - With 2 actions there are 2^{# types} pure strategy combinations
- In a Bayes-Nash equilibrium, each player's strategy (for every type) is a best response to the other players' strategies
  - In expectation with respect to the prior
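The definitions above can be made concrete with a brute-force enumerator over type-contingent pure strategies. The game below is invented for illustration (it is not the slide's example): player 1 has two types with prior 0.6/0.4 and a type-dependent dominant action, while player 2 has one type and wants to match player 1's action in expectation.

```python
from itertools import product

PRIOR = [0.6, 0.4]  # probabilities of player 1's two types (hypothetical)

def u1(t, a1, a2):
    # Type t of player 1 strictly prefers action t (a dominant action).
    return 1.0 if a1 == t else 0.0

def u2(t, a1, a2):
    # Player 2 gets 1 for matching player 1's action, 0 otherwise.
    return 1.0 if a2 == a1 else 0.0

def pure_bayes_nash():
    """Enumerate all pure-strategy Bayes-Nash equilibria by brute force.

    Player 1's pure strategies are type-contingent (one action per type,
    so 2^{#types} of them); player 2 best-responds in expectation over
    the prior, since it does not observe player 1's type.
    """
    eqs = []
    for s1 in product([0, 1], repeat=len(PRIOR)):  # s1[t] = action of type t
        for a2 in [0, 1]:
            # Every type of player 1 must play an interim best response.
            p1_ok = all(u1(t, s1[t], a2) >= u1(t, d, a2)
                        for t in range(len(PRIOR)) for d in [0, 1])
            # Player 2 best-responds to the type-induced action distribution.
            def ev2(b):
                return sum(PRIOR[t] * u2(t, s1[t], b)
                           for t in range(len(PRIOR)))
            p2_ok = all(ev2(a2) >= ev2(b) for b in [0, 1])
            if p1_ok and p2_ok:
                eqs.append((s1, a2))
    return eqs
```

Here each type's dominant action forces `s1 = (0, 1)`, so player 1 plays action 0 with probability 0.6 from player 2's viewpoint, and player 2's unique best response is action 0; the enumerator finds exactly that equilibrium. The exponential number of `s1` candidates is the blowup the slide points to.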

Bayesian games: example
[Slide shows a payoff matrix for each type: player 1 is type 1 with probability .6 and type 2 with probability .4; player 2 is type 1 with probability .7 and type 2 with probability .3. Each matrix lists only the payoffs of the player whose type it describes.]

Complexity of Bayes-Nash equilibria
Theorem. Deciding whether a pure-strategy Bayes-Nash equilibrium exists is NP-complete.
Proof sketch. (It is easy to make the game symmetric.)
- Each of player 1's strategies, even if played with low probability, makes some of player 2's strategies unappealing to player 2
- With these, player 1 wants to "cover" all of player 2's strategies that are bad for player 1. But player 1 can only play so many strategies (one for each type)
- This is SET-COVER

Complexity of Nash equilibria in stochastic (Markov) games
- We now shift attention to games with multiple stages
- Some NP-hardness results have already been shown here
- Ours is the first PSPACE-hardness result (to our knowledge)
  - PSPACE-hardness results from e.g. Go do not carry over: Go has an exponential number of states, while for a general representation we need to specify the states explicitly
- We focus on Markov games

Stochastic (Markov) games
A Markov game is defined as follows:
- At each stage, the game is in a given state
  - Each state has its own matrix game associated with it
  - For every state and every combination of pure strategies, there are transition probabilities to the other states
    - The next stage's state is chosen according to these probabilities
- There is a discount factor d < 1 that determines how much players care about the future
  - Player j's total utility = sum_i d^i u_ij, where u_ij is player j's utility in stage i
- A number N of stages (possibly infinity)
- All of the following may be fully, partially, or not at all known to the players during game play:
  - The current/past states
  - The others' past actions
  - Past payoffs
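The discounted utility sum_i d^i u_ij is easy to evaluate once every player fixes a stationary pure strategy, because the game then reduces to a Markov chain. A sketch (not from the talk; the two-state example at the bottom is hypothetical) using the fixed-point equation V(s) = r(s) + d * sum_{s'} P(s -> s') V(s'):

```python
def discounted_values(reward, trans, d, iters=2000):
    """Discounted value of each state under a fixed joint strategy.

    reward[s]: stage payoff received in state s under that strategy.
    trans[s][s2]: probability of moving from state s to state s2.
    d: discount factor (< 1). Iterates V <- r + d * P @ V to convergence.
    """
    n = len(reward)
    V = [0.0] * n
    for _ in range(iters):
        V = [reward[s] + d * sum(trans[s][s2] * V[s2] for s2 in range(n))
             for s in range(n)]
    return V

# Hypothetical 2-state example: payoff 1 in state 0 and 0 in state 1;
# state 0 is absorbing, state 1 moves to state 0 with probability 1.
V = discounted_values([1.0, 0.0], [[1.0, 0.0], [1.0, 0.0]], d=0.5)
```

With d = 0.5 this converges to V[0] = 1/(1 - 0.5) = 2 and V[1] = 0.5 * V[0] = 1, matching the geometric-series form of the slide's utility definition.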

Markov games: example
[Slide shows three states S1, S2, S3, each with its own small matrix game (S1: 5,5 | 0,6 / 6,0 | 1,1), connected by arrows labeled with transition probabilities such as .2, .1, .3, .5, .6, and .8.]

Complexity of Nash equilibria in stochastic (Markov) games…
- Strategy spaces here are rich (agents can condition on past events), so maybe high-complexity results are not surprising, but…
- We get high complexity even when players cannot condition on anything!
  - No feedback from the game: the players are playing "blindly"
Theorem. Even under this restriction, deciding whether a pure-strategy Nash equilibrium exists is PSPACE-hard, even if the game is 2-player and symmetric and the transition process is deterministic.
Proof sketch. The reduction is from PERIODIC-SAT, in which an infinitely repeating formula must be satisfied [Orlin, 81].
Theorem. The problem is NP-hard even if the game has a finite number of stages.

Conclusions
- Analyzing the computability of game-theoretic concepts is crucial
  - Key for coding strategic software agents
  - Concepts that are too hard to compute are unlikely to model real-world behavior
- Many complexity questions in game theory remain open
  - How hard is it to construct a Nash equilibrium?
  - How do we make this work if the strategy space is intractable?
    - Go see the poker-playing distinguished paper (Wednesday 10:30)
  - What about other, possibly more concise, representations? E.g., graphical games
    - Next two talks!
    - Any general representation has to deal at least with these hardness results
  - For what restricted classes of games is strategic analysis easier?
  - What about other concepts from game theory?
    - Come see our talk about the core (today 4pm)

Thank you for your attention!