LECTURE 6: MULTIAGENT INTERACTIONS


LECTURE 6: MULTIAGENT INTERACTIONS An Introduction to MultiAgent Systems http://www.csc.liv.ac.uk/~mjw/pubs/imas Chapter 11 in the Second Edition Chapter 6 in the First Edition

What are Multiagent Systems?

MultiAgent Systems A multiagent system thus contains a number of agents, which interact through communication, are able to act in an environment, have different “spheres of influence” (which may coincide), and will be linked by other (organizational) relationships

Utilities and Preferences Assume we have just two agents: Ag = {i, j} Agents are assumed to be self-interested: they have preferences over how the environment is Assume W = {w1, w2, …} is the set of “outcomes” that agents have preferences over We capture preferences by utility functions: ui : W → ℝ and uj : W → ℝ Utility functions lead to preference orderings over outcomes: w ≥i w′ means ui(w) ≥ ui(w′)
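As a concrete sketch (the outcome names and utility values below are illustrative, not from the lecture), a utility function is just a map from outcomes to numbers, and the preference ordering falls out of numeric comparison:

```python
# Illustrative utility functions for two agents over three outcomes.
# Outcomes and values are made up for demonstration.
u_i = {"w1": 9, "w2": 4, "w3": 1}
u_j = {"w1": 2, "w2": 7, "w3": 5}

def prefers(u, w, w_prime):
    """Agent with utility function u weakly prefers w to w_prime."""
    return u[w] >= u[w_prime]

# i's preference ordering over outcomes, best first
ordering_i = sorted(u_i, key=u_i.get, reverse=True)
print(ordering_i)  # i ranks w1 above w2 above w3
```

The same numeric comparison gives a different ordering for j, which is the point: self-interested agents rank the same outcomes differently.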

What is Utility? Utility is not money (but it is a useful analogy) Typical relationship between utility & money:

Risk Neutral A bet of $100-or-nothing, vs. $50 Risk neutral agent is indifferent between these options CE - Certainty equivalent; E(U(W)) - Expected value of the utility (expected utility) of the uncertain payment; E(W) - Expected value of the uncertain payment; U(CE) - Utility of the certainty equivalent; U(E(W)) - Utility of the expected value of the uncertain payment; U(W0) - Utility of the minimal payment; U(W1) - Utility of the maximal payment; W0 - Minimal payment; W1 - Maximal payment; RP - Risk premium From Wikipedia, http://en.wikipedia.org/wiki/Risk_aversion

Risk Averse (Risk Avoiding) A bet of $100-or-nothing, vs. (for example) $40 Risk-averse agent prefers the $40 sure thing From Wikipedia, http://en.wikipedia.org/wiki/Risk_aversion

Risk Affine (Risk Seeking) A bet of $100-or-nothing, vs. (for example) $60 Risk-seeking agent prefers the bet From Wikipedia, http://en.wikipedia.org/wiki/Risk_aversion

Human Approaches to “Utility” Behavioral economists have studied how human attitudes toward utility contradict classical economic notions of utility (and of rationality as utility maximization) These issues also matter for agents that need to interact with people in natural ways (or for multiagent systems that include people among the agents)

Kahneman (and Tversky) Scenario 1: You go to a concert with $200 in your wallet, and a ticket to the concert that cost $100. When getting there, you discover that you lost a $100 bill from your wallet (so you have remaining $100, and the ticket). Do you go into the concert?

Kahneman (and Tversky) Scenario 2: You go to a concert with $200 in your wallet, and a ticket to the concert that cost $100. When getting there, you discover that you lost the ticket from your wallet (so you have remaining $200, and no ticket). Do you buy another ticket and go into the concert?

Kahneman (and Tversky) People remember the end of painful events more vividly Scenario 1: hand put into very cold (painfully cold) water for 2 minutes, then pulled out. Scenario 2: hand put into very cold (painfully cold) water for 2 minutes, then the water is warmed for 1 minute (lessening the pain), and the hand is then pulled out. People prefer the second scenario, even though it includes all the pain of the first, plus more

Ariely Which subscription do you prefer? Has The Economist’s sales department gone crazy? From http://danariely.com

Ariely “…humans rarely choose things in absolute terms. We don’t have an internal value meter that tells us how much things are worth. Rather, we focus on the relative advantage of one thing over another, and estimate value accordingly.” From http://danariely.com

Ariely Ask stranger to help unload sofa from a truck, for free; many agree Ask stranger to help unload sofa from a truck, for $1; most do not The second scenario seems to provide the utility of the first, plus more, though “obviously”, it doesn’t

Ultimatum Game Two players Player 1 is given $100, told to offer Player 2 some of it. If Player 2 accepts, they divide the $100 according to the offer; if Player 2 does not accept, they both get nothing Player 1 offers Player 2 $1 Does Player 2 accept? Is it rational not to accept?

Multiagent Encounters We need a model of the environment in which these agents will act… agents simultaneously choose an action to perform, and as a result of the actions they select, an outcome in W will result the actual outcome depends on the combination of actions assume each agent has just two possible actions that it can perform, C (“cooperate”) and D (“defect”) Environment behavior is given by a state transformer function: τ : Aci × Acj → W

Multiagent Encounters Here is a state transformer function: (This environment is sensitive to actions of both agents.) Here is another: (Neither agent has any influence in this environment.) And here is another: (This environment is controlled by j.)
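These environments can be written as explicit state transformer functions; a minimal sketch (the outcome names are placeholders, since the original tables are not reproduced here):

```python
# A state transformer maps a pair of actions (i's action, j's action)
# to an outcome. Outcome names are placeholders.
C, D = "C", "D"

# Sensitive to the actions of both agents: four distinct outcomes.
tau_both = {(C, C): "w1", (C, D): "w2", (D, C): "w3", (D, D): "w4"}

# Neither agent has any influence: every action pair gives one outcome.
tau_none = {(C, C): "w1", (C, D): "w1", (D, C): "w1", (D, D): "w1"}

# Controlled entirely by j: the outcome depends only on j's action.
tau_j = {(C, C): "w1", (C, D): "w2", (D, C): "w1", (D, D): "w2"}

print(len(set(tau_both.values())), len(set(tau_none.values())))
```

Representing the transformer as a dictionary keyed by action pairs makes the three cases easy to distinguish: count the distinct outcomes, or check whether the outcome changes when only one agent's action changes.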

Rational Action Suppose we have the case where both agents can influence the outcome, and they have utility functions as follows: With a bit of abuse of notation: Then agent i’s preferences are: “C” is the rational choice for i. (Because i prefers all outcomes that arise through C over all outcomes that arise through D.)

An Aside on “Rationality” The term “rational” is used imprecisely as a synonym of “reasonable” But “rational” is used (precisely) to mean “utility maximizer”, given the alternatives Since humans’ utility functions are difficult to elicit, one way of determining their functions is to see what people choose (under the assumption that they are rational) Then behavioral economists can show forms of behavior that seem “irrational”

“Rational” in Common Usage The New York Times, April 26, 2012, “Defense Minister Adds to Israel’s Recent Mix of Messages on Iran”, by Jodi Rudoren “General Gantz described the Iranian government as ‘very rational.’ Mr. Netanyahu had told CNN on Tuesday that he would not count ‘on Iran’s rational behavior.’ ” “Mr. Barak said he thought it unlikely that the sanctions would succeed and that he did not see Iran as ‘rational in the Western sense of the word, meaning people seeking a status quo and the outlines of a solution to problems in a peaceful manner.’ ” “Dore Gold…said the apparent disagreement on rationality could be explained: ‘The Iranians have irrational goals, which they may try and advance in a rational way.’ ”

Payoff Matrices We can characterize the previous scenario in a payoff matrix: Agent i is the column player Agent j is the row player

Solution Concepts How will a rational agent behave in any given scenario? Answered in solution concepts: dominant strategy Nash equilibrium strategy Pareto optimal strategies strategies that maximize social welfare

Dominant Strategies We will say that a strategy si is dominant for player i if no matter what strategy sj agent j chooses, i will do at least as well playing si as it would doing anything else Unfortunately, there isn’t always a unique undominated strategy
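The definition translates directly into code; a sketch with a hypothetical 2×2 payoff table (the numbers are made up so that one strategy dominates):

```python
# payoff_i[s_i][s_j] = utility to i when i plays s_i and j plays s_j.
# Hypothetical payoffs chosen so that "C" is dominant for i.
payoff_i = {"C": {"C": 4, "D": 4}, "D": {"C": 3, "D": 1}}

def dominant(payoff, s):
    """s is dominant: against every opponent strategy, s does at
    least as well as every alternative strategy."""
    others = [t for t in payoff if t != s]
    return all(payoff[s][sj] >= payoff[t][sj]
               for t in others for sj in payoff[s])

print(dominant(payoff_i, "C"))  # True
print(dominant(payoff_i, "D"))  # False
```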

(Pure Strategy) Nash Equilibrium In general, we will say that two strategies s1 and s2 are in Nash equilibrium if: under the assumption that agent i plays s1, agent j can do no better than play s2; and under the assumption that agent j plays s2, agent i can do no better than play s1. Neither agent has any incentive to deviate from a Nash equilibrium Unfortunately: Not every interaction scenario has a (pure strategy) Nash equilibrium Some interaction scenarios have more than one Nash equilibrium

The Assurance Game What are the (pure strategy) Nash equilibria?

         C      D
  C    4, 4   1, 2
  D    2, 1   3, 3

CC and DD are both NEs; intuitively, CC seems like a more likely outcome, but NE doesn’t distinguish between them
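A brute-force search over pure-strategy profiles makes the equilibrium check concrete; a sketch using the assurance-game payoffs above:

```python
# Assurance game payoffs: (row player's payoff, column player's payoff).
C, D = "C", "D"
payoffs = {(C, C): (4, 4), (C, D): (1, 2), (D, C): (2, 1), (D, D): (3, 3)}
strategies = [C, D]

def pure_nash(payoffs, strategies):
    """Return profiles where neither player gains by deviating alone."""
    eq = []
    for r in strategies:
        for c in strategies:
            row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0]
                         for r2 in strategies)
            col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1]
                         for c2 in strategies)
            if row_ok and col_ok:
                eq.append((r, c))
    return eq

print(pure_nash(payoffs, strategies))  # [('C', 'C'), ('D', 'D')]
```

Exactly as the slide says, both CC and DD survive the unilateral-deviation test, and nothing in the Nash condition itself picks between them.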

Matching Pennies Players i and j simultaneously choose the face of a coin, either “heads” or “tails” If they show the same face, then i wins, while if they show different faces, then j wins The Matching Pennies payoff matrix (with i as the row player, a win worth 1 and a loss –1):

           heads    tails
  heads    1, –1   –1, 1
  tails   –1, 1     1, –1

Mixed Strategies for Matching Pennies NO pair of strategies forms a pure strategy Nash Equilibrium (NE): whatever pair of strategies is chosen, somebody will wish they had done something else The solution is to allow mixed strategies: play “heads” with probability 0.5 play “tails” with probability 0.5 This is a NE strategy
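A quick numeric check, assuming the standard ±1 win/loss payoffs: against the 50/50 mix, j's expected payoff is the same for either face, so no deviation helps.

```python
# Matching pennies: i wins (+1 for i, -1 for j) on a match,
# j wins (+1 for j, -1 for i) on a mismatch.
def expected_payoff_j(p_heads_i, j_move):
    """j's expected payoff for a pure move when i plays heads
    with probability p_heads_i."""
    if j_move == "heads":
        return p_heads_i * (-1) + (1 - p_heads_i) * (+1)
    else:
        return p_heads_i * (+1) + (1 - p_heads_i) * (-1)

# Against i's 50/50 mix, j is exactly indifferent:
print(expected_payoff_j(0.5, "heads"), expected_payoff_j(0.5, "tails"))  # 0.0 0.0
```

If i instead leaned toward heads (say probability 0.7), j would strictly prefer tails, and the profile would no longer be an equilibrium.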

Mixed Strategies A mixed strategy has the form play α1 with probability p1 play α2 with probability p2 … play αk with probability pk such that p1 + p2 + ··· + pk =1

Nash’s Theorem Nash proved that every finite game has a Nash equilibrium in mixed strategies. (Unlike the case for pure strategies.) So this result overcomes the lack of solutions; but there still may be more than one Nash equilibrium…

The Assurance Game What are the (mixed strategy) Nash equilibria?

         C      D
  C    4, 4   1, 2
  D    2, 1   3, 3

CC and DD are both NEs; so is “play C and D with equal probability”, which results in an expected payoff of 2.5 for both players. From http://www.le.ac.uk/psychology/amc/reasabou.pdf
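The 2.5 figure can be checked directly: against an opponent who plays C and D with equal probability, each pure strategy earns the same expected payoff, which is exactly what makes the 50/50 mix an equilibrium.

```python
# Assurance game: row player's payoffs against the column player's mix.
payoff_row = {("C", "C"): 4, ("C", "D"): 1, ("D", "C"): 2, ("D", "D"): 3}

def expected(row_move, q_C):
    """Row's expected payoff for a pure move when the column player
    plays C with probability q_C."""
    return (q_C * payoff_row[(row_move, "C")]
            + (1 - q_C) * payoff_row[(row_move, "D")])

print(expected("C", 0.5), expected("D", 0.5))  # 2.5 2.5 -- indifferent
```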

Rock-paper-scissors Normal form of game (zero-sum interaction):

              Rock    Paper   Scissors
  Rock        0, 0   -1, 1     1, -1
  Paper       1, -1   0, 0    -1, 1
  Scissors   -1, 1    1, -1    0, 0

How should you play? Use a mixed strategy of equal probabilities for the 3 choices. From Wikipedia, http://en.wikipedia.org/wiki/Rock-paper-scissors

So why do these web sites (and books) exist? Hint: your opponent is not an idealized rational agent, or even a computer.

Rock-paper-scissors-lizard-Spock Expanded form of game: what’s the optimal strategy? From Wikipedia, http://en.wikipedia.org/wiki/Rock-paper-scissors-lizard-Spock

Unstable Equilibria

         C      D
  C    3, 3   4, 2
  D    5, 1   3, 3

This game has a unique mixed-strategy equilibrium point: the row player plays (2/3 C, 1/3 D), and the column player plays (1/3 C, 2/3 D) But if the row player expects the column player to play that strategy, the row player can deviate with no penalty (no incentive to deviate, but no penalty) From http://www.le.ac.uk/psychology/amc/reasabou.pdf

Strong Nash Equilibrium A Nash equilibrium where no coalition can cooperatively deviate in a way that benefits all members, assuming that non-member actions are fixed Defined in terms of all possible coalitional deviations, rather than all possible unilateral deviations

Sub-game Perfect Equilibrium There are two equilibrium points, CC and DD Consider the extensive game (not identical to the normal-form game, which is shown only for intuition…) where player i makes the first move CC is a subgame-perfect equilibrium, while DD is not “A subgame-perfect equilibrium is one that induces payoff-maximizing choices in every branch or subgame of its extensive form.” From http://www.le.ac.uk/psychology/amc/reasabou.pdf

Pareto Optimality An outcome is said to be Pareto optimal (or Pareto efficient) if there is no other outcome that makes one agent better off without making another agent worse off If an outcome is Pareto optimal, then at least one agent will be reluctant to move away from it (because this agent will be worse off)

Pareto Optimality If an outcome ω is not Pareto optimal, then there is another outcome ω′ that makes everyone as happy, if not happier, than ω “Reasonable” agents would agree to move to ω′ in this case. (Even if I don’t directly benefit from ω′, you can benefit without me suffering.)
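Pareto optimality can be checked mechanically; a small sketch over illustrative utility vectors (prisoner's-dilemma-style numbers, used here only as an example):

```python
# Utility vectors (u_i, u_j) per outcome -- illustrative numbers.
outcomes = {"CC": (3, 3), "CD": (1, 4), "DC": (4, 1), "DD": (2, 2)}

def pareto_optimal(name, outcomes):
    """No other outcome makes one agent better off without
    making the other worse off."""
    ui, uj = outcomes[name]
    return not any(vi >= ui and vj >= uj and (vi, vj) != (ui, uj)
                   for vi, vj in outcomes.values())

print({w for w in outcomes if pareto_optimal(w, outcomes)})
```

With these numbers, DD is the only non-Pareto-optimal outcome: CC makes both agents better off than DD does.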

The Assurance Game What are the (mixed strategy) Nash equilibria?

         C      D
  C    4, 4   1, 2
  D    2, 1   3, 3

CC and DD are both NEs; but DD is not Pareto optimal

Social Welfare The social welfare of an outcome ω is the sum of the utilities that each agent gets from ω: sw(ω) = Σi∈Ag ui(ω) Think of it as the “total amount of money in the system” As a solution concept, it may be appropriate when the whole system (all agents) has a single owner (then the overall benefit of the system is what matters, not that of individuals)
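The sum over agents is straightforward to compute; a sketch with illustrative utilities:

```python
# Illustrative utilities (u_i, u_j) per outcome.
outcomes = {"CC": (3, 3), "CD": (1, 4), "DC": (4, 1), "DD": (2, 2)}

def social_welfare(w):
    """Sum of all agents' utilities for outcome w."""
    return sum(outcomes[w])

best = max(outcomes, key=social_welfare)
print(best, social_welfare(best))  # CC 6
```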

Competitive and Zero-Sum Interactions Where the preferences of agents are diametrically opposed we have strictly competitive scenarios Zero-sum encounters are those where utilities sum to zero: ui(ω) + uj(ω) = 0 for all ω ∈ W

Competitive and Zero-Sum Interactions Zero-sum encounters are bad news: for me to get positive utility, you have to get negative utility. The best outcome for me is the worst for you. Zero-sum encounters in real life are very rare… but people tend to act in many scenarios as if they were zero-sum

The Prisoner’s Dilemma Two men are collectively charged with a crime and held in separate cells, with no way of meeting or communicating. They are told that: if one confesses and the other does not, the confessor will be freed, and the other will be jailed for three years if both confess, then each will be jailed for two years Both prisoners know that if neither confesses, then they will each be jailed for one year

The Prisoner’s Dilemma Payoff matrix for the prisoner’s dilemma (each cell shows i’s payoff, j’s payoff; i is the column player, j the row player):

                  i cooperates   i defects
  j cooperates       3, 3           4, 1
  j defects          1, 4           2, 2

Bottom right: if both defect, then both get the punishment for mutual defection Bottom left: if i cooperates and j defects, i gets the sucker’s payoff of 1, while j gets 4 Top right: if j cooperates and i defects, j gets the sucker’s payoff of 1, while i gets 4 Top left: reward for mutual cooperation

What Should You Do? The individually rational action is to defect Defecting guarantees a payoff of no worse than 2, whereas cooperating risks the sucker’s payoff of 1 So defection is the best response to all possible strategies: both agents defect, and get payoff = 2 But intuition says this is not the best outcome: surely they should both cooperate, and each get a payoff of 3

Solution Concepts D is a dominant strategy (D, D) is the only Nash equilibrium All outcomes except (D, D) are Pareto optimal (C, C) maximizes social welfare
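The dominance claim can be checked mechanically from the payoff matrix (payoffs 3/4/1/2 as in the slide); a minimal sketch:

```python
# Prisoner's dilemma, keyed by (i's move, j's move),
# with each cell giving (i's payoff, j's payoff).
C, D = "C", "D"
pd = {(C, C): (3, 3), (C, D): (1, 4), (D, C): (4, 1), (D, D): (2, 2)}

def best_responses_i(aj):
    """i's best response(s) to j's pure move aj."""
    vals = {ai: pd[(ai, aj)][0] for ai in (C, D)}
    m = max(vals.values())
    return {ai for ai, v in vals.items() if v == m}

# D is i's unique best response whatever j does => dominant strategy:
print(best_responses_i(C), best_responses_i(D))  # {'D'} {'D'}
```

By symmetry the same holds for j, which is why (D, D) is the only Nash equilibrium even though it is the one outcome that is not Pareto optimal.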

The Prisoner’s Dilemma This apparent paradox is the fundamental problem of multi-agent interactions. It appears to imply that cooperation will not occur in societies of self-interested agents. Real-world examples: nuclear arms reduction (“why don’t I keep mine…”) free riding — on public transport, or (in the UK and Israel) on television licenses The prisoner’s dilemma is ubiquitous. Can we recover cooperation?

Arguments for Recovering Cooperation Conclusions that some have drawn from this analysis: the game theory notion of rational action is wrong somehow the dilemma is being formulated wrongly Arguments to recover cooperation: We are not all Machiavelli The other prisoner is my twin Program equilibria and mediators The shadow of the future…

Program Equilibria The strategy you really want to play in the prisoner’s dilemma is: I’ll cooperate if he will Program equilibria provide one way of enabling this Each agent submits a program strategy to a mediator which jointly executes the strategies. Crucially, strategies can be conditioned on the strategies of the others

Program Equilibria Consider the following program:

    IF HisProgram == ThisProgram THEN
        DO(C);
    ELSE
        DO(D);
    END-IF

Here == is textual comparison The best response to this program is to submit the same program, giving an outcome of (C, C) You can’t get the sucker’s payoff by submitting this program
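The mediator idea can be mocked up in a few lines; this is a toy sketch (the string-based “interpreter” below is an illustration, not the actual program-equilibrium formalism):

```python
# A "program" is a source string; the mediator runs each program
# with the other's source text as input.
COND_COOP = "IF HisProgram == ThisProgram THEN DO(C) ELSE DO(D)"
ALWAYS_D = "DO(D)"

def run(program, other_source, own_source):
    """Toy interpreter: the conditional cooperator compares source
    texts; any other toy program simply defects."""
    if program == COND_COOP:
        return "C" if other_source == own_source else "D"
    return "D"

def mediate(src_a, src_b):
    """Jointly execute the two submitted programs."""
    return run(src_a, src_b, src_a), run(src_b, src_a, src_b)

print(mediate(COND_COOP, COND_COOP))  # ('C', 'C')
print(mediate(COND_COOP, ALWAYS_D))   # ('D', 'D')
```

The second line shows why the sucker's payoff is unreachable: a defector facing the conditional cooperator triggers its ELSE branch, so the conditional cooperator defects too.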

The Iterated Prisoner’s Dilemma One answer: play the game more than once If you know you will be meeting your opponent again, then the incentive to defect appears to evaporate Cooperation is the rational choice in the infinitely repeated prisoner’s dilemma

Backwards Induction But…suppose you both know that you will play the game exactly n times On round n – 1 (the final round), you have an incentive to defect, to gain that extra bit of payoff… But this makes round n – 2 the last “real” round, and so you have an incentive to defect there, too. This is the backwards induction problem. When playing the prisoner’s dilemma with a fixed, finite, pre-determined, commonly known number of rounds, defection is the best strategy

Axelrod’s Tournament Suppose you play iterated prisoner’s dilemma against a range of opponents… What strategy should you choose, so as to maximize your overall payoff? Axelrod (1984) investigated this problem, with a computer tournament for programs playing the prisoner’s dilemma

Strategies in Axelrod’s Tournament ALLD: “Always defect” — the hawk strategy; TIT-FOR-TAT: On round u = 0, cooperate On round u > 0, do what your opponent did on round u – 1 TESTER: On 1st round, defect. If the opponent retaliated, then play TIT-FOR-TAT. Otherwise intersperse cooperation and defection. JOSS: As TIT-FOR-TAT, except periodically defect
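Two of these strategies are simple enough to sketch and play against each other; the 10-round horizon and the helper names below are arbitrary choices, with payoffs 3/4/1/2 as in the earlier matrix:

```python
# Iterated prisoner's dilemma: both C -> 3 each, both D -> 2 each,
# lone defector -> 4, sucker -> 1.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
          ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

def all_d(my_history, their_history):
    return "D"  # the hawk strategy

def tit_for_tat(my_history, their_history):
    # Cooperate first, then echo the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
        score_a += pa
        score_b += pb
    return score_a, score_b

print(play(tit_for_tat, all_d))        # TFT pays the sucker once, then matches D
print(play(tit_for_tat, tit_for_tat))  # mutual cooperation throughout
```

Against ALLD, TIT-FOR-TAT loses only the first round; against itself, it sustains full cooperation, which is the flavor of result that made it the tournament winner.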

Recipes for Success in Axelrod’s Tournament Axelrod suggests the following rules for succeeding in his tournament: Don’t be envious: Don’t play as if it were zero sum Be nice: Start by cooperating, and reciprocate cooperation Retaliate appropriately: Always punish defection immediately, but use “measured” force — don’t overdo it Don’t hold grudges: Always reciprocate cooperation immediately

Game of Chicken Consider another type of encounter — the game of chicken: (Think of James Dean in Rebel Without a Cause: swerving = cooperate, driving straight = defect.) Difference from the prisoner’s dilemma: mutual defection is the most feared outcome. (Whereas the sucker’s payoff is the most feared outcome in the prisoner’s dilemma.) Strategies (C, D) and (D, C) are in Nash equilibrium

Solution Concepts There is no dominant strategy (in our sense) Strategy pairs (C, D) and (D, C) are Nash equilibria All outcomes except (D, D) are Pareto optimal All outcomes except (D, D) maximize social welfare

Other Symmetric 2 x 2 Games Given the 4 possible outcomes of (symmetric) cooperate/defect games, there are 4! = 24 possible orderings on outcomes, including:

CC >i CD >i DC >i DD: cooperation dominates
DC >i DD >i CC >i CD: deadlock — you will always do best by defecting
DC >i CC >i DD >i CD: prisoner’s dilemma
DC >i CC >i CD >i DD: chicken
CC >i DC >i DD >i CD: stag hunt (assurance game)

Stag Hunt (Assurance Game) “two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare. Each player must choose an action without knowing the choice of the other. If an individual hunts a stag, he must have the cooperation of his partner in order to succeed. An individual can get a hare by himself, but a hare is worth less than a stag.”

          stag    hare
  stag    2, 2    0, 1
  hare    1, 0    1, 1

From http://en.wikipedia.org/wiki/Stag_hunt

Battle of the Sexes “Imagine a couple that agreed to meet this evening, but cannot recall if they will be attending the opera or a football match. The husband would most of all like to go to the football game. The wife would like to go to the opera. Both would prefer to go to the same place rather than different ones.”

              opera   football
  opera       3, 2     0, 0
  football    0, 0     2, 3

Two pure-strategy Nash equilibria; a mixed-strategy NE has each player attending their preferred event with probability 0.6 From http://en.wikipedia.org/wiki/Battle_of_the_sexes_(game_theory)
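The 0.6 figure follows from the usual indifference condition; a quick check using exact fractions (the row/column labeling and the payoff order below are assumptions about the original table):

```python
from fractions import Fraction

# Battle of the sexes, keyed by (wife's move, husband's move);
# each cell is (wife's payoff, husband's payoff) -- assumed labeling.
bos = {("opera", "opera"): (3, 2), ("opera", "football"): (0, 0),
       ("football", "opera"): (0, 0), ("football", "football"): (2, 3)}

def husband_expected(p_wife_opera, move):
    """Husband's expected payoff for a pure move when the wife
    attends the opera with probability p_wife_opera."""
    p = p_wife_opera
    return p * bos[("opera", move)][1] + (1 - p) * bos[("football", move)][1]

p = Fraction(3, 5)  # the wife attends her preferred event with prob 0.6
print(husband_expected(p, "opera"), husband_expected(p, "football"))  # 6/5 6/5
```

When the wife plays her preferred event with probability 3/5, the husband is exactly indifferent between his two pure moves; by symmetry the same holds the other way around, which is what makes the (0.6, 0.6) profile a mixed-strategy equilibrium.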