LECTURE 6: MULTIAGENT INTERACTIONS
An Introduction to MultiAgent Systems (Chapter 11 in the second edition; Chapter 6 in the first edition)
What are Multiagent Systems?
MultiAgent Systems
A multiagent system contains a number of agents…
…which interact through communication…
…are able to act in an environment…
…have different “spheres of influence” (which may coincide)…
…will be linked by other (organizational) relationships
Utilities and Preferences
Assume we have just two agents: Ag = {i, j}
Agents are assumed to be self-interested: they have preferences over how the environment is
Assume W = {w1, w2, …} is the set of “outcomes” that agents have preferences over
We capture preferences by utility functions: u_i : W → R and u_j : W → R
Utility functions lead to preference orderings over outcomes: w ≽_i w′ means u_i(w) ≥ u_i(w′), and w ≻_i w′ means u_i(w) > u_i(w′)
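To make this concrete, here is a minimal Python sketch of how a utility function induces a preference ordering. The outcome names and utility values are illustrative assumptions, not taken from the slides.

```python
# A utility function maps each outcome in W to a real number.
# Outcomes and values below are illustrative assumptions.
W = ["w1", "w2", "w3"]
u_i = {"w1": 2.0, "w2": 5.0, "w3": 3.0}

def weakly_prefers(u, w, w_prime):
    """w >= w' in the preference ordering induced by utility function u."""
    return u[w] >= u[w_prime]

# The induced ordering: outcomes sorted from most to least preferred.
ordering = sorted(W, key=lambda w: u_i[w], reverse=True)  # ['w2', 'w3', 'w1']
```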
What is Utility?
Utility is not money (but it is a useful analogy)
Typical relationship between utility & money: utility grows with money, but with diminishing marginal utility (a concave curve)
Risk Neutral
A bet of $100-or-nothing, vs. a sure $50
A risk-neutral agent is indifferent between these options
Legend for the (Wikipedia) risk-attitude figures: CE = certainty equivalent; E(U(W)) = expected utility of the uncertain payment; E(W) = expected value of the uncertain payment; U(CE) = utility of the certainty equivalent; U(E(W)) = utility of the expected value of the uncertain payment; U(W0) = utility of the minimal payment; U(W1) = utility of the maximal payment; W0 = minimal payment; W1 = maximal payment; RP = risk premium
Risk Averse (Risk Avoiding)
A bet of $100-or-nothing, vs. (for example) a sure $40
A risk-averse agent prefers the $40 sure thing (same figure legend as above)
Risk Affine (Risk Seeking)
A bet of $100-or-nothing, vs. (for example) a sure $60
A risk-seeking agent prefers the bet (same figure legend as above)
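The three risk attitudes can be checked numerically. In this sketch the utility curves (square root for risk averse, identity for risk neutral, square for risk seeking) are my own illustrative assumptions, applied to the $100-or-nothing bet:

```python
# The bet: $100 or $0, each with probability 0.5, so E(W) = 50.
p, W0, W1 = 0.5, 0.0, 100.0
expected_payment = p * W1 + (1 - p) * W0  # E(W) = 50.0

def certainty_equivalent(u, u_inverse):
    """The sure amount CE whose utility equals the bet's expected utility E(U(W))."""
    expected_utility = p * u(W1) + (1 - p) * u(W0)
    return u_inverse(expected_utility)

ce_averse = certainty_equivalent(lambda x: x ** 0.5, lambda y: y ** 2)    # 25.0
ce_neutral = certainty_equivalent(lambda x: x, lambda y: y)               # 50.0
ce_seeking = certainty_equivalent(lambda x: x ** 2, lambda y: y ** 0.5)   # ~70.7

# Risk premium RP = E(W) - CE: positive iff the agent is risk averse.
rp_averse = expected_payment - ce_averse
```

The risk-averse agent's certainty equivalent (25) is well below the bet's expected value (50), which is why it takes the sure $40.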
Human Approaches to “Utility”
Behavioral economists have studied how human attitudes toward utility contradict classical economic notions of utility (and of rationality as utility maximization)
These issues also matter for agents that need to interact with people in natural ways (or for multiagent systems that include people among the agents)
Kahneman (and Tversky)
Scenario 1: You go to a concert with $200 in your wallet, and a ticket to the concert that cost $100. When you get there, you discover that you have lost a $100 bill from your wallet (so you have $100 remaining, and the ticket). Do you go into the concert?
Kahneman (and Tversky)
Scenario 2: You go to a concert with $200 in your wallet, and a ticket to the concert that cost $100. When you get there, you discover that you have lost the ticket (so you have $200 remaining, and no ticket). Do you buy another ticket and go into the concert?
Kahneman (and Tversky)
People remember the end of painful events most vividly
Scenario 1: hand put into very cold (painfully cold) water for 2 minutes, then pulled out
Scenario 2: hand put into very cold (painfully cold) water for 2 minutes, then the water is warmed for 1 minute (lessening the pain), and the hand is then pulled out
People prefer the second scenario, even though its pain is a superset of the first's
Ariely
Which subscription do you prefer?
Has The Economist’s sales department gone crazy?
Ariely
“…humans rarely choose things in absolute terms. We don’t have an internal value meter that tells us how much things are worth. Rather, we focus on the relative advantage of one thing over another, and estimate value accordingly.”
Ariely
Ask a stranger to help unload a sofa from a truck, for free; many agree
Ask a stranger to help unload a sofa from a truck, for $1; most do not
The second scenario seems to provide the utility of the first, plus more, though “obviously” it doesn’t
Ultimatum Game Two players
Player 1 is given $100, told to offer Player 2 some of it. If Player 2 accepts, they divide the $100 according to the offer; if Player 2 does not accept, they both get nothing Player 1 offers Player 2 $1 Does Player 2 accept? Is it rational not to accept?
Multiagent Encounters
We need a model of the environment in which these agents will act…
agents simultaneously choose an action to perform, and as a result of the actions they select, an outcome in W will result
the actual outcome depends on the combination of actions
assume each agent has just two possible actions that it can perform, C (“cooperate”) and D (“defect”)
Environment behavior is given by a state transformer function: τ : Ac × Ac → W
Multiagent Encounters
Here is a state transformer function (written as τ(i’s action, j’s action)):
τ(D, D) = w1   τ(D, C) = w2   τ(C, D) = w3   τ(C, C) = w4
(This environment is sensitive to the actions of both agents.)
Here is another:
τ(D, D) = w1   τ(D, C) = w1   τ(C, D) = w1   τ(C, C) = w1
(Neither agent has any influence in this environment.)
And here is another:
τ(D, D) = w1   τ(D, C) = w2   τ(C, D) = w1   τ(C, C) = w2
(This environment is controlled by j.)
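These transformer functions are just lookup tables, so they can be sketched directly in Python. The dict encoding, the specific mappings, and the `influences` helper are my own illustration:

```python
# tau maps an action pair (i's action, j's action) to an outcome in W.
C, D = "C", "D"
tau_sensitive = {(D, D): "w1", (D, C): "w2", (C, D): "w3", (C, C): "w4"}
tau_constant  = {(D, D): "w1", (D, C): "w1", (C, D): "w1", (C, C): "w1"}
tau_j_only    = {(D, D): "w1", (D, C): "w2", (C, D): "w1", (C, C): "w2"}

def influences(tau, player):
    """True iff this player's action can ever change the outcome (0 = i, 1 = j)."""
    for other in (C, D):  # hold the other player's action fixed
        if player == 0:
            if tau[(C, other)] != tau[(D, other)]:
                return True
        else:
            if tau[(other, C)] != tau[(other, D)]:
                return True
    return False
```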
Rational Action
Suppose we have the case where both agents can influence the outcome, and they have utility functions as follows:
u_i(w1) = 1   u_i(w2) = 1   u_i(w3) = 4   u_i(w4) = 4
u_j(w1) = 1   u_j(w2) = 4   u_j(w3) = 1   u_j(w4) = 4
With a bit of abuse of notation (writing the action pair for the outcome it produces):
u_i(D, D) = 1   u_i(D, C) = 1   u_i(C, D) = 4   u_i(C, C) = 4
u_j(D, D) = 1   u_j(D, C) = 4   u_j(C, D) = 1   u_j(C, C) = 4
Then agent i’s preferences are:
C, C ≽_i C, D ≻_i D, C ≽_i D, D
“C” is the rational choice for i. (Because i prefers all outcomes that arise through C over all outcomes that arise through D.)
An Aside on “Rationality”
The term “rational” is often used imprecisely, as a synonym of “reasonable”
But “rational” is used (precisely) to mean “utility maximizing”, given the alternatives
Since humans’ utility functions are difficult to elicit, one way of determining them is to observe what people choose (under the assumption that they are rational)
Behavioral economists can then exhibit forms of behavior that seem “irrational”
“Rational” in Common Usage
The New York Times, April 26, 2012, “Defense Minister Adds to Israel’s Recent Mix of Messages on Iran”, by Jodi Rudoren “General Gantz described the Iranian government as ‘very rational.’ Mr. Netanyahu had told CNN on Tuesday that he would not count ‘on Iran’s rational behavior.’ ” “Mr. Barak said he thought it unlikely that the sanctions would succeed and that he did not see Iran as ‘rational in the Western sense of the word, meaning people seeking a status quo and the outlines of a solution to problems in a peaceful manner.’ ” “Dore Gold…said the apparent disagreement on rationality could be explained: ‘The Iranians have irrational goals, which they may try and advance in a rational way.’ ”
Payoff Matrices
We can characterize the previous scenario in a payoff matrix (each cell lists i’s payoff first, then j’s):

                i defects   i cooperates
j defects         1, 1        4, 1
j cooperates      1, 4        4, 4

Agent i is the column player
Agent j is the row player
Solution Concepts
How will a rational agent behave in any given scenario? Answers are given by solution concepts:
dominant strategy
Nash equilibrium strategy
Pareto optimal strategies
strategies that maximize social welfare
Dominant Strategies
We say that a strategy si is dominant for player i if, no matter what strategy sj agent j chooses, i will do at least as well playing si as it would playing anything else
Unfortunately, there isn’t always a unique undominated strategy
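A dominance check is mechanical enough to sketch in code. The game encoding and payoff numbers below are hypothetical; `dominant_for_row` tests weak dominance for the row player:

```python
# A 2-player game as {(row_action, col_action): (row_payoff, col_payoff)}.
ACTIONS = ("C", "D")

def dominant_for_row(game, s):
    """s is (weakly) dominant for the row player: at least as good as
    every alternative against every column action."""
    return all(game[(s, col)][0] >= game[(alt, col)][0]
               for col in ACTIONS for alt in ACTIONS)

# Hypothetical payoffs in which D weakly dominates C for the row player.
example = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
           ("D", "C"): (4, 1), ("D", "D"): (2, 2)}
```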
(Pure Strategy) Nash Equilibrium
In general, we say that two strategies s1 and s2 are in Nash equilibrium if:
under the assumption that agent i plays s1, agent j can do no better than play s2; and
under the assumption that agent j plays s2, agent i can do no better than play s1
Neither agent has any incentive to deviate from a Nash equilibrium
Unfortunately:
Not every interaction scenario has a (pure strategy) Nash equilibrium
Some interaction scenarios have more than one Nash equilibrium
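Pure-strategy equilibria of a small game can be found by brute force. A sketch (the dict encoding is my own), checked against the assurance game and matching pennies, both of which appear on later slides:

```python
# Find pure-strategy Nash equilibria of a 2-player game given as
# {(row_action, col_action): (row_payoff, col_payoff)}.
def pure_nash(game):
    rows = {r for r, _ in game}
    cols = {c for _, c in game}
    eqs = []
    for (r, c) in game:
        row_best = all(game[(r, c)][0] >= game[(alt, c)][0] for alt in rows)
        col_best = all(game[(r, c)][1] >= game[(r, alt)][1] for alt in cols)
        if row_best and col_best:
            eqs.append((r, c))
    return eqs

# Assurance game: both CC and DD are equilibria.
assurance = {("C", "C"): (4, 4), ("C", "D"): (1, 2),
             ("D", "C"): (2, 1), ("D", "D"): (3, 3)}

# Matching pennies: no pure-strategy equilibrium at all.
pennies = {("H", "H"): (1, -1), ("H", "T"): (-1, 1),
           ("T", "H"): (-1, 1), ("T", "T"): (1, -1)}
```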
The Assurance Game
What are the (pure strategy) Nash equilibria?

        C       D
C      4, 4    1, 2
D      2, 1    3, 3

(Row player’s payoff listed first.) CC and DD are both NEs; intuitively, CC seems like the more likely outcome, but NE doesn’t distinguish between them
Matching Pennies
Players i and j simultaneously choose the face of a coin, either “heads” or “tails”
If they show the same face, then i wins, while if they show different faces, then j wins
The Matching Pennies payoff matrix (i’s payoff listed first):

         heads    tails
heads    1, -1   -1, 1
tails   -1, 1     1, -1
Mixed Strategies for Matching Pennies
NO pair of pure strategies forms a pure-strategy Nash equilibrium (NE): whatever pair of strategies is chosen, somebody will wish they had done something else
The solution is to allow mixed strategies:
play “heads” with probability 0.5
play “tails” with probability 0.5
This is an NE strategy
Mixed Strategies
A mixed strategy has the form:
play α1 with probability p1
play α2 with probability p2
…
play αk with probability pk
such that p1 + p2 + ··· + pk = 1
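Expected payoffs under mixed strategies are simple weighted sums, which makes the matching-pennies claim easy to verify. A sketch (the dict encoding of the game is my own):

```python
# Matching pennies, cells as (i's payoff, j's payoff); i is the row player here.
pennies = {("H", "H"): (1, -1), ("H", "T"): (-1, 1),
           ("T", "H"): (-1, 1), ("T", "T"): (1, -1)}

def expected_payoff(game, row_mix, col_mix, player):
    """Expected payoff to `player` (0 = row, 1 = column) under mixed strategies."""
    return sum(row_mix[r] * col_mix[c] * game[(r, c)][player]
               for (r, c) in game)

fair = {"H": 0.5, "T": 0.5}

# Against a 50/50 opponent, both pure strategies earn the same expected payoff,
# so no deviation helps: (fair, fair) is a mixed-strategy Nash equilibrium.
payoff_heads = expected_payoff(pennies, {"H": 1.0, "T": 0.0}, fair, 0)  # 0.0
payoff_tails = expected_payoff(pennies, {"H": 0.0, "T": 1.0}, fair, 0)  # 0.0
```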
Nash’s Theorem Nash proved that every finite game has a Nash equilibrium in mixed strategies. (Unlike the case for pure strategies.) So this result overcomes the lack of solutions; but there still may be more than one Nash equilibrium…
The Assurance Game
What are the (mixed strategy) Nash equilibria?

        C       D
C      4, 4    1, 2
D      2, 1    3, 3

CC and DD are both NEs; so is “play C and D with equal probability”, which results in an expected payoff of 2.5 for both players
Rock-Paper-Scissors
Normal form of the game (a zero-sum interaction), row player’s payoff listed first:

            rock     paper    scissors
rock        0, 0    -1, 1     1, -1
paper       1, -1    0, 0    -1, 1
scissors   -1, 1     1, -1    0, 0

How should you play? Use a mixed strategy with equal probabilities for the 3 choices. (From Wikipedia.)
So why do these web sites (and books) exist?
Hint: your opponent is not an idealized rational agent, or even a computer.
Rock-paper-scissors-lizard-Spock
Expanded form of the game: what’s the optimal strategy? (From Wikipedia.)
Unstable Equilibria

        C       D
C      3, 3    4, 2
D      5, 1    3, 3

(Row player’s payoff listed first.) This game has a unique mixed-strategy equilibrium point: the row player plays (2/3 C, 1/3 D), and the column player plays (1/3 C, 2/3 D)
But if the row player expects the column player to play that strategy, the row player can deviate with no penalty (no incentive to deviate, but no penalty either)
Strong Nash Equilibrium
A Nash equilibrium where no coalition can cooperatively deviate in a way that benefits all members, assuming that non-member actions are fixed Defined in terms of all possible coalitional deviations, rather than all possible unilateral deviations
Sub-game Perfect Equilibrium
There are two equilibrium points, CC and DD
Consider the extensive-form game (not identical to the normal-form game, which is shown only for intuition…) where player i makes the first move: CC is a subgame-perfect equilibrium, while DD is not
“A subgame-perfect equilibrium is one that induces payoff-maximizing choices in every branch or subgame of its extensive form.”
Pareto Optimality An outcome is said to be Pareto optimal (or Pareto efficient) if there is no other outcome that makes one agent better off without making another agent worse off If an outcome is Pareto optimal, then at least one agent will be reluctant to move away from it (because this agent will be worse off)
Pareto Optimality
If an outcome ω is not Pareto optimal, then there is another outcome ω′ that makes everyone at least as happy as, if not happier than, ω
“Reasonable” agents would agree to move to ω′ in this case. (Even if I don’t directly benefit from ω′, you can benefit without me suffering.)
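Over a finite outcome set, Pareto optimality is a direct filter. A sketch, with hypothetical payoff vectors (one utility per agent):

```python
def pareto_optimal(outcomes):
    """Keep the outcomes not Pareto-dominated by any other outcome."""
    def dominates(y, x):
        # y makes nobody worse off than x, and somebody strictly better off.
        return all(b >= a for a, b in zip(x, y)) and any(b > a for a, b in zip(x, y))
    return [x for x in outcomes if not any(dominates(y, x) for y in outcomes)]

# Hypothetical payoff vectors (u_i, u_j): (1, 1) is dominated by (3, 3).
outcomes = [(3, 3), (1, 4), (4, 1), (1, 1)]
optimal = pareto_optimal(outcomes)  # (1, 1) drops out
```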
The Assurance Game
What are the (mixed strategy) Nash equilibria?

        C       D
C      4, 4    1, 2
D      2, 1    3, 3

CC and DD are both NEs, but DD is not Pareto optimal
Social Welfare
The social welfare of an outcome ω is the sum of the utilities that each agent gets from ω: sw(ω) = Σ_{i ∈ Ag} u_i(ω)
Think of it as the “total amount of money in the system”
As a solution concept, it may be appropriate when the whole system (all agents) has a single owner (then the overall benefit of the system is important, not individuals)
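Social welfare is just the per-outcome sum of utilities. A sketch, with hypothetical payoff vectors:

```python
# Hypothetical outcomes as payoff vectors (u_i, u_j).
outcomes = {"CC": (3, 3), "CD": (1, 4), "DC": (4, 1), "DD": (2, 2)}

def social_welfare(payoffs):
    return sum(payoffs)

welfare = {name: social_welfare(p) for name, p in outcomes.items()}
best_value = max(welfare.values())
maximizers = [name for name, w in welfare.items() if w == best_value]  # ['CC']
```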
Competitive and Zero-Sum Interactions
Where the preferences of agents are diametrically opposed, we have strictly competitive scenarios
Zero-sum encounters are those where utilities sum to zero: u_i(ω) + u_j(ω) = 0 for all ω ∈ W
Competitive and Zero-Sum Interactions
Zero-sum encounters are bad news: for me to get positive utility, you have to get negative utility. The best outcome for me is the worst for you
Zero-sum encounters in real life are very rare… but people tend to act in many scenarios as if they were zero-sum
The Prisoner’s Dilemma
Two men are collectively charged with a crime and held in separate cells, with no way of meeting or communicating. They are told that:
if one confesses and the other does not, the confessor will be freed, and the other will be jailed for three years
if both confess, then each will be jailed for two years
Both prisoners know that if neither confesses, they will each be jailed for one year
The Prisoner’s Dilemma
Payoff matrix for the prisoner’s dilemma (each cell lists i’s payoff first, then j’s):

                i cooperates   i defects
j cooperates       3, 3          4, 1
j defects          1, 4          2, 2

Top left: reward for mutual cooperation
Top right: if j cooperates and i defects, j gets the sucker’s payoff of 1, while i gets 4
Bottom left: if i cooperates and j defects, i gets the sucker’s payoff of 1, while j gets 4
Bottom right: if both defect, then both get the punishment for mutual defection
What Should You Do?
The individually rational action is to defect: defection guarantees a payoff of no worse than 2, whereas cooperation guarantees only a payoff of at least 1 (the sucker’s payoff, if the other agent defects)
So defection is the best response to all possible strategies: both agents defect, and get payoff = 2
But intuition says this is not the best outcome: surely they should both cooperate, and each get a payoff of 3
Solution Concepts D is a dominant strategy
(D, D) is the only Nash equilibrium All outcomes except (D, D) are Pareto optimal (C, C) maximizes social welfare
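All four claims can be checked mechanically against the slides' payoff matrix. In this sketch, cells are (payoff to i, payoff to j) and the dict encoding is my own:

```python
pd = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
      ("D", "C"): (4, 1), ("D", "D"): (2, 2)}
ACTS = ("C", "D")

# D is dominant for the row player: at least as good against every column action.
d_dominant = all(pd[("D", c)][0] >= pd[("C", c)][0] for c in ACTS)

# (D, D) is the only pure-strategy Nash equilibrium.
nash = [(r, c) for (r, c) in pd
        if all(pd[(r, c)][0] >= pd[(a, c)][0] for a in ACTS)
        and all(pd[(r, c)][1] >= pd[(r, a)][1] for a in ACTS)]

# (C, C) maximizes social welfare (the sum of both payoffs).
welfare_max = max(pd, key=lambda cell: sum(pd[cell]))
```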
The Prisoner’s Dilemma
This apparent paradox is the fundamental problem of multiagent interactions. It appears to imply that cooperation will not occur in societies of self-interested agents
Real-world examples:
nuclear arms reduction (“why don’t I keep mine…”)
free-rider systems — public transport; television licenses (in the UK and Israel)
The prisoner’s dilemma is ubiquitous. Can we recover cooperation?
Arguments for Recovering Cooperation
Conclusions that some have drawn from this analysis: the game theory notion of rational action is wrong somehow the dilemma is being formulated wrongly Arguments to recover cooperation: We are not all Machiavelli The other prisoner is my twin Program equilibria and mediators The shadow of the future…
Program Equilibria
The strategy you really want to play in the prisoner’s dilemma is: I’ll cooperate if he will
Program equilibria provide one way of enabling this
Each agent submits a program strategy to a mediator, which jointly executes the strategies
Crucially, strategies can be conditioned on the strategies of the others
Program Equilibria
Consider the following program:
IF HisProgram == ThisProgram THEN
    DO(C);
ELSE
    DO(D);
END-IF
Here == is textual comparison
The best response to this program is to submit the same program, giving an outcome of (C, C)
You can’t get the sucker’s payoff by submitting this program
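Because "==" is textual comparison, the mediator idea can be sketched with programs represented literally as strings. The tiny interpreter below is my own illustration, not a real program-equilibrium framework:

```python
# Programs are their own source text; the mediator runs each one on the
# other's text. Only two program shapes are interpreted in this sketch.
MATCH = "IF HisProgram == ThisProgram THEN DO(C) ELSE DO(D)"
ALWAYS_DEFECT = "DO(D)"

def run(program, opponent_text):
    if program.startswith("IF HisProgram == ThisProgram"):
        return "C" if opponent_text == program else "D"
    return program[3]  # "DO(X)" -> X

def mediator(prog_i, prog_j):
    return run(prog_i, prog_j), run(prog_j, prog_i)
```

Submitting the same conditional program yields (C, C); a defector meets defection, so it cannot extract the sucker's payoff from the conditional cooperator.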
The Iterated Prisoner’s Dilemma
One answer: play the game more than once
If you know you will be meeting your opponent again, then the incentive to defect appears to evaporate
Cooperation is the rational choice in the infinitely repeated prisoner’s dilemma
Backwards Induction
But… suppose you both know that you will play the game exactly n times (rounds 0 through n – 1)
On round n – 1, the last round, you have an incentive to defect, to gain that extra bit of payoff…
But this makes round n – 2 the last “real” round, and so you have an incentive to defect there, too
This is the backwards induction problem
Playing the prisoner’s dilemma with a fixed, finite, pre-determined, commonly known number of rounds, defection is the best strategy
Axelrod’s Tournament Suppose you play iterated prisoner’s dilemma against a range of opponents… What strategy should you choose, so as to maximize your overall payoff? Axelrod (1984) investigated this problem, with a computer tournament for programs playing the prisoner’s dilemma
Strategies in Axelrod’s Tournament
ALLD: “Always defect” — the hawk strategy
TIT-FOR-TAT:
on round u = 0, cooperate
on round u > 0, do what your opponent did on round u – 1
TESTER: on the 1st round, defect. If the opponent retaliated, then play TIT-FOR-TAT. Otherwise, intersperse cooperation and defection
JOSS: as TIT-FOR-TAT, except periodically defect
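ALLD and TIT-FOR-TAT are easy to pit against each other. A sketch of an iterated match, using the prisoner's dilemma payoffs from earlier in the lecture (3 for mutual cooperation, 2 for mutual defection, 4 for defecting against a cooperator, 1 for the sucker):

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
          ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

def alld(my_moves, their_moves):
    return "D"

def tit_for_tat(my_moves, their_moves):
    return "C" if not their_moves else their_moves[-1]

def play(strategy_a, strategy_b, rounds):
    """Run an iterated match and return (score_a, score_b)."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
        score_a += pa
        score_b += pb
    return score_a, score_b
```

Over 10 rounds, TIT-FOR-TAT against itself cooperates throughout (30 points each), while ALLD beats TIT-FOR-TAT only narrowly (22 to 19), exploiting nothing but the first round.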
Recipes for Success in Axelrod’s Tournament
Axelrod suggests the following rules for succeeding in his tournament: Don’t be envious: Don’t play as if it were zero sum Be nice: Start by cooperating, and reciprocate cooperation Retaliate appropriately: Always punish defection immediately, but use “measured” force — don’t overdo it Don’t hold grudges: Always reciprocate cooperation immediately
Game of Chicken
Consider another type of encounter — the game of chicken. (Think of James Dean in Rebel Without a Cause: swerving = cooperate, driving straight = defect.)
Difference from the prisoner’s dilemma: mutual defection is the most feared outcome. (Whereas the sucker’s payoff is the most feared outcome in the prisoner’s dilemma.)
Strategies (C, D) and (D, C) are in Nash equilibrium
Solution Concepts
There is no dominant strategy (in our sense)
Strategy pairs (C, D) and (D, C) are Nash equilibria
All outcomes except (D, D) are Pareto optimal
All outcomes except (D, D) maximize social welfare
Other Symmetric 2 × 2 Games
Given the 4 possible outcomes of (symmetric) cooperate/defect games, there are 4! = 24 possible orderings on outcomes, including:
CC >i CD >i DC >i DD: cooperation dominates
DC >i DD >i CC >i CD: deadlock (you will always do best by defecting)
DC >i CC >i DD >i CD: prisoner’s dilemma
DC >i CC >i CD >i DD: chicken
CC >i DC >i DD >i CD: stag hunt (assurance game)
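The count comes from permutations of the four outcomes; a quick sketch confirms it and tags the named orderings from the list above:

```python
from itertools import permutations

OUTCOMES = ("CC", "CD", "DC", "DD")
orderings = list(permutations(OUTCOMES))  # all strict preference orderings

NAMED = {
    ("DC", "CC", "DD", "CD"): "prisoner's dilemma",
    ("DC", "CC", "CD", "DD"): "chicken",
    ("CC", "DC", "DD", "CD"): "stag hunt",
}
named = [NAMED[o] for o in orderings if o in NAMED]
```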
Stag Hunt (Assurance Game)
“two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare. Each player must choose an action without knowing the choice of the other. If an individual hunts a stag, he must have the cooperation of his partner in order to succeed. An individual can get a hare by himself, but a hare is worth less than a stag.” (From Wikipedia.)

        stag     hare
stag    2, 2     0, 1
hare    1, 0     1, 1

(Row player’s payoff listed first.)
Battle of the Sexes
“Imagine a couple that agreed to meet this evening, but cannot recall if they will be attending the opera or a football match. The husband would most of all like to go to the football game. The wife would like to go to the opera. Both would prefer to go to the same place rather than different ones.” (From Wikipedia.)

           opera    football
opera      3, 2     0, 0
football   0, 0     2, 3

(Wife is the row player; her payoff is listed first.) Two pure-strategy Nash equilibria; there is also a mixed-strategy NE in which each player attends their preferred event with probability 0.6