1
If two heads are better than one, how about 2000? Vicki Allan Multi-Agent Systems
2
Goal of Presentation
Give an overview of multi-agent system issues
Give specific research ideas
Welcome you to join me in research
– interesting projects
– supportive major professor
– hands-on
– travel opportunities
– independent study opportunities next semester (CS 5600 is a prerequisite)
3
Strategic Form Games Competition
4
Prisoners’ Dilemma (payoffs listed as Ned, Kelly)

                      Kelly
                      Confess      Don't Confess
Ned  Confess          -10, -10      0, -30
     Don't Confess    -30, 0       -1, -1
5
Prisoners’ Dilemma

                      Kelly
                      Confess      Don't Confess
Ned  Confess          -10, -10      0, -30
     Don't Confess    -30, 0       -1, -1

Note that no matter what Ned does, Kelly is better off if she confesses than if she does not confess. So "confess" is a dominant strategy from Kelly's perspective. We can predict that she will always confess.
6
Prisoners’ Dilemma

                      Kelly
                      Confess      Don't Confess
Ned  Confess          -10, -10      0, -30

The same holds for Ned: no matter what Kelly does, Ned is better off confessing, so his "Don't Confess" row can be eliminated.
7
Prisoners’ Dilemma

                      Kelly
                      Confess
Ned  Confess          -10, -10

So the only outcome in which each player chooses a dominant strategy is the one where they both confess. Solve by iterated elimination of dominated strategies.
8
What is bothersome?
9
Example: Prisoner's Dilemma
Two people are arrested for a crime.
If neither suspect confesses, both get a light sentence.
If both confess, both are sent to jail.
If one confesses and the other does not, the confessor gets no jail time and the other gets a heavy sentence.
(Actual numbers vary in different versions of the problem, but the relative values are the same.)

                      Confess      Don't Confess
     Confess          -10, -10      0, -30
     Don't Confess    -30, 0       -1, -1

The dominant-strategy equilibrium (Confess, Confess) is not Pareto optimal; the optimal outcome (Don't Confess, Don't Confess) is Pareto optimal.
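The dominance reasoning above is mechanical enough to check in code. A minimal sketch (the payoff table mirrors the matrix on this slide; player 0 is the row player, player 1 the column player, and the function names are my own):

```python
# Payoff matrix from the slide: each cell is (row player's payoff,
# column player's payoff). "C" = Confess, "D" = Don't Confess.
PAYOFF = {
    ("C", "C"): (-10, -10),
    ("C", "D"): (0, -30),
    ("D", "C"): (-30, 0),
    ("D", "D"): (-1, -1),
}
STRATEGIES = ["C", "D"]

def payoff_for(player, own, opp):
    """Payoff to `player` when it plays `own` and the opponent plays `opp`."""
    cell = (own, opp) if player == 0 else (opp, own)
    return PAYOFF[cell][player]

def dominant_strategy(player):
    """Return a strategy strictly better than every alternative,
    whatever the opponent does, or None if no such strategy exists."""
    for s in STRATEGIES:
        dominates = all(
            payoff_for(player, s, opp) > payoff_for(player, t, opp)
            for t in STRATEGIES if t != s
            for opp in STRATEGIES
        )
        if dominates:
            return s
    return None

print(dominant_strategy(0))  # C -- the row player always prefers to confess
print(dominant_strategy(1))  # C -- and so does the column player
```

Both players have "confess" as a dominant strategy, which is exactly why (Confess, Confess) is the predicted outcome despite being worse for both than (Don't Confess, Don't Confess).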
10
Game of Chicken
Consider another type of encounter – the game of chicken. (Think of James Dean in Rebel Without a Cause.)
Difference from the prisoner's dilemma: mutually going straight is the most feared outcome (whereas the sucker's payoff is the most feared in the prisoner's dilemma).

                      Kelly
                      straight     swerve
Ned  straight         -10, -10     10, 5
     swerve           5, 10        7, 7
11
Game of Chicken
Is there a dominant strategy?
Is there a Pareto optimal outcome (one where no player can do better without making someone else worse off)?
Is there a Nash equilibrium – knowing what my opponent is going to do, would I be happy with my decision?

                      straight     swerve
straight              -10, -10     10, 5
swerve                5, 10        7, 7
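The three questions above can be answered programmatically for this payoff table. A sketch (function and variable names are my own) that finds the pure-strategy Nash equilibria of chicken:

```python
# Chicken payoffs from the slide: (row player's payoff, column player's).
PAYOFF = {
    ("straight", "straight"): (-10, -10),
    ("straight", "swerve"): (10, 5),
    ("swerve", "straight"): (5, 10),
    ("swerve", "swerve"): (7, 7),
}
STRATEGIES = ["straight", "swerve"]

def is_nash(r, c):
    """A cell is a Nash equilibrium if neither player gains by deviating alone."""
    row_ok = all(PAYOFF[(r, c)][0] >= PAYOFF[(alt, c)][0] for alt in STRATEGIES)
    col_ok = all(PAYOFF[(r, c)][1] >= PAYOFF[(r, alt)][1] for alt in STRATEGIES)
    return row_ok and col_ok

equilibria = [(r, c) for r in STRATEGIES for c in STRATEGIES if is_nash(r, c)]
print(equilibria)  # [('straight', 'swerve'), ('swerve', 'straight')]
```

Neither player has a dominant strategy here, and the two equilibria are the asymmetric outcomes where exactly one driver swerves; every cell except (straight, straight) is Pareto optimal.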
12
Monetary Auction
Object for sale: a bill
Rules
– Highest bidder gets it
– Highest bidder and the second-highest bidder both pay their bids
– New bids must beat old bids by 5¢
– Bidding starts at 5¢
What would your strategy be?
13
Give Away
I have a bag of candy to give away.
If everyone in the class says "share", the candy is split equally.
If exactly one person says "I want it", he or she gets all the candy.
If more than one person says "I want it", I keep the candy.
14
Cooperation
We are hiring a new professor this year.
A committee of three people makes the decision.
The committee has narrowed the field to four candidates.
Each person has a different ranking of the candidates.
How do we make a decision?
15
Binary Protocol
One voter ranks c > d > b > a
One voter ranks a > c > d > b
One voter ranks b > a > c > d

winner(c, winner(a, winner(b, d))) = a
winner(d, winner(b, winner(c, a))) = d
winner(d, winner(c, winner(a, b))) = c
winner(b, winner(d, winner(c, a))) = b

Surprisingly, the order of pairing yields different winners!
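The order dependence is easy to verify with a few lines of code. A sketch (the `winner` helper name is my own) that replays the slide's four pairing orders over the three rankings:

```python
# Each voter's ranking, best first (from the slide).
RANKINGS = [
    ["c", "d", "b", "a"],
    ["a", "c", "d", "b"],
    ["b", "a", "c", "d"],
]

def winner(x, y):
    """Pairwise majority vote between alternatives x and y."""
    votes_x = sum(1 for r in RANKINGS if r.index(x) < r.index(y))
    return x if votes_x > len(RANKINGS) - votes_x else y

# The four pairing orders from the slide, innermost contest first:
print(winner("c", winner("a", winner("b", "d"))))  # a
print(winner("d", winner("b", winner("c", "a"))))  # d
print(winner("d", winner("c", winner("a", "b"))))  # c
print(winner("b", winner("d", winner("c", "a"))))  # b
```

Every alternative can be made the social choice by picking the right agenda, which is exactly why the binary protocol is manipulable by whoever sets the pairing order.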
16
The Borda protocol assigns an alternative |O| points for the highest preference, |O|-1 points for the second, and so on. The counts are summed across the voters, and the alternative with the highest count becomes the social choice.
17
reasonable???
18
Borda Paradox
a > b > c > d
b > c > d > a
c > d > a > b
a > b > c > d
b > c > d > a
c > d > a > b
a > b > c > d

Scores: a = 18, b = 19, c = 20, d = 13
Is this a good way? d is the clear loser.
19
Borda Paradox – remove the loser (d), and the winner changes
With all four alternatives (a = 18, b = 19, c = 20, d = 13, so c wins):
a > b > c > d, b > c > d > a, c > d > a > b, a > b > c > d, b > c > d > a, c > d > a > b, a > b > c > d
With d removed (a = 15, b = 14, c = 13, so a wins):
a > b > c, b > c > a, c > a > b, a > b > c, b > c > a, c > a > b, a > b > c
When the loser is removed, the second-worst alternative becomes the winner!
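Both Borda computations on this slide can be reproduced directly. A sketch (the helper name `borda` is my own; note that the seventh ranking must be a > b > c > d for the slide's totals of 18, 19, 20, 13 to come out):

```python
from collections import defaultdict

# The seven voter rankings from the slide, best first.
RANKINGS = [
    ["a", "b", "c", "d"], ["b", "c", "d", "a"], ["c", "d", "a", "b"],
    ["a", "b", "c", "d"], ["b", "c", "d", "a"], ["c", "d", "a", "b"],
    ["a", "b", "c", "d"],
]

def borda(rankings):
    """|O| points for a first place, |O|-1 for second, and so on."""
    scores = defaultdict(int)
    for r in rankings:
        for i, alt in enumerate(r):
            scores[alt] += len(r) - i
    return dict(scores)

print(borda(RANKINGS))
# {'a': 18, 'b': 19, 'c': 20, 'd': 13} -- c wins, d is the clear loser

# Remove the loser d from every ranking and re-run:
without_d = [[x for x in r if x != "d"] for r in RANKINGS]
print(borda(without_d))
# {'a': 15, 'b': 14, 'c': 13} -- the winner flips from c to a
```

Dropping an apparently irrelevant loser reverses the outcome, which is the paradox: Borda violates independence of irrelevant alternatives.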
20
Open Problems in Multi-Agent Systems
21
Group Detection
GOAL: Organize large groups of agents into coalitions or subgroups.
Groups can have similar interests or complementary skills.
Application – distributed formation of peer-to-peer information retrieval systems.
Agents use explicit information about other agents' interests and qualifications.
22
Online Auctions
How to make online auctions fair.
23
Entity Resolution
Identifying sets of nodes within a graph that actually refer to the same object.
Trust, reputation, authentication.
Repeat offenders – agents with low levels of trust who leave a community and then rejoin with a new identity.
Agents posing as multiple agents – e.g., on eBay, posting positive feedback and bidding up their own items.
Possible approach – databases.
24
Link Prediction
Inferring the existence of a link (relationship) in a graph that was not previously known.
Agent-organized networks – discover or create new links within a larger community.
Discover relationships between other agents – collusion detection.
25
Graph Classification
Judge an organization – efficient or inefficient.
Uses: determining when to join an open networked multi-agent system, such as a supply-chain network.
26
Generative Models for Graphs
Understand the effects of real-world networked structures and compare them to agent-organized networks.
How and why agent networks evolve.
27
Who Works Together in Agent Coalition Formation?
Vicki Allan – Utah State University
Kevin Westwood – Utah State University
Presented at CIA 2007, September 2007, the Netherlands
28
Overview
Tasks: various skills and numbers
Agents form coalitions
Agent types – differing policies
How do the policies interact?
29
Multi-Agent Coalitions
"A coalition is a set of agents that work together to achieve a mutually beneficial goal" (Klusch and Shehory, 1996)
Reasons an agent would join a coalition:
– It cannot complete the task alone
– It can complete the task more quickly
30
Skilled Request For Proposal (SRFP) Environment
Inspired by RFP (Kraus, Shehory, and Taase 2003)
Provides a set of tasks T = {T1, …, Ti, …, Tn}
– Divided into multiple subtasks
– In our model, each subtask requires a skill at a given level
– Each task has a payment value V(Ti)
Service agents A = {A1, …, Ak, …, Ap}
– Each agent has an associated cost fk of providing service
– In the original model, the ability to do a task is determined probabilistically – no two agents are alike
– In our model, each agent has a skill and a skill level
– A higher skill level is more flexible (the agent can do any task requiring a lower level of that skill)
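A toy rendition may make the SRFP ingredients concrete. Everything below (the dictionary layout, the numbers, and the greedy assignment rule) is an illustrative assumption of mine, not the paper's algorithm; it only shows how a higher skill level subsumes lower-level tasks:

```python
# Tasks need a skill level; agents have a level and a service cost f_k.
TASKS = [
    {"id": "T1", "level": 2, "value": 10.0},
    {"id": "T2", "level": 1, "value": 6.0},
]
AGENTS = [
    {"id": "A1", "level": 3, "cost": 4.0},  # can serve any task at level <= 3
    {"id": "A2", "level": 1, "cost": 2.0},  # can serve only level-1 tasks
]

def greedy_assign(tasks, agents):
    """Give each task, highest value first, the cheapest capable free agent."""
    free = list(agents)
    assignment = {}
    for task in sorted(tasks, key=lambda t: -t["value"]):
        capable = [a for a in free if a["level"] >= task["level"]]
        if capable:
            best = min(capable, key=lambda a: a["cost"])
            assignment[task["id"]] = best["id"]
            free.remove(best)
    return assignment

print(greedy_assign(TASKS, AGENTS))  # {'T1': 'A1', 'T2': 'A2'}
```

A2 cannot take T1 (it requires level 2), so the flexible but more expensive A1 covers it; that flexibility-versus-cost tension is the realism the next slide highlights.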
31
Why this model?
Enough realism to be interesting
– An agent with specific skills has realistic properties
– More skilled agents can work on more tasks; that they are also more expensive is realistic too
Not so much realism that it harms analysis
– An agent can't work on several tasks at once
– An agent can't alter its cost
32
Auctioning Protocol
A variation of a reverse auction
– Agents compete for the opportunity to perform services
– An efficient way of matching goods to services
A central manager (for ease of programming):
1) Randomly orders the agents
2) Gives each agent a turn to propose a coalition or accept a previous offer
3) Awards tasks to coalitions
Multiple rounds {0, …, rz}
33
Agent Costs by Level (chart): general upward trend
34
Agent Cost
Base cost is derived from skill and skill level.
Agent costs deviate from the base cost.
Agent payment = cost + a proportional portion of the net gain.
Payment changes only with a change in the coalition.
35
How do I decide what to propose?
36
Decisions
If I make an offer…
– What task should I propose doing?
– What other agents should I recruit?
If others have made me an offer…
– How do I decide whether to accept?
37
Coalition Calculation Algorithms
Calculating all possible coalitions
– Requires exponential time
– Not feasible in most problems, where tasks and agents are entering and leaving the system
Divide the computation into two steps:
1) Task selection
2) Selection of other agents for the team
Both steps use polynomial-time algorithms.
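The exponential-time claim is easy to quantify: with p agents there are 2^p - 1 non-empty candidate coalitions. A quick check (the explicit enumeration is only there to confirm the closed-form count):

```python
from itertools import combinations

def coalition_count(p):
    """Count every non-empty subset of p agents by explicit enumeration."""
    return sum(1 for size in range(1, p + 1)
               for _ in combinations(range(p), size))

for p in (5, 10, 20):
    # agrees with the closed form 2**p - 1: 31, 1023, 1048575
    print(p, coalition_count(p))
```

Evaluating each coalition even at a million per second becomes hopeless well before p reaches realistic population sizes, which is why the two-step polynomial decomposition matters.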
38
Task Selection – 4 Agent Types
1. Individual Profit – the obvious, greedy approach
– Competitive: best for me
– Why not always be greedy? Others may not accept, so your membership is questioned, and individual profit may not be your goal
2. Global Profit
3. Best Fit
4. Co-opetitive
39
Task Selection – 4 Agent Types
1. Individual Profit
2. Global Profit – somebody should do this task; I'll sacrifice
– Wouldn't this always be a noble thing to do? The task might be better done by others, and I might be more profitable elsewhere
3. Best Fit – uses my skills wisely
4. Co-opetitive
40
Task Selection – 4 Agent Types
1. Individual Profit
2. Global Profit
3. Best Fit – Cooperative: uses skills wisely
– But perhaps no one else can do the task, and maybe it shouldn't be done at all
4. Co-opetitive
41
4th Type: the Co-opetitive Agent
Co-opetition
– A term coined by business professors Brandenburger and Nalebuff (1996) to emphasize the need to consider both competitive and cooperative strategies.
Co-opetitive task selection
– Select the best-fit task if its profit is within P% of the maximum profit available.
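The co-opetitive rule above can be sketched in a few lines. The option tuples, the fit scores, and the 20% default for P are illustrative assumptions of mine:

```python
def coopetitive_choice(options, p=0.20):
    """options: (task, profit, fit) tuples; pick the best-fit task among
    those whose profit is within fraction p of the maximum profit."""
    max_profit = max(profit for _, profit, _ in options)
    acceptable = [o for o in options if o[1] >= (1 - p) * max_profit]
    return max(acceptable, key=lambda o: o[2])[0]

options = [("T1", 10.0, 0.3), ("T2", 9.5, 0.9), ("T3", 4.0, 1.0)]
print(coopetitive_choice(options))  # T2 -- near-max profit and a good fit
```

A pure Individual Profit agent would take T1 and a pure Best Fit agent would take T3; the co-opetitive agent lands in between, trading a little profit for a much better skill match.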
42
What about accepting offers?
Melting – the same deal may be gone later.
Compare the offer to what you could achieve with your own proposal.
Compare the best proposal with the best offer.
Use a utility function based on agent type.
43
Some amount of compromise is necessary… We term the fraction of the total possible that you demand the compromising ratio.
44
Resources Shrink
Even in a task-rich environment, the number of tasks an agent can choose from shrinks
– Tasks get taken
The number of agents also shrinks as others are assigned.
45
My available tasks parallel the total tasks (chart). Task rich: 2 tasks for every agent.
46
Scenario 1 – Bargain Buy
The store "Bargain Buy" advertises a great price.
300 people show up; 5 are in stock.
Everyone sees the advertised price, but it just isn't possible for everyone to get it.
47
Scenario 2 – Selecting a Spouse
Bob knows all the characteristics of the perfect wife.
Bob seeks out such a wife.
But why would the perfect woman want Bob?
48
Scenario 3 – Hiring a New PhD
Universities are ranked 1, 2, 3.
Students are ranked a, b, c.
The dilemma for a second-tier university:
– An offer to an "a" student is likely rejected
– While waiting for an acceptance, the "b" students are gone
49
Effect of Compromising Ratio
Equal distribution of each agent type.
Vary the compromising ratio of only one type (the Local Profit agent).
Shows profit ratio = profit achieved / ideal profit (given the best possible task and partners).
50
Achieved / theoretical best. Note how profit is affected by load.
51
Profit of scheduled agents only.
Only Local Profit agents change their compromising ratio, yet the others' profit increases slightly too.
52
Note
Demanding Local Profit agents reject the proposals of others.
They are blind about whether they belong in a coalition, but they are NOT blind to the attributes of others.
Their proposals are fairly good.
53
For every agent type, the most likely proposer was a Local Profit agent.
54
No reciprocity: Coopetitive agents are eager to accept Local Profit proposals, but Local Profit agents don't accept Coopetitive proposals especially well.
55
For every agent type, Best Fit is a strong acceptor – perhaps because it isn't accepted well as a proposer.
56
Coopetitive agents function better as proposers to Local Profit agents in balanced or task-rich environments.
– When they have more choices, they tend to propose coalitions that Local Profit agents like
– More tasks give a Coopetitive agent a better sense of its own profit potential
Load balance seems to affect roles.
Coopetitive agents look at fit as long as it isn't too bad compared to profit.
57
Agent rich: 3 agents per task. Coopetitive agents accept most proposals from agents like themselves in agent-rich environments.
58
Do agents generally want to work with agents of the same type?
– It would seem logical, as agents of the same type value the same things – their utility functions are similar.
– Coopetitive and Best Fit agents' proposal success is stable with increasing percentages of their own type and negatively correlated with increasing percentages of agents of other types.
59
Look at function with increasing numbers of one other type.
60
What happens as we change the relative percentages of each agent type?
There is an interesting correlation with profit ratio: some agents do better and better as their dominance increases, while others do worse.
61
The line shows the relationship when all types are present in equal percentages.
Best Fit does better and better as it becomes more dominant in the set.
Local Profit does better when it isn't dominant.
62
So who joins and who proposes?
Agents with a wider range of acceptable coalitions make better joiners.
Fussier agents make better proposers.
However, the joiner/proposer roles are affected by the ratio of agents to work.
63
Conclusions
Some agent types are very good at selecting among many tasks, but less impressive when there are only a few choices.
In any environment, choices diminish rapidly over time.
Agents naturally fall into the role of proposer or joiner.
64
Future Work
Many experiments are possible.
All the agents here are similar in what they value. What would happen if agents deliberately proposed bad coalitions?